Re: [GENERAL] main log encoding problem

2012-07-19 Thread Alexander Law

Hello,


Implementing any of these isn't trivial - especially making sure 
messages emitted to stderr from things like segfaults and dynamic 
linker messages are always correct. Ensuring that the logging 
collector knows when setlocale() has been called to change the 
encoding and translation of system messages, handling the different 
logging output methods, etc - it's going to be fiddly.


I have some performance concerns about the transcoding required for 
(b) or (c), but realistically it's already the norm to convert all the 
data sent to and from clients. Conversion for logging should not be a 
significant additional burden. Conversion can be short-circuited out 
when source and destination encodings are the same for the common case 
of logging in utf-8 or to a dedicated file.
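The short-circuiting described above can be sketched in a few lines (a hypothetical Python helper for illustration only, not actual PostgreSQL code; the function name and signature are invented):

```python
def transcode_for_log(message: bytes, src_enc: str, dst_enc: str) -> bytes:
    """Convert a log message between encodings, skipping the work
    entirely when source and destination encodings match."""
    # Common case: database encoding == log encoding, so no conversion cost.
    if src_enc.lower() == dst_enc.lower():
        return message
    return message.decode(src_enc).encode(dst_enc)
```

With matching encodings the input bytes are returned untouched, so the common case of logging in UTF-8 from a UTF-8 database pays nothing.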


The initial issue was that the log file contains messages in different 
encodings. So transcoding is performed already, but it's not consistent, 
and in my opinion this is the main problem.



I suspect the eventual choice will be "all of the above":

- Default to (b) or (c), both have pros and cons. I favour (c) with a 
UTF-8 BOM to warn editors, but (b) is nice for people whose DBs are 
all in the system locale.
As I understand it, UTF-8 is the default encoding for databases. And even 
when a database is in the system encoding, translated postgres messages 
still come in UTF-8 and will go through UTF-8 -> system locale 
conversion within gettext.


- Allow (a) for people who have many different DBs in many different 
encodings, do high volume logging, and want to avoid conversion 
overhead. Let them deal with the mess, just provide an additional % 
code for the encoding so they can name their per-DB log files to 
indicate the encoding.


I think that solution (a) can be an evolution of the logging mechanism 
if there is a need for it.
The main issue is just that code needs to be prototyped, cleaned up, 
and submitted. So far nobody's cared enough to design it, build it, 
and get it through patch review. I've just foolishly volunteered 
myself to work on an automated crash-test system for virtual plug-pull 
testing, so I'm not stepping up.


I see your point, and I can prepare a prototype if the proposed (c) 
solution seems reasonable enough and can be accepted.


Best regards,
Alexander


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [BUGS] main log encoding problem

2012-07-19 Thread Tatsuo Ishii
>> I am thinking about variant of C.
>>
>> Problem with C is, converting from other encodings to UTF-8 is not
>> cheap because it requires huge conversion tables. This may be a
>> serious problem with a busy server. Also, it is possible some
>> information is lost in this conversion, because there's no
>> guarantee that there is a one-to-one mapping between UTF-8 and other
>> encodings. Another problem with UTF-8 is that you have to choose *one*
>> locale when using your editor. This may or may not affect the handling
>> of strings in your editor.
>>
>> My idea is using mule-internal encoding for the log file instead of
>> UTF-8. There are several advantages:
>>
>> 1) Conversion to mule-internal encoding is cheap because no conversion
>> table is required. Also no information loss happens in this
>> conversion.
>>
>> 2) Mule-internal encoding can be handled by emacs, one of the most
>> popular editors in the world.
>>
>> 3) No need to worry about locale. Mule-internal encoding has enough
>> information about language.
>> --
>>
> I believe that postgres has such conversion functions anyway. And they
> are used for data conversion when we have clients (and databases) with
> different encodings. So if they can be used for data, why not use
> them for the relatively small amount of log messages?

Frontend/Backend encoding conversion only happens when they are
different, while conversion for logs *always* happens. A busy database
could produce tons of logs (it is not unusual to log all SQL statements
for auditing purposes).

> And regarding mule internal encoding - reading about Mule
> http://www.emacswiki.org/emacs/UnicodeEncoding I found:
> /In future (probably Emacs 22), Mule will use an internal encoding
> which is a UTF-8 encoding of a superset of Unicode. /
> So I still see UTF-8 as a common denominator for all the encodings.
> I am not aware of any characters absent in Unicode. Can you please
> provide some examples of these that can result in lossy conversion?

You can google "encoding "EUC_JP" has no equivalent in "UTF8"" or
some such to find such an example. In this case PostgreSQL just throws
an error. For frontend/backend encoding conversion this is fine. But
what should we do for logs? Apparently we cannot throw an error here.
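The lossy-conversion problem is easy to reproduce with any codec library. A small Python illustration, with the direction reversed for portability: the Euro sign is used as the example character, on the assumption that it has no EUC_JP equivalent:

```python
text = "price: €10"  # '€' (U+20AC) assumed absent from EUC_JP

# A strict conversion fails outright - the analogue of PostgreSQL
# raising an error during frontend/backend conversion.
try:
    text.encode("euc_jp")
except UnicodeEncodeError as exc:
    print("conversion failed:", exc.reason)

# For logging we cannot fail, so a lossy substitution is one way out:
# the unmappable character is silently replaced by '?'.
lossy = text.encode("euc_jp", errors="replace")
```

Throwing an error is acceptable for client traffic but not for the log writer, which is exactly the dilemma described above.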

"Unification" is another problem. Some kanji characters of CJK are
"unified" in Unicode. The idea of unification is: if kanji A in China,
B in Japan, and C in Korea look "similar", unify them into D. This is a
great space saving:-) The price of this is the inability to do a
round-trip conversion. You can convert A, B or C to D, but you cannot
convert D back to A/B/C.

BTW, I'm not wedded to mule-internal encoding. What we need here is a
"super" encoding which could include any existing encoding without
information loss. For this purpose, I think we can even invent a new
encoding (maybe something like the very first proposal of ISO/IEC
10646?). However, using UTF-8 for this purpose seems to be just a
disaster to me.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp



Re: [GENERAL] [BUGS] main log encoding problem

2012-07-19 Thread Tatsuo Ishii
> Hello,
>>
>> Implementing any of these isn't trivial - especially making sure
>> messages emitted to stderr from things like segfaults and dynamic
>> linker messages are always correct. Ensuring that the logging
>> collector knows when setlocale() has been called to change the
>> encoding and translation of system messages, handling the different
>> logging output methods, etc - it's going to be fiddly.
>>
>> I have some performance concerns about the transcoding required for
>> (b) or (c), but realistically it's already the norm to convert all the
>> data sent to and from clients. Conversion for logging should not be a
>> significant additional burden. Conversion can be short-circuited out
>> when source and destination encodings are the same for the common case
>> of logging in utf-8 or to a dedicated file.
>>
> The initial issue was that log file contains messages in different
> encodings. So transcoding is performed already, but it's not

This is not true. Transcoding happens only when PostgreSQL is built
with the --enable-nls option (the default is no NLS).

> consistent and in my opinion this is the main problem.
> 
>> I suspect the eventual choice will be "all of the above":
>>
>> - Default to (b) or (c), both have pros and cons. I favour (c) with a
>> - UTF-8 BOM to warn editors, but (b) is nice for people whose DBs are
>> - all in the system locale.
> As I understand UTF-8 is the default encoding for databases. And even
> when a database is in the system encoding, translated postgres
> messages still come in UTF-8 and will go through UTF-8 -> System
> locale conversion within gettext.

Again, this is not always true.

>> - Allow (a) for people who have many different DBs in many different
>> - encodings, do high volume logging, and want to avoid conversion
>> - overhead. Let them deal with the mess, just provide an additional %
>> - code for the encoding so they can name their per-DB log files to
>> - indicate the encoding.
>>
> I think that (a) solution can be an evolvement of the logging
> mechanism if there will be a need for it.
>> The main issue is just that code needs to be prototyped, cleaned up,
>> and submitted. So far nobody's cared enough to design it, build it,
>> and get it through patch review. I've just foolishly volunteered
>> myself to work on an automated crash-test system for virtual plug-pull
>> testing, so I'm not stepping up.
>>
> I see your point, and I can prepare a prototype if the proposed (c)
> solution seems reasonable enough and can be accepted.
> 
> Best regards,
> Alexander
> 
> 



[GENERAL] GENERATED columns

2012-07-19 Thread Daniel McGreal
Hi,

In my searching I found several references (in pgsql-hackers, circa 2007)
concerning the implementation of the SQL:2003 GENERATED column features.
This does not seem to have made it into a release? Any plans, etc?

Dan.


Re: [GENERAL] [BUGS] main log encoding problem

2012-07-19 Thread Alexander Law

The initial issue was that log file contains messages in different
encodings. So transcoding is performed already, but it's not

This is not true. Transcoding happens only when PostgreSQL is built
with the --enable-nls option (the default is no NLS).

I'll restate the initial issue as I see it.
I have Windows and I'm installing PostgreSQL for Windows (latest 
version, downloaded from EnterpriseDB). Then I create a database with 
default settings (with UTF-8 encoding), do something wrong in my DB, and 
get a log file with two different encodings (UTF-8 and 
Windows-1251 (ANSI)) and with localized postgres messages.
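The mixed-encoding log described above is easy to simulate. A Python sketch of why such a file is unreadable as a whole (the Russian word for "error" stands in for a localized message; the exact strings are illustrative):

```python
# Simulate a log file where one part was written in UTF-8 and
# another in Windows-1251, as described above.
utf8_line = "ошибка: деление на ноль".encode("utf-8")
cp1251_line = "ошибка: деление на ноль".encode("cp1251")
log = utf8_line + b"\n" + cp1251_line + b"\n"

# Reading the whole file as UTF-8 fails on the Windows-1251 bytes...
try:
    log.decode("utf-8")
except UnicodeDecodeError:
    print("file is not valid UTF-8")

# ...and reading it as Windows-1251 turns the UTF-8 line into mojibake.
garbled = log.decode("cp1251", errors="replace").splitlines()[0]
```

No single encoding can be chosen to open such a file correctly in a text editor, which is the heart of the complaint.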





Re: [GENERAL] [BUGS] main log encoding problem

2012-07-19 Thread Alexander Law



And regarding mule internal encoding - reading about Mule
http://www.emacswiki.org/emacs/UnicodeEncoding I found:
/In future (probably Emacs 22), Mule will use an internal encoding
which is a UTF-8 encoding of a superset of Unicode. /
So I still see UTF-8 as a common denominator for all the encodings.
I am not aware of any characters absent in Unicode. Can you please
provide some examples of these that can result in lossy conversion?

You can google "encoding "EUC_JP" has no equivalent in "UTF8"" or
some such to find such an example. In this case PostgreSQL just throws
an error. For frontend/backend encoding conversion this is fine. But
what should we do for logs? Apparently we cannot throw an error here.

"Unification" is another problem. Some kanji characters of CJK are
"unified" in Unicode. The idea of unification is: if kanji A in China,
B in Japan, and C in Korea look "similar", unify them into D. This is a
great space saving:-) The price of this is the inability to do a
round-trip conversion. You can convert A, B or C to D, but you cannot
convert D back to A/B/C.

BTW, I'm not wedded to mule-internal encoding. What we need here is a
"super" encoding which could include any existing encoding without
information loss. For this purpose, I think we can even invent a new
encoding (maybe something like the very first proposal of ISO/IEC
10646?). However, using UTF-8 for this purpose seems to be just a
disaster to me.

Ok, maybe the time of a real universal encoding has not yet come. Then 
maybe we should just add a new parameter "log_encoding" (UTF-8 by default) 
to postgresql.conf, and use this encoding consistently within the 
logging_collector.

If this encoding is not available then fall back to 7-bit ASCII.
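The proposed behaviour could look something like the following (a hypothetical sketch in Python, not server code; the function name is invented, and the log_encoding parameter and the ASCII fallback follow the proposal above):

```python
import codecs

def encode_log_line(line: str, log_encoding: str = "utf-8") -> bytes:
    """Encode one log line in the configured log_encoding,
    falling back to 7-bit ASCII when that is impossible."""
    try:
        codecs.lookup(log_encoding)  # is the configured name known at all?
    except LookupError:
        log_encoding = "ascii"
    try:
        return line.encode(log_encoding)
    except UnicodeEncodeError:
        # Fall back to 7-bit ASCII, escaping what cannot be represented,
        # so the logging path itself can never raise.
        return line.encode("ascii", errors="backslashreplace")
```

The key property is that the function always returns bytes in a single, predictable encoding family, so the collector never has to throw an error mid-write.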




Re: [GENERAL] [BUGS] main log encoding problem

2012-07-19 Thread Tatsuo Ishii
>> You can google "encoding "EUC_JP" has no equivalent in "UTF8"" or
>> some such to find such an example. In this case PostgreSQL just throws
>> an error. For frontend/backend encoding conversion this is fine. But
>> what should we do for logs? Apparently we cannot throw an error here.
>>
>> "Unification" is another problem. Some kanji characters of CJK are
>> "unified" in Unicode. The idea of unification is: if kanji A in China,
>> B in Japan, and C in Korea look "similar", unify them into D. This is a
>> great space saving:-) The price of this is the inability to do a
>> round-trip conversion. You can convert A, B or C to D, but you cannot
>> convert D back to A/B/C.
>>
>> BTW, I'm not wedded to mule-internal encoding. What we need here is a
>> "super" encoding which could include any existing encoding without
>> information loss. For this purpose, I think we can even invent a new
>> encoding (maybe something like the very first proposal of ISO/IEC
>> 10646?). However, using UTF-8 for this purpose seems to be just a
>> disaster to me.
>>
> Ok, maybe the time of a real universal encoding has not yet come. Then
> maybe we should just add a new parameter "log_encoding" (UTF-8 by
> default) to postgresql.conf, and use this encoding consistently
> within the logging_collector.
> If this encoding is not available then fall back to 7-bit ASCII.

What do you mean by "not available"?
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp



Re: [GENERAL] [BUGS] main log encoding problem

2012-07-19 Thread Alexander Law

Ok, maybe the time of a real universal encoding has not yet come. Then
maybe we should just add a new parameter "log_encoding" (UTF-8 by
default) to postgresql.conf, and use this encoding consistently
within the logging_collector.
If this encoding is not available then fall back to 7-bit ASCII.

What do you mean by "not available"?
Sorry, that was an inaccurate phrase. I meant "if the conversion to this 
encoding is not available". For example, when we have a database in EUC_JP 
and log_encoding set to Latin1. I think that we can even fall back to 
UTF-8, as we can convert all encodings to it (with some exceptions that 
you noted).





Re: [GENERAL] [BUGS] main log encoding problem

2012-07-19 Thread Tatsuo Ishii
> Sorry, that was an inaccurate phrase. I meant "if the conversion to this
> encoding is not available". For example, when we have a database in
> EUC_JP and log_encoding set to Latin1. I think that we can even fall
> back to UTF-8, as we can convert all encodings to it (with some
> exceptions that you noted).

So, what you wanted to say here is:

"If the conversion to this encoding is not available then fall back to
UTF-8"

Am I correct?

Also is it possible to completely disable the feature?
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp




Re: [GENERAL] [BUGS] main log encoding problem

2012-07-19 Thread Alban Hertroys
On 19 July 2012 10:40, Alexander Law  wrote:
>>> Ok, maybe the time of a real universal encoding has not yet come. Then
>>> maybe we should just add a new parameter "log_encoding" (UTF-8 by
>>> default) to postgresql.conf, and use this encoding consistently
>>> within the logging_collector.
>>> If this encoding is not available then fall back to 7-bit ASCII.
>>
>> What do you mean by "not available"?
>
> Sorry, that was an inaccurate phrase. I meant "if the conversion to this
> encoding is not available". For example, when we have a database in EUC_JP
> and log_encoding set to Latin1. I think that we can even fall back to
> UTF-8, as we can convert all encodings to it (with some exceptions that
> you noted).

I like Craig's idea of adding the client encoding to the log lines. A
possible problem with that (I'm not an encoding expert) is that a log
line like that will contain meta-data about the database server (log
time, client encoding, etc) in the database default encoding and
database data (the logged query and user-supplied values) in the
client encoding. One option would be to use the client encoding for
the entire log line, but would that result in legible meta-data in
every encoding?

It appears the primary issue here is that SQL statements and
user-supplied data are being logged, while the log file is a text file
in a fixed encoding.
Perhaps another solution would be to add the ability to log certain
types of information (not the core database server log info, of
course!) to a database/table so that each record can be stored in its
own encoding?
That way the transcoding doesn't have to take place until someone is
reading the log, you'd know what to transcode the data to (namely the
client_encoding of the reading session) and there isn't any issue of
transcoding errors while logging statements.

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.



Re: [GENERAL] [BUGS] main log encoding problem

2012-07-19 Thread Alban Hertroys
Yikes, messed up my grammar a bit I see!

On 19 July 2012 10:58, Alban Hertroys  wrote:

> I like Craig's idea of adding the client encoding to the log lines. A
> possible problem with that (I'm not an encoding expert) is that a log
> line like that will contain data about the database server meta-data
> (log time, client encoding, etc) in the database default encoding and

...will contain meta-data about the database server (log time...

> It appears that the primarly here is that SQL statements and

It appears the primary issue here...

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.



Re: [GENERAL] [BUGS] main log encoding problem

2012-07-19 Thread Alexander Law



Sorry, that was an inaccurate phrase. I meant "if the conversion to this
encoding is not available". For example, when we have a database in
EUC_JP and log_encoding set to Latin1. I think that we can even fall
back to UTF-8, as we can convert all encodings to it (with some
exceptions that you noted).

So, what you wanted to say here is:

"If the conversion to this encoding is not available then fall back to
UTF-8"

Am I correct?

Also is it possible to completely disable the feature?

Yes, you're correct. I think it could be disabled by setting 
log_encoding='', but if the parameter is missing then the feature should 
be enabled (with UTF-8).






[GENERAL] Synchronization Master -> Slave (on slow connetion)

2012-07-19 Thread Edson Richter
I have serious problems with a slow link between continents, and twice a 
week I have to manually reestablish the slave by running the following 
script:


--

psql -c "SELECT pg_start_backup('standby_sync', true)"
rsync --progress -a ${PGDATA}/* 
r...@server2.mycompany.com:/opt/pgsql/slave1/ --exclude postmaster.pid 
--exclude postgresql.conf --exclude pg_hba.conf --exclude pg_log 
--exclude pg_xlog

psql -c "SELECT pg_stop_backup()"

--

The point is not to analyze the connection issues between the continents, 
but whether there is a better way to synchronize these servers and avoid 
transporting 8 GB of data between them when synchronizing.
Can the rsync above be optimized? Both servers are CentOS 5 with OpenVPN 
established between them (this is one of the problems: the VPN keeps 
dropping several times a day, and then finally the sender and receiver 
processes just vanish into thin air - and I've had no time to investigate 
the vanishing).


Thanks in advance for your help,



Re: [GENERAL] [BUGS] main log encoding problem

2012-07-19 Thread Alexander Law

I like Craig's idea of adding the client encoding to the log lines. A
possible problem with that (I'm not an encoding expert) is that a log
line like that will contain data about the database server meta-data
(log time, client encoding, etc) in the database default encoding and
database data (the logged query and user-supplied values) in the
client encoding. One option would be to use the client encoding for
the entire log line, but would that result in legible meta-data in
every encoding?
I think then we get non-human-readable logs. We would need one more tool 
to open and convert the log (and to omit the excessive encoding 
specification on each line).

It appears the primary issue here is that SQL statements and
user-supplied data are being logged, while the log file is a text file
in a fixed encoding.
Yes, and in my opinion there is nothing unusual about it. XML/HTML 
are examples of text files with a fixed encoding that can contain 
multi-language strings. UTF-8 is the default encoding for XML. And when 
it's not good enough (as Tatsuo noticed), you can still switch to another.

Perhaps another solution would be to add the ability to log certain
types of information (not the core database server log info, of
course!) to a database/table so that each record can be stored in its
own encoding?
That way the transcoding doesn't have to take place until someone is
reading the log, you'd know what to transcode the data to (namely the
client_encoding of the reading session) and there isn't any issue of
transcoding errors while logging statements.
I don't think it would be the simplest solution to the existing problem. 
It can be another branch of evolution, but it doesn't answer the 
question - what encoding to use for the core database server log?





Re: [GENERAL] [BUGS] main log encoding problem

2012-07-19 Thread Alban Hertroys
On 19 July 2012 13:50, Alexander Law  wrote:
>> I like Craig's idea of adding the client encoding to the log lines. A
>> possible problem with that (I'm not an encoding expert) is that a log
>> line like that will contain data about the database server meta-data
>> (log time, client encoding, etc) in the database default encoding and
>> database data (the logged query and user-supplied values) in the
>> client encoding. One option would be to use the client encoding for
>> the entire log line, but would that result in legible meta-data in
>> every encoding?
>
> I think then we get non-human-readable logs. We would need one more tool to
> open and convert the log (and to omit the excessive encoding specification
> on each line).

Only the parts that contain user-supplied data in very different
encodings would not be "human readable", similar to what we already
have.

>> It appears the primary issue here is that SQL statements and
>> user-supplied data are being logged, while the log-file is a text file
>> in a fixed encoding.
>
> Yes, and in my opinion there is nothing unusual about it. XML/HTML are
> examples of text files with a fixed encoding that can contain multi-language
> strings. UTF-8 is the default encoding for XML. And when it's not good
> enough (as Tatsuo noticed), you can still switch to another.

Yes, but in those examples it is acceptable that the application fails
to write the output. That, and the output needs to be converted to
various different client encodings (namely that of the visitor's
browser) anyway, so it does not really add any additional overhead.

This doesn't hold true for database server log files. Ideally, writing
those has to be reliable (how are you going to catch errors
otherwise?) and should not impact the performance of the database
server in a significant way (the less the better). The end result will
probably be somewhere in the middle.

>> Perhaps another solution would be to add the ability to log certain
>> types of information (not the core database server log info, of
>> course!) to a database/table so that each record can be stored in its
>> own encoding?
>> That way the transcoding doesn't have to take place until someone is
>> reading the log, you'd know what to transcode the data to (namely the
>> client_encoding of the reading session) and there isn't any issue of
>> transcoding errors while logging statements.
>
> I don't think it would be the simplest solution to the existing problem. It
> can be another branch of evolution, but it doesn't answer the question -
> what encoding to use for the core database server log?

It makes that problem much easier. If you need the "human-readable"
logs, you can write those to a different log (namely one in the
database). The result is that the server can use pretty much any
encoding (or a mix of multiple!) to write its log files.

You'll need a query to read the human-readable logs of course, but
since they're in the database, all the tools you need are already
available to you.

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.



Re: [GENERAL] Synchronization Master -> Slave (on slow connetion)

2012-07-19 Thread Sergey Konoplev
On Thu, Jul 19, 2012 at 3:45 PM, Edson Richter  wrote:
> The rsync above can be optimized? Both servers are CentOS 5 with OpenVPN

Yes, it can be optimized. You can turn compression on by specifying
-z. Compression level 1 is the one that performs best for my
needs. You can find your best one by experimenting.

rsync -av --delete -z --progress --compress-level=1 \
--exclude pg_xlog --exclude '*.conf' --exclude postmaster.pid \
/db/data db2:/db/

But that is not the end. The replication stream itself is not compressed, 
and because of the nature of xlog it is quite bloated (if it can be put 
that way). If you have a problem with bandwidth, it can be a cause of 
replication lagging and breaking.

I usually start the following snippet via the screen utility on the master
db. It will forward localhost:5432 on the master db to
localhost:2345 on the replica.

while [ ! -f /tmp/stop ]; do
 ssh -C -o ExitOnForwardFailure=yes -R 2345:localhost:5432 replica_address \
   "while nc -zv localhost 2345; do sleep 5; done";
 sleep 5;
done

The stream will be forwarded then. Direct your replica to
localhost:2345 in the recovery.conf file.

-- 
Sergey Konoplev

a database architect, software developer at PostgreSQL-Consulting.com
http://www.postgresql-consulting.com

Jabber: gray...@gmail.com Skype: gray-hemp Phone: +79160686204



Re: [GENERAL] Synchronization Master -> Slave (on slow connetion)

2012-07-19 Thread Sergey Konoplev
On Thu, Jul 19, 2012 at 4:27 PM, Sergey Konoplev
 wrote:
> On Thu, Jul 19, 2012 at 3:45 PM, Edson Richter  
> wrote:
>> The rsync above can be optimized? Both servers are CentOS 5 with OpenVPN
>
> Yes, it can be optimized. You can turn compression on by specifying
> -z. Compression level 1 is the one that performs best for my
> needs. You can find your best one by experimenting.
>
> rsync -av --delete -z --progress --compress-level=1 \
> --exclude pg_xlog --exclude '*.conf' --exclude postmaster.pid \
> /db/data db2:/db/
>
> But that is not the end. The replication stream itself is not compressed,
> and because of the nature of xlog it is quite bloated (if it can be put
> that way). If you have a problem with bandwidth, it can be a cause of
> replication lagging and breaking.
>
> I usually start the following snippet via the screen utility on the master
> db. It will forward localhost:5432 on the master db to
> localhost:2345 on the replica.
>
> while [ ! -f /tmp/stop ]; do
>  ssh -C -o ExitOnForwardFailure=yes -R 2345:localhost:5432 replica_address \
>    "while nc -zv localhost 2345; do sleep 5; done";
>  sleep 5;
> done
>
> The stream will be forwarded then. Direct your replica to
> localhost:2345 in the recovery.conf file.

Sorry, here I meant "the stream will be compressed then". See the -C
flag of the ssh utility.

>
> --
> Sergey Konoplev
>
> a database architect, software developer at PostgreSQL-Consulting.com
> http://www.postgresql-consulting.com
>
> Jabber: gray...@gmail.com Skype: gray-hemp Phone: +79160686204



-- 
Sergey Konoplev

a database architect, software developer at PostgreSQL-Consulting.com
http://www.postgresql-consulting.com

Jabber: gray...@gmail.com Skype: gray-hemp Phone: +79160686204



Re: [GENERAL] [BUGS] main log encoding problem

2012-07-19 Thread Craig Ringer

On 07/19/2012 03:24 PM, Tatsuo Ishii wrote:

BTW, I'm not wedded to mule-internal encoding. What we need here is a
"super" encoding which could include any existing encoding without
information loss. For this purpose, I think we can even invent a new
encoding (maybe something like the very first proposal of ISO/IEC
10646?). However, using UTF-8 for this purpose seems to be just a
disaster to me.


Good point re unified chars. That was always a bad idea, and that's just 
one of the issues it causes.


I think these difficult encodings are where logging to a dedicated file 
per database is useful.


I'm not convinced that a weird and uncommon encoding is the answer. I 
guess it could work as an alternative for people for whom it's useful, if 
it's low-cost in terms of complexity/maintenance/etc...


--
Craig Ringer



Re: [GENERAL] [BUGS] main log encoding problem

2012-07-19 Thread Craig Ringer

On 07/19/2012 04:58 PM, Alban Hertroys wrote:

On 19 July 2012 10:40, Alexander Law  wrote:

Ok, maybe the time of a real universal encoding has not yet come. Then
maybe we should just add a new parameter "log_encoding" (UTF-8 by
default) to postgresql.conf, and use this encoding consistently
within the logging_collector.
If this encoding is not available then fall back to 7-bit ASCII.

What do you mean by "not available"?

Sorry, that was an inaccurate phrase. I meant "if the conversion to this
encoding is not available". For example, when we have a database in EUC_JP
and log_encoding set to Latin1. I think that we can even fall back to
UTF-8, as we can convert all encodings to it (with some exceptions that
you noted).

I like Craig's idea of adding the client encoding to the log lines.

Nonono! Log *file* *names* when one-file-per-database is in use.

Encoding as a log line prefix is a terrible idea for all sorts of reasons.
--
Craig Ringer




Re: [GENERAL] problem with dblink and "drop owned by"

2012-07-19 Thread James W. Wilson
OK, understood, thanks.

On Wed, Jul 18, 2012 at 10:15 PM, Tom Lane  wrote:
> "James W. Wilson"  writes:
>> I'm confused. I thought foreign data wrappers were required to create
>> database links from one Postgresql server to another.
>
> contrib/dblink doesn't require them.  It does have an option to use an
> FDW instead of a libpq connection string to specify the connection
> target.  I had forgotten that that option got added in 8.4, before we
> had any other working use for FDWs --- so maybe we should rethink the
> decision not to backport this fix?
>
> Author: Heikki Linnakangas 
> Branch: master Release: REL9_1_BR [e356743f3] 2010-11-12 15:29:23 +0200
> Branch: REL9_0_STABLE Release: REL9_0_2 [533073cf2] 2010-11-12 15:30:19 +0200
>
> Add missing support for removing foreign data wrapper / server privileges
> belonging to a user at DROP OWNED BY. Foreign data wrappers and servers
> don't do anything useful yet, which is why no-one has noticed, but since 
> we
> have them, seems prudent to fix this. Per report from Chetan Suttraway.
> Backpatch to 9.0, 8.4 has the same problem but this patch didn't apply
> there so I'm not going to bother.
>
> In the meantime, though, you do not *need* an FDW.  dblink worked fine
> for many releases before FDWs existed, and it still does work without
> 'em.
>
> regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Handling error during async query

2012-07-19 Thread Daniele Varrazzo
Hello,

I'm trying to fix psycopg2 issue #113: network disconnection not
handled correctly in async mode.

If I'm in the middle between the PQsendQuery and the PQgetResult and
an error is detected (let's say we don't know yet if
application-related or connection-related), is there a way to abort
the query and put it back in a status where PQsendQuery can be
executed again (I believe transaction status from ACTIVE to IDLE)
without fetching the result?

Psycopg currently tries to fetch (blocking) and discard the result
after an error:

while (NULL != (res = PQgetResult(conn->pgconn))) {
    PQclear(res);
}

but if the network connection has gone down this will wait forever. Not
to mention that the operation blocks when the user asked for nonblocking
operations.
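One way to avoid the unbounded block is to never call a potentially blocking libpq/psycopg call without first waiting on the connection's socket (PQsocket / `connection.fileno()`) with a deadline; on timeout the connection can be declared dead instead of hanging. A minimal sketch of such a bounded wait, with the helper name hypothetical and a plain socket pair standing in for a real connection:

```python
import select
import socket
import time

def wait_readable(fileno: int, timeout: float) -> bool:
    """Return True when fileno becomes readable, False on timeout."""
    deadline = time.monotonic() + timeout
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return False
        readable, _, _ = select.select([fileno], [], [], remaining)
        if readable:
            return True

# Stand-in for the libpq connection socket:
a, b = socket.socketpair()
print(wait_readable(b.fileno(), 0.05))  # → False (peer sent nothing)
a.sendall(b"result bytes")
print(wait_readable(b.fileno(), 0.05))  # → True
```

The same idea applies to the discard loop: wait with a timeout before each PQgetResult, and on timeout close the connection rather than blocking forever.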


-- Daniele

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Trouble with NEW

2012-07-19 Thread Bob Pawley

The function is too long to copy.

I separated it into another trigger function with just the update statement.

Here is the error -

ERROR: record "new" has no field "fluid_id"
SQL state: 42703
Context: SQL statement "update p_id.fluids
   set fluid_short =
   (select shape.text
   from  shape, num_search
   where (select st_within(shape.wkb_geometry, 
st_geometryn(num_search.the_geom4, 1)) = 'true')

   and text !~ '[0-9]')
-
Here is the trigger function -

CREATE OR REPLACE FUNCTION fluid_name()
 RETURNS trigger AS $$

   begin
   update p_id.fluids
   set fluid_short =
   (select shape.text
   from  shape, num_search
   where (select st_within(shape.wkb_geometry, 
st_geometryn(num_search.the_geom4, 1)) = 'true')

   and text !~ '[0-9]')
   where p_id.fluids.fluid_id = NEW.fluid_id;


 RETURN NULL;
   End;

$$  LANGUAGE plpgsql;

   create trigger r_fluidname after insert on project.project
   for each row execute procedure fluid_name();


Here is the trigger script -

-- Trigger: r_fluidname on project.project

-- DROP TRIGGER r_fluidname ON project.project;

CREATE TRIGGER r_fluidname
 AFTER INSERT
 ON project.project
 FOR EACH ROW
 EXECUTE PROCEDURE public.fluid_name();


Here is the function script -

-- Function: public.fluid_name()

-- DROP FUNCTION public.fluid_name();

CREATE OR REPLACE FUNCTION public.fluid_name()
 RETURNS trigger AS
$BODY$

   begin
   update p_id.fluids
   set fluid_short =
   (select shape.text
   from  shape, num_search
   where (select st_within(shape.wkb_geometry, 
st_geometryn(num_search.the_geom4, 1)) = 'true')

   and text !~ '[0-9]')
   where p_id.fluids.fluid_id = NEW.fluid_id;


 RETURN NULL;
   End;

$BODY$
 LANGUAGE plpgsql VOLATILE
 COST 100;
ALTER FUNCTION public.fluid_name()
 OWNER TO postgres;


Here is the triggering table -

-- Table: project.project

-- DROP TABLE project.project;

CREATE TABLE project.project
(
 p_id_id serial NOT NULL,
 p_id_name character varying(75),
 project_name character varying(75),
 project_id integer,
 number_of_processes integer,
 p_id_number character varying(30),
 CONSTRAINT project_pkey PRIMARY KEY (p_id_id ),
 CONSTRAINT name_ UNIQUE (p_id_name )
)
WITH (
 OIDS=FALSE
);
ALTER TABLE project.project
 OWNER TO postgres;

-- Trigger: pidnum on project.project

-- DROP TRIGGER pidnum ON project.project;


Help would be appreciated.

Bob



-Original Message- 
From: Adrian Klaver

Sent: Wednesday, July 18, 2012 5:47 PM
To: Bob Pawley
Cc: Alan Hodgson ; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Trouble with NEW

On 07/18/2012 03:20 PM, Bob Pawley wrote:

When I substitute new.fluid_id for the actual fluid)id the expression
returns the right value.


Huh? I thought that was what was causing the problem. From your original
post:

"
where p_id.fluids.fluid_id = NEW.fluid_id;

I receive an error ”record “new” has no field “fluid_id” “.

"
Per previous suggestion can we see?:

1)The trigger definition

2) The function.



--
Adrian Klaver
adrian.kla...@gmail.com 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Trouble with NEW

2012-07-19 Thread Adrian Klaver

On 07/19/2012 06:43 AM, Bob Pawley wrote:

The function is too long to copy.

I separated it into another trigger function with just the update
statement.

Here is the error -

ERROR: record "new" has no field "fluid_id"
SQL state: 42703
Context: SQL statement "update p_id.fluids
set fluid_short =
(select shape.text
from  shape, num_search
where (select st_within(shape.wkb_geometry,
st_geometryn(num_search.the_geom4, 1)) = 'true')
and text !~ '[0-9]')
-
Here is the trigger function -

CREATE OR REPLACE FUNCTION fluid_name()
  RETURNS trigger AS $$

begin
update p_id.fluids
set fluid_short =
(select shape.text
from  shape, num_search
where (select st_within(shape.wkb_geometry,
st_geometryn(num_search.the_geom4, 1)) = 'true')
and text !~ '[0-9]')
where p_id.fluids.fluid_id = NEW.fluid_id;


  RETURN NULL;
End;

$$  LANGUAGE plpgsql;

create trigger r_fluidname after insert on project.project
for each row execute procedure fluid_name();


Here is the trigger script -

-- Trigger: r_fluidname on project.project

-- DROP TRIGGER r_fluidname ON project.project;

CREATE TRIGGER r_fluidname
  AFTER INSERT
  ON project.project
  FOR EACH ROW
  EXECUTE PROCEDURE public.fluid_name();


Here is the function script -

-- Function: public.fluid_name()

-- DROP FUNCTION public.fluid_name();

CREATE OR REPLACE FUNCTION public.fluid_name()
  RETURNS trigger AS
$BODY$

begin
update p_id.fluids
set fluid_short =
(select shape.text
from  shape, num_search
where (select st_within(shape.wkb_geometry,
st_geometryn(num_search.the_geom4, 1)) = 'true')
and text !~ '[0-9]')
where p_id.fluids.fluid_id = NEW.fluid_id;


  RETURN NULL;
End;

$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION public.fluid_name()
  OWNER TO postgres;


Here is the triggering table -

-- Table: project.project

-- DROP TABLE project.project;

CREATE TABLE project.project
(
  p_id_id serial NOT NULL,
  p_id_name character varying(75),
  project_name character varying(75),
  project_id integer,
  number_of_processes integer,
  p_id_number character varying(30),
  CONSTRAINT project_pkey PRIMARY KEY (p_id_id ),
  CONSTRAINT name_ UNIQUE (p_id_name )
)
WITH (
  OIDS=FALSE
);
ALTER TABLE project.project
  OWNER TO postgres;

-- Trigger: pidnum on project.project

-- DROP TRIGGER pidnum ON project.project;


Help would be appreciated.


There is no fluid_id in the project.project table. The trigger NEW 
record only pulls from the table the trigger is attached to. You will 
have to find some other way of relating the project table to the fluids 
table.




Bob






--
Adrian Klaver
adrian.kla...@gmail.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] GENERATED columns

2012-07-19 Thread Tom Lane
Daniel McGreal  writes:
> In my searching I found several references (in pg-hackers, circa 2007)
> concerning the implementation of the SQL:2003 GENERATED column features.
> This does not seem to have made it into release? Any plans, etc?

AFAIK nobody is working on such a feature currently.  If memory serves,
the previous patch was rejected partly because of implementation
problems and partly because we weren't entirely sure what the standard
is asking for with that syntax.  It's possible that the new 2011 text
clarifies matters, but I've not gone and looked.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Synchronization Master -> Slave

2012-07-19 Thread Edson Richter

I have serious problems with a slow link between continents, and twice a
week I have to manually re-establish the slave by running the
following script:

--

psql -c "SELECT pg_start_backup('standby_sync', true)"
rsync --progress -a ${PGDATA}/*
r...@server2.mycompany.com:/opt/pgsql/slave1/ --exclude
postmaster.pid --exclude postgresql.conf --exclude pg_hba.conf
--exclude pg_log --exclude pg_xlog
psql -c "SELECT pg_stop_backup()"

--

The point is not to analyze the connection issues between the
continents, but whether there is a better way to synchronize these
servers and avoid transporting 8 GB of data between them when
synchronizing.
Can the rsync above be optimized? Both servers are CentOS 5 with
OpenVPN established between them (this is one of the problems: the
VPN keeps dropping several times a day, and then the sender
and receiver processes just vanish into thin air - and I haven't had
time to investigate the vanishing).
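For what it's worth, a few rsync options often help over slow, flaky links: compress the stream, keep partially transferred files so a VPN drop doesn't restart them from zero, cap bandwidth so the tunnel isn't saturated, and time out on a dead connection instead of hanging. A sketch of the idea, demonstrated on a local copy (substitute the real ${PGDATA} source and remote destination; the limit values are illustrative):

```shell
# Options worth adding to the script above:
#   -z              compress the stream
#   --partial       keep partially transferred files across VPN drops
#   --bwlimit=KBPS  cap bandwidth so the tunnel isn't saturated
#   --timeout=SECS  give up on a dead connection instead of hanging
src=$(mktemp -d); dst=$(mktemp -d)
echo 'data' > "$src/file"
rsync -az --partial --timeout=120 "$src/" "$dst/"
cat "$dst/file"
# → data
```

Note that rsync's delta transfer only saves work for files that already exist on the receiver; freshly rewritten relation files still transfer in full.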

Thanks in advance for your help,

-- 
Edson Carlos Ericksson Richter
SimKorp Informática Ltda
Fone: (51) 3366-7964
Celular: (51) 9318-9766 / (51) 8585-0796



Re: [GENERAL] How to stop a query

2012-07-19 Thread younus
Hi,

First :
   ps -ef | grep postgres 
and  kill -9 (PID of your query)

Sec :
select procpid, datname, usename, client_addr,  current_query from
pg_stat_activity where current_query!='';

and 

SELECT pg_cancel_backend(procpid);



younus,

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/How-to-stop-a-query-tp1924086p5717227.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Segmentation fault

2012-07-19 Thread Amod Pandey
Thank you, Craig, for explaining in such detail. I am adding more
information and will see what more I can gather.

$ulimit -a
core file size  (blocks, -c) 0

So I assume there to be no core dump file.

If I set 'ulimit -c unlimited', will it generate a core dump if there is
another occurrence? Do I need to restart postgres for this to take effect?
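On the restart question: resource limits are inherited at process start, so yes — the postmaster must be started from an environment where the core limit is already raised; raising ulimit in an unrelated shell has no effect on the already-running server. A sketch (service names and paths vary by distro; the lines that touch a live system are commented out):

```shell
# Raise the soft core-file limit in the shell that will start postgres:
ulimit -c unlimited
ulimit -c                     # prints the limit now in effect

# Optionally send cores to a known location (Linux, needs root):
# echo '/tmp/core.%e.%p' > /proc/sys/kernel/core_pattern

# Then restart the server from this same environment, e.g.:
# pg_ctl -D "$PGDATA" restart
```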

Linux distros
---
Linux ip-xx-xx-xx-xx 2.6.35.11-83.9.amzn1.x86_64 #1 SMP Sat Feb 19 23:42:04
UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

I will see if there are queries which I can share.

Regards
Amod

On Thu, Jul 19, 2012 at 9:20 AM, Craig Ringer  wrote:

>  On 07/19/2012 12:37 AM, Amod Pandey wrote:
>
> Server stopped due to Segmentation Fault. Server was running successfully
> for an year.
>
> PostgreSQL: 9.0.3
>
> from /var/log/messages
>
> Jul 18 19:00:03 ip-10-136-22-193 kernel: [18643442.660032] postgres[6818]:
> segfault at 170a8c6f ip 0044c94d sp 7fff9fee5b80 error 4 in
> postgres[40+495000]
>
> from pg log
>
> LOG:  server process (PID 6818) was terminated by signal 11: Segmentation
> fault
> LOG:  terminating any other active server processes
>
> Please suggest if there is a way to find out the issue.
>
>
> Did the crash produce a core file ?
>
> You haven't mentioned what Linux distro or kernel version you're on, and
> defaults vary. Look in your PostgreSQL datadir and see if there are any
> files with "core" in the name.
>
> Unfortunately most Linux distros default to not producing core files.
> Without a core file it'll be nearly impossible because the segfault message
> reported by the kernel only contains the instruction pointer and stack
> pointer. The stack pointer is invalid and useless without a core file, and
> with address space layout randomisation active the instruction pointer
> offsets are all randomised for each execution, so the ip doesn't tell you
> much on ASLR systems either.
>
> If you can show more of the PostgreSQL logs from around the incident that
> would possibly be helpful.
>
> --
> Craig Ringer
>


Re: [GENERAL] Synchronization Master -> Slave

2012-07-19 Thread Sergey Konoplev
Check your hotmail inbox. You have an answer there.

On Thu, Jul 19, 2012 at 3:35 PM, Edson Richter wrote:

>  I've serious problems with slow link between continents, and twice a week
> I have to manually reestablish the slave, by running the following script:
>
>
> --
>
> psql -c "SELECT pg_start_backup('standby_sync', true)"
> rsync --progress -a ${PGDATA}/*
> r...@server2.mycompany.com:/opt/pgsql/slave1/ --exclude postmaster.pid
> --exclude postgresql.conf --exclude pg_hba.conf --exclude pg_log --exclude
> pg_xlog
> psql -c "SELECT pg_stop_backup()"
>
>
> --
>
> Point is not to analyze the connection issues between the continents, but
> if there is a way better to synchronize these servers, and avoid to
> transport 8Gb of data between them when syncronizing.
> The rsync above can be optimized? Both servers are CentOS 5 with OpenVPN
> established between them (this is one of the problems: the VPN keep falling
> several times a day, and then finally, the sender and receiver process just
> vanish in the air - and I had no time to investigate the vanishment).
>
> Thanks in advance for your help,
>
> --
>
>   *Edson Carlos Ericksson Richter*
> *SimKorp Informática Ltda*  Fone: (51) 3366-7964  Celular: (51)9318-9766/(51)
> 8585-0796
>



-- 
Sergey Konoplev

a database and software architect
http://www.linkedin.com/in/grayhemp

Jabber: gray...@gmail.com Skype: gray-hemp Phone: +79160686204


Re: [GENERAL] Trouble with NEW

2012-07-19 Thread Bob Pawley

In all my reading of new and old I never made that connection.

Thanks Adrian

Bob

-Original Message- 
From: Adrian Klaver 
Sent: Thursday, July 19, 2012 6:50 AM 
To: Bob Pawley 
Cc: Alan Hodgson ; pgsql-general@postgresql.org 
Subject: Re: [GENERAL] Trouble with NEW 


On 07/19/2012 06:43 AM, Bob Pawley wrote:

The function is too long to copy.

I separated it into another trigger function with just the update
statement.

Here is the error -

ERROR: record "new" has no field "fluid_id"
SQL state: 42703
Context: SQL statement "update p_id.fluids
set fluid_short =
(select shape.text
from  shape, num_search
where (select st_within(shape.wkb_geometry,
st_geometryn(num_search.the_geom4, 1)) = 'true')
and text !~ '[0-9]')
-
Here is the trigger function -

CREATE OR REPLACE FUNCTION fluid_name()
  RETURNS trigger AS $$

begin
update p_id.fluids
set fluid_short =
(select shape.text
from  shape, num_search
where (select st_within(shape.wkb_geometry,
st_geometryn(num_search.the_geom4, 1)) = 'true')
and text !~ '[0-9]')
where p_id.fluids.fluid_id = NEW.fluid_id;


  RETURN NULL;
End;

$$  LANGUAGE plpgsql;

create trigger r_fluidname after insert on project.project
for each row execute procedure fluid_name();


Here is the trigger script -

-- Trigger: r_fluidname on project.project

-- DROP TRIGGER r_fluidname ON project.project;

CREATE TRIGGER r_fluidname
  AFTER INSERT
  ON project.project
  FOR EACH ROW
  EXECUTE PROCEDURE public.fluid_name();


Here is the function script -

-- Function: public.fluid_name()

-- DROP FUNCTION public.fluid_name();

CREATE OR REPLACE FUNCTION public.fluid_name()
  RETURNS trigger AS
$BODY$

begin
update p_id.fluids
set fluid_short =
(select shape.text
from  shape, num_search
where (select st_within(shape.wkb_geometry,
st_geometryn(num_search.the_geom4, 1)) = 'true')
and text !~ '[0-9]')
where p_id.fluids.fluid_id = NEW.fluid_id;


  RETURN NULL;
End;

$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION public.fluid_name()
  OWNER TO postgres;


Here is the triggering table -

-- Table: project.project

-- DROP TABLE project.project;

CREATE TABLE project.project
(
  p_id_id serial NOT NULL,
  p_id_name character varying(75),
  project_name character varying(75),
  project_id integer,
  number_of_processes integer,
  p_id_number character varying(30),
  CONSTRAINT project_pkey PRIMARY KEY (p_id_id ),
  CONSTRAINT name_ UNIQUE (p_id_name )
)
WITH (
  OIDS=FALSE
);
ALTER TABLE project.project
  OWNER TO postgres;

-- Trigger: pidnum on project.project

-- DROP TRIGGER pidnum ON project.project;


Help would be appreciated.


There is no fluid_id in the project.project table. The trigger NEW 
record only pulls from the table the trigger is attached to. You will 
have to find some other way of relating the project table to the fluids 
table.




Bob






--
Adrian Klaver
adrian.kla...@gmail.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to stop a query

2012-07-19 Thread Atri Sharma
On Thu, Jul 19, 2012 at 2:47 PM, younus  wrote:
> Hi,
>
> First :
>ps -ef | grep postgres
> and  kill -9 (PID of your query)
>
> Sec :
> select procpid, datname, usename, client_addr,  current_query from
> pg_stat_activity where current_query!='';
>
> and
>
> SELECT pg_cancel_backend(procpid);
>
>
>
> younus,
>
> --
> View this message in context: 
> http://postgresql.1045698.n5.nabble.com/How-to-stop-a-query-tp1924086p5717227.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general

I am not too sure if it is applicable, but have you tried Control-C?

Atri


-- 
Regards,

Atri
l'apprenant

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to stop a query

2012-07-19 Thread younus
Hi, 

Yes, I'm sure it works.

If you execute the query from another program (e.g. a Java program), you must use the
first solution [ps -ef | grep postgres and kill -9 (PID of your query)].

If you use a psql terminal and you're connected as postgres, you can use:
select procpid, datname, usename, client_addr, current_query from
pg_stat_activity where current_query != '';
SELECT pg_cancel_backend(procpid);




Younus.

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/How-to-stop-a-query-tp1924086p5717297.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to stop a query

2012-07-19 Thread Younus
Hi Scott,

thank you for your comment


2012/7/19 Scott Marlowe 

> On Thu, Jul 19, 2012 at 3:17 AM, younus  wrote:
> > Hi,
> >
> > First :
> >ps -ef | grep postgres
> > and  kill -9 (PID of your query)
>
> NEVER kill -9 a postgres process unless you've exhausted all other
> possibilities, as it forces a restart of all the other backends as
> well.  A plain kill (no -9) is usually all you need, and it doesn't
> cause all the other backends to restart and flush all shared memory.
>
> > Sec :
> > select procpid, datname, usename, client_addr,  current_query from
> > pg_stat_activity where current_query!='';
> >
> > and
> >
> > SELECT pg_cancel_backend(procpid);
>
> MUCH better way of doing things.
>


Re: [GENERAL] Trouble with NEW

2012-07-19 Thread Adrian Klaver

On 07/19/2012 08:41 AM, Bob Pawley wrote:

In all my reading of new and old I never made that connection.


It makes more sense if you know what NEW and OLD represent.

What follows is a simplification:
1)Postgres uses Multiversion Concurrency Control(MVCC). See here for 
brief intro:

http://www.postgresql.org/docs/9.0/interactive/mvcc-intro.html

2) As a result on:
A) INSERT. For each record you have only a corresponding NEW record 
that holds the values you are inserting.
b) UPDATE. In Postgres an update is a two part process where you delete 
the old record and insert the new record. In that case you have both an 
OLD and a NEW record representing the respective values.
c) DELETE. You are getting rid of a record and all you have is the OLD 
record representing the record you are deleting.


3) When you use a trigger or rule that uses the OLD, NEW records it only 
has access to those records for the table it is attached to.


4) If you want to pull information from another table, you either need 
to set up a FOREIGN KEY relationship that you can leverage or you need 
to do a query in the trigger function that pulls in the necessary 
information.
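In the OP's situation, point 4 might look like the following — a sketch only, since it assumes project.project and p_id.fluids share a key (here p_id_id), which the real schema would have to provide. It also drops the redundant `= 'true'` comparison, since st_within() already returns a boolean. This of course needs a live PostgreSQL/PostGIS database to run:

CREATE OR REPLACE FUNCTION fluid_name() RETURNS trigger AS $$
BEGIN
    -- NEW only carries project.project columns, so relate the rows
    -- through a shared key instead of the nonexistent NEW.fluid_id:
    UPDATE p_id.fluids f
       SET fluid_short = (SELECT s.text
                            FROM shape s, num_search n
                           WHERE st_within(s.wkb_geometry,
                                           st_geometryn(n.the_geom4, 1))
                             AND s.text !~ '[0-9]')
     WHERE f.p_id_id = NEW.p_id_id;   -- hypothetical shared key
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;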




Thanks Adrian

Bob







--
Adrian Klaver
adrian.kla...@gmail.com



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Default value of serial fields changes after restore

2012-07-19 Thread Luiz Damim
Today I found a strange behavior after restoring a PostgreSQL database: the
schema qualification in all serial fields' default values is trimmed out.

For example:

CREATE TABLE testschema.testtable
(
   id serial,
   name character varying(255),
   CONSTRAINT pk_testtable PRIMARY KEY (id)
)
WITH (
  OIDS = FALSE
)
;


SELECT a.attnum, n.nspname, c.relname, d.adsrc AS default_value
FROM pg_attribute AS a
JOIN pg_class AS c ON a.attrelid = c.oid
JOIN pg_namespace AS n ON c.relnamespace = n.oid
LEFT OUTER JOIN pg_attrdef AS d ON d.adrelid = c.oid AND d.adnum = a.attnum
WHERE a.attnum > 0
  AND n.nspname = 'testschema'
  AND c.relname = 'testtable'

The id's default_value is nextval('testschema.testtable_id_seq'::regclass).

After restore, the default_value changes to
nextval('testtable_id_seq'::regclass) and INSERTs start to fail as the
sequence can't be found in its schema.

*Backup*
$ pg_dump -F c -Z 9 -b -h localhost -U postgres -f backup dbname

*Restore*
$ pg_restore -U postgres -h localhost -l backup > backup.list
$ pg_restore -U postgres -h localhost --disable-triggers -O -d dbname -S
postgres -Fc -L backup.list backup

Is this some backup/restore problem? What am I doing wrong?

BTW, PostgreSQL version is 9.1.3 x64 running on Windows 7 (dev machine),
but can be reproduced on Linux x64 too.


Re: [GENERAL] Trouble with NEW

2012-07-19 Thread David Johnston
> 
> 4) If you want to pull information from another table, you either need to set
> up a FOREIGN KEY relationship that you can leverage or you need to do a
> query in the trigger function that pulls in the necessary information.
> 

I do not get where the "OR" comes from.  There is nothing magical about FOREIGN 
KEY that causes NEW to magically contain additional values.  Foreign key or not, 
you still have to "...do a query in the trigger function...".

David J.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Trouble with NEW

2012-07-19 Thread Adrian Klaver

On 07/19/2012 11:26 AM, David Johnston wrote:


4) If you want to pull information from another table, you either need to set
up a FOREIGN KEY relationship that you can leverage or you need to do a
query in the trigger function that pulls in the necessary information.



I do not get where the "OR" comes from.  There is nothing magical about FOREIGN KEY that 
causes NEW magically contain additional values.  Foreign Key or not you still have to "...do a 
query in the trigger function...".


I think there is :)
If you have a FK between the table you have the trigger on and the table you 
want to update, you already have the NEW or OLD values in the trigger 
table with which to find the appropriate record in the other table. This 
covers the OP's case. Without a FK you have to create your own relationship.




David J.





--
Adrian Klaver
adrian.kla...@gmail.com



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Default value of serial fields changes after restore

2012-07-19 Thread Tom Lane
Luiz Damim  writes:
> Today I found a strange behavior after restoring a PostgreSQL database: the
> schema of all serialfields default values are trimmed out.

I don't think anything's being "trimmed out".  It's the normal behavior
of regclass literals to not print the schema if the table (or sequence
in this case) is visible in the current search_path.

> After restore, default_value changes to
> nextval('testtable_id_seq'::regclass) and INSERT's start to fail as the
> sequence can´t be found on it's schema.

This claim is utter nonsense.  If you are having a problem it's not due
to the way regclass literals print.  Please show a complete example of
something failing.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Default value of serial fields changes after restore

2012-07-19 Thread Scott Marlowe
On Thu, Jul 19, 2012 at 1:13 PM, Tom Lane  wrote:
> Luiz Damim  writes:
>> After restore, default_value changes to
>> nextval('testtable_id_seq'::regclass) and INSERT's start to fail as the
>> sequence can´t be found on it's schema.
>
> This claim is utter nonsense.  If you are having a problem it's not due
> to the way regclass literals print.  Please show a complete example of
> something failing.

Is it possible the OP has an "alter user set search_path=..." in their
original db that's not getting backed up here because they're not
backing up globals?  Seems a more reasonable explanation to me.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to stop a query

2012-07-19 Thread Scott Marlowe
On Thu, Jul 19, 2012 at 3:17 AM, younus  wrote:
> Hi,
>
> First :
>ps -ef | grep postgres
> and  kill -9 (PID of your query)

NEVER kill -9 a postgres process unless you've exhausted all other
possibilities, as it forces a restart of all the other backends as
well.  A plain kill (no -9) is usually all you need, and it doesn't
cause all the other backends to restart and flush all shared memory.
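In signal terms (as I understand it): a plain kill sends SIGTERM, which is what pg_terminate_backend() sends; pg_cancel_backend() sends SIGINT, which cancels just the current query; and kill -9 is SIGKILL, which the backend cannot trap — hence the postmaster restarting everything. The difference between a catchable and an uncatchable signal is easy to see with any process standing in for a backend:

```shell
sleep 60 &                   # stand-in for a backend process
pid=$!
kill "$pid"                  # plain kill = SIGTERM: process can exit cleanly
wait "$pid" 2>/dev/null
kill -0 "$pid" 2>/dev/null && echo "still running" || echo "terminated"
# → terminated
```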

> Sec :
> select procpid, datname, usename, client_addr,  current_query from
> pg_stat_activity where current_query!='';
>
> and
>
> SELECT pg_cancel_backend(procpid);

MUCH better way of doing things.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Default value of serial fields changes after restore

2012-07-19 Thread Tom Lane
Scott Marlowe  writes:
> On Thu, Jul 19, 2012 at 1:13 PM, Tom Lane  wrote:
>> This claim is utter nonsense.  If you are having a problem it's not due
>> to the way regclass literals print.  Please show a complete example of
>> something failing.

> Is it possible the OP has an "alter user set search_path=..." in their
> original db that's not getting backed up here because they're not
> backing up globals?  Seems a more reasonable explanation to me.

That might explain why the literals look different than they used to.
But I still say that if he's getting errors on INSERT, they're coming
from some other cause.  A regclass literal is just an OID under the hood
--- it cannot result in a lookup error, because the lookup's already
been done.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] postgres 9 bind address for replication

2012-07-19 Thread Adam Crews
Hello,

I'm using pg 9.1.3 on CentOS 5 and have a few slave databases set up
using the built-in streaming replication.

On the slaves I set the “listen_addresses” config option to an ip
address for a virtual alias on my network interfaces.  The host has an
address of 10.1.1.10, and there is a virtual alias of 10.1.1.40 that
the slave postmaster binds to.

When the slave makes it connection to the master to start replication
the source address for the connection is the host address, not the
virtual alias address.  Connections appear to come from 10.1.1.10,
instead of the slave postmaster address of 10.1.1.40.

This seems like a bug to me.  I could understand that if the
postmaster is listening on all interfaces it should use whatever
the host's IP is, but in an instance where the
postmaster has been configured to listen on a specific address, it
seems like the call to start the replication should be passed that
address, so connections come from the slave postmaster's IP instead of
the host's.

Is there a config option that can be used to adjust this?  I've looked
in the docs, but haven't found one yet.

Is this perhaps a bug, or lack of feature?

Thanks
-Adam

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Problem setting environmental variables for postgresql or plpythonu

2012-07-19 Thread Mark Wynter
I'm hoping someone can help me out.   I'm wanting to run GRASS GIS from within 
a plpythonu function block.  But to run GRASS GIS externally, the following 
environment variables need to be available to the PostgreSQL server...

GISBASE='/usr/local/grass-6.4.3svn'
PATH='$PATH:$GISBASE/bin:$GISBASE/scripts:$GISBASE/lib'
PYTHONPATH='${PYTHONPATH}:$GISBASE/etc/python/'
PYTHONPATH='${PYTHONPATH}:$GISBASE/etc/python/grass'
PYTHONPATH='${PYTHONPATH}:$GISBASE/etc/python/grass/script'
LD_LIBRARY_PATH='$LD_LIBRARY_PATH:$GISBASE/lib'
GIS_LOCK='$$'
GISRC='$HOME/.grassrc6'

Reference:  http://grass.osgeo.org/wiki/GRASS_and_Python
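Two things are worth checking in the snippet above: with single quotes the shell never expands $GISBASE and friends, so the stored values are the literal dollar strings; and since the `environment` file doesn't evaluate shell, the variables have to reach the postmaster's environment already expanded — e.g. exported in the init script or in whatever environment launches pg_ctl. A sketch with double quotes and exports (the install path is the one assumed above):

```shell
export GISBASE=/usr/local/grass-6.4.3svn
export PATH="$PATH:$GISBASE/bin:$GISBASE/scripts"
export PYTHONPATH="${PYTHONPATH:+$PYTHONPATH:}$GISBASE/etc/python:$GISBASE/etc/python/grass:$GISBASE/etc/python/grass/script"
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$GISBASE/lib"
export GISRC="$HOME/.grassrc6"

# Quick sanity check that expansion actually happened:
case "$PATH" in *"$GISBASE/bin"*) echo "GISBASE on PATH" ;; esac
# → GISBASE on PATH
```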

So far I've added these variables to /etc/bash.bashrc, 
which makes them available to the postgres user (but not to the server or 
PL/Python).

And I've had no success adding them to...
/etc/postgresql/9.1/main/environment
#Shell commands are not evaluated.

Any suggestions would be appreciated...

Thanks
Mark
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] High CPU on read-only streaming replication server

2012-07-19 Thread Nick
On Jul 19, 1:23 pm, Nick  wrote:
> My read-only streaming replication servers are showing a much higher
> CPU load than I would expect.
>
> None of my queries are longer than 10ms.
>
> My application server is connecting to this server via pgbouncer.
>
> Other than that, its a pretty standard setup. I haven't toyed with too
> many settings besides the standard optimization settings.
>
> Any ideas what could be going on?

One more thing to point out, the top process that shows high CPU is...
COMMAND: postgres: dbuser mydatabase X.XX.XXX.XXX() idle
CPU%: 68%

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general