On Wed, Jul 18, 2012 at 9:42 PM, Brian McNally bmcna...@uw.edu wrote:
I don't get any rows returned back from that query. I'm running it while
connected to the DB in question. Am I supposed to substitute values for any
Try this one. Are there any COPY queries (current_query) or long-lasting
On 07/19/2012 01:52 PM, Amod Pandey wrote:
Thank you Craig for explaining in such detail. I am adding more
information and will see what more I can add.
$ulimit -a
core file size (blocks, -c) 0
So I assume there to be no core dump file.
Quite likely. Limits are inherited down
Hello,
C. We have one log file in UTF-8.
Pros: Log messages from all our clients can fit in it. We can use any
generic editor/viewer to open it.
Nothing changes for Linux (and other OSes with UTF-8 encoding).
Cons: All the strings written to the log file have to go through some
conversion function.
On 19 Jul 2012, at 24:20, Bob Pawley wrote:
When I substitute new.fluid_id for the actual fluid_id the expression returns
the right value.
Following is the table -
CREATE TABLE p_id.fluids
(
p_id_id integer,
fluid_id serial,
I think people meant the one on which the trigger fires ;)
Hello,
Implementing any of these isn't trivial - especially making sure
messages emitted to stderr from things like segfaults and dynamic
linker messages are always correct. Ensuring that the logging
collector knows when setlocale() has been called to change the
encoding and translation of
I am thinking about variant of C.
The problem with C is that converting from other encodings to UTF-8 is
not cheap, because it requires huge conversion tables. This may be a
serious problem on a busy server. Also it is possible that some
information is lost in this conversion. This is because
Hi,
In my searching I found several references (in pg-hackers, circa 2007)
concerning the implementation of the SQL:2003 GENERATED column features.
This does not seem to have made it into release? Any plans, etc?
Dan.
The initial issue was that the log file contains messages in different
encodings. So transcoding is performed already, but it's not
This is not true. Transcoding happens only when PostgreSQL is built
with --enable-nls option (default is no nls).
I'll restate the initial issue as I see it.
I have
And regarding mule internal encoding - reading about Mule
http://www.emacswiki.org/emacs/UnicodeEncoding I found:
/In future (probably Emacs 22), Mule will use an internal encoding
which is a UTF-8 encoding of a superset of Unicode. /
So I still see UTF-8 as a common denominator for all the
You can google for "encoding EUC_JP has no equivalent in UTF8" or
some such to find such an example. In this case PostgreSQL just throws
an error. For frontend/backend encoding conversion this is fine. But
what should we do for logs? Apparently we cannot throw an error here.
Unification is
Ok, maybe the time of a real universal encoding has not yet come. Then
maybe we should just add a new parameter log_encoding (UTF-8 by
default) to postgresql.conf, and use this encoding consistently
within the logging_collector.
If this encoding is not available then fall back to 7-bit ASCII.
What
Sorry, that was an inaccurate phrase. I meant: if the conversion to this
encoding is not available. For example, when we have a database in
EUC_JP and log_encoding set to Latin1. I think that we can even fall
back to UTF-8, as we can convert all encodings to it (with the
exceptions that you noticed).
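The trade-off being discussed can be sketched outside the server (the sample string is hypothetical): transcoding EUC_JP log text to UTF-8 is lossless for ordinary characters, while a forced conversion to Latin1 has to substitute replacement characters for what it cannot represent, which is exactly the information loss mentioned above.

```python
text = "データベース"  # a Japanese log message, representable in EUC-JP

# Transcoding EUC-JP bytes (as written by a Japanese backend) to UTF-8
euc_bytes = text.encode("euc_jp")
utf8_bytes = euc_bytes.decode("euc_jp").encode("utf-8")
assert utf8_bytes.decode("utf-8") == text  # round-trip is lossless here

# A log_encoding of Latin1 cannot represent Japanese at all; one fallback
# is to substitute replacement characters rather than throw an error
latin1_bytes = text.encode("latin-1", errors="replace")
print(latin1_bytes)  # b'??????' -- the original characters are gone
```

The same `errors="replace"` style of fallback is what a logging collector would need if the configured log_encoding cannot represent a client's message.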
On 19 July 2012 10:40, Alexander Law exclus...@gmail.com wrote:
Ok, maybe the time of a real universal encoding has not yet come. Then
maybe we should just add a new parameter log_encoding (UTF-8 by
default) to postgresql.conf, and use this encoding consistently
within the logging_collector.
If
Yikes, messed up my grammar a bit I see!
On 19 July 2012 10:58, Alban Hertroys haram...@gmail.com wrote:
I like Craig's idea of adding the client encoding to the log lines. A
possible problem with that (I'm not an encoding expert) is that a log
line like that will contain data about the
Sorry, that was an inaccurate phrase. I meant: if the conversion to this
encoding is not available. For example, when we have a database in
EUC_JP and log_encoding set to Latin1. I think that we can even fall
back to UTF-8, as we can convert all encodings to it (with the
exceptions that you noticed).
I have serious problems with a slow link between continents, and twice a
week I have to manually re-establish the slave by running the following
script:
--
psql
I like Craig's idea of adding the client encoding to the log lines. A
possible problem with that (I'm not an encoding expert) is that a log
line like that will contain data about the database server meta-data
(log time, client encoding, etc) in the database default encoding and
database data (the
On 19 July 2012 13:50, Alexander Law exclus...@gmail.com wrote:
I like Craig's idea of adding the client encoding to the log lines. A
possible problem with that (I'm not an encoding expert) is that a log
line like that will contain data about the database server meta-data
(log time, client
On Thu, Jul 19, 2012 at 3:45 PM, Edson Richter edsonrich...@hotmail.com wrote:
Can the rsync above be optimized? Both servers are CentOS 5 with OpenVPN
Yes, it can be optimized. You can turn compression on by specifying
-z. Compression level 1 is the one that performs best for my
needs. You
On Thu, Jul 19, 2012 at 4:27 PM, Sergey Konoplev
sergey.konop...@postgresql-consulting.com wrote:
On Thu, Jul 19, 2012 at 3:45 PM, Edson Richter edsonrich...@hotmail.com
wrote:
Can the rsync above be optimized? Both servers are CentOS 5 with OpenVPN
Yes, it can be optimized. You can turn
On 07/19/2012 03:24 PM, Tatsuo Ishii wrote:
BTW, I'm not wedded to the mule-internal encoding. What we need here is a
super-encoding which could include any existing encoding without
information loss. For this purpose, I think we can even invent a new
encoding (maybe something like very first
On 07/19/2012 04:58 PM, Alban Hertroys wrote:
On 19 July 2012 10:40, Alexander Law exclus...@gmail.com wrote:
Ok, maybe the time of a real universal encoding has not yet come. Then
maybe we should just add a new parameter log_encoding (UTF-8 by
default) to postgresql.conf, and use this
OK, understood, thanks.
On Wed, Jul 18, 2012 at 10:15 PM, Tom Lane t...@sss.pgh.pa.us wrote:
James W. Wilson jww1...@gmail.com writes:
I'm confused. I thought foreign data wrappers were required to create
database links from one Postgresql server to another.
contrib/dblink doesn't require
Hello,
I'm trying to fix psycopg2 issue #113: network disconnection not
handled correctly in async mode.
If I'm between PQsendQuery and PQgetResult and an error is detected
(say we don't know yet whether it's application-related or
connection-related), is there a way to abort
The function is too long to copy.
I separated it into another trigger function with just the update statement.
Here is the error -
ERROR: record new has no field fluid_id
SQL state: 42703
Context: SQL statement update p_id.fluids
set fluid_short =
(select shape.text
from shape,
On 07/19/2012 06:43 AM, Bob Pawley wrote:
The function is too long to copy.
I separated it into another trigger function with just the update
statement.
Here is the error -
ERROR: record new has no field fluid_id
SQL state: 42703
Context: SQL statement update p_id.fluids
set fluid_short =
Daniel McGreal daniel.mcgr...@redbite.com writes:
In my searching I found several references (in pg-hackers, circa 2007)
concerning the implementation of the SQL:2003 GENERATED column features.
This does not seem to have made it into release? Any plans, etc?
AFAIK nobody is working on such a
I have serious problems with a slow link between continents, and twice a
week I have to manually re-establish the slave by running the
following script:
Hi,
First :
ps -ef | grep postgres
and kill -9 (PID of your query)
Sec :
select procpid, datname, usename, client_addr, current_query from
pg_stat_activity where current_query!='IDLE';
and
SELECT pg_cancel_backend(procpid);
younus,
Thank you Craig for explaining in such detail. I am adding more
information and will see what more I can add.
$ulimit -a
core file size (blocks, -c) 0
So I assume there to be no core dump file.
If I set 'ulimit -c unlimited', will it generate a core dump if there is
another occurrence?
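As a sketch of the step being asked about: the limit has to be raised in the shell (or init script) that starts postgres, since limits are inherited by child processes.

```shell
# Remove the 0-block cap on core file size for this shell and its children;
# start (or restart) postgres from this same shell afterwards.
ulimit -c unlimited

# Verify it took effect -- prints "unlimited"
ulimit -c
```

A core file will then be written into the backend's working directory (normally the data directory) on the next crash.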
Check your hotmail inbox. You have an answer there.
On Thu, Jul 19, 2012 at 3:35 PM, Edson Richter rich...@simkorp.com.brwrote:
I have serious problems with a slow link between continents, and twice a week
I have to manually re-establish the slave by running the following script:
In all my reading of NEW and OLD I never made that connection.
Thanks Adrian
Bob
-Original Message-
From: Adrian Klaver
Sent: Thursday, July 19, 2012 6:50 AM
To: Bob Pawley
Cc: Alan Hodgson ; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Trouble with NEW
On 07/19/2012
On Thu, Jul 19, 2012 at 2:47 PM, younus younus.essa...@gmail.com wrote:
Hi,
First :
ps -ef | grep postgres
and kill -9 (PID of your query)
Sec :
select procpid, datname, usename, client_addr, current_query from
pg_stat_activity where current_query!='IDLE';
and
SELECT
Hi,
Yes, I'm sure it works.
If you execute the query from another program (e.g. a Java program), you must use the
first solution [ps -ef | grep postgres and kill -9 (PID of your query)].
If you use the psql terminal and you're connected as postgres, you can use
select procpid, datname, usename,
Hi Scott,
thank you for your comment
2012/7/19 Scott Marlowe scott.marl...@gmail.com
On Thu, Jul 19, 2012 at 3:17 AM, younus younus.essa...@gmail.com wrote:
Hi,
First :
ps -ef | grep postgres
and kill -9 (PID of your query)
NEVER kill -9 a postgres process unless you've
On 07/19/2012 08:41 AM, Bob Pawley wrote:
In all my reading of NEW and OLD I never made that connection.
It makes more sense if you know what NEW and OLD represent.
What follows is a simplification:
1) Postgres uses Multiversion Concurrency Control (MVCC). See here for a
brief intro:
Today I found a strange behavior after restoring a PostgreSQL database: the
schema of all serial fields' default values is trimmed out.
For example:
CREATE TABLE testschema.testtable
(
id serial,
name character varying(255),
CONSTRAINT pk_testtable PRIMARY KEY (id)
)
WITH (
OIDS =
4) If you want to pull information from another table, you either need to set
up a FOREIGN KEY relationship that you can leverage or you need to do a
query in the trigger function that pulls in the necessary information.
I do not get where the OR comes from. There is nothing magical about
On 07/19/2012 11:26 AM, David Johnston wrote:
4) If you want to pull information from another table, you either need to set
up a FOREIGN KEY relationship that you can leverage or you need to do a
query in the trigger function that pulls in the necessary information.
I do not get where the OR
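Point 4 can be sketched as a trigger function (all table and column names hypothetical, loosely modeled on the fluids example in this thread): NEW only carries the columns of the table the trigger fires on, so anything from another table has to be fetched with a query.

```sql
-- NEW here is a row of p_id.fluids, so NEW.fluid_id is valid;
-- the value from the other table is pulled in with a SELECT.
CREATE OR REPLACE FUNCTION set_fluid_short() RETURNS trigger AS $$
BEGIN
    SELECT s.text
      INTO NEW.fluid_short
      FROM shape s
     WHERE s.fluid_id = NEW.fluid_id;  -- hypothetical join column
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER fluids_set_short
    BEFORE INSERT ON p_id.fluids
    FOR EACH ROW EXECUTE PROCEDURE set_fluid_short();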
Luiz Damim lui...@gmail.com writes:
Today I found a strange behavior after restoring a PostgreSQL database: the
schema of all serial fields' default values is trimmed out.
I don't think anything's being trimmed out. It's the normal behavior
of regclass literals to not print the schema if the
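Tom's point can be illustrated in a scratch database (names hypothetical): a regclass literal prints the schema qualifier only when the relation is not visible on the current search_path, so the "trimmed" default is purely cosmetic.

```sql
CREATE SCHEMA testschema;
CREATE TABLE testschema.testtable (id serial PRIMARY KEY);

SET search_path = public;
SELECT 'testschema.testtable_id_seq'::regclass;
-- prints: testschema.testtable_id_seq

SET search_path = testschema, public;
SELECT 'testschema.testtable_id_seq'::regclass;
-- prints: testtable_id_seq  (schema omitted because it is now visible)
```

Either way the stored default refers to the same sequence OID, so nextval() finds it regardless of how the literal happens to print.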
On Thu, Jul 19, 2012 at 1:13 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Luiz Damim lui...@gmail.com writes:
After restore, default_value changes to
nextval('testtable_id_seq'::regclass) and INSERTs start to fail as the
sequence can't be found in its schema.
This claim is utter nonsense. If
On Thu, Jul 19, 2012 at 3:17 AM, younus younus.essa...@gmail.com wrote:
Hi,
First :
ps -ef | grep postgres
and kill -9 (PID of your query)
NEVER kill -9 a postgres process unless you've exhausted all other
possibilities, as it forces a restart of all the other backends as
well. A
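The safer alternatives alluded to here, as a sketch (run as a superuser; the pid is hypothetical, and `procpid` was later renamed `pid`):

```sql
-- Find the offending backend and its query (9.x-era column names)
SELECT procpid, datname, usename, current_query
  FROM pg_stat_activity
 WHERE current_query <> '<IDLE>';

-- First try cancelling just the running query...
SELECT pg_cancel_backend(12345);

-- ...and only if that fails, terminate that one backend. Unlike kill -9,
-- this does not force a restart of every other backend.
SELECT pg_terminate_backend(12345);
```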
Scott Marlowe scott.marl...@gmail.com writes:
On Thu, Jul 19, 2012 at 1:13 PM, Tom Lane t...@sss.pgh.pa.us wrote:
This claim is utter nonsense. If you are having a problem it's not due
to the way regclass literals print. Please show a complete example of
something failing.
Is it possible
Hello,
I’m using pg 9.1.3 on CentOS 5 and have a few slave databases setup
using the built in streaming replication.
On the slaves I set the “listen_addresses” config option to an ip
address for a virtual alias on my network interfaces. The host has an
address of 10.1.1.10, and there is a
I'm hoping someone can help me out. I want to run GRASS GIS from within
a plpythonu function block. But to run GRASS GIS externally, the following
environment variables need to be available to the PostgreSQL server...
GISBASE='/usr/local/grass-6.4.3svn'
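One approach (a sketch, runnable outside the server; the paths come from the post and only GISBASE is shown): a plpythonu function can set `os.environ` itself, and any external tool it launches afterwards inherits the modified environment of the backend process.

```python
import os
import subprocess

# Hypothetical GRASS paths; inside a plpythonu block these assignments
# would modify the backend's environment the same way.
os.environ["GISBASE"] = "/usr/local/grass-6.4.3svn"
os.environ["PATH"] = os.environ["GISBASE"] + "/bin:" + os.environ.get("PATH", "")

# A child process spawned after the assignments sees the new variables
out = subprocess.run(["sh", "-c", "echo $GISBASE"],
                     capture_output=True, text=True)
print(out.stdout.strip())  # /usr/local/grass-6.4.3svn
```

The alternative is to export the variables in the server's startup script (or systemd unit) so every backend starts with them already set.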