[Please don't top-post.]
20. cd /usr/local/pgsql/
21. tar -czf data.tar.gz data/
After step 21, it seems you forgot to execute the pg_stop_backup() command.
Given that, I would recommend following the documentation below:
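The complete low-level base-backup sequence looks roughly like this (a sketch only; the connection options and the backup label are placeholders, and it assumes a 8.4/9.0-era server where pg_start_backup/pg_stop_backup are the relevant calls):

```shell
# 1. Mark the start of the backup (the second argument requests a fast checkpoint).
psql -U postgres -c "SELECT pg_start_backup('base_backup', true);"

# 2. Archive the data directory while the server keeps running.
cd /usr/local/pgsql
tar -czf data.tar.gz data/

# 3. Mark the end of the backup so the server can archive the final
#    WAL segment needed to make the tarball consistent.
psql -U postgres -c "SELECT pg_stop_backup();"
```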
On 21/04/2011 14:33, Vibhor Kumar wrote:
On Apr 21, 2011, at 4:23 PM, Tiruvenkatasamy Baskaran wrote:
Which version of postgresql supports replication on RHEL6?
RHEL version : 2.6.32-71.el6.x86_64
Why are you re-posting your question if it has already been answered?
Only guessing, but maybe
On Fri, Apr 22, 2011 at 12:06 PM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
On Fri, Apr 22, 2011 at 12:51 AM, Tomas Vondra t...@fuzzy.cz wrote:
On 21.4.2011 07:16, Phoenix Kiula wrote:
Tomas,
I did a crash log with the strace for PID of the index command as you
suggested.
Here's
On Fri, Apr 22, 2011 at 7:07 PM, t...@fuzzy.cz wrote:
On Fri, Apr 22, 2011 at 12:06 PM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
On Fri, Apr 22, 2011 at 12:51 AM, Tomas Vondra t...@fuzzy.cz wrote:
On 21.4.2011 07:16, Phoenix Kiula wrote:
Tomas,
I did a crash log with the strace
On Fri, Apr 22, 2011 at 7:07 PM, t...@fuzzy.cz wrote:
In the pg_dumpall backup process, I get this error. Does this help?
Well, not really - it's just another incarnation of the problem we've
already seen. PostgreSQL reads the data, and at some point it finds out it
needs to allocate
On Fri, Apr 22, 2011 at 8:20 PM, t...@fuzzy.cz wrote:
On Fri, Apr 22, 2011 at 7:07 PM, t...@fuzzy.cz wrote:
In the pg_dumpall backup process, I get this error. Does this help?
Well, not really - it's just another incarnation of the problem we've
already seen. PostgreSQL reads the data, and
Yes, it shows only one server on the remote computer.
I can send the table as an sql dump if the list will accept an attachment.
The dumped table contains the geom information that I can't see on the
remote connection.
I restored that same dumped table into a different local database.
We are moving our databases to new hardware soon, so we felt it would be
a good time to get the encoding correct. Our databases are currently
SQL_ASCII and we plan to move them to UTF8.
So, as previously noted, there are certain characters that won't load
into a UTF8 database from a dump of
On Fri, Apr 22, 2011 at 11:00 AM, Geoffrey Myers
li...@serioustechnology.com wrote:
Here's our problem. We planned on moving databases a few at a time.
Problem is, there is a process that pushes data from one database to
another. If this process attempts to push data from a SQL_ASCII
On Friday, April 22, 2011 8:00:08 am Geoffrey Myers wrote:
What is the harm in leaving our databases SQL_ASCII encoded?
SQL_ASCII is basically no encoding at all. The world is slowly but surely moving
to Unicode; sooner or later you are going to hit the unknown-encoding/Unicode
wall.
Probably
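Before restoring a SQL_ASCII dump into a UTF8 database, it can help to check up front whether the dump is already valid UTF-8; iconv makes a cheap pre-flight check, since it exits non-zero on the first invalid byte sequence. A sketch (the sample file and its contents are fabricated for illustration; point it at your real pg_dump output instead):

```shell
# Create a sample dump containing a byte that can never appear in UTF-8
# (0xFF, written as octal \377). Stands in for real pg_dump output.
printf 'valid line\n\377invalid line\n' > /tmp/sample_dump.sql

# iconv fails as soon as it hits input that is not valid UTF-8.
if iconv -f UTF-8 -t UTF-8 /tmp/sample_dump.sql > /dev/null 2>&1; then
    echo "dump is clean UTF-8"
else
    echo "dump contains non-UTF-8 bytes"
fi
# prints: dump contains non-UTF-8 bytes
```

The same check on a pure-ASCII dump passes, since ASCII is a subset of UTF-8.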
On Fri, Apr 22, 2011 at 11:16 AM, Geoffrey Myers g...@serioustechnology.com
wrote:
Totally agree. Still, the question remains, why not leave it as SQL_ASCII?
You have no guarantees that the data stored within is UTF-8. That is all.
If you can make such guarantees from within your
Vick Khera wrote:
On Fri, Apr 22, 2011 at 11:00 AM, Geoffrey Myers
li...@serioustechnology.com mailto:li...@serioustechnology.com wrote:
Here's our problem. We planned on moving databases a few at a time.
Problem is, there is a process that pushes data from one database to
On 04/22/2011 09:16 AM, Geoffrey Myers wrote:
Vick Khera wrote:
On Fri, Apr 22, 2011 at 11:00 AM, Geoffrey Myers
li...@serioustechnology.com mailto:li...@serioustechnology.com wrote:
Here's our problem. We planned on moving databases a few at a time.
Problem is, there is a process that
On Fri, Apr 22, 2011 at 9:16 AM, Geoffrey Myers
g...@serioustechnology.comwrote:
Vick Khera wrote:
The database's enforcement of the encoding should be the last layer that
does so. Your applications should be enforcing strict utf-8 encoding from
start to finish. Once this is done, and the
Sorry, but I'm not able to understand how to use pgsnap to measure query
performance. I have installed pgsnap, but when I run it, it shows an
error:
*Connecting to test database...
Adding some HTML files...
Getting Misc informations...
Getting General informations...
sh: pg_controldata:
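An "sh: pg_controldata: ..." error usually means pgsnap shells out to pg_controldata and cannot find it on the PATH. A sketch of the usual fix, assuming a source install under /usr/local/pgsql (a guess; adjust the directory to your actual layout):

```shell
# Put the PostgreSQL binaries on the PATH before running pgsnap.
# /usr/local/pgsql/bin is an assumed location, not known from the thread.
export PATH=/usr/local/pgsql/bin:$PATH

# Verify the binary is now resolvable.
command -v pg_controldata || echo "still not found - adjust the path above"
```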
On 04/22/2011 08:00 AM, Geoffrey Myers wrote:
We are moving our databases to new hardware soon, so we felt it would
be a good time to get the encoding correct. Our databases are
currently SQL_ASCII and we plan to move them to UTF8.
We are in the same boat, fortunately only on one older server
Hi
A little more research.
I accessed the problem table as a remote connection using PGAdmin.
I selected the cell that shows as being null and copied and pasted the
contents into Word.
The geom IS there.
Using this method the geom is also present, but not visible, in the table I
am
On Friday, April 22, 2011 12:52:28 pm Bob Pawley wrote:
Hi
A little more research.
I accessed the problem table as a remote connection using PGAdmin.
I selected the cell that shows as being null and copied and pasted the
contents into Word.
The geom IS there.
Using this method the
Hey folks,
Having not had to worry about character encoding in the past, we
blithely used the SQL_ASCII encoding and had the application do
the input filtering. We have reached the point where we would
like to have the DB enforce the character encoding for us. We
have chosen to go with LATIN9
Can we measure the number of physical I/Os (disk I/Os) for a particular
query in Postgres?
In Oracle we can do this with the help of a trace file and TKPROF.
--
Thank You,
Subham Roy,
CSE IIT Bombay.
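PostgreSQL has no direct TKPROF equivalent, but from 9.0 onward EXPLAIN (ANALYZE, BUFFERS) reports per-node buffer activity, which is the closest built-in counter; note that a "read" there is a request to the kernel and may still be served from the OS page cache rather than the disk. A sketch (the database and table names are placeholders):

```shell
psql -d test -c "EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM mytable;"
# Look for lines such as:  Buffers: shared hit=120 read=4512
#   hit  = pages found in PostgreSQL's own buffer cache
#   read = pages requested from the kernel (OS cache or physical disk)
```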
We're trying to figure out how to account for our disk space
consumption in a database.
$ sudo du -shx /var/lib/postgresql/8.4/main/
1.9G    /var/lib/postgresql/8.4/main/
But when we query Postgresql to find out how much disk space is
actually being used by the various databases, we get a total
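To compare du's number with what PostgreSQL itself accounts for, pg_database_size() is the usual starting point; du will normally come out larger, because it also counts WAL segments under pg_xlog/ and temporary files. A sketch:

```shell
# Size of each database as PostgreSQL accounts for it.
psql -c "SELECT datname, pg_size_pretty(pg_database_size(datname)) FROM pg_database;"

# What the filesystem sees; also includes WAL (pg_xlog/) and temp files.
sudo du -shx /var/lib/postgresql/8.4/main/
```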
If it's empty space at the beginning it goes on for a long time.
Can I send the table as an SQL dump attachment to the list?
Bob
-Original Message-
From: Adrian Klaver
Sent: Friday, April 22, 2011 1:29 PM
To: Bob Pawley
Cc: pgsql-general@postgresql.org ; Scott Marlowe
On Friday, April 22, 2011 2:06:52 pm Bob Pawley wrote:
If it's empty space at the beginning it goes on for a long time.
Can I send the table as an SQL dump attachment to the list?
If you want you can send off list to me.
Bob
--
Adrian Klaver
adrian.kla...@gmail.com
On 4/22/2011 4:03 PM, SUBHAM ROY wrote:
Can we measure the number of physical I/Os (disk I/Os) for a particular
query in Postgres?
In Oracle we can do this with the help of a trace file and TKPROF.
--
Thank You,
Subham Roy,
CSE IIT Bombay.
Nope.
-Andy
--
Sent via pgsql-general mailing list
Hello,
My C function:
#include "postgres.h"
#include "fmgr.h"

PG_FUNCTION_INFO_V1(my_function);
Datum my_function(PG_FUNCTION_ARGS)
{
    MemoryContext old_context;
    int *p = NULL;
    float f = 0.0;

    old_context = MemoryContextSwitchTo(fcinfo->flinfo->fn_mcxt);
    p = palloc(100);
    MemoryContextSwitchTo(old_context);
    // do some other stuff
What syntax or operator did I (accidentally) invoke by putting parentheses
around my column list?
SELECT (a, b, c) FROM mytable...
It gets me a single result column with comma-separated values in
parentheses (see 2nd SELECT below). I can't find an explanation in the
PostgreSQL manual. It
ljb ljb9...@pobox.com writes:
What syntax or operator did I (accidentally) invoke by putting parentheses
around my column list?
SELECT (a, b, c) FROM mytable...
It gets me a single result column with comma-separated values in
parentheses (see 2nd SELECT below). I can't find an
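For the record, parentheses around a select list form a row constructor: (a, b, c) is shorthand for ROW(a, b, c), which yields one composite-typed (record) column, and that is why the values come back comma-separated inside parentheses. A quick illustration with literal values:

```shell
psql -c "SELECT (1, 'x', true);"     # one column of type record: (1,x,t)
psql -c "SELECT ROW(1, 'x', true);"  # identical; ROW is optional with more than one field
```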
Jorge Arévalo jorge.arev...@deimos-space.com writes:
old_context = MemoryContextSwitchTo(fcinfo->flinfo->fn_mcxt);
p = palloc(100);
MemoryContextSwitchTo(old_context);
Why are you doing that?
Should I free the memory allocated for p? I'm getting memory leaks
when I don't