Jorge Arévalo writes:
> old_context = MemoryContextSwitchTo(fcinfo->flinfo->fn_mcxt);
> p = palloc(100);
> MemoryContextSwitchTo(old_context);
Why are you doing that?
> Should I free the memory allocated for p? I'm getting memory leaks
> when I don't free the memory, and they
ljb writes:
> What syntax or operator did I (accidentally) invoke by putting parentheses
> around my column list?
> SELECT (a, b, c) FROM mytable...
> It gets me a single result column with comma-separated values in
> parentheses (see 2nd SELECT below). I can't find an explanation in the
>
What syntax or operator did I (accidentally) invoke by putting parentheses
around my column list?
SELECT (a, b, c) FROM mytable...
It gets me a single result column with comma-separated values in
parentheses (see 2nd SELECT below). I can't find an explanation in the
PostgreSQL manual. It doe
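For what it's worth, `(a, b, c)` in a select list is a row constructor — shorthand for `ROW(a, b, c)` — so the whole thing comes back as one composite-typed column rather than three columns. A minimal sketch (the table and values are made up to match the question):

```sql
-- Hypothetical table for illustration.
CREATE TABLE mytable (a int, b int, c text);
INSERT INTO mytable VALUES (1, 2, 'x');

SELECT a, b, c FROM mytable;      -- three columns: 1 | 2 | x
SELECT (a, b, c) FROM mytable;    -- one composite column: (1,2,x)
SELECT ROW(a, b, c) FROM mytable; -- same as the previous line
```

That is why the result shows up as a single column of comma-separated values in parentheses: it is the text form of an anonymous row value.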
Hello,
My C function:

PG_FUNCTION_INFO_V1(my_function);

Datum my_function(PG_FUNCTION_ARGS)
{
    MemoryContext old_context;
    int *p = NULL;
    float f = 0.0;

    /* Switch to the function's long-lived memory context. */
    old_context = MemoryContextSwitchTo(fcinfo->flinfo->fn_mcxt);
    p = palloc(100);
    MemoryContextSwitchTo(old_context);

    /* do some other stuff */
    PG_RETURN_FLOAT4(f);
}
On 4/22/2011 4:03 PM, SUBHAM ROY wrote:
> Can we measure the number of Physical I/Os or Disk I/Os for a particular
> query in Postgres?
> In Oracle we can do this with the help of a TraceFile & TKPROF.
> --
> Thank You,
> Subham Roy,
> CSE IIT Bombay.
Nope.
-Andy
On Friday, April 22, 2011 2:06:52 pm Bob Pawley wrote:
> If it's empty space at the beginning it goes on for a long time.
>
> Can I send the table as an sql dump as an attachment with the list??
If you want you can send off list to me.
>
> Bob
>
--
Adrian Klaver
adrian.kla...@gmail.com
--
If it's empty space at the beginning it goes on for a long time.
Can I send the table as an sql dump as an attachment with the list??
Bob
-Original Message-
From: Adrian Klaver
Sent: Friday, April 22, 2011 1:29 PM
To: Bob Pawley
Cc: pgsql-general@postgresql.org ; Scott Marlowe
Subject
We're trying to figure out how to account for our disk space
consumption in a database.
$ sudo du -shx /var/lib/postgresql/8.4/main/
1.9G    /var/lib/postgresql/8.4/main/
But when we query Postgresql to find out how much disk space is
actually being used by the various databases, we get a total o
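The per-database numbers being compared against `du` usually come from the built-in size functions; a sketch (the table name in the second query is an assumption):

```sql
-- Total on-disk size of each database in the cluster.
SELECT datname, pg_size_pretty(pg_database_size(datname))
FROM pg_database;

-- Size of one table, including its indexes and TOAST data.
SELECT pg_size_pretty(pg_total_relation_size('some_table'));
```

Note that `pg_database_size()` does not count WAL (`pg_xlog/`), so some gap between the summed database sizes and `du` on the data directory is expected.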
Can we measure the number of Physical I/Os or Disk I/Os for a particular
query in Postgres?
In Oracle we can do this with the help of a TraceFile & TKPROF.
--
Thank You,
Subham Roy,
CSE IIT Bombay.
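Postgres has no per-query trace file comparable to TKPROF, but the cumulative statistics views do distinguish blocks read from disk from blocks found in shared buffers, and can be sampled before and after running a query; a rough sketch:

```sql
-- Blocks read from disk vs. found in shared buffers, per user table.
SELECT relname, heap_blks_read, heap_blks_hit
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC;
```

Caveat: `heap_blks_read` counts reads that missed shared_buffers; the OS page cache may still have served them without physical I/O, so it is only an upper bound.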
Hey folks,
Having not had to worry about character encoding in the past we
blithely used the SQL_ASCII encoding and had the application do
the input filtering. We have reached the point where we would
like to have the DB enforce the character encoding for us. We
have chosen to go with LATIN9 enc
On Friday, April 22, 2011 12:52:28 pm Bob Pawley wrote:
> Hi
>
> A little more research.
>
> I accessed the problem table as a remote connection using PGAdmin.
>
> I selected the cell that shows as being null and copied and pasted the
> contents into Word.
>
> The geom IS there.
>
> Using this
Hi
A little more research.
I accessed the problem table as a remote connection using PGAdmin.
I selected the cell that shows as being null and copied and pasted the
contents into Word.
The geom IS there.
Using this method the geom is also present, but not visible, in the table I
am accessi
On 04/22/2011 08:00 AM, Geoffrey Myers wrote:
We are moving our databases to new hardware soon, so we felt it would
be a good time to get the encoding correct. Our databases are
currently SQL_ASCII and we plan to move them to UTF8.
We are in the same boat, fortunately only on one older server w
Sorry, but I'm not able to understand how to use pgsnap to measure query
performance. I have installed pgsnap, but when I run it, it shows an
error:
*Connecting to test database...
Adding some HTML files...
Getting Misc informations...
Getting General informations...
sh: pg_controldata:
On Fri, Apr 22, 2011 at 9:16 AM, Geoffrey Myers
wrote:
> Vick Khera wrote:
>
> The database's enforcement of the encoding should be the last layer that
>> does so. Your applications should be enforcing strict utf-8 encoding from
>> start to finish. Once this is done, and the old data already in
On 04/22/2011 09:16 AM, Geoffrey Myers wrote:
Vick Khera wrote:
On Fri, Apr 22, 2011 at 11:00 AM, Geoffrey Myers
<li...@serioustechnology.com> wrote:
Here's our problem. We planned on moving databases a few at a time.
Problem is, there is a process that pushes data from one database to
Vick Khera wrote:
On Fri, Apr 22, 2011 at 11:00 AM, Geoffrey Myers
<li...@serioustechnology.com> wrote:
Here's our problem. We planned on moving databases a few at a time.
Problem is, there is a process that pushes data from one database to
another. If this process attempts
On Fri, Apr 22, 2011 at 11:16 AM, Geoffrey Myers wrote:
> Totally agree. Still, the question remains, why not leave it as SQL_ASCII?
>
you have no guarantees that the data stored within is utf-8. that is all.
if you can make such guarantees from within your application, then you have
some conf
On Friday, April 22, 2011 8:00:08 am Geoffrey Myers wrote:
>
> What is the harm in leaving our databases SQL_ASCII encoded?
SQL_ASCII is basically no encoding at all. The world is slowly but surely
moving to Unicode; sooner or later you are going to hit the
unknown-encoding/Unicode wall.
Probably be
On Fri, Apr 22, 2011 at 11:00 AM, Geoffrey Myers <
li...@serioustechnology.com> wrote:
> Here's our problem. We planned on moving databases a few at a time.
> Problem is, there is a process that pushes data from one database to
> another. If this process attempts to push data from a SQL_ASCII da
We are moving our databases to new hardware soon, so we felt it would be
a good time to get the encoding correct. Our databases are currently
SQL_ASCII and we plan to move them to UTF8.
So, as previously noted, there are certain characters that won't load
into a UTF8 database from a dump of t
Yes it shows only one server on the remote computer.
I can send the table as an sql dump if the list will accept an attachment.
The dumped table contains the geom information that I can't see on the
remote connection.
I restored that same dumped table into a different local database. Somewher
> On Fri, Apr 22, 2011 at 8:20 PM, wrote:
>>> On Fri, Apr 22, 2011 at 7:07 PM, wrote:
>>> In the pg_dumpall backup process, I get this error. Does this help?
>>>
>>
>> Well, not really - it's just another incarnation of the problem we've
>> already seen. PostgreSQL reads the data, and at some p
On Fri, Apr 22, 2011 at 8:20 PM, wrote:
>> On Fri, Apr 22, 2011 at 7:07 PM, wrote:
>> In the pg_dumpall backup process, I get this error. Does this help?
>>
>
> Well, not really - it's just another incarnation of the problem we've
> already seen. PostgreSQL reads the data, and at some point it
> On Fri, Apr 22, 2011 at 7:07 PM, wrote:
> In the pg_dumpall backup process, I get this error. Does this help?
>
Well, not really - it's just another incarnation of the problem we've
already seen. PostgreSQL reads the data, and at some point it finds out it
needs to allocate 4294967293B of memo
On Fri, Apr 22, 2011 at 7:07 PM, wrote:
>> On Fri, Apr 22, 2011 at 12:06 PM, Phoenix Kiula
>> wrote:
>>> On Fri, Apr 22, 2011 at 12:51 AM, Tomas Vondra wrote:
On 21.4.2011 07:16, Phoenix Kiula wrote:
> Tomas,
>
> I did a crash log with the strace for PID of the index comma
> On Fri, Apr 22, 2011 at 12:06 PM, Phoenix Kiula
> wrote:
>> On Fri, Apr 22, 2011 at 12:51 AM, Tomas Vondra wrote:
>>> On 21.4.2011 07:16, Phoenix Kiula wrote:
Tomas,
I did a crash log with the strace for PID of the index command as you
suggested.
Here's the ou
On 21/04/2011 14:33, Vibhor Kumar wrote:
On Apr 21, 2011, at 4:23 PM, Tiruvenkatasamy Baskaran wrote:
Which version of postgresql supports replication on RHEL6?
RHEL version : 2.6.32-71.el6.x86_64
Why are you re-posting your question, if it has been answered?
Only guessing, but maybe the
[ Please don't top-post ]
> 20. cd /usr/local/pgsql/
> 21. tar -czf data.tar.gz data/
After step 21, it seems you forgot to execute the pg_stop_backup() command.
With this, I would recommend you to follow the documentation given below:
http://wiki.postgresql.org/wiki/Streaming_R
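The numbered steps being quoted look like the file-system-level base backup procedure; its usual shape, sketched here against a running server (the label and paths are assumptions, and this is not runnable without one):

```shell
psql -c "SELECT pg_start_backup('base_backup_label');"
cd /usr/local/pgsql/
tar -czf data.tar.gz data/
psql -c "SELECT pg_stop_backup();"   # the call the numbered list stopped short of
```

Without pg_stop_backup(), the backup is never marked complete and the standby cannot reach a consistent state from it.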