Re: [GENERAL] Postgres on SSD

2011-08-10 Thread Amitabh Kant
There have been several discussions about SSDs in recent months, although none
specific to Fusion-io drives.

See http://archives.postgresql.org/pgsql-general/2011-04/msg00460.php . You
can search the archives for more such references.


Amitabh

2011/8/11 Ondrej Ivanič 

> Hi,
>
> 2011/8/10 Tomas Vondra :
> > On 10 Srpen 2011, 1:17, Ondrej Ivanič wrote:
> >> - What needs to be changed at Postgres/Operating system level? The
> >> obvious one is to change random_page_cost (now: 2) and seq_page_cost
> >> (now: 4). What else should I look at?
> >
> > Are you sure about this? I'm not quite sure setting seq_page_cost=4 and
> > random_page_cost=2 makes sense. Usually seq_page_cost is lower than
> > random_page_cost, so I wonder why have you set it like that.
>
> Oops! Well spotted, Tomas! The actual values are:
> random_page_cost = 2
> seq_page_cost = 1
>
> --
> Ondrej Ivanic
> (ondrej.iva...@gmail.com)
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Convert mysql to postgresql

2011-08-10 Thread Håvard Wahl Kongsgård
try
http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL


On Thu, Aug 11, 2011 at 7:32 AM, AI Rumman  wrote:
> I have to convert some mysql queries to postgresql.
> Are there any good tools for this task?
>
>



[GENERAL] Convert mysql to postgresql

2011-08-10 Thread AI Rumman
I have to convert some mysql queries to postgresql.
Are there any good tools for this task?
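Besides the conversion tools on the wiki, many of the rewrites are mechanical. A hypothetical sketch of a few common differences (table and column names invented purely for illustration):

```sql
-- MySQL original (hypothetical):
--   SELECT `name`, IFNULL(`score`, 0) FROM `players` LIMIT 10, 5;

-- PostgreSQL equivalent:
SELECT "name", COALESCE(score, 0)   -- backticks become double quotes; IFNULL becomes COALESCE
FROM players
LIMIT 5 OFFSET 10;                  -- MySQL's "LIMIT offset, count" becomes LIMIT/OFFSET
```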


Re: [GENERAL] is max connections in a database table somewhere

2011-08-10 Thread Greg Smith

On 08/10/2011 02:46 PM, Geoffrey Myers wrote:

Is the max connections value in a system table somewhere?


If you intend to do anything with the value you probably want one of 
these forms:


SELECT CAST(current_setting('max_connections') AS integer);
SELECT CAST(setting AS integer) FROM pg_settings WHERE 
name='max_connections';


The setting comes back as a text field whether you use current_setting or 
the pg_settings view (which isn't a real table; under the hood it's calling 
a system function), which is why the cast to integer is needed.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: [GENERAL] postgresql server crash on windows 7 when using plpython

2011-08-10 Thread Craig Ringer

On 11/08/2011 1:00 AM, c k wrote:

I have renamed uuid-ossp.dll to 0uuid-ossp.dll in postgresql's lib
directory. I also found that postgresql's bin directory is not
included in the PATH.


It doesn't have to be. On Windows, the directory containing the current 
executable is implicitly the first path entry. Since postgres.exe lives 
in the same directory as uuid-ossp.dll, that's the copy of the DLL 
that'll be called.



Then I started postgresql again and called the same plpython function
again, and again the server crashed without any details in the log.
I also searched for the above DLL or a similar one in python's installation
but did not find it; the only one available was not a DLL but a .py file.


OK, so maybe Python doesn't use uuid-ossp but its own implementation in 
pure Python. I'll pull out a Windows box and check.



Recent log contains following lines.



*Fatal Python error: PyThreadState_Get: no current thread*


Hmm, that's interesting. Thanks for supplying the error message; looks 
like I was probably on entirely the wrong track because I was forced to 
guess with not enough information.


If I get a chance I'll have a play with the function you posted and see 
if I can reproduce the crash on my Win7 box.


In the meantime, if you want, you can try to collect some more 
information about the crash according to these instructions:


http://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Windows

--
Craig Ringer



Re: [GENERAL] Postgres on SSD

2011-08-10 Thread Ondrej Ivanič
Hi,

2011/8/10 Tomas Vondra :
> On 10 Srpen 2011, 1:17, Ondrej Ivanič wrote:
>> - What needs to be changed at Postgres/Operating system level? The
>> obvious one is to change random_page_cost (now: 2) and seq_page_cost
>> (now: 4). What else should I look at?
>
> Are you sure about this? I'm not quite sure setting seq_page_cost=4 and
> random_page_cost=2 makes sense. Usually seq_page_cost is lower than
> random_page_cost, so I wonder why have you set it like that.

Oops! Well spotted, Tomas! The actual values are:
random_page_cost = 2
seq_page_cost = 1
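For what it's worth, on SSD storage the gap between random and sequential reads is much smaller than on spinning disks, so a common starting point in postgresql.conf looks like the sketch below. The exact numbers are illustrative only; benchmark on your own hardware before settling on them.

```
# postgresql.conf -- illustrative values for SSD storage, not a recommendation
seq_page_cost = 1.0
random_page_cost = 1.1        # SSDs make random reads nearly as cheap as sequential
effective_io_concurrency = 4  # allow some prefetching on devices that handle it
```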

-- 
Ondrej Ivanic
(ondrej.iva...@gmail.com)



[GENERAL] streaming replication: one problem & several questions

2011-08-10 Thread Lonni J Friedman
Greetings,
I've got three Linux systems (each with Fedora15-x86_64 running
PostgreSQL-9.0.4).  I'm attempting to get a basic streaming
replication setup going with one master & two standby servers.  At
this point, the replication portion appears to be working.  I can run
an 'update' statement on the master, and view the result with a
'SELECT' on both standby servers.

I've mostly been working off the information presented here:
http://wiki.postgresql.org/wiki/Binary_Replication_Tutorial
http://wiki.postgresql.org/wiki/Streaming_Replication

plus the official Postgresql website documentation, with copious
googling for other sources when something didn't make sense.

First the problem.  On *only* one of the two standby servers, I'm
seeing errors like the following whenever I issue any SQL commands on
the master which write (insert, update, etc) to the database:
LOG:  invalid record length at 8/7A20
FATAL:  terminating walreceiver process due to administrator command
LOG:  invalid record length at 8/7AB0
LOG:  streaming replication successfully connected to primary
LOG:  invalid record length at 8/7B20
FATAL:  terminating walreceiver process due to administrator command
LOG:  record with zero length at 8/7BB0
LOG:  streaming replication successfully connected to primary
LOG:  record with incorrect prev-link 8/7958 at 8/7DB0
LOG:  streaming replication successfully connected to primary

The thing that makes this even more confusing is that the data seems
to remain synchronized on both standby servers even with that error,
so I can't tell if I'm merely missing the implication of the errors,
or if they're somehow harmless?  Maybe it's able to resume streaming
replication only because I have wal_keep_segments=128 ?  I googled a
bit on this, and found a few other references to these errors,
including this recent one which suggested that too much network
latency might be the problem:
http://permalink.gmane.org/gmane.comp.db.postgresql.general/153445

I should note that the standby that is exhibiting this problem is
running inside of a virtual machine, while the standby without the
problem is running on real HW.  Whenever I get to the point where I
want to push the entire setup into production, it will all be running
on real HW.

Now a few unrelated questions:
0) I've successfully set up WAL archiving on the master, and set
archive_timeout=61.  However, what I'm seeing is that new files are
not getting generated every 61 seconds, but instead only when some
kind of SQL is invoked which writes to the database, or every 305
seconds (whichever comes first).  I know this is the case because I
wrote a script which is being invoked via the archive_command option,
and that script is logging both the timestamp and filenames each time
it is invoked.  How can I debug this?
1) Both of the wiki links above comment that the restore_command may
not be necessary if wal_keep_segments is large enough (mine is set to
128).  I was going to set up the restore_command anyway, as I'm not yet
confident enough about streaming replication and failover with
postgresql to take chances, although the fact that I have two standby
servers makes this setup a bit more complex.  However, can anyone
comment on whether it's ever truly safe 100% of the time to run
without a restore_command ?
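For reference, a standby can combine streaming with a restore_command fallback. A minimal recovery.conf sketch (hostnames, users, and paths are placeholders, not values from this setup):

```
# recovery.conf on the standby -- illustrative values only
standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=replicator'
# Fallback: fetch archived WAL if the needed segment has already aged
# out of wal_keep_segments on the master.
restore_command = 'cp /path/to/archive/%f %p'
```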

thanks!



Re: [GENERAL] Problem with planner

2011-08-10 Thread Cédric Villemain
2011/8/9 hubert depesz lubaczewski :
> On Tue, Aug 09, 2011 at 04:08:39PM -0400, Tom Lane wrote:
>> I suppose what's going on here is that the "state" and "ending_tsz"
>> columns are highly correlated, such that there are lots of 'active'
>> items but hardly any of them ended more than a day ago?  If so,
>
> yes, that's correct.
>
>> you're going to have to rethink the representation somehow to get
>> good results, because there's no way the planner will see this until
>> we have cross-column stats in some form.
>>
>> The least invasive fix that I can think of offhand is to set up an
>> index (non-partial) on the expression
>>
>>       case when state = 'active' then ending_tsz else null end
>>
>> and phrase the query as
>>
>>       WHERE (case when state = 'active' then ending_tsz else null end) <= 
>> (now() - '1 day'::interval)
>>
>> This should result in condensing the stats about active items'
>> ending_tsz into a format the planner can deal with, assuming
>> you're running a PG version that will keep and use stats on
>> expression indexes.
>
> it's 8.3.11.
> I solved the problem by adding "enable_bitmapscan = false" (and keeping
> the query in original format, with subselect) which caused the plan to
> be ok.
>
> but I'm much more interested to understand why pg chooses *not* to use
> index which is tailored specifically for the query - it has exactly
> matching where clause, and it indexes the column that we use for
> comparison.
>
> the thing is - i solved the problem for now. I might add new index the
> way you suggest, and it might help. but it's is very unnerving that
> postgresql will just choose to ignore specially made index, perfectly
> matching the criteria in query.
>
> since I can't test it - is there any chance (Cédric suggested something
> like this) that some newer version has more logic to try harder to use
> best index?

I wondered if the logic for choosing between a bitmap scan and an
index scan is the same in both 8.3 and HEAD.
It looks like it is (except that now you can put the not-wanted index
on another tablespace and increase the cost of accessing it, which is
another not-so-pretty way to work around the issue).
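Tom's suggestion quoted above can be written out concretely against the table names from this thread. Treat this as an untested sketch, not a verified fix:

```sql
-- Non-partial expression index that condenses the active items' ending_tsz
-- into statistics the planner can use (per Tom Lane's suggestion above).
CREATE INDEX objects_active_ending_idx
    ON objects ((CASE WHEN state = 'active' THEN ending_tsz ELSE NULL END));

-- The query must use the identical expression for the index (and its
-- statistics) to apply:
SELECT count(*)
FROM objects
WHERE (CASE WHEN state = 'active' THEN ending_tsz ELSE NULL END)
      <= now() - interval '1 day';
```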

>
> Best regards,
>
> depesz
>
> --
> The best thing about modern society is how easy it is to avoid contact with 
> it.
>                                                             http://depesz.com/
>



-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation



Re: [GENERAL] Is max connections in a table somewhere?

2011-08-10 Thread Adrian Klaver
On Wednesday, August 10, 2011 11:47:25 am Geoffrey Myers wrote:
> Is max connections in any table in the database I can access?

SELECT current_setting('max_connections');
 current_setting 
-----------------
 100

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Is max connections in a table somewhere?

2011-08-10 Thread Geoffrey Myers

Scott Marlowe wrote:

On Wed, Aug 10, 2011 at 12:47 PM, Geoffrey Myers
 wrote:

Is max connections in any table in the database I can access?


No, it's in the postgresql.conf file, which is in various places
depending on how pg was installed.  for debian / ubuntu it's in
/etc/postgresql/8.x/main for the default cluster.  It's in
/var/lib/pgsql/data for RHEL 5.  Not sure about other distros.


Yeah, I knew it was in the postgresql.conf file, but since I've got a 
piece of code that's already connected to the database, I figured I'd 
get it from the database rather than open the file and read it from there.


--
Geoffrey Myers
Myers Consulting Inc.
770.592.1651



Re: [GENERAL] Is max connections in a table somewhere?

2011-08-10 Thread Andy Colson

On 8/10/2011 1:49 PM, Guillaume Lelarge wrote:

On Wed, 2011-08-10 at 13:41 -0500, Andy Colson wrote:

On 8/10/2011 1:47 PM, Geoffrey Myers wrote:

Is max connections in any table in the database I can access?


Not really a table, but it is selectable:

show max_connections;

use "show all" to see everything.



Actually, it's also available in a table (pg_settings).




Well thats cool, I did not realize there was a table.

-Andy



Re: [GENERAL] Is max connections in a table somewhere?

2011-08-10 Thread Ray Stell
On Wed, Aug 10, 2011 at 02:47:25PM -0400, Geoffrey Myers wrote:
> Is max connections in any table in the database I can access?

edbstore=> \d pg_catalog.pg_settings;
  View "pg_catalog.pg_settings"
   Column   |  Type   | Modifiers 
------------+---------+-----------
 name       | text    | 
 setting    | text    | 
 unit       | text    | 
 category   | text    | 
 short_desc | text    | 
 extra_desc | text    | 
 context    | text    | 
 vartype    | text    | 
 source     | text    | 
 min_val    | text    | 
 max_val    | text    | 
 enumvals   | text[]  | 
 boot_val   | text    | 
 reset_val  | text    | 
 sourcefile | text    | 
 sourceline | integer | 
View definition:
 SELECT a.name, a.setting, a.unit, a.category, a.short_desc, a.extra_desc, 
a.context, a.vartype, a.source, a.min_val, a.max_val, a.enumvals, a.boot_val, 
a.reset_val, a.sourcefile, a.sourceline
   FROM pg_show_all_settings() a(name, setting, unit, category, short_desc, 
extra_desc, context, vartype, source, min_val, max_val, enumvals, boot_val, 
reset_val, sourcefile, sourceline);
Rules:
 pg_settings_n AS
ON UPDATE TO pg_settings DO INSTEAD NOTHING
 pg_settings_u AS
ON UPDATE TO pg_settings
   WHERE new.name = old.name DO  SELECT set_config(old.name, new.setting, 
false) AS set_config




Re: [GENERAL] Is max connections in a table somewhere?

2011-08-10 Thread Guillaume Lelarge
On Wed, 2011-08-10 at 13:41 -0500, Andy Colson wrote:
> On 8/10/2011 1:47 PM, Geoffrey Myers wrote:
> > Is max connections in any table in the database I can access?
> 
> Not really a table, but it is selectable:
> 
> show max_connections;
> 
> use "show all" to see everything.
> 

Actually, it's also available in a table (pg_settings).


-- 
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com




[GENERAL] is max connections in a database table somewhere

2011-08-10 Thread Geoffrey Myers

Is the max connections value in a system table somewhere?

Thanks.

--
Geoffrey Myers
Myers Consulting Inc.
770.592.1651



Re: [GENERAL] Is max connections in a table somewhere?

2011-08-10 Thread Scott Marlowe
On Wed, Aug 10, 2011 at 12:47 PM, Geoffrey Myers
 wrote:
> Is max connections in any table in the database I can access?

No, it's in the postgresql.conf file, which is in various places
depending on how pg was installed.  for debian / ubuntu it's in
/etc/postgresql/8.x/main for the default cluster.  It's in
/var/lib/pgsql/data for RHEL 5.  Not sure about other distros.



Re: [GENERAL] Is max connections in a table somewhere?

2011-08-10 Thread Andy Colson

On 8/10/2011 1:47 PM, Geoffrey Myers wrote:

Is max connections in any table in the database I can access?


Not really a table, but it is selectable:

show max_connections;

use "show all" to see everything.

-Andy



[GENERAL] Is max connections in a table somewhere?

2011-08-10 Thread Geoffrey Myers

Is max connections in any table in the database I can access?
--
Until later, Geoffrey

"I predict future happiness for America if they can prevent
the government from wasting the labors of the people under
the pretense of taking care of them."
- Thomas Jefferson



Re: [GENERAL] postgresql server crash on windows 7 when using plpython

2011-08-10 Thread c k
I have renamed uuid-ossp.dll to 0uuid-ossp.dll in postgresql's lib
directory. I also found that postgresql's bin directory is not
included in the PATH.
Then I started postgresql again and called the same plpython function again,
and again the server crashed without any details in the log.
I also searched for the above DLL or a similar one in python's installation
but did not find it; the only one available was not a DLL but a .py file.
Recent log contains following lines.

2011-08-10 22:25:38 IST LOG:  database system was shut down at 2011-08-10
14:40:36 IST
2011-08-10 22:25:38 IST FATAL:  the database system is starting up
2011-08-10 22:25:38 IST LOG:  database system is ready to accept connections
2011-08-10 22:25:38 IST LOG:  autovacuum launcher started
*Fatal Python error: PyThreadState_Get: no current thread*

This application has requested the Runtime to terminate it in an unusual
way.
Please contact the application's support team for more information.
2011-08-10 22:27:49 IST LOG:  server process (PID 2584) exited with exit
code 3
2011-08-10 22:27:49 IST LOG:  terminating any other active server processes
2011-08-10 22:27:49 IST WARNING:  terminating connection because of crash of
another server process
2011-08-10 22:27:49 IST DETAIL:  The postmaster has commanded this server
process to roll back the current transaction and exit, because another
server process exited abnormally and possibly corrupted shared memory.
2011-08-10 22:27:49 IST HINT:  In a moment you should be able to reconnect
to the database and repeat your command.
2011-08-10 22:27:49 IST WARNING:  terminating connection because of crash of
another server process
2011-08-10 22:27:49 IST DETAIL:  The postmaster has commanded this server
process to roll back the current transaction and exit, because another
server process exited abnormally and possibly corrupted shared memory.
2011-08-10 22:27:49 IST HINT:  In a moment you should be able to reconnect
to the database and repeat your command.
2011-08-10 22:27:49 IST WARNING:  terminating connection because of crash of
another server process
2011-08-10 22:27:49 IST DETAIL:  The postmaster has commanded this server
process to roll back the current transaction and exit, because another
server process exited abnormally and possibly corrupted shared memory.
2011-08-10 22:27:49 IST HINT:  In a moment you should be able to reconnect
to the database and repeat your command.
2011-08-10 22:27:49 IST WARNING:  terminating connection because of crash of
another server process
2011-08-10 22:27:49 IST DETAIL:  The postmaster has commanded this server
process to roll back the current transaction and exit, because another
server process exited abnormally and possibly corrupted shared memory.
2011-08-10 22:27:49 IST HINT:  In a moment you should be able to reconnect
to the database and repeat your command.
2011-08-10 22:27:49 IST LOG:  all server processes terminated;
reinitializing
2011-08-10 22:27:58 IST FATAL:  pre-existing shared memory block is still in
use
2011-08-10 22:27:58 IST HINT:  Check if there are any old server processes
still running, and terminate them.

Is it related to python threads?

Regards,
Chaitanya Kulkarni

On Wed, Aug 10, 2011 at 7:34 PM, Adrian Klaver wrote:

> On Tuesday, August 09, 2011 10:13:17 pm c k wrote:
> > Here is the actual function.
> > create or replace function software.python_test() returns text as
> > $body$
> > import sys
> > from uuid import getnode as get_mac
> > mac = get_mac()
> > return mac
> > $body$
> > language plpythonu volatile security definer;
> >
> > When running the same code from the python prompt, it runs correctly
> > without any error.
>
> The above function runs correctly in Postgres 9.0.3 on my Linux machine. I
> would
> say Craig's post about multiple versions of uuid causing the problem is the
> answer to your problem.
>
>
> >
> >
> > Chaitanya Kulkarni
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>


Re: [GENERAL] PLPGSQL Generic query in trigger doubt

2011-08-10 Thread Ioana Danes
Hi Mauro,

If you try to determine what fields were changed you can check this post:

http://jaime2ndquadrant.blogspot.com/

It might work for you.

Ioana

--- On Wed, 8/10/11, Mauro  wrote:
Hi, good morning list
I'm writing a generic trigger in plpgsql to provide a system log for my 
system, but I'm stuck on the following problem:
Example:
TABLE STRUCT:
table1
   fldA    VARCHAR
   fldB    VARCHAR
   fldC    VARCHAR
FUNCTION:
DECLARE
   myrecord RECORD; -- record that will store field names from 
internal postgres tables
   fieldtest    NAME; -- actual field name parsing
BEGIN
   -- Generic function to automate the log of changes
   
   -- 1st, discover the field names from a table
   FOR myrecord IN
   SELECT
  att.attname
 FROM
  pg_attribute att,
  pg_class cls
 WHERE
  cls.oid = att.attrelid
  AND att.attnum > 0
  AND cls.relname = TG_RELNAME limit 1
   LOOP
  -- storing the actual field name
  fieldtest = myrecord.attname;
  
  /* Here I'd like to parse the 'fieldtest' variable to test if
the new value is different from the old value. The problem is:
 Variable name: fieldtest
 Variable content: fldA
 How can I test the two records (new and old)?
 -- new.fieldtest = fieldtest is not a field name in the new record
 -- new."fieldtest" = fieldtest is not a field name in the new record
 -- new.(fieldtest) = plpgsql cannot parse this
 -- 'new.' || fieldtest = this is a string and cannot be evaluated
  */


   END LOOP;
   
   -- Returning
   RETURN NEW;
END;




 Mauro Gonçalves





Re: [GENERAL] postgresql server crash on windows 7 when using plpython

2011-08-10 Thread Adrian Klaver
On Tuesday, August 09, 2011 10:13:17 pm c k wrote:
> Here is the actual function.
> create or replace function software.python_test() returns text as
> $body$
> import sys
> from uuid import getnode as get_mac
> mac = get_mac()
> return mac
> $body$
> language plpythonu volatile security definer;
> 
> When running the same code from the python prompt, it runs correctly without
> any error.

The above function runs correctly in Postgres 9.0.3 on my Linux machine. I 
would say Craig's post about multiple versions of uuid causing the problem is 
the answer to your problem.


> 
> 
> Chaitanya Kulkarni
> 


-- 
Adrian Klaver
adrian.kla...@gmail.com



[GENERAL] Need to purge applied log from arch_replicate directory

2011-08-10 Thread Alex Lai

Dear all,

I am testing postgres streaming replication under 9.0.4.  The master and 
slave have been replicating smoothly.  On the master, I have a directory 
"/path/to/arch_replicate/" used by the slave to access and apply the 
logs.  The arch_replicate directory has been growing without any applied 
logs being purged.  What is the best way to manage the logs to save disk 
space?  I also found that my postgres installation does not have the 
pg_archivecleanup utility.  Where do I find this utility?  How do I use it?  
Your help would be greatly appreciated.
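For reference, pg_archivecleanup ships as a contrib module in 9.0, so it may need the contrib package for your platform installed. A typical use on the standby is via archive_cleanup_command in recovery.conf; the sketch below reuses the archive path from the question and is illustrative only:

```
# recovery.conf on the standby -- removes archived WAL segments that are no
# longer needed (%r expands to the name of the last restartpoint's WAL file)
archive_cleanup_command = 'pg_archivecleanup /path/to/arch_replicate %r'
```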


--
Best regards,


Alex Lai





Re: [GENERAL] PLPGSQL Generic query in trigger doubt

2011-08-10 Thread Merlin Moncure
On Wed, Aug 10, 2011 at 8:38 AM, Mauro  wrote:
> Hi, good morning list
> I'm writing a generic trigger in plpgsql to provide a system log for my
> system, but I'm stuck on the following problem:
> Example:
> TABLE STRUCT:
> table1
>    fldA    VARCHAR
>    fldB    VARCHAR
>    fldC    VARCHAR
> FUNCTION:
> DECLARE
>    myrecord RECORD; -- record that will store field names from
> internal postgres tables
>    fieldtest    NAME; -- actual field name parsing
> BEGIN
>    -- Generic function to automate the log of changes
>
>    -- 1st, discover the field names from a table
>    FOR myrecord IN
>    SELECT
>   att.attname
>  FROM
>   pg_attribute att,
>   pg_class cls
>  WHERE
>   cls.oid = att.attrelid
>   AND att.attnum > 0
>   AND cls.relname = TG_RELNAME limit 1
>    LOOP
>   -- storing the actual field name
>   fieldtest = myrecord.attname;
>
>   /*
>  Here I'd like to parse the 'fieldtest' variable to test if
> the new value is different from the old value. The problem is:
>  Variable name: fieldtest
>  Variable content: fldA
>  How can I test the two records (new and old)?
>  -- new.fieldtest = fieldtest is not a field name in the new record
>  -- new."fieldtest" = fieldtest is not a field name in the new record
>  -- new.(fieldtest) = plpgsql cannot parse this
>  -- 'new.' || fieldtest = this is a string and cannot be evaluated
>   */
>
>
>    END LOOP;
>
>    -- Returning
>    RETURN NEW;
> END;

This is a FAQ.  See archives (for example, here:
http://www.mail-archive.com/pgsql-hackers@postgresql.org/msg182249.html).

A couple quick points:
*) plpgsql does not directly support record access like that.
*) you can compare new vs old directly without breaking down to fields
*) record into hstore is probably the easiest approach to dynamic
access of record fields by field name -- then you can drop the catalog
query

merlin
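A minimal sketch of the hstore approach Merlin mentions. It requires the hstore contrib module on 9.0 or later, where hstore(record) and the hstore difference operator are available; treat it as an untested illustration, not a finished audit trigger:

```sql
-- Generic trigger function: report which columns changed, using hstore
-- to access record fields by name (no catalog query needed).
CREATE OR REPLACE FUNCTION log_changes() RETURNS trigger AS $$
DECLARE
    h_old hstore;
    h_new hstore;
    col   text;
BEGIN
    h_old := hstore(OLD);
    h_new := hstore(NEW);
    -- h_new - h_old keeps only the pairs whose values differ
    FOR col IN SELECT skeys(h_new - h_old) LOOP
        RAISE NOTICE 'column % changed: % -> %',
            col, h_old -> col, h_new -> col;
    END LOOP;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Attach it to a table, e.g.:
-- CREATE TRIGGER table1_log BEFORE UPDATE ON table1
--     FOR EACH ROW EXECUTE PROCEDURE log_changes();
```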



[GENERAL] PLPGSQL Generic query in trigger doubt

2011-08-10 Thread Mauro
Hi, good morning list

I'm writing a generic trigger in plpgsql to provide a system log for my system, 
but I'm stuck on the following problem:

Example:

TABLE STRUCT:
table1
   fldA    VARCHAR
   fldB    VARCHAR
   fldC    VARCHAR

FUNCTION:

DECLARE
   myrecord RECORD; -- record that will store field names from 
internal postgres tables
   fieldtest    NAME; -- actual field name parsing
BEGIN
   -- Generic function to automate the log of changes
   
   -- 1st, discover the field names from a table
   FOR myrecord IN
   SELECT
  att.attname
 FROM
  pg_attribute att,
  pg_class cls
 WHERE
  cls.oid = att.attrelid
  AND att.attnum > 0
  AND cls.relname = TG_RELNAME limit 1
   LOOP
  -- storing the actual field name
  fieldtest = myrecord.attname;
  

  /*
 Here I'd like to parse the 'fieldtest' variable to test if 
the new value is different from the old value. The problem is:
 Variable name: fieldtest
 Variable content: fldA

 How can I test the two records (new and old)?
 -- new.fieldtest = fieldtest is not a field name in the new record
 -- new."fieldtest" = fieldtest is not a field name in the new record
 -- new.(fieldtest) = plpgsql cannot parse this
 -- 'new.' || fieldtest = this is a string and cannot be evaluated

  */



   END LOOP;
   
   -- Returning
   RETURN NEW;
END;




 

Mauro Gonçalves

Re: [GENERAL] Pgadmin goes missing in Ubuntu

2011-08-10 Thread Chetan Suttraway
On Tue, Aug 9, 2011 at 7:10 PM, Adrian Klaver wrote:

> On Tuesday, August 09, 2011 2:18:47 am Adarsh Sharma wrote:
> > Dear all,
> >
> > I installed postgresql from  a postgresplus-8.4.1-2-linux-x64 binary and
> > it is working properly in the system.Yesterday i do some work in it.
> >
> > But today when I restart the system, pgadmin goes missing from
> > Accessories > .
> > I don't know the reason of this .Please let me know how to solve this.
> > Any other way to open pgadmin.
>
> Open a terminal window and type pgadmin3.
>
> >
> >
> > Thanks
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>
>

The script that launches pgadmin is:
/opt/PostgreSQL/9.0/scripts/launchpgadmin.sh

so you can either create a shortcut to this script or to pgadmin3 (whose
location can be seen in the above script),
or add a new menu item to the left panel.

Just right click "applications"->"edit menus"->"new menu" / "new item" and
point to the script.


Regards,
Chetan


-- 
EnterpriseDB Corporation
The Enterprise PostgreSQL Company

 Website: www.enterprisedb.com
EnterpriseDB Blog : http://blogs.enterprisedb.com
Follow us on Twitter : http://www.twitter.com/enterprisedb


Re: [GENERAL] Problem with planner

2011-08-10 Thread hubert depesz lubaczewski
On Tue, Aug 09, 2011 at 05:18:38PM -0400, Tom Lane wrote:
> hubert depesz lubaczewski  writes:
> > I solved the problem by adding "enable_bitmapscan = false" (and keeping
> > the query in original format, with subselect) which caused the plan to
> > be ok.
> 
> I doubt that solution is any more robust than what you had before ---
> in particular, it's likely to fall back to seqscans.
> 
> > but I'm much more interested to understand why pg chooses *not* to use
> > index which is tailored specifically for the query - it has exactly
> > matching where clause, and it indexes the column that we use for
> > comparison.
> 
> Because the planner thinks it will have to pull a huge number of rows
> from the index.  Whether the index is "tailored" for the query
> is irrelevant if it looks more expensive to use than a seqscan.


The query with bitmap scans enabled finally finished:

   QUERY PLAN
------------------------------------------------------------------------------
 Aggregate  (cost=9117479.68..9117479.69 rows=1 width=0) (actual 
time=24964197.316..24964197.317 rows=1 loops=1)
   ->  Bitmap Heap Scan on objects  (cost=326375.14..9097597.32 rows=7952942 
width=0) (actual time=24804152.598..24964197.207 rows=48 loops=1)
 Recheck Cond: (state = 'active'::text)
 Filter: (ending_tsz <= (now() - '1 day'::interval))
 ->  Bitmap Index Scan on objects_stat_user_id_creation_tsz  
(cost=0.00..324386.90 rows=9754574 width=0) (actual 
time=24788705.254..24788705.254 rows=10582798 loops=1)
   Index Cond: (state = 'active'::text)
 Total runtime: 24964211.224 ms
(7 rows)

In the meantime, based on your other mail, I created this index:

create index concurrently depesz_test_idx on objects (state, ending_tsz) WHERE 
state = 'active'::text;

and now it's being used by plans:
                                            QUERY PLAN
------------------------------------------------------------------------------------------------------
 Aggregate  (cost=8510568.24..8510568.25 rows=1 width=0)
   ->  Bitmap Heap Scan on objects  (cost=183260.11..8490232.49 rows=8134302 width=0)
         Recheck Cond: ((state = 'active'::text) AND (ending_tsz <= (now() - '1 day'::interval)))
         ->  Bitmap Index Scan on depesz_test_idx  (cost=0.00..181226.54 rows=8134302 width=0)
               Index Cond: ((state = 'active'::text) AND (ending_tsz <= (now() - '1 day'::interval)))
(5 rows)

but when I disable bitmap scans it reverts back to a seq scan.
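Tom's point about the estimated row count can be sketched with rough cost arithmetic. This is a simplification of the planner's model using PostgreSQL's default cost constants; the heap page count is a hypothetical figure chosen only to show the relative magnitudes, not taken from the actual table:

```python
# Rough sketch of the planner's trade-off, not the real cost model.
# Default cost constants:
seq_page_cost = 1.0
random_page_cost = 4.0
cpu_tuple_cost = 0.01

table_pages = 1_500_000      # hypothetical heap size in pages
estimated_rows = 7_952_942   # planner's estimate from the plan above
actual_rows = 48             # what the query really returned

# Sequential scan: every page read once, in order.
seqscan = table_pages * seq_page_cost + estimated_rows * cpu_tuple_cost

# Plain index scan, worst case: one random heap fetch per matching row.
indexscan = estimated_rows * random_page_cost

print(f"seqscan   ~ {seqscan:,.0f}")
print(f"indexscan ~ {indexscan:,.0f}")
```

With ~8M estimated matches the per-row random fetches dwarf the sequential read, so the "tailored" index looks expensive; had the estimate been anywhere near the actual 48 rows, the index scan would have won easily.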

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [ADMIN] postgresql server crash on windows 7 when using plpython

2011-08-10 Thread Scott Marlowe
On Wed, Aug 10, 2011 at 1:23 AM, c k  wrote:
> When I try to re-execute it, it says 'no connection to the server'. When I
> checked the logs I found
>
> 2011-08-09 19:46:00 IST LOG:  database system was interrupted; last known up
> at 2011-08-09 19:45:17 IST
> 2011-08-09 19:46:00 IST LOG:  database system was not properly shut down;
> automatic recovery in progress
> 2011-08-09 19:46:00 IST FATAL:  the database system is starting up
> 2011-08-09 19:46:00 IST LOG:  consistent recovery state reached at
> 0/420B8C00
> 2011-08-09 19:46:00 IST LOG:  record with zero length at 0/420B8C00
> 2011-08-09 19:46:00 IST LOG:  redo is not required
> 2011-08-09 19:46:00 IST LOG:  database system is ready to accept connections
> 2011-08-09 19:46:01 IST LOG:  autovacuum launcher started
> Fatal Python error: PyThreadState_Get: no current thread

There should be an error before that which caused the database to not
shut down properly, etc.



Re: [GENERAL] canceling autovacuum task error

2011-08-10 Thread Scott Marlowe
On Tue, Aug 9, 2011 at 11:07 PM, tamanna madaan
 wrote:
> Hi All
>
> I am using a cluster setup having postgres-8.4.0 and slon 2.0.4 is being

There are known data eating bugs in that version of postgresql, and I
personally had issues with earlier 2.0.x releases.  There are
important updates to BOTH of those packages you need to install.

> Can anybody shed some light on it if these errors are related or what could
> be the reason for these errors .

You are running a known buggy version of postgresql.  Update to the
latest 8.4.x immediately.  Your database may be corrupted beyond easy
recovery, got a backup?



Re: [GENERAL] [ADMIN] postgresql server crash on windows 7 when using plpython

2011-08-10 Thread c k
I didn't get any error message. When calling the function from PGAdmin I get
an error.
When I try to re-execute it, it says 'no connection to the server'. When I
checked the logs I found

2011-08-09 19:46:00 IST LOG:  database system was interrupted; last known up
at 2011-08-09 19:45:17 IST
2011-08-09 19:46:00 IST LOG:  database system was not properly shut down;
automatic recovery in progress
2011-08-09 19:46:00 IST FATAL:  the database system is starting up
2011-08-09 19:46:00 IST LOG:  consistent recovery state reached at
0/420B8C00
2011-08-09 19:46:00 IST LOG:  record with zero length at 0/420B8C00
2011-08-09 19:46:00 IST LOG:  redo is not required
2011-08-09 19:46:00 IST LOG:  database system is ready to accept connections
2011-08-09 19:46:01 IST LOG:  autovacuum launcher started
Fatal Python error: PyThreadState_Get: no current thread

This application has requested the Runtime to terminate it in an unusual
way.
Please contact the application's support team for more information.
2011-08-09 19:49:39 IST LOG:  server process (PID 2596) exited with exit
code 3
2011-08-09 19:49:39 IST LOG:  terminating any other active server processes
2011-08-09 19:49:39 IST WARNING:  terminating connection because of crash of
another server process
2011-08-09 19:49:39 IST DETAIL:  The postmaster has commanded this server
process to roll back the current transaction and exit, because another
server process exited abnormally and possibly corrupted shared memory.
2011-08-09 19:49:39 IST HINT:  In a moment you should be able to reconnect
to the database and repeat your command.
2011-08-09 19:49:39 IST LOG:  all server processes terminated;
reinitializing
2011-08-09 19:49:50 IST FATAL:  pre-existing shared memory block is still in
use
2011-08-09 19:49:50 IST HINT:  Check if there are any old server processes
still running, and terminate them.

It only crashes when calling get_mac as stated first. If I use any other
logic inside the function it works fine. Maybe the problem occurs when using
hardware-related methods.
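For comparison, the body of the failing function is plain Python and runs fine outside the server (consistent with it working from the python command line). A standalone sketch, with the unused `import sys` dropped:

```python
# Standalone version of the plpython function body from this thread.
# uuid.getnode() returns the machine's MAC address as a 48-bit integer
# (or a random 48-bit value with the multicast bit set if no hardware
# address can be determined) -- no database server involved.
from uuid import getnode as get_mac

mac = get_mac()
print(hex(mac))
```

If this runs cleanly while the same body crashes the backend, the problem is in the server-side environment (e.g. conflicting DLLs), not the Python code itself.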

When I restart the server manually, the logs show nothing about the problem.

Chaitanya Kulkarni

On Wed, Aug 10, 2011 at 12:07 PM, Craig Ringer wrote:

> On 9/08/2011 10:54 PM, c k wrote:
>
>> Normal python functions returning text etc. are working but when
>> containing
>>
>> import sys
>> from uuid import getnode as get_mac
>> mac = get_mac()
>> return mac
>>
>> fails. What will be the reason?
>>
>
> You still haven't supplied the error message you get when you run this.
>
> In the absence of better information, my guess would be that python's uuid
> module uses ossp-uuid, same as PostgreSQL does, but a different version or
> one compiled differently. The PostgreSQL `bin' dir with the postgresql
> version of the DLL will be in the path before the Python one, so Python is
> calling into a different version of the DLL than it expects and probably
> crashing as a result.
>
> That would be consistent with it working from the python command line.
>
> To work around this, I'd either use the ossp-uuid functions via the SPI
> rather than using the Python UUID module, or I'd remove the OSSP-UUID dll
> from the postgresql directory. You can certainly try that to see if it
> helps.
>
> This is one of the joys you get with Windows software not being managed by
> a central packaging system. Everyone bundles their own versions of all the
> dependencies, leaving messes like this where two DLLs with the same name
> aren't quite compatible. Yay!
>
> --
> Craig Ringer
>


Re: [GENERAL] Postgres on SSD

2011-08-10 Thread Tomas Vondra
On 10 Srpen 2011, 1:17, Ondrej Ivanič wrote:
> - What needs to be changed at Postgres/Operating system level? The
> obvious one is to change random_page_cost (now: 2) and seq_page_cost
> (now: 4). What else should I look at?

Are you sure about this? I'm not quite sure setting seq_page_cost=4 and
random_page_cost=2 makes sense. Usually seq_page_cost is lower than
random_page_cost, so I wonder why have you set it like that.
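(In his follow-up, Ondrej confirmed the values had simply been swapped in his email. With the usual ordering preserved, the relevant postgresql.conf lines would read:)

```ini
# postgresql.conf -- sequential reads costed cheaper than random ones
seq_page_cost = 1
random_page_cost = 2
```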

Tomas




[GENERAL] canceling autovacuum task error

2011-08-10 Thread tamanna madaan
Hi All

I am using a cluster setup running postgres-8.4.0, with slon 2.0.4 being
used for replication. Autovacuum was not running successfully on one of the
nodes in the cluster and was giving this error:

2011-05-13 23:07:42 CDTERROR:  canceling autovacuum task
2011-05-13 23:07:42 CDTCONTEXT:  automatic vacuum of table
"abc.abc.sometablename"
2011-05-13 23:07:42 CDTERROR:  could not open relation with OID 141231 at
character 87

sometimes it was giving a different error as below :

2011-05-13 04:45:05 CDTERROR:  canceling autovacuum task
2011-05-13 04:45:05 CDTCONTEXT:  automatic analyze of table
"abc.abc.sometablename"
2011-05-13 04:45:05 CDTLOG:  could not receive data from client: Connection
reset by peer
2011-05-13 04:45:05 CDTLOG:  unexpected EOF on client connection
2011-05-13 04:45:05 CDTERROR:  duplicate key value violates unique
constraint "sl_nodelock-pkey"
2011-05-13 04:45:05 CDTSTATEMENT:  select "_schemaname".cleanupNodelock();
insert into "_mswcluster".sl_nodelock values (2, 0,
"pg_catalog".pg_backend_pid());

Can see the below log also in postgres logs :

"checkpoints are occurring too frequently (19 seconds apart)"

I am not sure when all these errors started appearing. I just noticed them
when the database size grew huge and it became slow.

Can anybody shed some light on whether these errors are related, or what
could be the reason for them?

Thanks..
Tamanna