[GENERAL] Postgresql CIFS

2015-03-04 Thread AI Rumman
Hi All,

I am working on setting up a new database server with shared disk and
cluster failover.
In this environment, only one node will be active at a time.
Underneath, we are planning to have shared storage with CIFS protocol.

As I am a newbie with this storage, can anyone please help me with some info
on what database issues I may face with this kind of file system
protocol with Postgresql 9.1?

Thanks.


Re: [GENERAL] Postgresql CIFS

2015-03-05 Thread AI Rumman
Thanks.

On Wed, Mar 4, 2015 at 10:22 PM, John R Pierce  wrote:

> On 3/4/2015 9:10 PM, AI Rumman wrote:
>
>> I am working on setting up a new database server with shared disk and
>> cluster failover.
>> In this environment, only one node will be active at a time.
>> Underneath, we are planning to have shared storage with CIFS protocol.
>>
>> As I am a newbie with this storage, can anyone please help me with some info
>> on what database issues I may face with this kind of file system
>> protocol with Postgresql 9.1?
>>
>>
> why are you building a NEW system with the 2nd oldest release of
> postgres?  within a year or so, 9.1 will be obsolete and unsupported.
>
> CIFS will be pretty slow at the sorts of random writes that a database
> server does a lot of, and there's all sorts of room for hard-to-diagnose
> issues with unsafe write cache buffering in the file server, depending on
> the specifics of the CIFS server implementation. Not sure how you
> implement a high availability CIFS server without single points of failure,
> either...   that's hard enough with shared block storage implementations
> (requiring redundant storage networks, switches, and dual storage
> controllers with shared cache, dual homing the actual physical block
> storage, which is dual ported and all raid 10 typically).
>
> iSCSI or a proper SAN (Fibre Channel) would be a much better choice for a
> shared-storage active/passive cluster; just implement some sort of storage
> fencing to ensure only one node can have the file system mounted at a time.
>
> with postgres, it's usually better to implement an HA cluster via streaming
> replication, the master and slave each with their own dedicated storage,
> and promoting the slave to master if/when the master dies.
>
>
> --
> john r pierce  37N 122W
> somewhere on the middle of the left coast
>
>
>
>
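For reference, a minimal streaming-replication setup for 9.x-era releases
looks like this (a sketch only; hostnames, user names and addresses are
placeholders, not taken from this thread):

# primary: postgresql.conf
wal_level = hot_standby
max_wal_senders = 3

# primary: pg_hba.conf
host  replication  replicator  10.0.0.0/24  md5

# standby: recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=primary-host user=replicator password=secret'

If the master dies, the standby is promoted with "pg_ctl promote" (9.1 and
later) or by creating the trigger_file named in recovery.conf.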


Re: [GENERAL] Setting up HA postgresql

2015-07-21 Thread AI Rumman
Hi,

I made the following document 4 years back:
http://www.rummandba.com/2011/02/postgresql-failover-with-pgpool-ii.html

You may have a look to see if it is of any use to your work.

BTW, if you want to set up a shared-nothing, highly scalable system with
data sharding, you can go for pl/proxy; a minimal sketch follows.
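
For illustration, a minimal pl/proxy routing function looks like this (a
sketch only; the cluster name, function name and argument are made up):

CREATE FUNCTION get_user_email(i_username text)
RETURNS SETOF text AS $$
    -- cluster defined through the plproxy configuration functions
    CLUSTER 'usercluster';
    -- route the call to the partition selected by the hash
    RUN ON hashtext(i_username);
$$ LANGUAGE plproxy;

Each partition then hosts a regular function of the same name and signature
that does the actual work against its local shard.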

Thanks.

On Tue, Jul 21, 2015 at 10:55 AM, Aviel Buskila  wrote:

> Can you link me up to a good tutorial using pgpool-II?
>
> 2015-07-21 20:02 GMT+03:00 Joshua D. Drake :
>
>>
>> On 07/21/2015 08:34 AM, William Dunn wrote:
>>
>>> Hello Aviel,
>>>
>>> On Tue, Jul 21, 2015 at 3:56 AM, Aviel Buskila >> > wrote:
>>>
>>> How can I set a highly available postgresql in a share-nothing
>>> architecture?
>>>
>>> I suggest you review the official documentation on high-availability
>>> configurations linked below:
>>> http://www.postgresql.org/docs/current/static/high-availability.html
>>>
>>> The most common configuration is to use PostgreSQL's built in
>>> master/standby streaming replication. However you will need to develop
>>> any fail-over logic you need yourself or use a third party tool such as
>>> EnterpriseDB's Failover Manager.
>>>
>>
>> Or use already available open source tools such as Pgpool-II or Linux-HA.
>>
>> Sincerely,
>> JD
>>
>> --
>> Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
>> PostgreSQL Centered full stack support, consulting and development.
>> Announcing "I'm offended" is basically telling the world you can't
>> control your own emotions, so everyone else should do it for you.
>>
>
>


[GENERAL] pg_dump error

2015-07-27 Thread AI Rumman
Hi,

I am getting the following error during pg_dump:

pg_dump dbname -v -t tablename -s -Fc -f dbname_tablename,sqlc

pg_dump: [custom archiver] WARNING: ftell mismatch with expected position
-- ftell used

I am using Postgresql 9.1 and I have enough disk space on it. The backup
drive is on an Isilon CIFS file system.

What may be causing this error?

Please advise.


Thanks.


Re: [GENERAL] pg_dump error

2015-07-27 Thread AI Rumman
No, it is "-s" for a schema-only backup.

Thanks.

On Mon, Jul 27, 2015 at 10:53 AM, Adrian Klaver 
wrote:

> On 07/27/2015 10:48 AM, AI Rumman wrote:
>
>> Hi,
>>
>> I am getting the following error during pg_dump:
>>
>> pg_dump dbname -v -t tablename -s -Fc -f dbname_tablename,sqlc
>>
>^
> Is that really a ','(comma)?
>
>
>
>> pg_dump: [custom archiver] WARNING: ftell mismatch with expected
>> position -- ftell used
>>
>> I am using Postgresql 9.1 and I have enough disk space on it. The backup
>> drive is on an Isilon CIFS file system.
>>
>> What may be causing this error?
>>
>> Please advise.
>>
>>
>> Thanks.
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] pg_dump error

2015-07-27 Thread AI Rumman
I am not using any comma here. With the same command I am able to take a dump
on other disks.
But with Isilon CIFS, I am getting the error.
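
One workaround worth trying (a sketch only, assuming a local scratch
directory is available) is to write the custom-format archive to local disk
first, so that pg_dump's ftell bookkeeping never runs against the CIFS
mount, and copy the file over afterwards:

pg_dump dbname -v -t tablename -s -Fc -f /local/scratch/dbname_tablename.sqlc
cp /local/scratch/dbname_tablename.sqlc /mnt/isilon/backups/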

Thanks.

On Mon, Jul 27, 2015 at 10:56 AM, Adrian Klaver 
wrote:

> On 07/27/2015 10:55 AM, AI Rumman wrote:
>
>> No, it is "-s" for a schema-only backup.
>>
>
> My ^ got misplaced in sending. I was talking about the ',' in the file
> name at the end of the command.
>
>
>> Thanks.
>>
>> On Mon, Jul 27, 2015 at 10:53 AM, Adrian Klaver
>> mailto:adrian.kla...@aklaver.com>> wrote:
>>
>> On 07/27/2015 10:48 AM, AI Rumman wrote:
>>
>> Hi,
>>
>> I am getting the following error during pg_dump:
>>
>> pg_dump dbname -v -t tablename -s -Fc -f dbname_tablename,sqlc
>>
>> ^
>> Is that really a ','(comma)?
>>
>>
>>
>> pg_dump: [custom archiver] WARNING: ftell mismatch with expected
>> position -- ftell used
>>
>> I am using Postgresql 9.1 and I have enough disk space on it.
>> The backup
>> drive is on an Isilon CIFS file system.
>>
>> What may be causing this error?
>>
>> Please advise.
>>
>>
>> Thanks.
>>
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


[GENERAL] Postgresql upgrade from 8.4 to latest

2015-07-28 Thread AI Rumman
Hi,

I need to upgrade Postgresql database from 8.4 to latest stable version
(9.4). The db size is almost 2.5 TB.
Is an in-place pg_upgrade a good idea for it?

Thanks for advice.

Regards.


Re: [GENERAL] Postgresql upgrade from 8.4 to latest

2015-07-28 Thread AI Rumman
From what I have read, an in-place upgrade has a smaller outage compared to
dump/restore, but there are many articles about bugs afterwards.
Do you think it is a good idea to use pg_upgrade for a critical database
application?
Or is there any other tool I should consider? For example - slony?

Thanks for advice.
Regards.

On Tue, Jul 28, 2015 at 1:29 PM, Joshua D. Drake 
wrote:

>
> On 07/28/2015 01:12 PM, AI Rumman wrote:
>
>> Hi,
>>
>> I need to upgrade Postgresql database from 8.4 to latest stable version
>> (9.4). The db size is almost 2.5 TB.
>> Is an in-place pg_upgrade a good idea for it?
>>
>
> With quite a bit of testing, yes.
>
> But keep in mind, it is still an outage.
>
> JD
>
>
>
>> Thanks for advice.
>>
>> Regards.
>>
>
>
> --
> Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
> PostgreSQL Centered full stack support, consulting and development.
> Announcing "I'm offended" is basically telling the world you can't
> control your own emotions, so everyone else should do it for you.
>


Re: [GENERAL] Postgresql upgrade from 8.4 to latest

2015-07-29 Thread AI Rumman
Thanks for good suggestions.


On Tue, Jul 28, 2015 at 3:13 PM, Joshua D. Drake 
wrote:

>
> On 07/28/2015 01:35 PM, AI Rumman wrote:
>
>> From what I have read, an in-place upgrade has a smaller outage compared to
>> dump/restore.
>>
>
> Correct, in fact if you do it with the link option, it will be very fast.
>
>
>  But there are many articles about bugs afterwards.
>> Do you think it is a good idea to use pg_upgrade for critical database
>> application?
>>
>
> It entirely depends, I have successfully used pg_upgrade many, many times.
> That is what -c is for, to work out all the kinks before you upgrade.
>
>  Or any other tool should I consider? For example - slony?
>>
>
> On a 2.5TB database, you may very well be doing a lot more harm than good
> using a tool such as slony.
>
> JD
>
>
> --
> Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
> PostgreSQL Centered full stack support, consulting and development.
> Announcing "I'm offended" is basically telling the world you can't
> control your own emotions, so everyone else should do it for you.
>
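
For reference, a typical pg_upgrade run looks like this (a sketch only; the
binary and data directory paths are placeholders). First validate with
-c/--check, then do the real upgrade with --link:

pg_upgrade -b /usr/pgsql-8.4/bin -B /usr/pgsql-9.4/bin \
   -d /var/lib/pgsql/8.4/data -D /var/lib/pgsql/9.4/data --check

pg_upgrade -b /usr/pgsql-8.4/bin -B /usr/pgsql-9.4/bin \
   -d /var/lib/pgsql/8.4/data -D /var/lib/pgsql/9.4/data --link

With --link the data files are hard-linked instead of copied, which is why
it is fast; the old cluster must not be started again once the new one has
been run.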


[GENERAL] official rpm build spec file

2015-08-08 Thread AI Rumman
Hi,

How can I get the official PostgreSQL rpm spec file?
Please let me know. I want to build my own PostgreSQL rpm.

Thanks.


[GENERAL] ERROR: invalid input syntax for type date: IS IT A BUG here?

2015-08-21 Thread AI Rumman
Hi All,

I am using Postgresql 9.1, where I have a partitioned table as below:

> events_20150101
> events_20150102
> events_20150103
> ...
> events_overflow


When I run the following query, it gives me a result:
*SQL 1: *

> select all relname, pg_total_relation_size(relname::text) as s,
> substr(relname,18)::date as dt from pg_stat_user_tables where schemaname =
> 'partitions' and relname not like '%overflow'


But when I run the following one, it gives me an error:
*SQL 2: *

> select * as ts
> from
> (
> select relname, pg_total_relation_size(relname::text) as s,
> substr(relname,18)::date as dt from pg_stat_user_tables where schemaname =
> 'partitions' and relname not like '%overflow'  order by
> pg_total_relation_size(relname::text) desc
> ) as q
> where dt = '2015-01-01'::date;
> *ERROR:  invalid input syntax for type date: ""*

However, explain is showing plan:

>  Sort  (cost=202.03..202.04 rows=1 width=64)
>Sort Key:
> (pg_total_relation_size(((pg_stat_all_tables.relname)::text)::regclass))
>->  Subquery Scan on pg_stat_all_tables  (cost=201.93..202.02 rows=1
> width=64)
>  ->  HashAggregate  (cost=201.93..201.99 rows=1 width=136)
>->  Nested Loop Left Join  (cost=0.00..201.92 rows=1
> width=136)
>  ->  Nested Loop  (cost=0.00..194.23 rows=1 width=132)
>Join Filter: (c.relnamespace = n.oid)
>->  Seq Scan on pg_namespace n
> (cost=0.00..1.39 rows=1 width=68)
>  Filter: ((nspname <> ALL
> ('{pg_catalog,information_schema}'::name[])) AND (nspname !~
> '^pg_toast'::text) AND (nspname = 'partitions'::name))
>->  Seq Scan on pg_class c  (cost=0.00..192.77
> rows=6 width=72)
>  Filter: ((relkind = ANY
> ('{r,t}'::"char"[])) AND (relname !~~ '%overflow'::text) AND
> ((substr((relname)::text, 18))::date = '2015-01-01'::date))
>  ->  Index Scan using pg_index_indrelid_index on
> pg_index i  (cost=0.00..7.66 rows=2 width=8)
>Index Cond: (c.oid = indrelid)



Again, if I create a table and run the query it runs:

*SQL 3:*

> create table dba.tbl_list as  select all relname,
> pg_total_relation_size(relname::text) as s, substr(relname,18)::date as dt
> from pg_stat_user_tables where schemaname = 'partitions' and relname not
> like '%overflow' ;
> SELECT 558



> \d+ dba.tbl_list
>  Table "dba.tbl_list"
>  Column  |  Type  | Modifiers | Storage | Description
> -++---+-+-
>  relname | name   |   | plain   |
>  s   | bigint |   | plain   |
>  dt  | date   |   | plain   |
> Has OIDs: no


*SQL 4:*

> select * from dba.tbl_list  where dt = '2015-01-01';
>   relname  | s  | dt
> ---++
>  events_20150101 | 1309966336 | 2015-01-01
> (1 row)


Why is the 2nd query showing an error? Is it a bug? Or am I doing anything
silly? Any advice, please.

Thanks & Regards.


Re: [GENERAL] ERROR: invalid input syntax for type date: IS IT A BUG here?

2015-08-21 Thread AI Rumman
Hi Adrian,

Thanks for replying here.

Actually, I modified the actual table names from my production system, where
I forgot to change the substr value.

You can see the result "SELECT 558" in SQL 3 where it selected that many
rows.

Regards.

On Fri, Aug 21, 2015 at 3:13 PM, Adrian Klaver 
wrote:

> On 08/21/2015 02:32 PM, AI Rumman wrote:
>
>> Hi All,
>>
>> I am using Postgresql 9.1, where I have a partitioned table as below:
>>
>> events_20150101
>> events_20150102
>> events_20150103
>> ...
>> events_overflow
>>
>>
>> When I run the following query, it gives me a result:
>>
>> *SQL 1: *
>>
>> select all relname, pg_total_relation_size(relname::text) as s,
>> substr(relname,18)::date as dt from pg_stat_user_tables where
>> schemaname = 'partitions' and relname not like '%overflow'
>>
>>
> What is the result?
>
>
>> But when I run the following one, it gives me an error:
>>
>> *SQL 2: *
>>
>> select * as ts
>> from
>> (
>> select relname, pg_total_relation_size(relname::text) as s,
>> substr(relname,18)::date as dt from pg_stat_user_tables where
>> schemaname = 'partitions' and relname not like '%overflow'  order by
>> pg_total_relation_size(relname::text) desc
>> ) as q
>> where dt = '2015-01-01'::date;
>> *ERROR:  invalid input syntax for type date: ""*
>>
>
>
> production=# select substr('events_20150101', 18);
>  substr
> 
>
> (1 row)
>
> production=# select substr('events_20150101', 18)::date;
> ERROR:  invalid input syntax for type date: ""
>
> Your substr is creating an empty string which cannot be cast to a date. I
> can't see how you could get a result from your first query, which is why I
> asked for what you are seeing.
>
>
>> However, explain is showing plan:
>>
>>   Sort  (cost=202.03..202.04 rows=1 width=64)
>> Sort Key:
>>
>> (pg_total_relation_size(((pg_stat_all_tables.relname)::text)::regclass))
>> ->  Subquery Scan on pg_stat_all_tables  (cost=201.93..202.02
>> rows=1 width=64)
>>   ->  HashAggregate  (cost=201.93..201.99 rows=1 width=136)
>> ->  Nested Loop Left Join  (cost=0.00..201.92 rows=1
>> width=136)
>>   ->  Nested Loop  (cost=0.00..194.23 rows=1
>> width=132)
>> Join Filter: (c.relnamespace = n.oid)
>> ->  Seq Scan on pg_namespace n
>> (cost=0.00..1.39 rows=1 width=68)
>>   Filter: ((nspname <> ALL
>> ('{pg_catalog,information_schema}'::name[])) AND (nspname !~
>> '^pg_toast'::text) AND (nspname = 'partitions'::name))
>> ->  Seq Scan on pg_class c
>> (cost=0.00..192.77 rows=6 width=72)
>>   Filter: ((relkind = ANY
>> ('{r,t}'::"char"[])) AND (relname !~~ '%overflow'::text) AND
>> ((substr((relname)::text, 18))::date = '2015-01-01'::date))
>>   ->  Index Scan using pg_index_indrelid_index
>> on pg_index i  (cost=0.00..7.66 rows=2 width=8)
>> Index Cond: (c.oid = indrelid)
>>
>> Again, if I create a table and run the query it runs:
>>
>> *SQL 3:*
>>
>> create table dba.tbl_list as  select all relname,
>> pg_total_relation_size(relname::text) as s, substr(relname,18)::date
>> as dt from pg_stat_user_tables where schemaname = 'partitions' and
>> relname not like '%overflow' ;
>> SELECT 558
>>
>> \d+ dba.tbl_list
>>   Table "dba.tbl_list"
>>   Column  |  Type  | Modifiers | Storage | Description
>> -++---+-+-
>>   relname | name   |   | plain   |
>>   s   | bigint |   | plain   |
>>   dt  | date   |   | plain   |
>> Has OIDs: no
>>
>>
>> *SQL 4:*
>>
>> select * from dba.tbl_list  where dt = '2015-01-01';
>>relname  | s  | dt
>> ---++
>>   events_20150101 | 1309966336 | 2015-01-01
>> (1 row)
>>
>> Why is the 2nd query showing an error? Is it a bug? Or am I doing anything silly?
>> Any advice, please.
>>
>> Thanks & Regards.
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
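
For reference, a guarded version of the inner query (a sketch; it assumes
the real partition names carry a 7-character prefix such as 'events_', so
the date starts at position 8) keeps the cast from ever being evaluated on
names that do not match the pattern:

SELECT relname, pg_total_relation_size(relname::text) AS s, dt
FROM (
    SELECT relname,
           CASE WHEN relname::text ~ '^events_[0-9]{8}$'
                THEN substr(relname, 8)::date  -- position 8 is an assumption
           END AS dt
    FROM pg_stat_user_tables
    WHERE schemaname = 'partitions'
) AS q
WHERE dt = DATE '2015-01-01';

The CASE is evaluated per row, so non-matching relnames simply yield a NULL
dt instead of an invalid date cast, even when the planner pushes the
predicate down into the scan.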


[GENERAL] pgpool ssl handshake failure

2015-10-15 Thread AI Rumman
Hi,

I am using pgpool-II version 3.4.3 (tataraboshi).
My database is Postgresql 8.4.

I am trying to configure SSL mode from the client, while the connection
between pgpool and the database is non-SSL.
I configured it as documented, and now I am getting this in my log:

>
> *2015-10-13 22:17:58: pid 1857: LOG:  new connection received*
> *2015-10-13 22:17:58: pid 1857: DETAIL:  connecting host=10.0.0.5
> port=65326*
> *2015-10-13 22:17:58: pid 1857: LOG:  pool_ssl: "SSL_read": "ssl handshake
> failure"*
> *2015-10-13 22:17:58: pid 1857: ERROR:  unable to read data from 
> frontend**2015-10-13
> 22:17:58: pid 1857: DETAIL:  socket read failed with an error "Success"*

Please let me know what I am doing wrong.

Thanks & Regards.


Re: [GENERAL] pgpool ssl handshake failure

2015-10-15 Thread AI Rumman
I configured Postgresql 9.4 and am still getting the same error.

Thanks.

On Thu, Oct 15, 2015 at 7:16 AM, Adrian Klaver 
wrote:

> On 10/15/2015 06:59 AM, AI Rumman wrote:
>
>> Hi,
>>
>> I am using pgpool-II version 3.4.3 (tataraboshi).
>> My database is Postgresql 8.4.
>>
>
> Probably already know, but 8.4 is approximately 1.25 years beyond EOL:
>
> http://www.postgresql.org/support/versioning/
>
>
>> I am trying to configure SSL mode from the client, while the connection
>> between pgpool and the database is non-SSL.
>>
>
> What is non-ssl, the database or pgpool?
>
> I configured as document and now I am getting this in my log:
>>
>> /2015-10-13 22:17:58: pid 1857: LOG:  new connection received
>> //2015-10-13 22:17:58: pid 1857: DETAIL:  connecting host=10.0.0.5
>> port=65326
>> //2015-10-13 22:17:58: pid 1857: LOG:  pool_ssl: "SSL_read": "ssl
>> handshake failure"
>> //2015-10-13 22:17:58: pid 1857: ERROR:  unable to read data from
>> frontend
>> //2015-10-13 22:17:58: pid 1857: DETAIL:  socket read failed with an
>> error "Success"/
>>
>> Please let me know what I am doing wrong.
>>
>
> Not quite sure but given the below from the 9.5 Release Notes:
>
> "
> Remove server configuration parameter ssl_renegotiation_limit, which was
> deprecated in earlier releases (Andres Freund)
>
> While SSL renegotiation is a good idea in theory, it has caused enough
> bugs to be considered a net negative in practice, and it is due to be
> removed from future versions of the relevant standards. We have therefore
> removed support for it from PostgreSQL."
>
> I would check to see what  ssl_renegotiation_limit is set to:
>
> http://www.postgresql.org/docs/8.4/static/runtime-config-connection.html
>
> and if it is not set to 0, then try that.
>
>
>
>> Thanks & Regards.
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
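
For reference, checking and disabling the setting is simple (a sketch; the
postgresql.conf change takes effect on reload):

SHOW ssl_renegotiation_limit;

# postgresql.conf
ssl_renegotiation_limit = 0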


[GENERAL] Building 9.4 rpm for Red Hat 5

2016-01-19 Thread AI Rumman
Hi All,

My production boxes are running on Redhat 5 and I need to build Postgresql
9.4 rpm for it following our environment setup.
If I build the rpm on Centos 5, will it be ok to run on Red Hat 5 boxes?
Can you please let me know?

Thanks.


Re: [GENERAL] Building 9.4 rpm for Red Hat 5

2016-01-19 Thread AI Rumman
We have had all our scripts running on older PostgreSQL versions for years,
using PGHOME and other paths. We need to make the 9.4 rpm
follow that directory structure.
Considering the above, am I on the right track? Please let me know.

Thanks.

On Tue, Jan 19, 2016 at 12:25 PM, Joshua D. Drake 
wrote:

> On 01/19/2016 12:21 PM, AI Rumman wrote:
>
>> Hi All,
>>
>> My production boxes are running on Redhat 5 and I need to build
>> Postgresql 9.4 rpm for it following our environment setup.
>> If I build the rpm on Centos 5, will it be ok to run on Red Hat 5 boxes?
>> Can you please let me know?
>>
>> Thanks.
>>
>
> CentOS 5 and RHEL 5 are binary compatible, so yes.
>
> However, is there a reason you aren't just using yum.postgresql.org?
>
> JD
>
> --
> Command Prompt, Inc.  http://the.postgres.company/
>  +1-503-667-4564
> PostgreSQL Centered full stack support, consulting and development.
>


Re: [GENERAL] Building 9.4 rpm for Red Hat 5

2016-01-19 Thread AI Rumman
We build our own rpms.

Thanks.

On Tue, Jan 19, 2016 at 12:34 PM, Adrian Klaver 
wrote:

> On 01/19/2016 12:29 PM, AI Rumman wrote:
>
>> We have had all our scripts running on older PostgreSQL versions for years,
>> using PGHOME and other paths. We need to make the 9.4 rpm
>> follow that directory structure.
>> Considering the above, am I on the right track? Please let me know.
>>
>
> How did you install the Postgres version you have running now?
>
>
>> Thanks.
>>
>> On Tue, Jan 19, 2016 at 12:25 PM, Joshua D. Drake > <mailto:j...@commandprompt.com>> wrote:
>>
>> On 01/19/2016 12:21 PM, AI Rumman wrote:
>>
>> Hi All,
>>
>> My production boxes are running on Redhat 5 and I need to build
>> Postgresql 9.4 rpm for it following our environment setup.
>> If I build the rpm on Centos 5, will it be ok to run on Red Hat
>> 5 boxes?
>> Can you please let me know?
>>
>> Thanks.
>>
>>
>> CentOS 5 and RHEL 5 are binary compatible, so yes.
>>
>> However, is there a reason you aren't just using yum.postgresql.org
>> <http://yum.postgresql.org>?
>>
>> JD
>>
>> --
>> Command Prompt, Inc. http://the.postgres.company/
>> +1-503-667-4564 
>> PostgreSQL Centered full stack support, consulting and development.
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


[GENERAL] How to stop autovacuum for daily partition old tables

2016-01-20 Thread AI Rumman
Hi,

I have a table with a daily partition scheme on Postgresql 9.1, where we are
keeping 2 years of data.
Often I find that the autovacuum process is busy with old tables where
there is no change. How can I stop it?
Please advise.

Thanks.


Re: [GENERAL] How to stop autovacuum for daily partition old tables

2016-01-20 Thread AI Rumman
But will it not create transaction wraparound issues for those tables?

Thanks.

On Wed, Jan 20, 2016 at 4:44 PM, Melvin Davidson 
wrote:

>
> ALTER TABLE your_schema.your_table SET (autovacuum_enabled = false,
> toast.autovacuum_enabled = false);
>
> On Wed, Jan 20, 2016 at 6:22 PM, AI Rumman  wrote:
>
>> Hi,
>>
>> I have a table with a daily partition scheme on Postgresql 9.1, where we are
>> keeping 2 years of data.
>> Often I find that the autovacuum process is busy with old tables where
>> there is no change. How can I stop it?
>> Please advise.
>>
>> Thanks.
>>
>>
>
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>


[GENERAL] template1 database is facing high datfrozenxid

2016-02-12 Thread AI Rumman
Hi,

I am running Postgresql 9.1, and I can see the datfrozenxid is going high
and the vacuum process is not bringing it down. And this has been happening
on the template1 database.

2016-02-12 16:51:50.400 CST [19445][@] : [13-1] WARNING:  oldest xmin is
> far in the past
> 2016-02-12 16:51:50.400 CST [19445][@] : [14-1] HINT:  Close open
> transactions soon to avoid wraparound problems.
> 2016-02-12 16:51:50.400 CST [19445][@] : [15-1] LOG:  automatic vacuum of
> table "template1.pg_catalog.pg_database": index scans: 0
> pages: 0 removed, 1 remain
> tuples: 0 removed, 9 remain
> system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec
> 2016-02-12 16:51:50.411 CST [19435][@] : [82-1] WARNING:  oldest xmin is
> far in the past
> 2016-02-12 16:51:50.411 CST [19435][@] : [83-1] HINT:  Close open
> transactions soon to avoid wraparound problems.
> 2016-02-12 16:51:50.411 CST [19435][@] : [84-1] LOG:  automatic vacuum of
> table "template1.pg_catalog.pg_largeobject": index scans: 0
> pages: 0 removed, 0 remain
> tuples: 0 removed, 0 remain
> system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec


I vacuum database manually but nothing is working out.
Please help,

Thanks.


Re: [GENERAL] template1 database is facing high datfrozenxid

2016-02-12 Thread AI Rumman
I checked, and I did not find any long-running sql or any open
transaction. Not even in pg_prepared_xacts.
And it looks like pg_catalog is raising the alarm.

Any other ideas, please, on where I need to look?

Thanks.


On Fri, Feb 12, 2016 at 3:05 PM, Adrian Klaver 
wrote:

> On 02/12/2016 02:56 PM, AI Rumman wrote:
>
>> Hi,
>>
>> I am running Postgresql 9.1, and I can see the datfrozenxid is going high
>> and the vacuum process is not bringing it down. And this has been happening
>> on the template1 database.
>>
>> 2016-02-12 16:51:50.400 CST [19445][@] : [13-1] WARNING:  oldest
>> xmin is far in the past
>> 2016-02-12 16:51:50.400 CST [19445][@] : [14-1] HINT:  Close open
>> transactions soon to avoid wraparound problems.
>>
>
> The above seems to be the contributing factor.
>
> Does:
>
> select * from pg_stat_activity
>
> show long running queries.
>
>
> 2016-02-12 16:51:50.400 CST [19445][@] : [15-1] LOG:  automatic
>> vacuum of table "template1.pg_catalog.pg_database": index scans: 0
>> pages: 0 removed, 1 remain
>> tuples: 0 removed, 9 remain
>> system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec
>> 2016-02-12 16:51:50.411 CST [19435][@] : [82-1] WARNING:  oldest
>> xmin is far in the past
>> 2016-02-12 16:51:50.411 CST [19435][@] : [83-1] HINT:  Close open
>> transactions soon to avoid wraparound problems.
>> 2016-02-12 16:51:50.411 CST [19435][@] : [84-1] LOG:  automatic
>> vacuum of table "template1.pg_catalog.pg_largeobject": index scans: 0
>> pages: 0 removed, 0 remain
>> tuples: 0 removed, 0 remain
>> system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec
>>
>>
>> I vacuum database manually but nothing is working out.
>> Please help,
>>
>> Thanks.
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] template1 database is facing high datfrozenxid

2016-02-12 Thread AI Rumman
I used this query in each of the databases:

SELECT t.relname, l.database, l.locktype, l.pid , l.mode, l.granted,
p.current_query, p.query_start ,p.waiting
FROM pg_locks as l
INNER JOIN pg_stat_all_tables t
on l.relation = t.relid
INNER JOIN pg_stat_activity as p
on l.pid = p.procpid ;

No luck. At present, the db is working, but it is heading towards wraparound.

On Fri, Feb 12, 2016 at 3:28 PM, Adrian Klaver 
wrote:

> On 02/12/2016 03:10 PM, AI Rumman wrote:
>
>> I checked, and I did not find any long-running sql or any open
>> transaction. Not even in pg_prepared_xacts.
>> And it looks like pg_catalog is raising the alarm.
>>
>> Any other ideas, please, on where I need to look?
>>
>
> Should have added:
>
> select * from pg_database
>
>
>> Thanks.
>>
>>
>> On Fri, Feb 12, 2016 at 3:05 PM, Adrian Klaver
>> mailto:adrian.kla...@aklaver.com>> wrote:
>>
>> On 02/12/2016 02:56 PM, AI Rumman wrote:
>>
>> Hi,
>>
>> I am running Postgresql 9.1 and I can see the datfrozenxid is
>> going high
>> and vacuum process is not bringing it down. And this has been
>> happening
>> on template1 database.
>>
>>  2016-02-12 16:51:50.400 CST [19445][@] : [13-1] WARNING:
>> oldest
>>  xmin is far in the past
>>  2016-02-12 16:51:50.400 CST [19445][@] : [14-1] HINT:
>> Close open
>>  transactions soon to avoid wraparound problems.
>>
>>
>> The above seems to be the contributing factor.
>>
>> Does:
>>
>> select * from pg_stat_activity
>>
>> show long running queries.
>>
>>
>>  2016-02-12 16:51:50.400 CST [19445][@] : [15-1] LOG:
>> automatic
>>  vacuum of table "template1.pg_catalog.pg_database": index
>> scans: 0
>>  pages: 0 removed, 1 remain
>>  tuples: 0 removed, 9 remain
>>  system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec
>>  2016-02-12 16:51:50.411 CST [19435][@] : [82-1] WARNING:
>> oldest
>>  xmin is far in the past
>>  2016-02-12 16:51:50.411 CST [19435][@] : [83-1] HINT:
>> Close open
>>  transactions soon to avoid wraparound problems.
>>  2016-02-12 16:51:50.411 CST [19435][@] : [84-1] LOG:
>> automatic
>>  vacuum of table "template1.pg_catalog.pg_largeobject":
>> index scans: 0
>>  pages: 0 removed, 0 remain
>>  tuples: 0 removed, 0 remain
>>  system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec
>>
>>
>> I vacuum database manually but nothing is working out.
>> Please help,
>>
>> Thanks.
>>
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] template1 database is facing high datfrozenxid

2016-02-12 Thread AI Rumman
In pg_subtrans, I have files like:

>
> $ ls -lrt | more
> total 1269436
> -rw--- 1 postgres postgres 262144 Jan 25 18:49 D907
> -rw--- 1 postgres postgres 262144 Jan 25 18:54 D908
> -rw--- 1 postgres postgres 262144 Jan 25 18:58 D909
> -rw--- 1 postgres postgres 262144 Jan 25 18:59 D90A
> -rw--- 1 postgres postgres 262144 Jan 25 19:04 D90B
> -rw--- 1 postgres postgres 262144 Jan 25 19:09 D90C
> -rw--- 1 postgres postgres 262144 Jan 25 19:14 D90D
> -rw--- 1 postgres postgres 262144 Jan 25 19:18 D90E
> -rw--- 1 postgres postgres 262144 Jan 25 19:19 D90F
> -rw--- 1 postgres postgres 262144 Jan 25 19:24 D910
> -rw--- 1 postgres postgres 262144 Jan 25 19:29 D911
> -rw--- 1 postgres postgres 262144 Jan 25 19:33 D912
> -rw--- 1 postgres postgres 262144 Jan 25 19:34 D913
> -rw--- 1 postgres postgres 262144 Jan 25 19:39 D914
> -rw--- 1 postgres postgres 262144 Jan 25 19:44 D915
> -rw--- 1 postgres postgres 262144 Jan 25 19:49 D916
> -rw--- 1 postgres postgres 262144 Jan 25 19:53 D917
> -rw--- 1 postgres postgres 262144 Jan 25 19:54 D918
> -rw--- 1 postgres postgres 262144 Jan 25 19:59 D919
> -rw--- 1 postgres postgres 262144 Jan 25 20:04 D91A
> -rw--- 1 postgres postgres 262144 Jan 25 20:09 D91B
> -rw--- 1 postgres postgres 262144 Jan 25 20:14 D91C
> -rw--- 1 postgres postgres 262144 Jan 25 20:19 D91D
> -rw--- 1 postgres postgres 262144 Jan 25 20:23 D91E
> -rw--- 1 postgres postgres 262144 Jan 25 20:24 D91F
> -rw--- 1 postgres postgres 262144 Jan 25 20:29 D920
> -rw--- 1 postgres postgres 262144 Jan 25 20:34 D921
> -rw--- 1 postgres postgres 262144 Jan 25 20:39 D922
> -rw--- 1 postgres postgres 262144 Jan 25 20:44 D923
> -rw--- 1 postgres postgres 262144 Jan 25 20:49 D924
> -rw--- 1 postgres postgres 262144 Jan 25 20:54 D925
> -rw--- 1 postgres postgres 262144 Jan 25 20:59 D926
> -rw--- 1 postgres postgres 262144 Jan 25 21:04 D927
> .


Does it mean that I have too many open transactions? If so, they are not
showing in pg_stat_activity.

On Fri, Feb 12, 2016 at 3:38 PM, AI Rumman  wrote:

> I used this query in each of the databases:
>
> SELECT t.relname, l.database, l.locktype, l.pid , l.mode, l.granted,
> p.current_query, p.query_start ,p.waiting
> FROM pg_locks as l
> INNER JOIN pg_stat_all_tables t
> on l.relation = t.relid
> INNER JOIN pg_stat_activity as p
> on l.pid = p.procpid ;
>
> No luck. At present, the db is working, but it is heading towards wraparound.
>
> On Fri, Feb 12, 2016 at 3:28 PM, Adrian Klaver 
> wrote:
>
>> On 02/12/2016 03:10 PM, AI Rumman wrote:
>>
>>> I checked, and I did not find any long-running sql or any open
>>> transaction. Not even in pg_prepared_xacts.
>>> And it looks like pg_catalog is raising the alarm.
>>>
>>> Any other ideas, please, on where I need to look?
>>>
>>
>> Should have added:
>>
>> select * from pg_database
>>
>>
>>> Thanks.
>>>
>>>
>>> On Fri, Feb 12, 2016 at 3:05 PM, Adrian Klaver
>>> mailto:adrian.kla...@aklaver.com>> wrote:
>>>
>>> On 02/12/2016 02:56 PM, AI Rumman wrote:
>>>
>>> Hi,
>>>
>>> I am running Postgresql 9.1 and I can see the datfrozenxid is
>>> going high
>>> and vacuum process is not bringing it down. And this has been
>>> happening
>>> on template1 database.
>>>
>>>  2016-02-12 16:51:50.400 CST [19445][@] : [13-1] WARNING:
>>> oldest
>>>  xmin is far in the past
>>>  2016-02-12 16:51:50.400 CST [19445][@] : [14-1] HINT:
>>> Close open
>>>  transactions soon to avoid wraparound problems.
>>>
>>>
>>> The above seems to be the contributing factor.
>>>
>>> Does:
>>>
>>> select * from pg_stat_activity
>>>
>>> show long running queries.
>>>
>>>
>>>  2016-02-12 16:51:50.400 CST [19445][@] : [15-1] LOG:
>>> automatic
>>>  vacuum of table "template1.pg_catalog.pg_database": index
>>> scans: 0
>>>  pages: 0 removed, 1 remain
>>>  tuples: 0 removed, 9 remain
>>>  system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec
>>>  2016-02-12 16:51:50.411 CST [19435][@] : [82-1] WARNING:
>>> oldest
>>>  xmin is far in the past
>>> 

Re: [GENERAL] template1 database is facing high datfrozenxid

2016-02-12 Thread AI Rumman
Yes. And we solved the problem.
We looked into pg_subtrans and found that we had subtransactions pending
from January 25th. We investigated more and found that a large sql had been
executed on a streaming standby around that date.
Digging more, we found that the date of the below alert is also near:

WARNING:  oldest xmin is far in the past

We stopped the standby and the problem was solved. :)

Thanks.

On Fri, Feb 12, 2016 at 4:11 PM, Adrian Klaver 
wrote:

> On 02/12/2016 04:03 PM, AI Rumman wrote:
>
>> In pg_subtrans, I have files like:
>>
>
> Are you sure you are looking at the same database cluster in all the cases?
>
> What does:
>
> SELECT datname, age(datfrozenxid) FROM pg_database;
>
> give you?
>
>
>>
>> $ ls -lrt | more
>> total 1269436
>> -rw--- 1 postgres postgres 262144 Jan 25 18:49 D907
>> -rw--- 1 postgres postgres 262144 Jan 25 18:54 D908
>> -rw--- 1 postgres postgres 262144 Jan 25 18:58 D909
>> -rw--- 1 postgres postgres 262144 Jan 25 18:59 D90A
>> -rw--- 1 postgres postgres 262144 Jan 25 19:04 D90B
>> -rw--- 1 postgres postgres 262144 Jan 25 19:09 D90C
>> -rw--- 1 postgres postgres 262144 Jan 25 19:14 D90D
>> -rw--- 1 postgres postgres 262144 Jan 25 19:18 D90E
>> -rw--- 1 postgres postgres 262144 Jan 25 19:19 D90F
>> -rw--- 1 postgres postgres 262144 Jan 25 19:24 D910
>> -rw--- 1 postgres postgres 262144 Jan 25 19:29 D911
>> -rw--- 1 postgres postgres 262144 Jan 25 19:33 D912
>> -rw--- 1 postgres postgres 262144 Jan 25 19:34 D913
>> -rw--- 1 postgres postgres 262144 Jan 25 19:39 D914
>> -rw--- 1 postgres postgres 262144 Jan 25 19:44 D915
>> -rw--- 1 postgres postgres 262144 Jan 25 19:49 D916
>> -rw--- 1 postgres postgres 262144 Jan 25 19:53 D917
>> -rw--- 1 postgres postgres 262144 Jan 25 19:54 D918
>> -rw--- 1 postgres postgres 262144 Jan 25 19:59 D919
>> -rw--- 1 postgres postgres 262144 Jan 25 20:04 D91A
>> -rw--- 1 postgres postgres 262144 Jan 25 20:09 D91B
>> -rw--- 1 postgres postgres 262144 Jan 25 20:14 D91C
>> -rw--- 1 postgres postgres 262144 Jan 25 20:19 D91D
>> -rw--- 1 postgres postgres 262144 Jan 25 20:23 D91E
>> -rw--- 1 postgres postgres 262144 Jan 25 20:24 D91F
>> -rw--- 1 postgres postgres 262144 Jan 25 20:29 D920
>> -rw--- 1 postgres postgres 262144 Jan 25 20:34 D921
>> -rw--- 1 postgres postgres 262144 Jan 25 20:39 D922
>> -rw--- 1 postgres postgres 262144 Jan 25 20:44 D923
>> -rw--- 1 postgres postgres 262144 Jan 25 20:49 D924
>> -rw--- 1 postgres postgres 262144 Jan 25 20:54 D925
>> -rw--- 1 postgres postgres 262144 Jan 25 20:59 D926
>> -rw--- 1 postgres postgres 262144 Jan 25 21:04 D927
>> .
>>
>>
>> Does it mean that I have too many open transactions? If so, they are not
>> showing in pg_stat_activity.
>>
>> On Fri, Feb 12, 2016 at 3:38 PM, AI Rumman > <mailto:rumman...@gmail.com>> wrote:
>>
>> I used this query in each of the databases:
>>
>> SELECT t.relname, l.database, l.locktype, l.pid , l.mode, l.granted,
>> p.current_query, p.query_start ,p.waiting
>> FROM pg_locks as l
>>     INNER JOIN pg_stat_all_tables t
>> on l.relation = t.relid
>> INNER JOIN pg_stat_activity as p
>> on l.pid = p.procpid ;
>>
>> No luck. At present, the db is working, but it is heading towards wraparound.
>>
>> On Fri, Feb 12, 2016 at 3:28 PM, Adrian Klaver
>> mailto:adrian.kla...@aklaver.com>> wrote:
>>
>> On 02/12/2016 03:10 PM, AI Rumman wrote:
>>
>> I checked, and I did not find any long-running sql or any
>> open
>> transaction. Not even in pg_prepared_xacts.
>> And it looks like pg_catalog is raising the alarm.
>>
>> Any other ideas, please, on where I need to look?
>>
>>
>> Should have added:
>>
>> select * from pg_database
>>
>>
>> Thanks.
>>
>>
>> On Fri, Feb 12, 2016 at 3:05 PM, Adrian Klaver
>> > <mailto:adrian.kla...@aklaver.com>
>> <mailto:adrian.kla...@aklaver.com
>>
>> <mailto:adrian.kla...@aklaver.com>>> wrote:
>>
>>  On 02/12/2016 02:56 PM, AI Rumman wrote:
>>
>>  Hi,
>>
>>  

[GENERAL] 8.1 Table partition and getting error

2010-10-07 Thread AI Rumman
I am using Postgresql 8.1.
I create table partition as follows:
alter table crm rename to crm_bak;

CREATE TABLE crm
(
  crmid integer NOT NULL,
  description text,
  deleted integer NOT NULL DEFAULT 0
)
WITHOUT OIDS;
ALTER TABLE crm OWNER TO vcrm;


create table crm_deleted ( check ( deleted = 1 ) ) inherits (crm);
create table crm_active ( check ( deleted = 0 ) ) inherits (crm);


create index crm_deleted_idx on crm_active(deleted);
analyze crm_active;

CREATE OR REPLACE FUNCTION crm_insert_p()
RETURNS TRIGGER AS $$
BEGIN
  IF ( NEW.deleted = 0 ) THEN
  INSERT INTO crm_active VALUES (NEW.*);
  ELSE
  INSERT INTO crm_deleted VALUES (NEW.*);
  END IF;
  RETURN NULL;
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER crm_insert_t
 BEFORE INSERT ON crm
 FOR EACH ROW EXECUTE PROCEDURE crm_insert_p();


CREATE OR REPLACE FUNCTION crm_update_deleted_p()
RETURNS TRIGGER AS $$
  BEGIN
  IF (NEW.deleted = 1) THEN
  INSERT INTO crm_deleted VALUES (NEW.*);
  DELETE FROM crm_active WHERE crmid = NEW.crmid;
  ELSE
  RETURN (NEW.*);
  END IF;
  END;
$$
LANGUAGE plpgsql;


CREATE TRIGGER crm_update_t
  BEFORE UPDATE ON crm
  FOR EACH ROW EXECUTE PROCEDURE crm_update_deleted_p();

INSERT INTO crm
SELECT * FROM crm_bak;

select count(*) from crm;

select count(*) from crm_active;

select count(*) from crm_deleted;

set constraint_exclusion = on;

--
It works fine.
But when I run the following sql, I get an error:
update crm set deleted = 1 where crmid = 3;
ERROR: new row for relation "crm_active" violates check constraint
"crm_active_deleted_check"

Any idea please.


[GENERAL] VACUUM FULL for performance

2010-10-07 Thread AI Rumman
May VACUUM FULL on a table improve performance of the system?


[GENERAL] How to find which query are doing seq_scan

2010-10-17 Thread AI Rumman
I am using Postgresql 8.1.
I need to know which queries are executing seq scans on tables, as the
statistics said there were 4 seq scans on the tables over the last 2 days.
Any idea please.
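
For what it's worth, the per-table counters are easy to inspect (a sketch;
this shows which tables were seq-scanned, not which queries did it):

SELECT relname, seq_scan, seq_tup_read
FROM pg_stat_user_tables
ORDER BY seq_scan DESC;

Tying the scans back to specific queries on 8.1 generally means logging the
statements (e.g. log_min_duration_statement) and checking their plans with
EXPLAIN.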


[GENERAL] server gets slow at certain time

2010-10-29 Thread AI Rumman
I am using Postgresql 8.1 on 8 cpu cores with 32 GB RAM on CentOS 5.
I have different DBs in the system; the largest one is 12 GB, and it is a
heavily used CRM DB.
Every day after 1 pm the system gets slow, and after analyzing the nmon
spreadsheet, I found IO spikes in that period.
Users write a lot of data during that period.
DB parameters are as follows:
max_connections= 500
shared_buffers=122144
effective_cache_size = 1835010
work_mem = 8192
max_fsm_pages= 356656
fsync=on
commit_delay=0
commit_siblings= 5
random_page_cost = 0.01
default_statistics_target=100
wal_buffers = 500

Any idea please how to find the bottleneck and improve performance.
Let me know if any more information is needed.


Re: [GENERAL] server gets slow at certain time

2010-10-29 Thread AI Rumman
At present,
checkpoint_segments=3
checkpoint_timeout=300

On 10/29/10, Thom Brown  wrote:
> On 29 October 2010 13:24, AI Rumman  wrote:
>
>> I am using Postgresql 8.1 on 8 cpu cores with 32 GB RAM on CentOS 5.
>> I have different DBs in the system; the largest one is 12 GB, and it is a
>> heavily used CRM DB.
>> Every day after 1 pm the system gets slow, and after analyzing the nmon
>> spreadsheet, I found IO spikes in that period.
>> Users write a lot of data during that period.
>> DB parameters are as follows:
>> max_connections= 500
>> shared_buffers=122144
>> effective_cache_size = 1835010
>> work_mem = 8192
>> max_fsm_pages= 356656
>> fsync=on
>> commit_delay=0
>> commit_siblings= 5
>> random_page_cost = 0.01
>> default_statistics_target=100
>> wal_buffers = 500
>>
>> Any idea please how to find the bottleneck and improve performance.
>> Let me know if any more information is needed.
>>
>>
> What's your checkpoint_segments setting?  You may wish to up it if you're
> getting many inserts/updates.
>
> --
> Thom Brown
> Twitter: @darkixion
> IRC (freenode): dark_ixion
> Registered Linux user: #516935
>
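
For context, on 8.1 the main checkpoint knobs are checkpoint_segments and
checkpoint_timeout (a sketch only; the right values depend on the write
load):

# postgresql.conf
checkpoint_segments = 16   # default is 3
checkpoint_timeout = 900   # seconds; default is 300

Raising checkpoint_segments spaces checkpoints further apart, which smooths
the I/O spikes at the cost of a longer crash recovery.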



[GENERAL] max_fsm_pages increase

2010-10-31 Thread AI Rumman
I am using Postgresql 8.1, and during the vacuum at night I am getting the
following log:
number of page slots needed (2520048) exceeds max_fsm_pages (356656)
Do I need to increase max_fsm_pages to 2520048? Does it have any bad effect?


[GENERAL] index in desc order

2010-11-02 Thread AI Rumman
Is it possible to create an index in descending order?


Re: [GENERAL] index in desc order

2010-11-02 Thread AI Rumman
But I am using Postgresql 8.1. Is it possible here?

On Tue, Nov 2, 2010 at 3:42 PM, Szymon Guz  wrote:

>
>
> On 2 November 2010 10:36, AI Rumman  wrote:
>
>> Is it possible to create an index in descending order?
>>
>
> yes...
>
> create index i on t(i desc);
>
>
> regards
> Szymon
>


[GENERAL] migrate from 8.1 to 9.0

2010-11-08 Thread AI Rumman
I am going to migrate my production DB from Postgresql 8.1 to 9.0.1.
Could anyone please tell me what important things I have to look out for in
this migration?
Thanking you all.
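
For reference, the usual dump-and-restore route (a sketch only; host names
and file names are placeholders) uses the new version's pg_dump against the
old server:

/usr/pgsql-9.0/bin/pg_dump -h old-host -Fc mydb -f mydb.dump
/usr/pgsql-9.0/bin/pg_restore -h new-host -d mydb -j 4 mydb.dump

pg_restore's -j option (parallel restore) can shorten the outage on a large
database considerably.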


[GENERAL] REINDEX requirement?

2010-11-09 Thread AI Rumman
How do I know when an index requires a REINDEX?


[GENERAL] good settings for DB parameters such as shared_buffers, checkpoint_segment in Postrgesql 9

2010-11-11 Thread AI Rumman
I am going to install Postgresql 9.0 for my running application, which is
currently on 8.1.
My Db size is 3 GB.
Server Specification:
  dual-core 4 cpu
  RAM: 32 GB
  OS: Centos
What will be good settings for DB parameters such as shared_buffers,
checkpoint_segments, etc.?
Any help please.
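
Common starting points for that class of hardware look like this (a sketch
only; these are rules of thumb, not values measured on this system):

shared_buffers = 8GB          # roughly 25% of RAM
effective_cache_size = 24GB   # roughly 75% of RAM
checkpoint_segments = 32
wal_buffers = 16MB
work_mem = 16MB               # per sort/hash, so mind max_connections
maintenance_work_mem = 1GB

From there, adjust based on the observed workload.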


[GENERAL] POstgresql 9.0.1 FTS NOTICE: word is too long to be indexed

2010-11-15 Thread AI Rumman
I am getting this notice in Postgresql 9.0.1:
select description ,to_tsvector(description) from crm where crmid = 1;
NOTICE: word is too long to be indexed
DETAIL: Words longer than 2047 characters are ignored.

Any idea please how to solve it?


[GENERAL] FTS is taking "<RMAN is not a DBA>" as an XML tag

2010-11-24 Thread AI Rumman
I found that FTS is taking '<RMAN is not a DBA>' as a single XML tag token:

select to_tsvector('english', '<RMAN is not a DBA>') as c;
 c
---

(1 row)


 select * from ts_debug('english', '<RMAN is not a DBA>') as c ;
 alias | description |        token        | dictionaries | dictionary | lexemes
---+-+--+--++-
 tag | XML tag | <RMAN is not a DBA> | {} | |
I need to parse the statement "RMAN is not a DBA".
How can I do this?
Any idea please.


[GENERAL] between SIMPLE and ENGLISH FULL TEXT Configuration in Postgresql 9.0.1.?

2010-11-28 Thread AI Rumman
What is the difference between the SIMPLE and ENGLISH full text search
configurations in Postgresql 9.0.1?
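
A quick way to see the difference (a sketch):

SELECT to_tsvector('simple', 'The cats are running');
-- 'are':3 'cats':2 'running':4 'the':1   (lowercased and split only)

SELECT to_tsvector('english', 'The cats are running');
-- 'cat':2 'run':4                        (stop words removed, stems applied)

The 'simple' configuration only lowercases and splits tokens, while
'english' applies a stop-word list and Snowball stemming.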


[GENERAL] Full Text Partial Match at begining

2010-11-30 Thread AI Rumman
Is it possible to match '%text' in Postgresql 9 full text search?
select to_tsvector('english','this is advantage') @@ to_tsquery('tage');
f
I need to get a true result for this type of matching.
Any idea please.


[GENERAL] Which query is good - IN or OR

2010-12-09 Thread AI Rumman
I can write a simple query in either of the following two ways:

1.
Select col1
from table
where col2 in ('A','B');

2.
Select col1
from table
where
 col2 = 'A'
or  col2 = 'B'

Here the IN list may contain more than two values.
I need to know which one gives better performance.

Any idea please.


[GENERAL] postgresql 9 devel rpm

2010-12-13 Thread AI Rumman
Can anyone please tell me where I can get the postgresql 9 devel rpms?


[GENERAL] pg_statsinfo problem

2010-12-20 Thread AI Rumman
Does anyone use pg_statsinfo with Postgresql 9.0.1? Are there any similar
tools to collect DB snapshots?

I am facing problem during pg_statsinfo configuration.
I set as follows:
shared_preload_libraries = 'pg_statsinfo,pg_stat_statements'
custom_variable_classes = 'pg_statsinfo'

But my DB is not starting.


[GENERAL] Postgresql 9 connection problem

2010-12-26 Thread AI Rumman
I have a very strange problem with my application: on occasion I get the
following error when connecting to Postgres. This happens without any
pattern, often when I run some extensive tests in my application.

Unable to connect to PostgreSQL server: server closed the connection
unexpectedly This probably means the server terminated abnormally before or
while processing the request.

The Postgres logs show no abnormalities. Moreover, I can easily connect via
the psql console and execute queries at that time.

My Postgres is 9.0.1, running on CentOS 5.

Any hints ?


[GENERAL] is it good to disable autovacuum and schedule routine vacuum process?

2011-01-03 Thread AI Rumman
I am using Postgresql 9.0.1.
I want to know which one is better regarding VACUUM - routine manual VACUUM
or autovacuum.
Recently, I found in my production system that some tables were not vacuumed
for a good period of time while autovacuum was enabled, and the DB was slow.
I vacuumed the DB manually and the performance was good.
Can anyone explain why this happened, and is it good to disable
autovacuum and schedule a routine vacuum process?


[GENERAL] What is pg_toast_temp_* in my DB?

2011-01-04 Thread AI Rumman
What is pg_toast_temp_* in my DB?


[GENERAL] cast problem in Postgresql 9.0.1

2011-01-31 Thread AI Rumman
I migrated my DB from Postgresql 8.1 to Postgresql 9.0.1.

I have a table "testtab"
\d testtab
id int,
hours varchar

When I execute the following:
select sum(hours) from testtab
I get a cast error.

Then,

I created following IMPLICIT CAST functions in my DB =>

CREATE FUNCTION pg_catalog.integer(varchar) RETURNS int4 STRICT IMMUTABLE
LANGUAGE SQL AS 'SELECT int4in(varcharout($1));';
CREATE CAST (varchar AS integer) WITH FUNCTION pg_catalog.integer(varchar)
AS IMPLICIT;

CREATE FUNCTION pg_catalog.smallint(varchar) RETURNS smallint STRICT
IMMUTABLE LANGUAGE SQL AS 'SELECT int2in(varcharout($1));';
CREATE CAST (varchar AS smallint) WITH FUNCTION pg_catalog.smallint(varchar)
AS IMPLICIT;

Now, the above query works, but
SELECT COALESCE(hours,0) from testtab
fails.

Any idea why?


[GENERAL] index for ilike operation

2011-02-07 Thread AI Rumman
I found this in the Postgresql 9.0.3 documentation:

"It is also possible to use B-tree indexes for ILIKE and ~*, but only if the
pattern starts with non-alphabetic characters, i.e., characters that are not
affected by upper/lower case conversion."

Can anyone please tell me how to configure that?
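
A common way to get index support for case-insensitive, left-anchored
searches (a sketch; the table and column names are made up) is an expression
index on lower():

CREATE INDEX t_col_lower_idx ON t (lower(col) text_pattern_ops);

-- rewritten to be case-insensitive yet still left-anchored:
SELECT * FROM t WHERE lower(col) LIKE 'abc%';

text_pattern_ops makes the B-tree usable for LIKE comparisons regardless of
collation; the pattern must still be anchored at the start.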


[GENERAL] <>(not equal to) in where clause can use index

2011-02-08 Thread AI Rumman
Is it possible to add some operator class in Postgresql 9 so that <> (not
equal to) in a where clause can use an index?

If yes how?

Any idea please.


[GENERAL] Full text to_tsquery is giving error

2011-02-09 Thread AI Rumman
I am getting the following results:

select * from ts_debug('Rumman (AI)');

   alias   |   description   | token  |dictionaries|  dictionary  |
lexemes
---+-+++--+--
 asciiword | Word, all ASCII | Rumman | {syn,english_stem} | english_stem |
{rumman}
 blank | Space symbols   |  ( | {} |  |
 asciiword | Word, all ASCII | AI | {syn,english_stem} | english_stem |
{ai}
 blank | Space symbols   | )  | {} |  |
(4 rows)

But,

select to_tsquery('Rumman (AI)');

ERROR:  syntax error in tsquery: "Rumman (AI)"

I don't know why this is happening.
Any idea please.


[GENERAL] question regarding full_page_writes

2011-02-15 Thread AI Rumman
I can't clearly understand what the full_page_writes parameter stands for.
The documentation suggests that if I turn it off, I risk database corruption
after a crash.
Can anyone please tell me how that could happen?


[GENERAL] why does the toast table exist?

2011-02-16 Thread AI Rumman
I found the following in my Postgresql 9.0.1 DB:

select oid,relname,reltoastrelid,relpages,relfilenode,reltuples
from pg_class
where oid in ( 90662,90665);

-[ RECORD 1 ]-+---
oid   | 90662
relname   | audit_trial
reltoastrelid | 90665
relpages  | 7713
relfilenode   | 373748
reltuples | 930648
-[ RECORD 2 ]-+---
oid   | 90665
relname   | pg_toast_90662
reltoastrelid | 0
relpages  | 0
relfilenode   | 373751
reltuples | 0

I have no idea why the TOAST table exists for the audit_trial table.

 \d audit_trial
  Table "public.audit_trial"
   Column   |Type | Modifiers
+-+---
 auditid| integer | not null
 userid | integer |
 module | character varying(255)  |
 action | character varying(255)  |
 recordid   | character varying(20)   |
 actiondate | timestamp without time zone |
Indexes:
"audit_trial_pkey" PRIMARY KEY, btree (auditid)
"audit_trial_action_idx" btree (action)
"audit_trial_actiondate_desc_idx" btree (actiondate DESC) CLUSTER
"audit_trial_module_idx" btree (module)
"audit_trial_userid_idx" btree (userid)*
*


[GENERAL] ERROR: invalid byte sequence for encoding "UTF8": 0xc35c

2011-02-27 Thread AI Rumman
I am getting error in Postgresql 9.0.1.

update import_details_test
set data_row = '["4","1 Monor JoÃ\u083ão S. AntÃ\u0083ão (Schools
21,22,76)( Ru)","http://www.asdas.aa.nj.us","","908 436
4861","","","--None--","",","","--None--","","","0","","--None--","0","2008-12-29
17:53:08","","2010-08-23 14:58:56","0","Vantage Learning-OTS","
hsm...@vantage.com","","","","","","","","","","","","","","0","","","","","","","0","","300
S BROAD
ST","","","","ELIZABETH","","NJ","","07208","","","","","UNION","0","School","0","Tier
II","","0","","","","","ELIZABETH CITY","Elementary
School","Public","500","0","0","0","Suburban","","--None--","","","","0","","","","","","","","","","","","","--None--","0","","","","","","","","","","","","","","","","0","0","","0","0","0","--None--","--None--","","","","","0","","","0","","0","","","","0","","","","","","","","","","","","","","","","","","","","","0","","","","","","","","",""]'
WHERE import_queue_id = '35' AND record_num = 181;
ERROR:  invalid byte sequence for encoding "UTF8": 0xc35c

Any idea please.


[GENERAL] index bloat query understand

2011-03-27 Thread AI Rumman
I got a query to calculate index bloat as follows:

http://pgsql.tapoueh.org/site/html/news/20080131.bloat.html

---
CREATE OR REPLACE VIEW bloat AS
  SELECT
schemaname, tablename, reltuples::bigint, relpages::bigint, otta,
ROUND(CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages/otta::numeric
END,1) AS tbloat,
relpages::bigint - otta AS wastedpages,
bs*(sml.relpages-otta)::bigint AS wastedbytes,
pg_size_pretty((bs*(relpages-otta))::bigint) AS wastedsize,
iname, ituples::bigint, ipages::bigint, iotta,
ROUND(CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE
ipages/iotta::numeric END,1) AS ibloat,
CASE WHEN ipages < iotta THEN 0 ELSE ipages::bigint - iotta END AS
wastedipages,
CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS
wastedibytes,
CASE WHEN ipages < iotta THEN pg_size_pretty(0) ELSE
pg_size_pretty((bs*(ipages-iotta))::bigint) END AS wastedisize
  FROM (
SELECT
  schemaname, tablename, cc.reltuples, cc.relpages, bs,
  CEIL((cc.reltuples*((datahdr+ma-
(CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma
END))+nullhdr2+4))/(bs-20::float)) AS otta,
  COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS
ituples, COALESCE(c2.relpages,0) AS ipages,
  COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS
iotta -- very rough approximation, assumes all cols
FROM (
  SELECT
ma,bs,schemaname,tablename,
(datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma
END)))::numeric AS datahdr,
(maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE
nullhdr%ma END))) AS nullhdr2
  FROM (
SELECT
  schemaname, tablename, hdr, ma, bs,
  SUM((1-null_frac)*avg_width) AS datawidth,
  MAX(null_frac) AS maxfracsum,
  hdr+(
SELECT 1+count(*)/8
FROM pg_stats s2
WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND
s2.tablename = s.tablename
  ) AS nullhdr
FROM pg_stats s, (
  SELECT
(SELECT current_setting('block_size')::numeric) AS bs,
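-- hdr below: heap tuple header size assumed by this query (27 on 8.0-8.2, 23 later)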
CASE WHEN substring(v,12,3) IN ('8.0','8.1','8.2') THEN 27
ELSE 23 END AS hdr,
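-- ma below: maximum data alignment assumed by this query (8 on mingw32 builds, else 4)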
CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma
  FROM (SELECT version() AS v) AS foo
) AS constants
GROUP BY 1,2,3,4,5
  ) AS foo
) AS rs
JOIN pg_class cc ON cc.relname = rs.tablename
JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname =
rs.schemaname
LEFT JOIN pg_index i ON indrelid = cc.oid
LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid
  ) AS sml
  WHERE sml.relpages - otta > 0 OR ipages - iotta > 10
  ORDER BY wastedbytes DESC, wastedibytes DESC;


--

Could anyone please explain to me the meaning of the following expressions
in the query:


# ma -  CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma
# CASE WHEN substring(v,12,3) IN ('8.0','8.1','8.2') THEN 27 ELSE 23 END AS
hdr
# COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta
# (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma
END))+nullhdr2+4))/(bs-20::float)) AS otta


[GENERAL] pg_connect connection problem

2011-04-12 Thread AI Rumman
I am connecting to Postgresql 9 from my php application using pg_connect.
After 30 concurrent connections from a single host, I am getting a database
connection error in my app.
Does anyone have any idea why this problem is occurring?


[GENERAL] Lock during insert statement

2014-05-21 Thread AI Rumman
Could anyone please tell me why my system is waiting to get a lock for an
INSERT statement?

2014-05-21 07:52:49.965 PDT [9-1]LOG:  process 31407 acquired ExclusiveLock
on extension of relation 429298276 of database 21497 after 3219.963 ms
2014-05-21 07:52:49.965 PDT [10-1]STATEMENT:  INSERT INTO table1
(end_id,account_id,create_time,event_type,details) VALUES($1,$2,$3,$4,$5)
2014-05-21 07:52:49.965 PDT [3-1]LOG:  duration: 4590.048 ms  execute
: INSERT INTO table1
(end_id,account_id,create_time,event_type,details) VALUES($1,$2,$3,$4,$5)


Thanks.


Re: [GENERAL] Lock during insert statement

2014-05-21 Thread AI Rumman
Got it.
Thanks.
Are there any special parameters to tune for it, like wal_buffers or shared_buffers?


On Wed, May 21, 2014 at 3:28 PM, Jeff Janes  wrote:

> On Wed, May 21, 2014 at 3:14 PM, AI Rumman  wrote:
>
>> Could anyone please tell me why my system is waiting to get a lock for an
>> INSERT statement?
>>
>> 2014-05-21 07:52:49.965 PDT [9-1]LOG:  process 31407 acquired
>> ExclusiveLock on extension of relation 429298276 of database 21497 after
>> 3219.963 ms
>>
>
> "on extension of relation" means that it needs to add 8KB to the end of
> the table.  That it takes so long to obtain that locks suggests you have a
> some serious IO congestion.
>
> Cheers,
>
> Jeff
>


[GENERAL] skipping analyze of "table1" --- lock not available?

2014-06-23 Thread AI Rumman
Could someone please tell me why I am getting these messages in my log:

2014-06-23 00:00:00.031 CDT [11379][@] : [1-1]LOG:  skipping analyze of
"table1" --- lock not available
2014-06-23 00:00:00.056 CDT [11380][@] : [1-1]LOG:  skipping analyze of
"table1" --- lock not available
2014-06-23 00:00:00.081 CDT [11381][@] : [1-1]LOG:  skipping analyze of
"table1" --- lock not available
2014-06-23 00:00:00.106 CDT [11382][@] : [1-1]LOG:  skipping analyze of
"table1" --- lock not available
2014-06-23 00:00:00.131 CDT [11383][@] : [1-1]LOG:  skipping analyze of
"table1" --- lock not available
2014-06-23 00:00:00.156 CDT [11384][@] : [1-1]LOG:  skipping analyze of
"table1" --- lock not available
2014-06-23 00:00:00.181 CDT [11385][@] : [1-1]LOG:  skipping analyze of
"table1" --- lock not available
2014-06-23 00:00:00.206 CDT [11386][@] : [1-1]LOG:  skipping analyze of
"table1" --- lock not available
2014-06-23 00:00:00.231 CDT [11387][@] : [1-1]LOG:  skipping analyze of
"table1" --- lock not available


Thanks.


[GENERAL] lock contention, need profiling idea

2014-06-30 Thread AI Rumman
I see lots of similar log messages at a certain time of day on Postgresql
9.1:

LOG:  process 18855 still waiting for ShareLock on transaction 2856146023
after 1001.209 ms
STATEMENT:  UPDATE table1 SET time = $1 WHERE id = $2

The table1 size is 17 G.

What could be the reason for this lock contention?
autovacuum?

Please give some idea.

Thanks.


Re: [GENERAL] lock contention, need profiling idea

2014-07-01 Thread AI Rumman
There was no CREATE INDEX command running on the host.


On Mon, Jun 30, 2014 at 5:06 PM, Michael Paquier 
wrote:

>
>
>
> On Tue, Jul 1, 2014 at 7:36 AM, AI Rumman  wrote:
>
>> I see lots of similar log messages at a certain time of day on
>> Postgresql 9.1:
>>
>> LOG:  process 18855 still waiting for ShareLock on transaction 2856146023
>> after 1001.209 ms
>> STATEMENT:  UPDATE table1 SET time = $1 WHERE id = $2
>>
>> The table1 size is 17 G.
>>
>> What could be the reason for this lock contention?
>> autovacuum?
>>
> This may be a CREATE INDEX query taking some time, perhaps combined with
> an old prepared transaction still holding a lock? Perhaps a cron job
> running behind that you are not aware of?
> You should have a look at pg_stat_activity, pg_prepared_xacts and pg_locks
> to get more information about the transactions running and the locks being
> taken.
> --
> Michael
>
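
For reference, a query along those lines for spotting transactionid waits (a
sketch for 9.1, where pg_stat_activity still has procpid and current_query):

SELECT blocked.pid AS blocked_pid,
       blocker.pid AS blocker_pid,
       a.current_query AS blocker_query
FROM pg_locks blocked
JOIN pg_locks blocker
  ON blocker.locktype = 'transactionid'
 AND blocker.transactionid = blocked.transactionid
 AND blocker.granted
JOIN pg_stat_activity a ON a.procpid = blocker.pid
WHERE blocked.locktype = 'transactionid'
  AND NOT blocked.granted;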


[GENERAL] What is lock type transactionid?

2014-07-17 Thread AI Rumman
Hi,

I have been facing lock contention in my Postgresql 9.1 DB.
And when I am querying in the pg_locks table I found a lock type with
transactionid.
Could someone please tell me what it means?

Thanks.


Re: [GENERAL] Whats is lock type transactionid?

2014-07-17 Thread AI Rumman
I am experiencing lock contention on one single UPDATE statement at a
certain time each day. The table being updated is small.
My suspicion is that we are hitting it for one specific ID.
Could you please let me know how I can identify the tuple?

I got a log like follows:
  blocker_target   | blocker_pid | blocker_mode  | depth |
 target   | pid  | mode  |seq
---+-+---+---+---+--+---+---
 (tuple,475999,662775,1988,6,) |3557 | ExclusiveLock | 1 |
(tuple,475999,662775,1988,6,) | 3543 | ExclusiveLock | 3557,3543
 (tuple,475999,662775,1988,6,) |3557 | ExclusiveLock | 1 |
(tuple,475999,662775,1988,6,) | 7387 | ExclusiveLock | 3557,7387

Any ideas on it?

Thanks.


On Thu, Jul 17, 2014 at 12:40 PM, Douglas J Hunley 
wrote:

> On Thu, Jul 17, 2014 at 3:34 PM, AI Rumman  wrote:
>
>> Hi,
>>
>> I have been facing lock contention in my Postgresql 9.1 DB.
>> And when I am querying in the pg_locks table I found a lock type with
>> transactionid.
>> Could someone please tell me what it means?
>>
>> Thanks.
>>
>
> from http://www.postgresql.org/docs/9.3/static/view-pg-locks.html :
> Every transaction holds an exclusive lock on its virtual transaction ID
> for its entire duration. If a permanent ID is assigned to the transaction
> (which normally happens only if the transaction changes the state of the
> database), it also holds an exclusive lock on its permanent transaction ID
> until it ends. When one transaction finds it necessary to wait specifically
> for another transaction, it does so by attempting to acquire share lock on
> the other transaction ID (either virtual or permanent ID depending on the
> situation). That will succeed only when the other transaction terminates
> and releases its locks.
>
> I believe that describes what you're seeing
>
> --
> Douglas J Hunley (doug.hun...@gmail.com)
>
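For the tuple locks shown above, the blocker_target can be read as
(locktype, database, relation, page, tuple), i.e. the row lives in relation
662775 of database 475999 at ctid (1988,6). A sketch of resolving it,
assuming a connection to that database (the table name is whatever the first
query returns):

SELECT 662775::regclass;   -- which table is relation 662775?

-- then, against that table (name hypothetical):
SELECT * FROM resolved_table WHERE ctid = '(1988,6)';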


Re: [GENERAL] Whats is lock type transactionid?

2014-07-17 Thread AI Rumman
Yes. But as we are using bind variables, we are not able to get the ID of
the tuple.


On Thu, Jul 17, 2014 at 2:08 PM, Douglas J Hunley 
wrote:

>
> On Thu, Jul 17, 2014 at 12:54 PM, AI Rumman  wrote:
>
>> I am experiencing lock contention on one single UPDATE statement at a
>> certain time in whole day. This is a small table to UPDATE.
>> My suspect is we are facing it for one specific ID.
>> Could you please let me know how can I identify the tuple.
>>
>
> Have you tried the lock monitoring queries on
> http://wiki.postgresql.org/wiki/Lock_Monitoring yet by chance?
>
>
>
> --
> Douglas J Hunley (doug.hun...@gmail.com)
>


[GENERAL] not finding rows using ctid

2014-08-07 Thread AI Rumman
Hi,

I am getting the logs as follows:

LOG:  process 32145 acquired ExclusiveLock on tuple (153420,5) of relation
663326 of database 475999 after 1123.028 ms


But when I execute SQL to find the row in that table using ctid =
'(153420,5)', I get no rows.


Any idea, why?


Thanks.


Re: [GENERAL] not finding rows using ctid

2014-08-07 Thread AI Rumman
I didn't execute any VACUUM FULL, and I tried to fetch the row 3 hours
after the issue.

Thanks.


On Thu, Aug 7, 2014 at 1:51 PM, Adrian Klaver 
wrote:

> On 08/07/2014 12:40 PM, AI Rumman wrote:
>
>>
>> Hi,
>>
>> I am getting the logs as follows:
>>
>> LOG:  process 32145 acquired ExclusiveLock on tuple (153420,5) of
>> relation 663326 of database 475999 after 1123.028 ms
>>
>>
>> But, when I am executing sqls to find the row on that table using the
>> ctid = '(153420,5)', I get no rows.
>>
>>
>> Any idea, why?
>>
>
> http://www.postgresql.org/docs/9.3/static/ddl-system-columns.html
> "ctid
>
> The physical location of the row version within its table. Note that
> although the ctid can be used to locate the row version very quickly, a
> row's ctid will change if it is updated or moved by VACUUM FULL. Therefore
> ctid is useless as a long-term row identifier. The OID, or even better a
> user-defined serial number, should be used to identify logical rows."
>
> Something changed the row between the time you saw it in the log and the
> time you did the query.
>
>
>>
>> Thanks.
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


[GENERAL] pgcluu error

2014-09-15 Thread AI Rumman
Hi,

I am trying to use pgcluu with collected stats and got the error:
Can't call method "print" on an undefined value at
/opt/pgdata/pgcluu_prod/pgcluubin/pgcluu line 5494

Does anyone have an idea?

Thanks.


[GENERAL] what is parse ?

2015-01-29 Thread AI Rumman
Hi All,

I am seeing some slow SQL statements in my database, as follows:

2015-01-29 18:57:19.777 CST [29024][user@user] 10.6.48.226(59246): [1-1]
LOG:  duration: 3409.729 ms  parse :
2015-01-29 18:57:19.782 CST [29140][user@user] 10.6.48.227(36662): [1-1]
LOG:  duration: 3468.549 ms  parse :
2015-01-29 18:57:19.785 CST [29512][user@db] 10.6.48.227(37110): [1-1] LOG:
 duration: 3505.666 ms  parse :
2015-01-29 18:57:19.799 CST [28340][user@user] 10.6.48.224(36070): [1-1]
LOG:  duration: 3551.065 ms  parse :
2015-01-29 18:57:19.803 CST [29678][user@db] 10.6.48.226(59748): [1-1] LOG:
 duration: 3557.675 ms  parse :
2015-01-29 18:57:19.814 CST [27185][user@db] 10.6.48.224(35502): [1-1] LOG:
 duration: 3575.141 ms  parse : INSERT INTO schema.table
(id,aac_id,create_time,event_type,details) VALUES($1,$2,$3,$4,$5)
2015-01-29 18:57:19.816 CST [29604][user@db] 10.6.48.226(59600): [1-1] LOG:
 duration: 3508.277 ms  parse :
2015-01-29 18:57:19.816 CST [27586][user@db] 10.6.48.225(38279): [1-1] LOG:
 duration: 3540.860 ms  parse : INSERT INTO schema.table
(id,aac_id,create_time,event_type,details) VALUES($1,$2,$3,$4,$5)
2015-01-29 18:57:19.826 CST [28996][user@db] 10.6.48.225(39696): [1-1] LOG:
 duration: 3589.108 ms  parse : SELECT id, location_ts, type,
aac_id, latitude, longitude, accuracy, source, create_time, modified_time
FROM schema.table2 WHERE aac_id = $1 AND type = 'E'


These queries run fine when I execute them separately.

Can you please let me know what is meant by "parse" in these log lines?

Thanks.


[GENERAL] increasing varchar column size is taking too much time

2015-02-13 Thread AI Rumman
Hi,

I started the following statement on PostgreSQL 9.1. It is the only SQL
running on the host, and it has been running for more than an hour without
finishing.

 alter table userdata.table1 alter column name type varchar(512);


Here is the table description:


> \d+ userdata.table1
>                                  Table "userdata.table1"
>        Column        |            Type             |       Modifiers        | Storage  | Description
> ---------------------+-----------------------------+------------------------+----------+-------------
>  id                  | character varying(50)       | not null               | extended |
>  code                | character varying(32)       | not null               | extended |
>  accid               | character varying(50)       | not null               | extended |
>  name                | character varying(100)      | not null               | extended |
>  create_time         | timestamp with time zone    | not null default now() | plain    |
>  modified_time       | timestamp with time zone    | not null default now() | plain    |
>  install_date        | timestamp without time zone |                        | plain    |
>  recent_scan_date    | timestamp without time zone |                        | plain    |
>  update_date         | timestamp without time zone |                        | plain    |
>  setting             | character varying(100)      |                        | extended |
>  name                | character varying(100)      |                        | extended |
>  type                | character varying(8)        |                        | extended |
>  version             | character varying(128)      |                        | extended |
>  package             | character varying(255)      |                        | extended |
>  permission          | text                        |                        | extended |
>  trigger             | character varying(10)       |                        | extended |
>  reasons             | character varying(200)      |                        | extended |
>  note                | character varying(255)      |                        | extended |
>  size                | bigint                      |                        | plain    |
>  usage               | bigint                      |                        | plain    |
>  running             | character varying(4)        |                        | extended |
>  location            | character varying(60)       |                        | extended |
>  can_stop            | character(1)                |                        | extended |
>  can_uninstall       | character(1)                |                        | extended |
>  flagged_status      | character(1)                |                        | extended |
>  status              | character(1)                |                        | extended |
>  consultation_status | character(1)                |                        | extended |
>  trust               | character(1)                |                        | extended |
> Indexes:
>     "table1_pk" PRIMARY KEY, btree (id, code)
>     "table1_accid_id_hashcode_idx" btree (accid, id, code)
>     "table1_accid_idx" btree (accid)
>     "table1_id_idx" btree (id)
> Triggers:
>     table1s_delete_trigger BEFORE DELETE ON table1 FOR EACH ROW EXECUTE PROCEDURE delete_jangles_table1()
>     table1s_insert_trigger BEFORE INSERT ON table1 FOR EACH ROW EXECUTE PROCEDURE insert_jangles_table1()
> Child tables: table1_0,
>               table1_1,
>               table1_10,
>               table1_2,
>               table1_3,
>               table1_4,
>               table1_5,
>               table1_6,
>               table1_7,
>               table1_8,
>               table1_9
> Has OIDs: no
>

Here are the numbers of rows and pages in each partition:



>   relname| reltuples | relpages
> ---+---+--
>  table1_0  |  10076840 |   362981
>  table1_1  |  10165073 |   366548
>  table1_2  |  10046372 |   361838
>  table1_3  |  10114727 |   364360
>  table1_4  |  10155816 |   366054
>  table1_5  |  10188953 |   367023
>  table1_6  |  10275270 |   370887
>  table1_7  |  10163937 |   366245
>  table1_8  |  10262516 |   369350
>  table1_9  |  10359893 |   372099
>  table1_10 |  10434026 |   375327
>  table1| 0 |0



Any idea why the above ALTER statement is taking this much time?
Is it because of the number of rows we have in each partition?
Any suggestions?
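For what it's worth, on 9.1 widening a varchar column forces a full rewrite
of the table and its indexes, and with inheritance every child is rewritten
too, so ~110 million rows can easily take this long; PostgreSQL 9.2 made
this particular change catalog-only. A sketch of how to confirm whether a
rewrite happened (relfilenode changes when the heap is rewritten):

SELECT relname, relfilenode FROM pg_class WHERE relname LIKE 'table1%';
-- run the ALTER, then repeat the query; changed relfilenodes mean rewrites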


[GENERAL] postgresql 9.2 build error

2013-01-13 Thread AI Rumman
  I am trying to build Postgresql 9.2

  ./configure --prefix=/usr/pgsql-9.2  --with-ossp-uuid --with-libxml

  I got this error in config.log:

  configure:9747: result: no
configure:9752: checking for uuid_export in -luuid
configure:9787: gcc -o conftest -O2 -Wall -Wmissing-prototypes
-Wpointer-arith -Wdeclaration-after-statement -Wendif-labels
-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing
-fwrapv -I/usr/local/include -D_GNU_SOURCE
-I/usr/include/libxml2  -L/usr/lib64  conftest.c -luuid  -lxml2 -lz
-lreadline -ltermcap -lcrypt -ldl -lm  >&5
/usr/bin/ld: cannot find -luuid
collect2: ld returned 1 exit status
configure:9794: $? = 1

 What should I do?
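The linker cannot find the OSSP uuid library that --with-ossp-uuid needs. A
sketch of the usual fix, assuming a Red Hat style system (the package name
varies by distribution):

yum install uuid-devel   # OSSP uuid headers and library, then re-run configure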


[GENERAL] pg_upgrade failed for 9.0 to 9.2

2013-01-13 Thread AI Rumman
Hi,

When I was upgrading database from 9.0 to 9.2 using pg_upgrade, I got the
error:

CREATE VIEW stats_slowest_queries AS
SELECT pg_stat_activity.procpid, (('now'::text)::timestamp(6) with time
zone - pg_stat_activity.query_start) AS execution_time,
pg_stat_activity.current_query FROM pg_stat_activity WHERE
(pg_stat_activity.current_query !~~ '
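The dump is cut off above, but a likely cause is that the view references
pg_stat_activity.procpid and current_query, which 9.2 renamed to pid and
query. A sketch of the usual workaround, assuming that is indeed the
problem: drop the view before the upgrade and recreate it afterwards with
the new column names:

CREATE VIEW stats_slowest_queries AS
SELECT pid,
       now() - query_start AS execution_time,
       query
FROM pg_stat_activity
WHERE state = 'active';  -- 9.2 replaced the old '<IDLE>' marker with a state column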

[GENERAL] Query Parallelizing with dblink

2013-01-30 Thread AI Rumman
Hi All,

I came up with an idea to utilize multiple cores for a PostgreSQL query,
using dblink to achieve it.
You may find it:
http://www.rummandba.com/2013/01/postgresql-query-parallelizing-with.html


It is just a basic idea, and it works for my data model, but I am sharing
it with you all so that you can add more good ideas.

-- Added the document here
--

I have three tables with almost 10 million records each. The tables
are:

customer

account

tickets

The "customer" table holds records for all kinds of customers, which are
related to accounts or tickets.

We need to generate a report of each customer and its related accounts or
tickets.

The basic SQL looks like:

select *
from
(
    select c.custid,
           case when a.accountid is not null then a.accountid
                when t.ticketid  is not null then t.ticketid
           end as relatedid
    from customer as c
    left join account as a on c.custid = a.accountid and type = 'Accounts'
    left join tickets as t on c.custid = t.ticketid and type = 'HelpDesk'
    where c.deleted = 0
) as q
where relatedid is not null


I have all the required indexes, but the query was taking too much time.

One of the bottlenecks I always feel with PostgreSQL is its lack of query
parallelism. The good news is that the developers are working on it.

However, I have to improve the query performance right now. So I made a
plan to divide the query into two parts, execute each part asynchronously,
and then collect the results.


To achieve this, I wrote the function qry_parallel_fn. It creates two
separate dblink connections, conn1 and conn2, and executes two different
queries in async mode.

A loop checks whether both connections have completed their work; once
they have, the function returns the combined results.


CREATE OR REPLACE FUNCTION qry_parallel_fn() RETURNS SETOF RECORD AS $$
DECLARE
    v_qry1 text;
    v_qry2 text;
    -- dblink_get_result() blocks until the async query on the named
    -- connection has finished, then returns its rows
    cur1 CURSOR IS
        SELECT * FROM dblink_get_result('conn1') AS t1(custid int, relatedid int);
    cur2 CURSOR IS
        SELECT * FROM dblink_get_result('conn2') AS t1(custid int, relatedid int);
    v_closed smallint;
BEGIN
    v_qry1 := 'select custid, accountid as relatedid from customer c inner join account a on c.custid = a.accountid where c.deleted = 0';
    RAISE NOTICE 'vqry1 = %', v_qry1;
    v_qry2 := 'select custid, ticketid as relatedid from customer c inner join tickets as t on c.custid = t.ticketid where c.deleted = 0';

    PERFORM dblink_connect('conn1', 'dbname=rumman');
    PERFORM dblink_connect('conn2', 'dbname=rumman');

    -- fire both halves of the query asynchronously
    PERFORM dblink_send_query('conn1', v_qry1);
    PERFORM dblink_send_query('conn2', v_qry2);

    -- poll until both connections are idle; the counter must be reset on
    -- every pass, otherwise the loop can exit while one query is still busy
    LOOP
        v_closed := 0;
        IF check_conn_is_busy('conn1') = 0 THEN
            v_closed := v_closed + 1;
        END IF;
        IF check_conn_is_busy('conn2') = 0 THEN
            v_closed := v_closed + 1;
        END IF;
        EXIT WHEN v_closed = 2;
        PERFORM pg_sleep(0.01);  -- don't spin at 100% CPU while waiting
    END LOOP;

    -- collect and return the results from both connections
    FOR rec IN cur1 LOOP
        RETURN NEXT rec;
    END LOOP;
    FOR rec IN cur2 LOOP
        RETURN NEXT rec;
    END LOOP;

    PERFORM dblink_disconnect('conn1');
    PERFORM dblink_disconnect('conn2');
    RETURN;
END;
$$ LANGUAGE plpgsql;


-- Execute:
-- select * from qry_parallel_fn() as t1(c int, d int);
-- select count(*) from qry_parallel_fn() as t1(c int, d int);


CREATE OR REPLACE FUNCTION check_conn_is_busy(conn text) RETURNS int AS $$
DECLARE
    v int;
BEGIN
    -- dblink_is_busy() returns 1 while the async query is still running
    SELECT dblink_is_busy(conn) INTO v;
    RETURN v;
END;
$$ LANGUAGE plpgsql;


I monitored the server while it ran and saw that it was using two CPU
cores to get the result, which improved the query timing a bit.

---


Let me know your thoughts on it.


Thanks.


[GENERAL] Is it possible to migrate database from Postgresql 8.2 to 9.2 using pg_upgrade?

2013-02-04 Thread AI Rumman
Is it possible to migrate database from Postgresql 8.2 to 9.2 using
pg_upgrade?

Thanks.


[GENERAL] no implicit cast error in 9.2?

2013-02-08 Thread AI Rumman
Hi,

I am using
PostgreSQL 9.2.3 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6
20120305 (Red Hat 4.4.6-4), 64-bit.

I got a bit confused after installing this version. So far I understood
that implicit casting to text was removed in PostgreSQL 8.3, and the
following should not work on 8.3:
create table testtab (id varchar, id1 int);
insert into testtab values (1,1);

whereas it worked fine on 8.1:

http://osdir.com/ml/pgsql-general/2011-02/msg00055.html

I also faced the problem earlier.

But in 9.2 it is working perfectly.
Am I missing any release notes?

Please share your opinion. It will be appreciated.

Thanks.
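One plausible explanation: 8.3 removed the implicit casts used in expression
contexts, but an INSERT assigns to a column, and assignment casts (including
the automatic I/O conversion to string types) still apply there. A sketch of
the difference on 8.3 and later:

insert into testtab values (1, 1);   -- assignment context: still works

select trim(1);        -- expression context: ERROR ... btrim(integer) does not exist
select trim(1::text);  -- fine with an explicit cast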


[GENERAL] crosstab creating multiple rows for same id

2013-03-07 Thread AI Rumman
Hi,

I am using the crosstab function in Postgresql 9.0.
The function works fine when the number of distinct category values is
small. But when it grows to around 100 or more, crosstab creates multiple
rows for the same rowid.

Any idea about it?

Thanks.
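A common cause is the source rows not being sorted by the row id, which the
single-query form of crosstab requires; with many categories an unsorted
result makes the same rowid restart a new output row. A sketch of the usual
fix, using the two-argument form plus ORDER BY so every rowid sees the same
fixed category list (table and column names made up):

SELECT *
FROM crosstab(
    'SELECT rowid, category, value FROM src ORDER BY 1, 2',
    'SELECT DISTINCT category FROM src ORDER BY 1'
) AS ct(rowid int, cat_a text, cat_b text);  -- one output column per category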


Re: [GENERAL] How to join table to itself N times?

2013-03-20 Thread AI Rumman
On Wed, Mar 20, 2013 at 7:38 PM, W. Matthew Wilson  wrote:

> I got this table right now:
>
> select * from market_segment_dimension_values ;
> +--+---+
> | market_segment_dimension | value |
> +--+---+
> | geography| north |
> | geography| south |
> | industry type| retail|
> | industry type| manufacturing |
> | industry type| wholesale |
> +--+---+
> (5 rows)
>
> The PK is (market_segment_dimension, value).
>
> The dimension column refers to another table called
> market_segment_dimensions.
>
> So, "north" and "south" are to values for the "geography" dimension.
>
> In that data above, there are two dimensions.  But sometimes there could
> be just one dimension, or maybe three, ... up to ten.
>
> Now here's the part where I'm stumped.
>
> I need to create a cartesian product of the dimensions.
>
> I came up with this approach by hard-coding the different dimensions:
>
>  with geog as (
> select value
> from market_segment_dimension_values
> where market_segment_dimension = 'geography'),
>
> industry_type as (
> select value
> from market_segment_dimension_values
> where market_segment_dimension = 'industry type')
>
> select geog.value as g,
> industry_type.value as ind_type
> from geog
> cross join industry_type
> ;
> +---+---+
> |   g   |   ind_type|
> +---+---+
> | north | retail|
> | north | manufacturing |
> | north | wholesale |
> | south | retail|
> | south | manufacturing |
> | south | wholesale |
> +---+---+
> (6 rows)
>
> But that won't work if I add a new dimension (unless I update the query).
>  For example, maybe I need to add a new dimension called, say, customer
> size, which has values "big" and "small".  A
>
> I've got some nasty plan B solutions, but I want to know if there's some
> solution.
>
> There's a really elegant solution in python using itertools.product, like
> this:
>
> >>> list(itertools.product(*[['north', 'south'], ['retail',
> 'manufacturing', 'wholesale']]))
>
> [('north', 'retail'),
>  ('north', 'manufacturing'),
>  ('north', 'wholesale'),
>  ('south', 'retail'),
>  ('south', 'manufacturing'),
>  ('south', 'wholesale')]
>
> All advice is welcome.  Thanks in advance!
>
> Matt
>
>
>
> --
> W. Matthew Wilson
> m...@tplus1.com
> http://tplus1.com
>
>
You may try:

select a.value, b.value
from market_segment_dimension_values as a,
     market_segment_dimension_values as b
where a.market_segment_dimension < b.market_segment_dimension;

-- AI
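The pair query above does not scale past two dimensions, though. A sketch
that handles any number of dimensions by building the product with a
recursive CTE, concatenating each combination into one text column:

WITH RECURSIVE dims AS (
    SELECT market_segment_dimension AS dim,
           dense_rank() OVER (ORDER BY market_segment_dimension) AS rn
    FROM market_segment_dimension_values
    GROUP BY market_segment_dimension
),
combos AS (
    SELECT v.value::text AS combo, d.rn
    FROM market_segment_dimension_values v
    JOIN dims d ON d.dim = v.market_segment_dimension
    WHERE d.rn = 1
    UNION ALL
    SELECT c.combo || ' | ' || v.value, d.rn
    FROM combos c
    JOIN dims d ON d.rn = c.rn + 1
    JOIN market_segment_dimension_values v
      ON v.market_segment_dimension = d.dim
)
SELECT combo
FROM combos
WHERE rn = (SELECT max(rn) FROM dims);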


[GENERAL] replication behind high lag

2013-03-25 Thread AI Rumman
Hi,

I have two 9.2 databases running with hot_standby replication. Today when I
was checking, I found that replication has not been working since Mar 1st.
There was a large database restored on the master that day, and I believe
the lag grew after that.

SELECT pg_xlog_location_diff(pg_current_xlog_location(), '0/0') AS offset

431326108320

SELECT pg_xlog_location_diff(pg_last_xlog_receive_location(), '0/0') AS
receive,   pg_xlog_location_diff(pg_last_xlog_replay_location(), '0/0')
AS replay

   receive|replay
--+--
 245987541312 | 245987534032
(1 row)

I checked pg_xlog on both servers. On the slave, the last xlog file is:
-rw--- 1 postgres postgres 16777216 Mar  1 06:02
00010039007F

On the master, the first xlog file is:
-rw--- 1 postgres postgres 16777216 Mar  1 04:45
00010039005E


Is there any way I could resync the slave quickly?

Thanks.


Re: [GENERAL] replication behind high lag

2013-03-25 Thread AI Rumman
On Mon, Mar 25, 2013 at 3:40 PM, Lonni J Friedman wrote:

> On Mon, Mar 25, 2013 at 12:37 PM, AI Rumman  wrote:
> > Hi,
> >
> > I have two 9.2 databases running with hot_standby replication. Today
> when I
> > was checking, I found that replication has not been working since Mar
> 1st.
> > There was a large database restored in master on that day and I believe
> > after that the lag went higher.
> >
> > SELECT pg_xlog_location_diff(pg_current_xlog_location(), '0/0') AS offset
> >
> > 431326108320
> >
> > SELECT pg_xlog_location_diff(pg_last_xlog_receive_location(), '0/0') AS
> > receive,   pg_xlog_location_diff(pg_last_xlog_replay_location(),
> '0/0')
> > AS replay
> >
> >receive|replay
> > --+--
> >  245987541312 | 245987534032
> > (1 row)
> >
> > I checked the pg_xlog in both the server. In Slave the last xlog file
> > -rw--- 1 postgres postgres 16777216 Mar  1 06:02
> > 00010039007F
> >
> > In Master, the first xlog file is
> > -rw--- 1 postgres postgres 16777216 Mar  1 04:45
> > 00010039005E
> >
> >
> > Is there any way I could sync the slave in quick process?
>
> generate a new base backup, and seed the slave with it.
>

OK. I am getting this error on the slave:
LOG:  invalid contrecord length 284 in log file 57, segment 127, offset 0

What is the actual reason?

Thanks.


Re: [GENERAL] replication behind high lag

2013-03-25 Thread AI Rumman
On Mon, Mar 25, 2013 at 3:52 PM, Lonni J Friedman wrote:

> On Mon, Mar 25, 2013 at 12:43 PM, AI Rumman  wrote:
> >
> >
> > On Mon, Mar 25, 2013 at 3:40 PM, Lonni J Friedman 
> > wrote:
> >>
> >> On Mon, Mar 25, 2013 at 12:37 PM, AI Rumman 
> wrote:
> >> > Hi,
> >> >
> >> > I have two 9.2 databases running with hot_standby replication. Today
> >> > when I
> >> > was checking, I found that replication has not been working since Mar
> >> > 1st.
> >> > There was a large database restored in master on that day and I
> believe
> >> > after that the lag went higher.
> >> >
> >> > SELECT pg_xlog_location_diff(pg_current_xlog_location(), '0/0') AS
> >> > offset
> >> >
> >> > 431326108320
> >> >
> >> > SELECT pg_xlog_location_diff(pg_last_xlog_receive_location(), '0/0')
> AS
> >> > receive,   pg_xlog_location_diff(pg_last_xlog_replay_location(),
> >> > '0/0')
> >> > AS replay
> >> >
> >> >receive|replay
> >> > --+--
> >> >  245987541312 | 245987534032
> >> > (1 row)
> >> >
> >> > I checked the pg_xlog in both the server. In Slave the last xlog file
> >> > -rw--- 1 postgres postgres 16777216 Mar  1 06:02
> >> > 00010039007F
> >> >
> >> > In Master, the first xlog file is
> >> > -rw--- 1 postgres postgres 16777216 Mar  1 04:45
> >> > 00010039005E
> >> >
> >> >
> >> > Is there any way I could sync the slave in quick process?
> >>
> >> generate a new base backup, and seed the slave with it.
> >
> >
> > OK. I am getting these error in slave:
> > LOG:  invalid contrecord length 284 in log file 57, segment 127, offset 0
> >
> > What is the actual reason?
>
> Corruption?  What were you doing when you saw the error?
>

I don't know much about these internals. I just looked at the database
now and saw the error.
Is there any way to recover from this state? The master is a large
database of 500 GB.


Re: [GENERAL] replication behind high lag

2013-03-25 Thread AI Rumman
On Mon, Mar 25, 2013 at 4:03 PM, AI Rumman  wrote:

>
>
> On Mon, Mar 25, 2013 at 4:00 PM, Lonni J Friedman wrote:
>
>> On Mon, Mar 25, 2013 at 12:55 PM, AI Rumman  wrote:
>> >
>> >
>> > On Mon, Mar 25, 2013 at 3:52 PM, Lonni J Friedman 
>> > wrote:
>> >>
>> >> On Mon, Mar 25, 2013 at 12:43 PM, AI Rumman 
>> wrote:
>> >> >
>> >> >
>> >> > On Mon, Mar 25, 2013 at 3:40 PM, Lonni J Friedman <
>> netll...@gmail.com>
>> >> > wrote:
>> >> >>
>> >> >> On Mon, Mar 25, 2013 at 12:37 PM, AI Rumman 
>> >> >> wrote:
>> >> >> > Hi,
>> >> >> >
>> >> >> > I have two 9.2 databases running with hot_standby replication.
>> Today
>> >> >> > when I
>> >> >> > was checking, I found that replication has not been working since
>> Mar
>> >> >> > 1st.
>> >> >> > There was a large database restored in master on that day and I
>> >> >> > believe
>> >> >> > after that the lag went higher.
>> >> >> >
>> >> >> > SELECT pg_xlog_location_diff(pg_current_xlog_location(), '0/0') AS
>> >> >> > offset
>> >> >> >
>> >> >> > 431326108320
>> >> >> >
>> >> >> > SELECT pg_xlog_location_diff(pg_last_xlog_receive_location(),
>> '0/0')
>> >> >> > AS
>> >> >> > receive,
>> pg_xlog_location_diff(pg_last_xlog_replay_location(),
>> >> >> > '0/0')
>> >> >> > AS replay
>> >> >> >
>> >> >> >receive|replay
>> >> >> > --+--
>> >> >> >  245987541312 | 245987534032
>> >> >> > (1 row)
>> >> >> >
>> >> >> > I checked the pg_xlog in both the server. In Slave the last xlog
>> file
>> >> >> > -rw--- 1 postgres postgres 16777216 Mar  1 06:02
>> >> >> > 00010039007F
>> >> >> >
>> >> >> > In Master, the first xlog file is
>> >> >> > -rw--- 1 postgres postgres 16777216 Mar  1 04:45
>> >> >> > 00010039005E
>> >> >> >
>> >> >> >
>> >> >> > Is there any way I could sync the slave in quick process?
>> >> >>
>> >> >> generate a new base backup, and seed the slave with it.
>> >> >
>> >> >
>> >> > OK. I am getting these error in slave:
>> >> > LOG:  invalid contrecord length 284 in log file 57, segment 127,
>> offset
>> >> > 0
>> >> >
>> >> > What is the actual reason?
>> >>
>> >> Corruption?  What were you doing when you saw the error?
>> >
>> >
>> > I did not have enough idea about these stuffs. I got the database now
>> and
>> > saw the error.
>> > Is there any way to recover from this state. The master database is a
>> large
>> > database of 500 GB.
>>
>> generate a new base backup, and seed the slave with it.  if the error
>> persists, then i'd guess that your master is corrupted, and then
>> you've got huge problems.
>>
>
> Master is running fine right now showing only a warning:
> WARNING:  archive_mode enabled, yet archive_command is not set
>
> Do you think the master could be corrupted?
>
>
Hi,

I got the information that there was a master DB restart on Feb 27th.
Could this be the reason for this error?

Thanks.
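For the record, reseeding a 9.2 standby with pg_basebackup looks roughly
like this (a sketch; host, user and paths are placeholders, and the old data
directory must be empty):

# on the standby, with postgres stopped
pg_basebackup -h master.example.com -U replicator \
    -D /var/lib/pgsql/9.2/data -X stream -P
# put recovery.conf (standby_mode, primary_conninfo) back in place, then
pg_ctl -D /var/lib/pgsql/9.2/data start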


Re: [GENERAL] how to completely disable toasted table in postgresql and best practices to follow

2013-04-05 Thread AI Rumman
According to the docs, YES.

http://www.postgresql.org/docs/9.2/static/storage-toast.html
MAIN allows compression but not out-of-line storage. (Actually, out-of-line
storage will still be performed for such columns, but only as a last resort
when there is no other way to make the row small enough to fit on a page.)


On Fri, Apr 5, 2013 at 9:53 AM, Szymon Guz  wrote:

>
> On 5 April 2013 15:49, Andres Freund  wrote:
>
>> On 2013-04-05 18:32:47 +0530, Zahid Quadri wrote:
>> > Dear admin,
>> >
>> > please help me i want to completely disable Toasted tables in
>> postgresql as per your suggestion i have used 9.0.13 but still toasted
>> tables are getting created.
>> >
>> > also i want to know best practices and methods to clear such issues.
>>
>> Why do you want to do that?
>>
>> It is not possible to completely disable the usage of toast tables, but
>> maybe explaining the real reason of you wanting that helps us to give
>> you another solution.
>>
>>
>>
> Even if I set storage to plain for all the columns? Will then toast be
> used for something else?
>
> regards
> Szymon
>
>
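For completeness, per-column storage is set like this (a sketch with
made-up names); as the quoted paragraph says, out-of-line TOAST storage
still happens as a last resort when the row cannot otherwise fit on a page:

ALTER TABLE t ALTER COLUMN payload SET STORAGE MAIN;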


[GENERAL] Character set display

2013-04-09 Thread AI Rumman
Hi,

I have data like:
AHrühn

And I need the output like:
  AHrühn

The DB is running with UTF8 encoding on PostgreSQL 9.2.

Any help will be appreciated.

Thanks.


[GENERAL] check_postgres_last_vacuum for all databases

2013-05-14 Thread AI Rumman
Hi,

I have been setting up a Nagios alert for check_postgres_last_vacuum.
I went through the code and saw that I have to list the databases in order
to check all database tables. In my environment, database creation and
deletion is dynamic, so anyone can create or drop a database at any time.
I have also configured a manual vacuum of all databases each night.

In this scenario, I am looking for a way to check the last vacuum time on
the servers so that Nagios sends an alert if any table's last vacuum is
more than N days old.

Any ideas?

Thanks,
AI
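A sketch of the per-database check itself; wrapped in a loop over
SELECT datname FROM pg_database (connecting to each database in turn) it
covers dynamically created databases as well. The 7-day threshold stands in
for N:

SELECT schemaname, relname,
       greatest(last_vacuum, last_autovacuum) AS last_vacuumed
FROM pg_stat_user_tables
WHERE coalesce(greatest(last_vacuum, last_autovacuum),
               'epoch'::timestamptz) < now() - interval '7 days';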


Re: [GENERAL]

2013-05-15 Thread AI Rumman
Create the functions:
CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE
LANGUAGE SQL AS 'SELECT textin(int4out($1));';
CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS
IMPLICIT;
http://www.rummandba.com/2010/10/migrate-database-from-postgresql-81-to.html

It'll work.


On Wed, May 15, 2013 at 3:17 PM, Carlos Henrique Reimer <
carlos.rei...@opendb.com.br> wrote:

> Hi,
>
> Currently, our application is still using PG 8.2 and we are trying to use
> 9.2 but there are some problems related with the implicit casts removed on
> 8.3.
>
> Example:
>
> 1) select 'teste'||1;
> 2) select trim(1);
>
> Select 1 & 2 does run fine on 8.2 but in 9.2 select 1 is ok and select 2
> got an error due the fact implicit cast were removed on PG 8.3:
>
> ERROR:  function pg_catalog.btrim(integer) does not exist
> LINE 1: select trim(1);
>^
> HINT:  No function matches the given name and argument types. You might
> need to add explicit type casts.
>
> ** Error **
>
> ERROR: function pg_catalog.btrim(integer) does not exist
> SQL state: 42883
> Hint: No function matches the given name and argument types. You might
> need to add explicit type casts.
> Character: 8
>
> Recreated the implicit cast for select 2:
>
> CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE
> LANGUAGE SQL AS 'SELECT textin(int4out($1));';
> CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS
> IMPLICIT;
>
> Now select 2 works fine but select 1 gets an error:
>
> ERROR:  operator is not unique: unknown || integer
> LINE 1: select 'teste'||1;
>   ^
> HINT:  Could not choose a best candidate operator. You might need to add
> explicit type casts.
>
> ** Error **
>
> ERROR: operator is not unique: unknown || integer
> SQL state: 42725
> Hint: Could not choose a best candidate operator. You might need to add
> explicit type casts.
> Character: 15
>
>
> Is there a way to make both selects work on PG 9.2 without explicit casts?
>
> Thank you in advace!
>
> --
> Reimer
> 47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br
>


Re: [GENERAL]

2013-05-15 Thread AI Rumman
Ah! I missed that.
Which version of 9.2 are you working with? I am also on 9.2 and it's
working fine.
Try using
select 'teste'||1::int;

and see whether it works.


On Wed, May 15, 2013 at 3:41 PM, Carlos Henrique Reimer <
carlos.rei...@opendb.com.br> wrote:

> Actually, as stated in my first note, this is what I've done to fix the
> select 2 (select trim(1);) but after creating this function/cast the select
> 1 (select 'teste'||1;) started not working.
>
>
> On Wed, May 15, 2013 at 4:24 PM, AI Rumman  wrote:
>
>> Create the functions:
>> CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE
>> LANGUAGE SQL AS 'SELECT textin(int4out($1));';
>> CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS
>> IMPLICIT;
>>
>> http://www.rummandba.com/2010/10/migrate-database-from-postgresql-81-to.html
>>
>> It'll work.
>>
>>
>> On Wed, May 15, 2013 at 3:17 PM, Carlos Henrique Reimer <
>> carlos.rei...@opendb.com.br> wrote:
>>
>>> Hi,
>>>
>>> Currently, our application is still using PG 8.2 and we are trying to
>>> use 9.2 but there are some problems related with the implicit casts removed
>>> on 8.3.
>>>
>>> Example:
>>>
>>> 1) select 'teste'||1;
>>> 2) select trim(1);
>>>
>>> Select 1 & 2 does run fine on 8.2 but in 9.2 select 1 is ok and select 2
>>> got an error due the fact implicit cast were removed on PG 8.3:
>>>
>>> ERROR:  function pg_catalog.btrim(integer) does not exist
>>> LINE 1: select trim(1);
>>>^
>>> HINT:  No function matches the given name and argument types. You might
>>> need to add explicit type casts.
>>>
>>> ** Error **
>>>
>>> ERROR: function pg_catalog.btrim(integer) does not exist
>>> SQL state: 42883
>>> Hint: No function matches the given name and argument types. You might
>>> need to add explicit type casts.
>>> Character: 8
>>>
>>> Recreated the implicit cast for select 2:
>>>
>>> CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE
>>> LANGUAGE SQL AS 'SELECT textin(int4out($1));';
>>> CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS
>>> IMPLICIT;
>>>
>>> Now select 2 works fine but select 1 gets an error:
>>>
>>> ERROR:  operator is not unique: unknown || integer
>>> LINE 1: select 'teste'||1;
>>>   ^
>>> HINT:  Could not choose a best candidate operator. You might need to add
>>> explicit type casts.
>>>
>>> ** Error **
>>>
>>> ERROR: operator is not unique: unknown || integer
>>> SQL state: 42725
>>> Hint: Could not choose a best candidate operator. You might need to add
>>> explicit type casts.
>>> Character: 15
>>>
>>>
>>> Is there a way to make both selects work on PG 9.2 without explicit
>>> casts?
>>>
>>> Thank you in advace!
>>>
>>> --
>>> Reimer
>>> 47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br
>>>
>>
>>
>
>
> --
> Reimer
> 47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br
>


Re: [GENERAL]

2013-05-15 Thread AI Rumman
Drop those functions and try again.


On Wed, May 15, 2013 at 4:22 PM, Carlos Henrique Reimer <
carlos.rei...@opendb.com.br> wrote:

> The PG version I'm using is 9.2.4.
>
> It works but after recreating the implicit cast with the function:
>
>
> CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE
> LANGUAGE SQL AS 'SELECT textin(int4out($1));';
> CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS
> IMPLICIT;
>
> it does not work anymore, even using the explicit cast:
>
> select 'teste'||1::int;
>
> ERROR:  operator is not unique: unknown || integer
> LINE 1: select 'teste'||1::int;
>
>   ^
> HINT:  Could not choose a best candidate operator. You might need to add
> explicit type casts.
>
>
> ** Error **
>
> ERROR: operator is not unique: unknown || integer
> SQL state: 42725
> Hint: Could not choose a best candidate operator. You might need to add
> explicit type casts.
> Character: 15
>
>
>
>
>
> On Wed, May 15, 2013 at 4:45 PM, AI Rumman  wrote:
>
>> Ah! I missed that.
>> Which version of 9.2 you are working with? I am also at 9.2 and its
>> working fine.
>> Try out using
>> select 'teste'||1::int;
>>
>> See if it works or not.
>>
>>
>> On Wed, May 15, 2013 at 3:41 PM, Carlos Henrique Reimer <
>> carlos.rei...@opendb.com.br> wrote:
>>
>>> Actually, as stated in my first note, this is what I've done to fix the
>>> select 2 (select trim(1);) but after creating this function/cast the select
>>> 1 (select 'teste'||1;) started not working.
>>>
>>>
>>> On Wed, May 15, 2013 at 4:24 PM, AI Rumman  wrote:
>>>
>>>> Create the functions:
>>>> CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE
>>>> LANGUAGE SQL AS 'SELECT textin(int4out($1));';
>>>> CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS
>>>> IMPLICIT;
>>>>
>>>> http://www.rummandba.com/2010/10/migrate-database-from-postgresql-81-to.html
>>>>
>>>> It'll work.
>>>>
>>>>
>>>> On Wed, May 15, 2013 at 3:17 PM, Carlos Henrique Reimer <
>>>> carlos.rei...@opendb.com.br> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Currently, our application is still using PG 8.2 and we are trying to
>>>>> use 9.2 but there are some problems related with the implicit casts 
>>>>> removed
>>>>> on 8.3.
>>>>>
>>>>> Example:
>>>>>
>>>>> 1) select 'teste'||1;
>>>>> 2) select trim(1);
>>>>>
>>>>> Select 1 & 2 does run fine on 8.2 but in 9.2 select 1 is ok and select
>>>>> 2 got an error due the fact implicit cast were removed on PG 8.3:
>>>>>
>>>>> ERROR:  function pg_catalog.btrim(integer) does not exist
>>>>> LINE 1: select trim(1);
>>>>>^
>>>>> HINT:  No function matches the given name and argument types. You
>>>>> might need to add explicit type casts.
>>>>>
>>>>> ** Error **
>>>>>
>>>>> ERROR: function pg_catalog.btrim(integer) does not exist
>>>>> SQL state: 42883
>>>>> Hint: No function matches the given name and argument types. You might
>>>>> need to add explicit type casts.
>>>>> Character: 8
>>>>>
>>>>> Recreated the implicit cast for select 2:
>>>>>
>>>>> CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE
>>>>> LANGUAGE SQL AS 'SELECT textin(int4out($1));';
>>>>> CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer)
>>>>> AS IMPLICIT;
>>>>>
>>>>> Now select 2 works fine but select 1 gets an error:
>>>>>
>>>>> ERROR:  operator is not unique: unknown || integer
>>>>> LINE 1: select 'teste'||1;
>>>>>   ^
>>>>> HINT:  Could not choose a best candidate operator. You might need to
>>>>> add explicit type casts.
>>>>>
>>>>> ** Error **
>>>>>
>>>>> ERROR: operator is not unique: unknown || integer
>>>>> SQL state: 42725
>>>>> Hint: Could not choose a best candidate operator. You might need to
>>>>> add explicit type casts.
>>>>> Character: 15
>>>>>
>>>>>
>>>>> Is there a way to make both selects work on PG 9.2 without explicit
>>>>> casts?
>>>>>
>>>>> Thank you in advace!
>>>>>
>>>>> --
>>>>> Reimer
>>>>> 47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Reimer
>>> 47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br
>>>
>>
>>
>
>
> --
> Reimer
> 47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br
>


Re: [GENERAL]

2013-05-15 Thread AI Rumman
I am using the same version and it's fine.
postgres=# select version();
version

---
 PostgreSQL 9.2.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2
20080704 (Red Hat 4.1.2-52), 64-bit

Try out the query:

SELECT ct.*,
source_t.typname as source_type
,target_t.typname as target_type
, proc.proname
FROM
pg_cast as ct
, pg_type as source_t
, pg_type as target_t
,pg_proc as proc
WHERE
ct.castsource = source_t.oid
and ct.casttarget = target_t.oid
and  ct.castfunc =  proc.oid

I get 144 rows.
http://www.rummandba.com/2013/02/postgresql-type-casting-information.html




On Wed, May 15, 2013 at 4:54 PM, Carlos Henrique Reimer <
carlos.rei...@opendb.com.br> wrote:

> It works if I drop the functions but then the select trim(1) does not work;
>
>
> On Wed, May 15, 2013 at 5:38 PM, AI Rumman  wrote:
>
>> Drop those functions and try again.
>>
>>
>> On Wed, May 15, 2013 at 4:22 PM, Carlos Henrique Reimer <
>> carlos.rei...@opendb.com.br> wrote:
>>
>>> The PG version I'm using is 9.2.4.
>>>
>>> It works but after recreating the implicit cast with the function:
>>>
>>>
>>> CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE
>>> LANGUAGE SQL AS 'SELECT textin(int4out($1));';
>>> CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS
>>> IMPLICIT;
>>>
>>> it does not work anymore, even using the explicit cast:
>>>
>>> select 'teste'||1::int;
>>>
>>> ERROR:  operator is not unique: unknown || integer
>>>  LINE 1: select 'teste'||1::int;
>>>
>>>   ^
>>> HINT:  Could not choose a best candidate operator. You might need to add
>>> explicit type casts.
>>>
>>>
>>> ** Error **
>>>
>>> ERROR: operator is not unique: unknown || integer
>>> SQL state: 42725
>>> Hint: Could not choose a best candidate operator. You might need to add
>>> explicit type casts.
>>> Character: 15
>>>
>>>
>>>
>>>
>>>
>>> On Wed, May 15, 2013 at 4:45 PM, AI Rumman  wrote:
>>>
>>>> Ah! I missed that.
>>>> Which version of 9.2 you are working with? I am also at 9.2 and its
>>>> working fine.
>>>> Try out using
>>>> select 'teste'||1::int;
>>>>
>>>> See if it works or not.
>>>>
>>>>
>>>> On Wed, May 15, 2013 at 3:41 PM, Carlos Henrique Reimer <
>>>> carlos.rei...@opendb.com.br> wrote:
>>>>
>>>>> Actually, as stated in my first note, this is what I've done to fix
>>>>> the select 2 (select trim(1);) but after creating this function/cast the
>>>>> select 1 (select 'teste'||1;) started not working.
>>>>>
>>>>>
>>>>> On Wed, May 15, 2013 at 4:24 PM, AI Rumman wrote:
>>>>>
>>>>>> Create the functions:
>>>>>> CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT
>>>>>> IMMUTABLE LANGUAGE SQL AS 'SELECT textin(int4out($1));';
>>>>>> CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer)
>>>>>> AS IMPLICIT;
>>>>>>
>>>>>> http://www.rummandba.com/2010/10/migrate-database-from-postgresql-81-to.html
>>>>>>
>>>>>> It'll work.
>>>>>>
>>>>>>
>>>>>> On Wed, May 15, 2013 at 3:17 PM, Carlos Henrique Reimer <
>>>>>> carlos.rei...@opendb.com.br> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> Currently, our application is still using PG 8.2 and we are trying
>>>>>>> to use 9.2 but there are some problems related with the implicit casts
>>>>>>> removed on 8.3.
>>>>>>>
>>>>>>> Example:
>>>>>>>
>>>>>>> 1) select 'teste'||1;
>>>>>>> 2) select trim(1);
>>>>>>>
>>>>>>> Select 1 & 2 does run fine on 8.2 but in 9.2 select 1 is ok and
>>>>>>> select 2 got an error due the fact implicit cast were removed on PG 8.3:
>>>>>>>
>>>>>>> ERROR:  function pg_c
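One way to make both statements work without re-adding the ambiguous
implicit cast is to supply the missing function overload instead, matching
the btrim error shown above (a sketch; the cleaner long-term fix is still
explicit casts in the application):

CREATE FUNCTION pg_catalog.btrim(integer) RETURNS text
LANGUAGE sql IMMUTABLE STRICT
AS 'SELECT btrim($1::text)';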

Re: [GENERAL] pg_upgrade link mode

2013-05-16 Thread AI Rumman
I always think it's a bit risky to use link mode instead of copying.
Either way, I'd suggest running pg_upgrade with --check first.


On Thu, May 16, 2013 at 1:42 PM, Fabio Rueda Carrascosa <
avances...@gmail.com> wrote:

> Hello, I have a 9.1 cluster with 50 databases, only one table per db with
> 2000 rows only, but a lot of schema around each one (postgis databases)
>
> The old cluster size is 1GB
>
> du -chs /var/lib/postgresql/9.1/main/
> 1.1G
>
> now I run a pg_upgrade to 9.2 with hard link mode,
>
> pg_upgrade  --link \
> --old-datadir=/var/lib/postgresql/9.1/main \
> --new-datadir=/var/lib/postgresql/9.2/main \
> --old-bindir=/usr/lib/postgresql/9.1/bin \
> --new-bindir=/usr/lib/postgresql/9.2/bin
>
> du -chs /var/lib/postgresql/9.2/main/
> 880M
>
>
> Is the expected behaviour? I can't double the space in production.
>
> Thanks.
>


Re: [GENERAL] pg_upgrade link mode

2013-05-16 Thread AI Rumman
As I said, I would never use link mode in production. However, I was once
running pg_upgrade in copy mode and it failed in the middle with an error
about missing extensions. I rolled back and restarted the operation after
setting everything up.
I don't know how it would behave in link mode if it failed in the middle.


On Thu, May 16, 2013 at 2:03 PM, Igor Neyman  wrote:

>
> From: pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org] On Behalf Of AI Rumman
> Sent: Thursday, May 16, 2013 1:56 PM
> To: Fabio Rueda Carrascosa
> Cc: pgsql-general
> Subject: Re: [GENERAL] pg_upgrade link mode
>
> I always think its a bit risky to use link instead of copying. However,
> I'd suggest to try the  --check at first of pg_upgrade.
>
> --
>
> Why?
> Do you have specific experience, when link mode caused any problems?
> Could you share?
>
> Regards,
> Igor Neyman
>


Re: [GENERAL] pg_upgrade link mode

2013-05-16 Thread AI Rumman
Yes Lonni. I agree with you.


On Thu, May 16, 2013 at 2:23 PM, Lonni J Friedman wrote:

> On Thu, May 16, 2013 at 11:03 AM, Igor Neyman 
> wrote:
> >
> > From: pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org] On Behalf Of AI Rumman
> > Sent: Thursday, May 16, 2013 1:56 PM
> > To: Fabio Rueda Carrascosa
> > Cc: pgsql-general
> > Subject: Re: [GENERAL] pg_upgrade link mode
> >
> > I always think its a bit risky to use link instead of copying. However,
> I'd suggest to try the  --check at first of pg_upgrade.
> >
> > --
> >
> > Why?
> > Do you have specific experience, when link mode caused any problems?
> > Could you share?
>
> I assume what he's referring to is if the upgrade gets partially
> completed and fails for any reason, then you have a broken mess, with
> no simple rollback path.  Since your database is only about 1GB in
> size, it shouldn't take very long to run a base backup before doing
> the upgrade.  You can send that backup over the network to a remote
> system, so that you have a fallback solution if the upgrade fails.
>


Re: [GENERAL] Postgres DB crashing

2013-06-18 Thread AI Rumman
Stop the autovacuum process and try again.


On Tue, Jun 18, 2013 at 1:31 PM, bhanu udaya wrote:

>  Hello,
> Greetings.
>
> My PostgreSQL (9.2) is crashing after certain load tests. Currently,
> PostgreSQL crashes when 800 to 1000 threads are run simultaneously
> against a 10-million-record schema. I am not sure whether we have to
> tweak more PostgreSQL parameters. Currently, PostgreSQL is configured as
> below on 7 GB of RAM on an Intel Xeon CPU E5507 2.27 GHz. Is it a
> PostgreSQL limitation to support only 800 threads, or is other
> configuration required? Please look at the log below with errors. Please
> reply.
>
>
>   max_connections = 5000
>   shared_buffers = 2024MB
>   synchronous_commit = off
>   wal_buffers = 100MB
>   wal_writer_delay = 1000ms
>   checkpoint_segments = 512
>   checkpoint_timeout = 5min
>   checkpoint_completion_target = 0.5
>   checkpoint_warning = 30s
>   work_mem = 1GB
>   effective_cache_size = 5GB
>
>
>
> 2013-06-11 15:11:17 GMT [26201]: [1-1]ERROR: canceling autovacuum task
>
> 2013-06-11 15:11:17 GMT [26201]: [2-1]CONTEXT: automatic vacuum of table
> "newrelic.tenant1.customer"
>
> 2013-06-11 15:11:17 GMT [25242]: [1-1]LOG: sending cancel to blocking
> autovacuum PID 26201
>
> 2013-06-11 15:11:17 GMT [25242]: [2-1]DETAIL: Process 25242 waits for
> ExclusiveLock on extension of relation 679054 of database 666546.
>
> 2013-06-11 15:11:17 GMT [25242]: [3-1]STATEMENT: UPDATE tenant1.customer
> SET lastmodifieddate = $1 WHERE id IN ( select random_range((select min(id)
> from tenant1.customer ), (select max(id) from tenant1.customer )) as id )
> AND softdeleteflag IS NOT TRUE
>
> 2013-06-11 15:11:17 GMT [25242]: [4-1]WARNING: could not send signal to
> process 26201: No such process
>
> 2013-06-11 15:22:29 GMT [9]: [11-1]WARNING: worker took too long to
> start; canceled
>
> 2013-06-11 15:24:10 GMT [26511]: [1-1]WARNING: autovacuum worker started
> without a worker entry
>
> 2013-06-11 16:03:33 GMT [23092]: [1-1]LOG: could not receive data from
> client: Connection timed out
>
> 2013-06-11 16:06:05 GMT [23222]: [5-1]LOG: could not receive data from
> client: Connection timed out
>
> 2013-06-11 16:07:06 GMT [26869]: [1-1]FATAL: canceling authentication due
> to timeout
>
> 2013-06-11 16:23:16 GMT [25128]: [1-1]LOG: could not receive data from
> client: Connection timed out
>
> 2013-06-11 16:23:20 GMT [25128]: [2-1]LOG: unexpected EOF on client
> connection with an open transaction
>
> 2013-06-11 16:30:56 GMT [23695]: [1-1]LOG: could not receive data from
> client: Connection timed out
>
> 2013-06-11 16:43:55 GMT [24618]: [1-1]LOG: could not receive data from
> client: Connection timed out
>
> 2013-06-11 16:44:29 GMT [25204]: [1-1]LOG: could not receive data from
> client: Connection timed out
>
> 2013-06-11 16:54:14 GMT [6]: [1-1]PANIC: stuck spinlock
> (0x2aaab54279d4) detected at bufmgr.c:1239
>
> 2013-06-11 16:54:14 GMT [32521]: [8-1]LOG: checkpointer process (PID
> 6) was terminated by signal 6: Aborted
>
> 2013-06-11 16:54:14 GMT [32521]: [9-1]LOG: terminating any other active
> server processes
>
> 2013-06-11 16:54:14 GMT [26931]: [1-1]WARNING: terminating connection
> because of crash of another server process
>
> 2013-06-11 16:54:14 GMT [26931]: [2-1]DETAIL: The postmaster has commanded
> this server process to roll back the current transaction and exit, because
> another server process exited abnormally and possibly corrupted shared
> memory.
>
> 2013-06-11 16:54:14 GMT [26931]: [3-1]HINT: In a moment you should be able
> to reconnect to the database and repeat your command.
>
> 2013-06-11 16:54:14 GMT [26401]: [1-1]WARNING: terminating connection
> because of crash of another server process
>
> 2013-06-11 16:54:14 GMT [26401]: [2-1]DETAIL: The postmaster has commanded
> this server process to roll back the current transaction and exit, because
> another server process exited abnormally and possibly corrupted shared
> memory.
>
> 2013-06-11 16:55:08 GMT [27579]: [1-1]FATAL: the database system is in
> recovery mode
>
> 2013-06-11 16:55:08 GMT [24041]: [1-1]WARNING: terminating connection
> because of crash of another server process
>
> 2013-06-11 16:55:08 GMT [24041]: [2-1]DETAIL: The postmaster has commanded
> this server process to roll back the current
>
>


Re: [GENERAL] postgresql query

2013-06-19 Thread AI Rumman
Which version of PostgreSQL are you using?
You may use string_agg as below if it's available in your version:

 \d t1
  Table "public.t1"
 Column |  Type   | Modifiers
+-+---
 i  | integer |
 amt| integer |


select * from t1;
 i | amt
---+-
 1 |  20
 1 |  30
 2 |  30
(3 rows)


 select i, string_agg(amt::text,',') as c from t1 group by i;
 i |   c
---+---
 1 | 20,30
 2 | 30
(2 rows)


Have a good day.


On Wed, Jun 19, 2013 at 6:51 AM, Jashaswee  wrote:

> I have numeric values in a numeric column. The column has two parts. I
> want to split them into 2 different columns.
> The column values look like this:
>
> Quantity
> 2000
> -1000
>
> Both quantity values are for the same product, but I want them on a
> single line.
> so what i want is a result set that looks like:
> In quantity  Out quantity
> ---  
> 2000  -1000
>
> How can I get this with a SELECT statement?
>
>
>
> --
> View this message in context:
> http://postgresql.1045698.n5.nabble.com/postgresql-query-tp5759846.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
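Reading the quoted question again, the poster seems to want positive and
negative quantities side by side rather than concatenated. A sketch with
conditional aggregates (table and column names guessed from the post):

SELECT product,
       sum(CASE WHEN quantity >= 0 THEN quantity END) AS in_quantity,
       sum(CASE WHEN quantity < 0  THEN quantity END) AS out_quantity
FROM stock
GROUP BY product;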


Re: [GENERAL] dynamic partitioning

2013-06-26 Thread AI Rumman
Yes, you missed the trigger part. Also, you will get an error like the one
below during insert:

INSERT INTO foo VALUES (99, 109, 109, 99, '2013-06-26 16:38:58.466');
NOTICE:  table_name = (foo_100_to_119)
NOTICE:  CREATE TABLE foo_100_to_119 (CHECK ( foo_id >= 100 AND foo_id <=
119 )) INHERITS (foo)
ERROR:  new row for relation "foo_100_to_119" violates check constraint
"foo_100_to_119_foo_id_check"
DETAIL:  Failing row contains (99, 109, 109, 99, 2013-06-26
16:38:58.466-04).
CONTEXT:  SQL statement "INSERT INTO foo_100_to_119 VALUES (($1).*)"
PL/pgSQL function foo_insert_trigger() line 22 at EXECUTE statement


That's because you added "entry_id + 1" in your function, so when you
insert foo_id = 99 it creates the table with a check constraint where
foo_id >= 100 and foo_id <= 119.

I modified it as below:

from_value = entry_id;

Now it's working:

INSERT INTO foo VALUES (99, 109, 109, 99, '2013-06-26 16:38:58.466');
NOTICE:  table_name = (foo_99_to_119)
NOTICE:  CREATE TABLE foo_99_to_119 (CHECK ( foo_id >= 99 AND foo_id <= 119
)) INHERITS (foo)
INSERT 0 0
postgres=# select * from foo;
 foo_id | blaa_id | blaa_num | foo_num |   createdatetime
+-+--+-+
 99 | 109 |  109 |  99 | 2013-06-26 16:38:58.466-04
(1 row)

postgres=# select * from foo_99_to_119;
 foo_id | blaa_id | blaa_num | foo_num |   createdatetime
+-+--+-+
 99 | 109 |  109 |  99 | 2013-06-26 16:38:58.466-04
(1 row)

postgres=# show constraint_exclusion ;
 constraint_exclusion
--
 partition
(1 row)



On Wed, Jun 26, 2013 at 10:25 AM, Daniel Cristian Cruz <
danielcrist...@gmail.com> wrote:

> You forgot to set the trigger on foo:
>
> CREATE TRIGGER foo_insert
> BEFORE INSERT ON foo
> FOR EACH ROW EXECUTE PROCEDURE foo_insert_trigger();
>
>
>
> 2013/6/26 dafNi zaf 
>
>> one note: I create a table of 100 entries in order to test it so I want 5
>> partition of 20 entries each.
>> (And not a table of 10 entries)
>>
>>
>> thanks again!
>> dafni
>>
>>
>> On Wed, Jun 26, 2013 at 4:47 PM, dafNi zaf  wrote:
>>
>>> Hello!
>>>
>>> I want to dynamically create partition tables that inherit a main table
>>> called "foo".
>>> The creation must occur when needed.
>>>
>>> For example, lets say that I want to insert 10 entries and I want 5
>>> partition
>>> tables (with 2 entries each).
>>>
>>> So, first I need a partition for the first 2 entries and when the
>>> entries reach
>>> the number 2, another partition must be created, e.t.c..
>>>
>>> I guess I need something like that:
>>>
>>> --the main table is:
>>>
>>> CREATE TABLE foo (
>>> foo_id integer NOT NULL,
>>> blaa_id integer NOT NULL,
>>> blaa_num integer NOT NULL,
>>>  foo_num integer NOT NULL,
>>> createdatetime timestamp with time zone DEFAULT now()
>>> );
>>>
>>> --and the trigger function is:
>>>
>>> CREATE OR REPLACE FUNCTION foo_insert_trigger()
>>> RETURNS trigger AS $$
>>> DECLARE
>>> entry_id integer;
>>>  from_value integer;
>>> to_value integer;
>>> table_name varchar;
>>> BEGIN
>>> entry_id = NEW.foo_id;
>>> from_value = entry_id + 1;
>>> to_value = entry_id + 20;
>>>  table_name='foo_' || from_value || '_to_' || to_value;
>>>
>>> IF not exists(select * from pg_class where relname = table_name) THEN
>>>  EXECUTE 'CREATE TABLE ' || table_name || '(CHECK ( foo_id >=' ||
>>> from_value || 'AND foo_id <=' || to_value || ' )) INHERITS (foo)' ;
>>>  EXECUTE 'CREATE UNIQUE INDEX by_blaa_num_' || from_value || '_to_' ||
>>> to_value || 'k ON ' || table_name ||' USING btree (foo_id, blaa_id,
>>> blaa_num)';
>>>  EXECUTE 'CREATE UNIQUE INDEX pk_foo_' || from_value || '_to_' ||
>>> to_value || 'k ON ' || table_name ||' USING btree (foo_id, foo_num)';
>>>  EXECUTE 'GRANT ALL ON TABLE ' || table_name || ' TO foorole, postgres';
>>> EXECUTE 'GRANT SELECT ON TABLE ' || table_name || ' TO blarole';
>>>
>>> END IF;
>>>
>>> EXECUTE 'INSERT INTO ' || table_name ||' VALUES (($1).*)' USING NEW ;
>>>
>>> RETURN NULL;
>>> END;
>>> $$
>>> LANGUAGE plpgsql;
>>>
>>>
>>> but it doesn't seem to work. It doesn't actually create new partition
>>> tables.
>>> The entries are inserted into "foo"
>>>
>>> I attach a test .sql file that contains the data of the table
>>>
>>>
>>> any help would save me from a lot of time!
>>>
>>> thank you in advance!
>>>
>>> dafni
>>>
>>
>>
>
>
> --
> Daniel Cristian Cruz
> クルズ クリスチアン ダニエル
>


Re: [GENERAL] dynamic partitioning

2013-06-26 Thread AI Rumman
That's because you are generating the table name from from_value, which
is distinct every time.
Like,

INSERT INTO foo VALUES (1, 11, 11, 1, '2013-06-26 16:38:58.466');
NOTICE:  table_name = (foo_1_to_21)
NOTICE:  CREATE TABLE foo_1_to_21 (CHECK ( foo_id >= 1 AND foo_id <= 21 ))
INHERITS (foo)
INSERT 0 0
postgres=# \d
 List of relations
 Schema | Name  | Type  |  Owner
+---+---+--
 public | foo   | table | postgres
 public | foo_1_to_21   | table | postgres
 public | foo_99_to_119 | table | postgres
(3 rows)

postgres=# INSERT INTO foo VALUES (2, 12, 12, 2, '2013-06-26 16:38:58.466');
NOTICE:  table_name = (foo_2_to_22)
NOTICE:  CREATE TABLE foo_2_to_22 (CHECK ( foo_id >= 2 AND foo_id <= 22 ))
INHERITS (foo)
INSERT 0 0
postgres=# \d
 List of relations
 Schema | Name  | Type  |  Owner
+---+---+--
 public | foo   | table | postgres
 public | foo_1_to_21   | table | postgres
 public | foo_2_to_22   | table | postgres
 public | foo_99_to_119 | table | postgres
(4 rows)

Here, for two inserts it creates two tables one for foo_id = 1 and other
for foo_id = 2.


Use a fixed 20-wide bucket instead, for example:

from_value = ((entry_id - 1) / 20) * 20 + 1;
to_value   = from_value + 19;



On Wed, Jun 26, 2013 at 10:50 AM, dafNi zaf  wrote:

> I solved the problem with the error! thank you very much!
>
> But there is still 1 issue:
>
> when I insert multiple rows (for exaple with the attachment in my fist
> email)
> it creates 100 partition tables that contain 1 entry instead of 5
> partitions with
> 20 entries..
>
> Any ideas in that??
>
> Thanks again!
>
> Dafni
>
>
>
>
> On Wed, Jun 26, 2013 at 5:32 PM, AI Rumman  wrote:
>
>> Yes, you missed the trigger part. And also you will get error like below
>> during insert:
>>
>> INSERT INTO foo VALUES (99, 109, 109, 99, '2013-06-26 16:38:58.466');
>> NOTICE:  table_name = (foo_100_to_119)
>> NOTICE:  CREATE TABLE foo_100_to_119 (CHECK ( foo_id >= 100 AND foo_id <=
>> 119 )) INHERITS (foo)
>> ERROR:  new row for relation "foo_100_to_119" violates check constraint
>> "foo_100_to_119_foo_id_check"
>> DETAIL:  Failing row contains (99, 109, 109, 99, 2013-06-26
>> 16:38:58.466-04).
>> CONTEXT:  SQL statement "INSERT INTO foo_100_to_119 VALUES (($1).*)"
>> PL/pgSQL function foo_insert_trigger() line 22 at EXECUTE statement
>>
>>
>> That because you added "entity_id + 1" in your function and hence when
>> you are giving foo_id = 99, it is creating table with check constraint
>> where foo_id >= 100 and foo_id <= 119.
>>
>> I modified  it as below:
>> from_value = entry_id;
>> Now its working:
>>
>> INSERT INTO foo VALUES (99, 109, 109, 99, '2013-06-26 16:38:58.466');
>> NOTICE:  table_name = (foo_99_to_119)
>> NOTICE:  CREATE TABLE foo_99_to_119 (CHECK ( foo_id >= 99 AND foo_id <=
>> 119 )) INHERITS (foo)
>> INSERT 0 0
>> postgres=# select * from foo;
>>  foo_id | blaa_id | blaa_num | foo_num |   createdatetime
>> +-+--+-+
>>  99 | 109 |  109 |  99 | 2013-06-26 16:38:58.466-04
>> (1 row)
>>
>> postgres=# select * from foo_99_to_119;
>>  foo_id | blaa_id | blaa_num | foo_num |   createdatetime
>> +-+--+-+
>>  99 | 109 |  109 |  99 | 2013-06-26 16:38:58.466-04
>> (1 row)
>>
>> postgres=# show constraint_exclusion ;
>>  constraint_exclusion
>> --
>>  partition
>> (1 row)
>>
>>
>>
>> On Wed, Jun 26, 2013 at 10:25 AM, Daniel Cristian Cruz <
>> danielcrist...@gmail.com> wrote:
>>
>>> You forgot to set the trigger on foo:
>>>
>>> CREATE TRIGGER foo_insert
>>> BEFORE INSERT ON foo
>>> FOR EACH ROW EXECUTE PROCEDURE foo_insert_trigger();
>>>
>>>
>>>
>>> 2013/6/26 dafNi zaf 
>>>
>>>> one note: I create a table of 100 entries in order to test it so I want
>>>> 5 partition of 20 entries each.
>>>> (And not a table of 10 entries)
>>>>
>>>>
>>>> thanks again!
>>>> dafni
>>>>
>>>>
>>>> On Wed, Jun 26, 2013 at 4:47 PM, dafNi zaf  wrote:
>>>>
>>>>> Hello!
>>>>>
>>>>> I want to dynamically create partition tables that inherit a main
>>>>> table called "foo".
>>>>> The creation must o

[GENERAL] last_vacuum field is not updating

2013-07-15 Thread AI Rumman
Why is a manual VACUUM of the table not updating the last_vacuum field of
pg_stat_user_tables?

select * from pg_stat_user_tables  where relname = 'table1';
-[ RECORD 1 ]-+--
relid | 5452445
schemaname| public
relname   | table1
seq_scan  | 54911
seq_tup_read  | 373562142
idx_scan  | 2773802
idx_tup_fetch | 125909227
n_tup_ins | 889595
n_tup_upd | 530602
n_tup_del | 529037
n_tup_hot_upd | 162199
n_live_tup| 360526
n_dead_tup| 0
last_vacuum   | 2013-06-29 20:04:15.391413-04
last_autovacuum   | 2013-05-01 00:55:01.970799-04
last_analyze  | 2013-07-15 10:55:40.870926-04
last_autoanalyze  | 2013-05-01 06:04:12.905961-04
vacuum_count  | 92
autovacuum_count  | 248
analyze_count | 94
autoanalyze_count | 560

psql# vacuum analyze verbose table1;

INFO:  vacuuming "public.table1"
INFO:  index "table1_pkey" now contains 360965 row versions in 1878 pages
DETAIL:  0 index row versions were removed.
326 index pages have been deleted, 305 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "table1": found 0 removable, 1116 nonremovable row versions in 30
out of 2286 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 6720 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM

select * from pg_stat_user_tables  where relname = 'table1';
-[ RECORD 1 ]-+--
relid | 5452445
schemaname| public
relname   | table1
seq_scan  | 54911
seq_tup_read  | 373562142
idx_scan  | 2773802
idx_tup_fetch | 125909227
n_tup_ins | 889595
n_tup_upd | 530602
n_tup_del | 529037
n_tup_hot_upd | 162199
n_live_tup| 360526
n_dead_tup| 0
last_vacuum   | 2013-06-29 20:04:15.391413-04
last_autovacuum   | 2013-05-01 00:55:01.970799-04
last_analyze  | 2013-07-15 10:55:40.870926-04
last_autoanalyze  | 2013-05-01 06:04:12.905961-04
vacuum_count  | 92
autovacuum_count  | 248
analyze_count | 94
autoanalyze_count | 560


Re: [GENERAL] last_vacuum field is not updating

2013-07-16 Thread AI Rumman
Yes, I am sure that I am looking at the same table.


On Tue, Jul 16, 2013 at 4:34 AM, Luca Ferrari  wrote:

> On Mon, Jul 15, 2013 at 6:43 PM, Giuseppe Broccolo
>  wrote:
>
> > Are you sure you are the table's owner?
>
> It should not be a permission problem: it works even after a REVOKE
> ALL on 9.2.4. Interestingly, the last autovacuum is also really old. Have
> you tried a plain VACUUM? From the documentation
> (
> http://www.postgresql.org/docs/current/static/monitoring-stats.html#MONITORING-STATS-VIEWS-TABLE
> ):
>
> Last time at which this table was manually vacuumed (not counting VACUUM
> FULL)
>
> Are you sure we are looking at the same table?
>
> Luca
>
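
One thing worth ruling out (a sketch, not a diagnosis): inside a
transaction the pg_stat_* views read from a snapshot taken at first
access, and the collector itself flushes counts with a short delay, so a
just-finished VACUUM may not show up immediately. For example:

BEGIN;
SELECT last_vacuum, vacuum_count
  FROM pg_stat_user_tables WHERE relname = 'table1';
SELECT pg_stat_clear_snapshot();  -- discard the cached stats snapshot
SELECT last_vacuum, vacuum_count
  FROM pg_stat_user_tables WHERE relname = 'table1';
COMMIT;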


[GENERAL] setting high value for wal_keep_segments

2013-08-08 Thread AI Rumman
Hi,

I am going to sync a slave with my master, which is almost 500 GB. I am
not using an archive directory; instead I am relying on retained WAL
files for streaming. As the sync may take almost 3 hours, I am thinking
of setting wal_keep_segments to 400, for which I have enough disk space
available.

Without the space issue, could there be any other problem in setting such
a high value for "wal_keep_segments"? As this is production, I need to be
sure.

Thanks.
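
For sizing, the arithmetic is straightforward with the default segment
size (a sketch, assuming 16 MB WAL segments):

# postgresql.conf on the master:
# 400 segments * 16 MB = 6400 MB, i.e. about 6.25 GB of pg_xlog retained
# for the standby to catch up from.
wal_keep_segments = 400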


Re: [GENERAL] setting high value for wal_keep_segments

2013-08-08 Thread AI Rumman
Yeah, I already set it like that and it works.
Thanks.


On Thu, Aug 8, 2013 at 11:59 AM, bricklen  wrote:

> On Thu, Aug 8, 2013 at 6:23 AM, AI Rumman  wrote:
>
>> Hi,
>>
>> I am going to sync a slave with my master, which is almost 500 GB. I am
>> not using an archive directory; instead I am relying on retained WAL
>> files for streaming. As the sync may take almost 3 hours, I am thinking
>> of setting wal_keep_segments to 400, for which I have enough disk space
>> available.
>>
>> Without the space issue, could there be any other problem in setting
>> such a high value for "wal_keep_segments"? As this is production, I need
>> to be sure.
>>
>
> Another data point: I set up SR on two systems recently in production with
> the wal_keep_segments set to 1 (lots of logs were being generated), and
> the slaves were about 1TB each. No problems were experienced.
>
>


[GENERAL] last_vacuum field is not updated

2013-08-15 Thread AI Rumman
Hi,
I am using PostgreSQL 9.2 and have a table "table1". I ran the VACUUM
command on that table, but the last_vacuum column of pg_stat_user_tables
has not been updated.
Any idea why?

 \d table1
Table "public.table1"
      Column      |   Type   |     Modifiers
------------------+----------+--------------------
 batterysessionid | integer  | not null
 processedflag    | smallint | not null default 0
Indexes:
"table1_pkey" PRIMARY KEY, btree (batterysessionid)

qualitycore=# select * from pg_stat_user_tables  where last_vacuum =
'2013-07-28 20:04:34.821115-04';
-[ RECORD 1 ]-+--
relid | 5452445
schemaname| public
relname   | table1
seq_scan  | 55394
seq_tup_read  | 458097965
idx_scan  | 3056888
idx_tup_fetch | 345092348
n_tup_ins | 1023618
n_tup_upd | 643602
n_tup_del | 642037
n_tup_hot_upd | 175225
n_live_tup| 381549
n_dead_tup| 77130
last_vacuum   | 2013-07-28 20:04:34.821115-04
last_autovacuum   | 2013-05-01 00:55:01.970799-04
last_analyze  | 2013-07-28 20:04:34.903569-04
last_autoanalyze  | 2013-05-01 06:04:12.905961-04
vacuum_count  | 93
autovacuum_count  | 248
analyze_count | 95
autoanalyze_count | 560

qualitycore=# vacuum  table1;
VACUUM
qualitycore=# vacuum  verbose table1;
INFO:  vacuuming "public.table1"
INFO:  index "table1_pkey" now contains 381973 row versions in 1878 pages
DETAIL:  0 index row versions were removed.
104 index pages have been deleted, 103 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "table1": found 0 removable, 1120 nonremovable row versions in 10
out of 2286 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 1538 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM



select * from pg_stat_user_tables  where last_vacuum = '2013-07-28
20:04:34.821115-04';
-[ RECORD 1 ]-+--
relid | 5452445
schemaname| public
relname   | table1
seq_scan  | 55394
seq_tup_read  | 458097965
idx_scan  | 3056888
idx_tup_fetch | 345092348
n_tup_ins | 1023618
n_tup_upd | 643602
n_tup_del | 642037
n_tup_hot_upd | 175225
n_live_tup| 381549
n_dead_tup| 77130
last_vacuum   | 2013-07-28 20:04:34.821115-04
last_autovacuum   | 2013-05-01 00:55:01.970799-04
last_analyze  | 2013-07-28 20:04:34.903569-04
last_autoanalyze  | 2013-05-01 06:04:12.905961-04
vacuum_count  | 93
autovacuum_count  | 248
analyze_count | 95
autoanalyze_count | 560

Please let me know.

Thanks.
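
Note, in the output above, that vacuum_count also stays at 93 before and
after the manual VACUUM, so the statistics collector is not registering
the operation at all. Two quick sanity checks (a sketch):

SHOW track_counts;          -- must be on, or no pg_stat_* counters move
SHOW stats_temp_directory;  -- the collector must be able to write here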


[GENERAL] pg_get_triggerdef can't find the trigger using OID.

2013-08-16 Thread AI Rumman
Why can't pg_get_triggerdef find the trigger using its OID?

testdb=# SELECT
testdb-#   p.oid,
testdb-#   n.nspname as "Schema",
testdb-#   p.proname as "Name",
testdb-#   pg_catalog.pg_get_function_result(p.oid) as "Result data type",
testdb-#   pg_catalog.pg_get_function_arguments(p.oid) as "Argument data
types",
testdb-#  CASE
testdb-#   WHEN p.proisagg THEN 'agg'
testdb-#   WHEN p.proiswindow THEN 'window'
testdb-#   WHEN p.prorettype = 'pg_catalog.trigger'::pg_catalog.regtype
THEN 'trigger'
testdb-#   ELSE 'normal'
testdb-# END as "Type"
testdb-# FROM pg_catalog.pg_proc p
testdb-#  LEFT JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
testdb-# WHERE pg_catalog.pg_function_is_visible(p.oid)
testdb-#   AND n.nspname <> 'pg_catalog'
testdb-#   AND n.nspname <> 'information_schema'
testdb-# ORDER BY 1, 2, 4;
  oid  | Schema |  Name  | Result data type | Argument data types |  Type
-------+--------+--------+------------------+---------------------+---------
 18249 | public | test_f | trigger          |                     | trigger


testdb=# select pg_get_triggerdef(18249);
ERROR:  could not find tuple for trigger 18249

Thanks.
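
The OID above (18249) belongs to the trigger function in pg_proc, while
pg_get_triggerdef() expects the OID of a row in pg_catalog.pg_trigger. A
minimal sketch of a lookup that does work (the trigger name is
hypothetical):

SELECT t.oid, pg_get_triggerdef(t.oid)
FROM pg_catalog.pg_trigger t
WHERE t.tgname = 'my_trigger'   -- hypothetical trigger name
  AND NOT t.tgisinternal;       -- skip internally generated triggers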


[GENERAL] regexp idea

2013-08-27 Thread AI Rumman
Hi,

I have a string like:
Gloucester Catholic vs. St. Augustine baseball, South Jersey Non-Public A
final, June 5, 2013

I need to extract the date part from the string.

I used the follows:
regexp_matches(title,'[.* ]+\ (Jul|August|Sep)[, a-zA-Z0-9]+' )

But it gives me result August as it stops at "Augustine".

In my case, the date can be in different formats; some records may use
"," and some may not.

Any idea to achieve this?

Thanks.


Re: [GENERAL] regexp idea

2013-08-27 Thread AI Rumman
Thanks. That's awesome.
Do you have any good guide where I can learn more about regular expressions?


On Tue, Aug 27, 2013 at 3:57 PM, Rob Sargent  wrote:

> On 08/27/2013 12:44 PM, AI Rumman wrote:
>
>> Hi,
>>
>> I have a string like:
>> Gloucester Catholic vs. St. Augustine baseball, South Jersey Non-Public A
>> final, June 5, 2013
>>
>> I need to extract the date part from the string.
>>
>> I used the follows:
>> regexp_matches(title,'[.* ]+\ (Jul|August|Sep)[, a-zA-Z0-9]+' )
>>
>> But it gives me result August as it stops at "Augustine".
>>
>> In my case, the date can be in different formats; some records may use
>> "," and some may not.
>>
>> Any idea to achieve this?
>>
>> Thanks.
>>
> select regexp_replace('Gloucester Catholic vs. St. Augustine baseball,
> South Jersey Non-Public A final, June 5, 2013',
> E'(^.*)(\\m(June|July|August|Sep))([, a-zA-Z0-9]+)', E'\\2 \\4' );
> +----------------+
> | regexp_replace |
> +----------------+
> | June  5, 2013  |
> +----------------+
> (1 row)
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
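
The key ingredient above is \m, which matches only at the start of a
word, so "Augustine" can no longer trip the "August" alternative. The
same idea with regexp_matches(), as a sketch on the sample string from
the question:

select (regexp_matches(
  'Gloucester Catholic vs. St. Augustine baseball, South Jersey Non-Public A final, June 5, 2013',
  E'\\m((June|July|August|Sep\\w*)\\M [0-9]{1,2}, [0-9]{4})'
))[1] as extracted_date;
-- => June 5, 2013

The "Pattern Matching" chapter of the manual covers the ARE syntax,
including \m and \M:
http://www.postgresql.org/docs/current/static/functions-matching.html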

