Re: [GENERAL] Alternate input for user defined data type

2013-08-21 Thread Albe Laurenz
Aram Fingal wrote:
 I want to create a user defined data type but have flexible input just like, 
 for example, the boolean
 data type where TRUE can be entered as any of (TRUE, true, T, t, YES, yes, Y, 
 y, 1...) and it will be
 interpreted as the same thing.
 
 So suppose I have days of the week:
 
 CREATE TYPE  days_of_week AS ENUM ('Su','M','Tu','W','Th','F','Sa');
 
 Except that I want 'MON', 'Mon', 'mon' and 'monday' all to be interpreted as 
 'M' in the data type.
What is the best way to do this? Is that what input_function is for in the 
CREATE TYPE command?  Do I need to create a trigger?

Yes, the type input function converts the string representation to the internal
representation, so that's where you'd implement that in a user defined data 
type.
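
A true custom input function has to be written in C, so a common pure-SQL workaround is a normalizing helper that is called explicitly. A hypothetical sketch (the function name to_day and the abbreviation list are made up for illustration):

```sql
CREATE TYPE days_of_week AS ENUM ('Su','M','Tu','W','Th','F','Sa');

-- Hypothetical helper: fold accepted spellings onto the canonical labels.
CREATE FUNCTION to_day(text) RETURNS days_of_week AS $$
    SELECT CASE lower($1)
        WHEN 'su' THEN 'Su' WHEN 'sun' THEN 'Su' WHEN 'sunday'  THEN 'Su'
        WHEN 'm'  THEN 'M'  WHEN 'mon' THEN 'M'  WHEN 'monday'  THEN 'M'
        WHEN 'tu' THEN 'Tu' WHEN 'tue' THEN 'Tu' WHEN 'tuesday' THEN 'Tu'
        -- ... remaining days in the same pattern ...
    END::days_of_week;
$$ LANGUAGE sql IMMUTABLE STRICT;

SELECT to_day('Monday');   -- returns 'M'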

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Convincing the query planner to play nice

2013-08-21 Thread Tim Kane

Thanks Jeff. These queries in particular relate to a set of data that is
rebuilt on a periodic basis. For all intents and purposes, the data is
newly populated and unlikely to reside in cache - hence the need to
perform my tests under similar conditions.

It's probably better that I adjust the random_page_cost for that
particular session, and leave things be otherwise.
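
A session-scoped override of the kind described above might look like this (the value 4.0 is just a placeholder, not a recommendation):

```sql
-- Raise random_page_cost only for this session:
SET random_page_cost = 4.0;
-- ... run the queries against the freshly rebuilt data ...
RESET random_page_cost;

-- Or, scoped even tighter to a single transaction:
BEGIN;
SET LOCAL random_page_cost = 4.0;
-- ... query ...
COMMIT;
```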


Cheers.



On 13/08/2013 17:27, Jeff Janes jeff.ja...@gmail.com wrote:

On Sat, Aug 10, 2013 at 10:32 AM, Timothy Kane tim.k...@gmail.com wrote:

 Hi all,

 I seem to be having some grief with the 9.1.9 query planner favouring an
 index scan + merge join, over a sequential scan + hash join.
 Logically I would have considered the index+merge to be faster, as suggested
 by the explain output - but in practice, it is in fact slower by orders of
 magnitude.

 In my timings below, I've tried to reduce the impact of any OS or
 shared_buffer level caching (restarting postgres, and flushing OS cache
 between queries).


Are you sure that that is the right thing to do?  It seems unlikely
that your production server is constantly executing your query from a
cold start.  Why test it that way?


 I've provided my settings as shown:


 =# show seq_page_cost;
  seq_page_cost
 ---
  1
 (1 row)

 Time: 0.355 ms
 =# show random_page_cost;
  random_page_cost
 --
  2.2
 (1 row)

Given that you are testing your query from a cold start (and assuming
against odds that that is the correct thing to do), 2.2 is probably a
factor of 20 too small for this setting.

Cheers,

Jeff






Re: [GENERAL] Commit hung bug

2013-08-21 Thread S H
Can I see a list of commit-related bugs in PostgreSQL? At one customer site, 
commit returned successfully, but the operation was actually committed an hour 
or so later (Postgres version 8.1.18).
I am proposing that the customer move to the latest version, as there are many 
fixes and major performance improvements in the latest 9.x versions.

See http://archives.postgresql.org/pgsql-hackers/2006-10/msg00561.php

I am sorry, I could not understand it. What would the impact of this bug be for 
PostgreSQL users?


 Date: Wed, 21 Aug 2013 00:37:08 -0400
 From: alvhe...@2ndquadrant.com
 To: msq...@live.com
 CC: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Commit hung bug
 
 S H wrote:
  There is one bug mentioned: commit hung for days.
  http://www.postgresql.org/message-id/1af3044fcab26f4db1ae551f8a33634b3d2...@mail.digital-rapids.com
The interesting thing would be to see the server logs, not the 
  application logs. Specifically, an issue that could look just like this was 
  fixed in 8.1.7, in which case you would see weird error messages about 
  permission denied or such in the *server* logs. None of that would show up 
  in the client logs.
  Any idea what exactly this bug is?
  I could not make out relation between release notes mentioned in 
  http://www.postgresql.org/docs/8.1/static/release-8-1-7.html
  and above comment.
 
 Maybe it's this commit, which was part of 8.1.6:
 
 commit 9f1b531420ee13d04c7701b34bb4b874df7ff2fa
 Author: Teodor Sigaev teo...@sigaev.ru
 Date:   Fri Oct 13 14:00:17 2006 +
 
 Fix infinite sleep and failes of send in Win32.
 
 1) pgwin32_waitforsinglesocket(): WaitForMultipleObjectsEx now called with
 finite timeout (100ms) in case of FP_WRITE and UDP socket. If timeout 
 occurs
 then pgwin32_waitforsinglesocket() tries to write empty packet goes to
 WaitForMultipleObjectsEx again.
 
 2) pgwin32_send(): add loop around WSASend and 
 pgwin32_waitforsinglesocket().
 The reason is: for overlapped socket, 'ok' result from
 pgwin32_waitforsinglesocket() isn't guarantee that socket is still free,
 it can become busy again and following WSASend call will fail with
 WSAEWOULDBLOCK error.
 
 See http://archives.postgresql.org/pgsql-hackers/2006-10/msg00561.php
 
 
 It's troubling to be talking about a bug that was patched in 2006 for
 the 8.1.6 release, however.  Anything prior to that is not something
 anyone should be using anymore.  At the very least, you should have
 migrated to 8.1.23; but 8.1 has been unsupported altogether for more
 than two years now.  Even 8.2 is out of support.
 
 -- 
 Álvaro Herrera                http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
  

[GENERAL] Bug in psql (\dd query)

2013-08-21 Thread Ivan Radovanovic

Hello,

I was checking for a way to get object comments, and it seems that \dd has a 
bug when it comes to extracting descriptions for constraints. The relevant 
part of the query psql is executing is:


SELECT DISTINCT tt.nspname AS "Schema", tt.name AS "Name", tt.object AS 
"Object", d.description AS "Description"

FROM (
  SELECT pgc.oid as oid, pgc.tableoid AS tableoid,
  n.nspname as nspname,
  CAST(pgc.conname AS pg_catalog.text) as name,  CAST('constraint' AS 
pg_catalog.text) as object

  FROM pg_catalog.pg_constraint pgc
JOIN pg_catalog.pg_class c ON c.oid = pgc.conrelid
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname <> 'pg_catalog'
  AND n.nspname <> 'information_schema'
  AND pg_catalog.pg_table_is_visible(c.oid)
/* more unions here */
) AS tt
  JOIN pg_catalog.pg_description d ON (tt.oid = d.objoid AND 
tt.tableoid = d.classoid AND d.objsubid = 0)

ORDER BY 1, 2, 3;

Obviously it is trying to get the description for (table_oid, 
constraint_oid, 0), while in fact it should read the description for (oid of 
pg_catalog.pg_constraint, constraint_oid, 0).


At least, that last tuple is what the COMMENT statement inserts into the 
pg_description table
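
The lookup described above can be written directly against the catalogs, roughly like this (a sketch; foo_pkey is a hypothetical constraint name):

```sql
-- COMMENT ON CONSTRAINT stores (classoid = pg_constraint's own OID,
-- objoid = the constraint's OID, objsubid = 0) in pg_description:
SELECT d.description
FROM pg_catalog.pg_description d
JOIN pg_catalog.pg_constraint c ON c.oid = d.objoid
WHERE d.classoid = 'pg_catalog.pg_constraint'::regclass
  AND c.conname = 'foo_pkey'   -- hypothetical constraint name
  AND d.objsubid = 0;
```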


Regards,
Ivan




[GENERAL] Locale Issue

2013-08-21 Thread Don Parris
Hi all,

When I install the Kubuntu (13.04) postgresql (9.1) packages, the default
template1 encoding turns out to be ASCII, which is not really what I want.
My OS locale command reveals everything to be en_US.UTF-8, except for
LC_ALL, which is unset.

I am guessing that my best bet is to change the locale settings of my OS in
order to get postgresql server to use a UTF-8 locale setting from the
system.  But which setting should I change?

I did run pg_dropcluster, pg_createcluster (setting locale to C.UTF8) - and
that seemed to work, except that I could not thereafter make remote
connections (despite resetting the hba and postgres.conf files as they
should be).

Thanks!
Don
-- 
D.C. Parris, FMP, Linux+, ESL Certificate
Minister, Security/FM Coordinator, Free Software Advocate
http://dcparris.net/
https://www.xing.com/profile/Don_Parris
http://www.linkedin.com/in/dcparris
GPG Key ID: F5E179BE


Re: [GENERAL] Strange message from pg_receivexlog

2013-08-21 Thread Fujii Masao
On Tue, Aug 20, 2013 at 3:17 PM, Sergey Konoplev gray...@gmail.com wrote:
 Hi all,

 My WAL archiving script based on pg_receivexlog reported the following
 error several days ago (just ignore everything before
 'pg_receivexlog', it's a message my script generates).

 Thu Aug 15 18:33:09 MSK 2013 ERROR archive_wal.sh: Problem occured
 during WAL archiving: pg_receivexlog: could not send feedback packet:
 server closed the connection unexpectedly

 At the same time postgres reported this error in log:

 2013-08-15 18:32:51 MSK 30945 postgres@[unknown] from [local]
 [vxid:53/0 txid:0] [streaming 2A97/6FA48000] LOG:  terminating
 walsender process due to replication timeout

 Both pg_receivexlog and postgres run on the same machine,
 pg_receivexlog connects to postgres locally. /var/log/messages has
 absolutely nothing about it. I also have a hot standby on another
 machine connecting to the same master, but there is nothing strange in
 its logs either.

 Any thoughts what it was?

Is the value of replication_timeout sufficiently larger than the status interval
of pg_receivexlog?
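
To illustrate the check being suggested (a sketch; the 10-second interval is an arbitrary example):

```sql
-- On the server (psql): the walsender gives up after this long without
-- receiving a status packet from the client.
SHOW replication_timeout;        -- 60s in this report

-- On the client, pg_receivexlog's feedback frequency is set with
-- --status-interval; keep it well under replication_timeout, e.g.
-- (shell command, shown here as a comment):
--   pg_receivexlog -D /path/to/archive --status-interval=10
```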

Regards,

-- 
Fujii Masao




Re: [GENERAL] Bug in psql (\dd query)

2013-08-21 Thread Tom Lane
Ivan Radovanovic radovano...@gmail.com writes:
 I was checking for way to get object comments, and it seems that \dd has 
 bug when it comes to extracting descriptions for constraints.

That code looks right to me, and it works according to a simple test:

d1=# create table foo (f1 int primary key);
CREATE TABLE
d1=# comment on constraint foo_pkey on foo is 'here is a comment';
COMMENT
d1=# \dd
                Object descriptions
 Schema |   Name   |   Object   |    Description
--------+----------+------------+-------------------
 public | foo_pkey | constraint | here is a comment
(1 row)

What exactly do you think the problem is?

regards, tom lane




Re: [GENERAL] Locale Issue

2013-08-21 Thread Tom Lane
Don Parris parri...@gmail.com writes:
 When I install the Kubuntu (13.04) postgresql (9.1) packages, the default
 template1 encoding turns out to be ASCII, which is not really what I want.
 My OS locale command reveals everything to be en_US.UTF-8, except for
 LC_ALL, which is left open.

initdb will absorb locale/encoding from its environment, unless told
otherwise through a --locale switch.  So the usual expectation would be
that it'd work like you want.  Perhaps the Ubuntu packager set LANG=C in
the postgres user's ~/.profile, or some such?  Poke around a bit in the
package's files, it shouldn't be too hard to find where the damage is
being done.

regards, tom lane




Re: [GENERAL] Bug in psql (\dd query)

2013-08-21 Thread Ivan Radovanovic

On 08/21/13 16:03, Tom Lane wrote:

Ivan Radovanovic radovano...@gmail.com writes:

I was checking for way to get object comments, and it seems that \dd has
bug when it comes to extracting descriptions for constraints.


That code looks right to me, and it works according to a simple test:

d1=# create table foo (f1 int primary key);
CREATE TABLE
d1=# comment on constraint foo_pkey on foo is 'here is a comment';
COMMENT
d1=# \dd
                 Object descriptions
  Schema |   Name   |   Object   |    Description
---------+----------+------------+-------------------
  public | foo_pkey | constraint | here is a comment
(1 row)

What exactly do you think the problem is?

regards, tom lane


The problem is if you create the table in a schema other than public (I am not 
sure whether \dd should show comments only for objects in the public schema; I 
assumed not?)


db=# create schema test;
CREATE SCHEMA
db=# create table test.foo (f1 int primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index 
"foo_pkey" for table "foo"

CREATE TABLE
db=# comment on constraint foo_pkey on test.foo is 'here is a comment';
COMMENT
db=# \dd
         Object descriptions
 Schema | Name | Object | Description
--------+------+--------+-------------
(0 rows)





Re: [GENERAL] Bug in psql (\dd query)

2013-08-21 Thread Ivan Radovanovic

On 08/21/13 16:34, Ivan Radovanovic wrote:

On 08/21/13 16:03, Tom Lane wrote:

Ivan Radovanovic radovano...@gmail.com writes:

I was checking for way to get object comments, and it seems that \dd has
bug when it comes to extracting descriptions for constraints.


That code looks right to me, and it works according to a simple test:

d1=# create table foo (f1 int primary key);
CREATE TABLE
d1=# comment on constraint foo_pkey on foo is 'here is a comment';
COMMENT
d1=# \dd
               Object descriptions
 Schema |   Name   |   Object   |    Description
--------+----------+------------+-------------------
 public | foo_pkey | constraint | here is a comment
(1 row)

What exactly do you think the problem is?

regards, tom lane


The problem is if you create the table in a schema other than public (I am not
sure whether \dd should show comments only for objects in the public schema; I
assumed not?)

db=# create schema test;
CREATE SCHEMA
db=# create table test.foo (f1 int primary key);
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "foo_pkey"
for table "foo"
CREATE TABLE
db=# comment on constraint foo_pkey on test.foo is 'here is a comment';
COMMENT
db=# \dd
        Object descriptions
 Schema | Name | Object | Description
--------+------+--------+-------------
(0 rows)



Obviously there is an optional pattern argument for \dd which would show 
comments in a different schema, so it was my mistake after all.


Sorry for the false alarm.




Re: [GENERAL] Locale Issue

2013-08-21 Thread Don Parris
On Wed, Aug 21, 2013 at 10:08 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Don Parris parri...@gmail.com writes:
 SNIP



 initdb will absorb locale/encoding from its environment, unless told
 otherwise through a --locale switch.  So the usual expectation would be
 that it'd work like you want.  Perhaps the Ubuntu packager set LANG=C in
 the postgres user's ~/.profile, or some such?  Poke around a bit in the
 package's files, it shouldn't be too hard to find where the damage is
 being done.

 regards, tom lane


Thanks Tom,

I'll check that out.

-- 
D.C. Parris, FMP, Linux+, ESL Certificate
Minister, Security/FM Coordinator, Free Software Advocate
http://dcparris.net/
https://www.xing.com/profile/Don_Parris
http://www.linkedin.com/in/dcparris
GPG Key ID: F5E179BE


Re: [GENERAL] Memory Issue with array_agg?

2013-08-21 Thread Robert Sosinski
Hi Pavel,

Here are the explains you asked for:

explain analyze select string_agg(id::text,',') from things group by guid;
QUERY PLAN

--
 GroupAggregate  (cost=400357.78..433784.93 rows=1337086 width=37) (actual
time=41434.485..53195.185 rows=2378626 loops=1)
   ->  Sort  (cost=400357.78..403700.49 rows=1337086 width=37) (actual
time=41434.433..44992.736 rows=2378626 loops=1)
 Sort Key: guid
 Sort Method: quicksort  Memory: 284135kB
         ->  Seq Scan on things  (cost=0.00..264304.86 rows=1337086
width=37) (actual time=0.027..21429.179 rows=2378626 loops=1)
 Total runtime: 56295.362 ms
(6 rows)


explain analyze select array_agg(id::text) from things group by guid;
   QUERY PLAN

-
 GroupAggregate  (cost=400357.78..433784.93 rows=1337086 width=37) (actual
time=23953.922..38157.059 rows=2378626 loops=1)
   ->  Sort  (cost=400357.78..403700.49 rows=1337086 width=37) (actual
time=23953.847..27527.316 rows=2378626 loops=1)
 Sort Key: guid
 Sort Method: quicksort  Memory: 284135kB
         ->  Seq Scan on things  (cost=0.00..264304.86 rows=1337086
width=37) (actual time=0.007..4941.752 rows=2378626 loops=1)
 Total runtime: 41280.897 ms
(6 rows)



These seem to be running fine on the machine now, and the memory is not
inflating. I just ran this one, however, and it blew up:

explain with t as (select id, guid, md5(concat_ws(':', fields -> 'a',
fields -> 'b', fields -> 'c', fields -> 'd', fields -> 'e', foo_id::text))
from things) select md5, count(id), array_agg(id) from t group by 1 having
count(id) > 1;

-Robert


On Tue, Aug 20, 2013 at 1:53 PM, Pavel Stehule pavel.steh...@gmail.com wrote:

 Can you send a EXPLAIN result in both use cases?

 Pavel


 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 At the moment, all guids are distinct, however before I zapped the
 duplicates, there were 280 duplicates.

 Currently, there are over 2 million distinct guids.

 -Robert


 On Mon, Aug 19, 2013 at 11:12 AM, Pavel Stehule 
  pavel.steh...@gmail.com wrote:




 2013/8/19 Robert Sosinski rsosin...@ticketevolution.com

 Hi Pavel,

 What kind of example do you need?  I can't give you the actual data I
 have in the table, but I can give you an example query and the schema
 attached below.  From there, I would just put in 2 million rows worth 1.2
 Gigs of data.  Average size of the extended columns (using the
 pg_column_size function) in bytes are:

 guid: 33
 name: 2.41
 currency: 4
 fields: 120.32

 example query:

 -- find duplicate records using a guid
 select guid, array_agg(id) from orders group by guid;


 how many distinct guids are there, and how many duplicates?

 regards

 Pavel




 example schema:
                                     Table "public.things"
   Column   |            Type             |              Modifiers               | Storage  | Stats target | Description
------------+-----------------------------+--------------------------------------+----------+--------------+-------------
 id         | integer                     | not null default                     | plain    |              |
            |                             |   nextval('things_id_seq'::regclass) |          |              |
 version    | integer                     | not null                             | plain    |              |
 created_at | timestamp without time zone | not null                             | plain    |              |
 updated_at | timestamp without time zone | not null                             | plain    |              |
 foo_id     | integer                     | not null                             | plain    |              |
 bar_id     | integer                     | not null                             | plain    |              |
 baz_id     | integer                     | not null                             | plain    |              |
 guid       | character varying           | not null                             | extended |              |
 name       | character varying           | not null                             | extended |              |
 price      | numeric(12,2)               | not null                             | main     |              |
 currency   | character varying           | not null                             | extended |              |
 amount     | integer                     | not null                             | plain    |              |
 the_date   | date                        | not null                             | plain    |              |
 fields     | hstore                      |                                      | extended |              |
Indexes:
    "things_pkey" PRIMARY KEY, btree (id)
    "things_foo_id_idx" btree (foo_id)
    "things_bar_id_idx" btree 

[GENERAL] Unique constraint and unique index

2013-08-21 Thread Ivan Radovanovic

Just to verify:
- when a unique constraint is created using the appropriate syntax, rows are 
added to the tables pg_constraint and pg_index (pg_constraint with type 'u' 
and referring to an index with indisunique set to true)
- when a unique index is created, a row is added only to the pg_index table, 
but not to the pg_constraint table (although in fact that index behaves like 
a constraint on the table)


Is that correct?
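
That understanding can be checked against the catalogs with a quick sketch (the table and object names are made up):

```sql
CREATE TABLE t (a int, b int);

-- Unique constraint: appears in pg_constraint AND (via its index) pg_index.
ALTER TABLE t ADD CONSTRAINT t_a_key UNIQUE (a);

-- Plain unique index: appears in pg_index only.
CREATE UNIQUE INDEX t_b_idx ON t (b);

SELECT conname, contype FROM pg_constraint WHERE conrelid = 't'::regclass;
-- expected: just t_a_key, with contype 'u'

SELECT indexrelid::regclass AS index, indisunique
FROM pg_index WHERE indrelid = 't'::regclass;
-- expected: both t_a_key and t_b_idx, each with indisunique = true
```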

Regards,
Ivan




[GENERAL] WAL Replication Working but Not Working

2013-08-21 Thread Joseph Marlin
We're having an issue with our warm standby server. About 9:30 last night, it 
stopped applying changes it received in WAL files that are shipped over to it 
as they are created. It is still reading WAL files as they are delivered, as the 
startup_log.txt shows, but the changes in the primary database aren't actually 
being made to the standby, and haven't been since last night.

Is there any way we can figure out what is going on here? We'd like to recover 
somehow without having to restore from a base backup, and we'd like to figure 
out what is happening so we can prevent it in the future.

Thanks!




Re: [GENERAL] WAL Replication Working but Not Working

2013-08-21 Thread Lonni J Friedman
The first thing to do is look at your server logs around the time when
it stopped working.

On Wed, Aug 21, 2013 at 7:08 AM, Joseph Marlin jmar...@saucontech.com wrote:
 We're having an issue with our warm standby server. About 9:30 last night, it 
 stopped applying changes it received in WAL files that are shipped over to it 
 as they are created. It is still reading WAL files as they delivered, as the 
 startup_log.txt shows, but the changes in the primary database aren't 
 actually being made to the standby, and haven't been since last night.

 Is there any way we can figure out what is going on here? We'd like to 
 recover somehow without having to restore from a base backup, and we'd like 
 to figure out what is happening so we can prevent it in the future.

 Thanks!




[GENERAL] PostgreSQL 9.2 Logging

2013-08-21 Thread carlosinfl .
I'm trying to understand how 9.2.4 logs common tasks and info in
CentOS 6.4 Linux. It appears everything is stored in
/var/lib/pgsql9/data/pg_log/postgresql-%a.log

My issue is that the logged information is fairly sparse. I only
see auth failures and nothing more. I tried to perform my 1st VACUUM
command on my database and I was hoping to see something in the logs
showing it PASSED / FAILED or even was manually initiated by a
superuser role but I see nothing...

I don't need to log every single verbose thing this database server
does, but I would expect that something like a VACUUM command would
warrant being logged, no?


#--
# ERROR REPORTING AND LOGGING
#--

# - When to Log -



Not sure what to change here...

# - What to Log -

#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off

Do you PG veterans have any log change recommendations / changes I can
make which would help me understand what my system is doing? I don't
want to log every little thing possible and choke my disk with
informative logging but just basic 'what's going on' logs would be
awesome.

Thanks for any help!

-- 
Carlos Mennens




[GENERAL] No caching sql query

2013-08-21 Thread Prabhjot Sheena
Hi guys
  Is there a way to not cache a SQL query that has just finished
executing? I have a scenario in which I want to run a SQL query, but I don't
want it to sit in the cache; I want it cleared from the cache as soon as it
displays the result. Is that possible?

Thanks


Re: [GENERAL] No caching sql query

2013-08-21 Thread Prabhjot Sheena
Or is there a command to flush the cache?

Thx


On Wed, Aug 21, 2013 at 1:28 PM, Prabhjot Sheena 
prabhjot.she...@rivalwatch.com wrote:

 Hi guys
   Is there a way to not cache sql query that just finished
 executing. I have a scenario in which i want to run a sql query but i don't
 want it to sit in cache. i want it to be cleared from cache as soon as it
 displays me the result. Is that possible

 Thanks



Re: [GENERAL] Strange message from pg_receivexlog

2013-08-21 Thread Sergey Konoplev
On Wed, Aug 21, 2013 at 5:09 AM, Fujii Masao masao.fu...@gmail.com wrote:
 Thu Aug 15 18:33:09 MSK 2013 ERROR archive_wal.sh: Problem occured
 during WAL archiving: pg_receivexlog: could not send feedback packet:
 server closed the connection unexpectedly

 At the same time postgres reported this error in log:

 2013-08-15 18:32:51 MSK 30945 postgres@[unknown] from [local]
 [vxid:53/0 txid:0] [streaming 2A97/6FA48000] LOG:  terminating
 walsender process due to replication timeout

 Is the value of replication_timeout sufficiently-larger than the 
 status-interval
 of pg_receivexlog?

The replication_timeout is 60s.

The archive_wal.sh (a script-wrapper around pg_receivexlog) reports its
status straight away when it fails with an error. Below is an
explanation of how it works.

This is the core of archive_wal.sh:

(
flock -xn 543 || exit 0
result=$($PGRECEIVEXLOG -n -D $WAL_ARCHIVE_DIR 2>&1) || \
die "Problem occured during WAL archiving: $result."
) 543>$WAL_LOCK_FILE

And it is set to run by cron once a minute reporting me by email on occasions:

MAILTO=gray...@gmail.com

*  *  * * * /bin/bash /var/lib/pgsql/tmsdb/archive_wal.sh >> /var/log/tmsdb/archive_wal.log

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [GENERAL] No caching sql query

2013-08-21 Thread Nik Tek
Prabhjot,

You could clear the cache at the disk level, not at the db level.
Follow the link: http://linux-mm.org/Drop_Caches

You can look into the cache by using pg_buffercache, if you want to.
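
For the pg_buffercache route, a minimal sketch (assumes the extension is installed; the table name is hypothetical):

```sql
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- How many shared buffers currently hold pages of a given relation?
SELECT count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE c.relname = 'things';   -- hypothetical table name
```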

Hope it helps
Nik



On Wed, Aug 21, 2013 at 1:31 PM, Prabhjot Sheena 
prabhjot.she...@rivalwatch.com wrote:

 or is there a command to flush cache out

 Thx


 On Wed, Aug 21, 2013 at 1:28 PM, Prabhjot Sheena 
 prabhjot.she...@rivalwatch.com wrote:

 Hi guys
   Is there a way to not cache sql query that just finished
 executing. I have a scenario in which i want to run a sql query but i don't
 want it to sit in cache. i want it to be cleared from cache as soon as it
 displays me the result. Is that possible

 Thanks





[GENERAL] How to know detailed information about HOT(Heap-Only Tuples)?

2013-08-21 Thread 高健
Hi:

I have heard that Heap-Only Tuples were introduced in 8.3,
and I am searching for information about them.

How can I get detailed information about HOT?
For example:
for a given table, how many tuples are heap-only tuples, and how many
are not?

And also, are there any options which can influence HOT occurrence?

Best Regards
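
For reference, per-table HOT counters live in the cumulative statistics views, and the heap fillfactor is the main knob influencing whether HOT updates can happen. A sketch (the table name is hypothetical):

```sql
-- HOT vs. total updates per table since the statistics were last reset:
SELECT relname, n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 'things';   -- hypothetical table name

-- Leaving free space on each heap page makes same-page (HOT) updates likelier:
ALTER TABLE things SET (fillfactor = 90);
```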


Re: [GENERAL] PostgreSQL 9.2 Logging

2013-08-21 Thread Raghavendra

 My issue is the logging information is fairly missing / light. I only
 see auth failures and nothing more. I tried to perform my 1st VACUUM
 command on my database and I was hoping to see something in the logs
 showing it PASSED / FAILED or even was manually initiated by a
 superuser role but I see nothing...

AFAIK, there are no separate flags written to the logs for utility commands,
like succeeded or not.
Albeit, you can log the command executed and the time consumed.


 I don't need to log every single verbose thing this database server
 does but I would expect something like a VACUUM command would be
 warranted to be logged, no?

Do you PG veterans have any log change recommendations / changes I can
 make which would help me understand what my system is doing? I don't
 want to log every little thing possible and choke my disk with
 informative logging but just basic 'what's going on' logs would be
 awesome.


You can control the logging model with many parameters in the postgresql.conf
file; however, log_min_duration_statement will help you log only those
statements which run for at least X seconds.

For example: if I want to log only statements taking more than a
second, without bothering about which statements they are, I would set:

postgres=# show log_min_duration_statement ;
 log_min_duration_statement

 1s
(1 row)

This will log every statement whose duration is equal to or greater than the
value passed to it. Now in my logs:

2013-08-13 03:07:01.146 IST [14152]: [9-1] db=,user= LOG:  parameter
log_min_duration_statement changed to 1s
2013-08-13 03:08:03.297 IST [26559]: [1-1] db=d1,user=postgres LOG:
 duration: 2159.281 ms  statement: vacuum VERBOSE ANALYZE ;

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/