Re: [GENERAL] invalid connection type listen_addresses='*'

2014-07-10 Thread Igor Neyman
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Aram Fingal
Sent: Thursday, July 10, 2014 4:40 PM
To: Postgres-General General
Subject: [GENERAL] invalid connection type listen_addresses='*'

I just tried to set up a PostgreSQL server on an existing installation of 
Ubuntu 13.10 server, but I am getting an error when trying to start the server, 
and I am not finding anything relevant to the error by searching the web.

Here's what I did to install:

$ sudo apt-get install postgresql
$ sudo apt-get install postgresql-contrib

I set a password for the postgres user and edited the pg_hba.conf file as 
follows:

[skipping a bunch of comments]

# Put your actual configuration here
# --
#
# If you want to allow non-local connections, you need to add more
# host records.  In that case you will also need to make PostgreSQL
# listen on a non-local interface via the listen_addresses
# configuration parameter, or via the -i or -h command line switches.

listen_addresses='*'

# DO NOT DISABLE!
# If you change this first entry you will need to make sure that the
# database superuser can access the database using some other method.
# Noninteractive access to all databases is required during automatic
# maintenance (custom daily cronjobs, replication, and similar tasks).
#
# Database administrative login by Unix domain socket
local   all             postgres                                peer

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# local is for Unix domain socket connections only
local   all             all                                     peer
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
host    all             all             all             md5

# IPv6 local connections:
host    all             all             ::1/128                 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local   replication     postgres                                peer
#host    replication     postgres        127.0.0.1/32            md5
#host    replication     postgres        ::1/128                 md5


Then I try to restart the server:

$ sudo /etc/init.d/postgresql restart
 * Restarting PostgreSQL 9.1 database server
 * The PostgreSQL server failed to start. Please check the log output:
2014-07-10 16:34:39 EDT LOG:  invalid connection type listen_addresses='*'
2014-07-10 16:34:39 EDT CONTEXT:  line 75 of configuration file 
/etc/postgresql/9.1/main/pg_hba.conf
2014-07-10 16:34:39 EDT FATAL:  could not load pg_hba.conf


Aram,

The listen_addresses='*' parameter doesn't belong in pg_hba.conf.

It should be set in postgresql.conf instead.
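
For example, a minimal sketch (the 0.0.0.0/0 range is only an illustration; restrict it to your actual network):

# postgresql.conf
listen_addresses = '*'

# pg_hba.conf -- host rules go here, not GUC settings
host    all             all             0.0.0.0/0               md5

Then restart the server.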

Regards,
Igor Neyman



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Getting cache lookup failed for aggregate error

2014-06-25 Thread Igor Neyman



FWIW the SQL is

DROP AGGREGATE IF EXISTS array_cat_aggregate(anyarray);
CREATE AGGREGATE array_cat_aggregate(anyarray)  (
SFUNC = array_cat,
STYPE = anyarray,
INITCOND  = '{}'
);

Followed by the other statement given in my previous email. But I think you've 
thoroughly answered my question. Thanks!


So, instead of dropping the aggregate "if exists", why not check the system 
catalogs for the aggregate's existence, and create it only if it does NOT exist?
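
A minimal sketch of that check (9.x catalogs, where pg_proc still has the proisagg flag):

DO $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM pg_proc
                   WHERE proname = 'array_cat_aggregate' AND proisagg) THEN
        CREATE AGGREGATE array_cat_aggregate(anyarray) (
            SFUNC    = array_cat,
            STYPE    = anyarray,
            INITCOND = '{}'
        );
    END IF;
END
$$;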

Regards,
Igor Neyman


Re: [GENERAL] what does pg_activity mean when the database is stuck?

2014-06-12 Thread Igor Neyman
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Si Chen
Sent: Wednesday, June 11, 2014 10:44 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] what does pg_activity mean when the database is stuck?

Is there a way to configure postgresql to automatically release connections 
that have been idle for a set amount of time?

On Wed, Jun 11, 2014 at 3:41 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Wed, Jun 11, 2014 at 5:37 PM, Si Chen
sic...@opensourcestrategies.com wrote:
 The state is idle.  I don't have the state_change, but I will try to
 collect it if it happens again.
If they are idle, then the problem is probably with your application
-- you're grabbing new connections and not closing them or reusing
them.  It's a very common problem.  The 'query' when idle represents
the last query run -- the database finished it and is sitting around.

merlin

-- 
Si Chen
Open Source Strategies, Inc.
sic...@opensourcestrategies.com
http://www.OpenSourceStrategies.com
LinkedIn: http://www.linkedin.com/in/opentaps
Twitter: http://twitter.com/opentaps


The best solution for this is to use a connection pooler, such as PgBouncer.
By the way, PgBouncer can also disconnect idle client connections (if you really 
wish), based on a configuration setting.
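
The relevant pgbouncer.ini settings, as a minimal sketch (the timeout values are arbitrary examples):

[pgbouncer]
; close client connections idle longer than this many seconds (0 = disabled)
client_idle_timeout = 300
; close server connections that have been unused this long
server_idle_timeout = 60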

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: Cannot start Postgresql 9.3 as a service in Windows 2012 Server with a domain account

2014-06-12 Thread Igor Neyman
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of boca2608
 Sent: Thursday, June 12, 2014 10:00 AM
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] Re: Cannot start Postgresql 9.3 as a service in Windows
 2012 Server with a domain account
 
 Krystian Bigaj replied to this in a separate email, which led to some interesting
 information that I would like to share on this mailing list.
 
 He suggested the use of the Process Monitor app to log the process events
 during the startup of the service and look for ACCESS DENIED errors.  Here
 is what I found.  During the startup, there were indeed several ACCESS
 DENIED errors:
 
 Date & Time: 6/12/2014 9:27:41 AM
 Event Class: Registry
 Operation: RegOpenKey
 Result: ACCESS DENIED
 Path: HKLM\Software\Microsoft\Windows NT\CurrentVersion\Image File
 Execution Options
 TID: 1964
 Duration: 0.451
 Desired Access: Query Value, Enumerate Sub Keys
 
 
 Date & Time: 6/12/2014 9:27:41 AM
 Event Class: Registry
 Operation: RegOpenKey
 Result: ACCESS DENIED
 Path: HKLM\System\CurrentControlSet\Control\Session Manager
 TID: 1964
 Duration: 0.364
 Desired Access: Read
 
 Date & Time: 6/12/2014 9:27:41 AM
 Event Class: File System
 Operation: CreateFile
 Result: ACCESS DENIED
 Path: C:\Windows\System32
 TID: 1964
 Duration: 0.409
 Desired Access: Execute/Traverse, Synchronize
 Disposition: Open
 Options: Directory, Synchronous IO Non-Alert
 Attributes: n/a
 ShareMode: Read, Write
 AllocationSize: n/a
 
 
 Date & Time: 6/12/2014 9:27:41 AM
 Event Class: File System
 Operation: QueryOpen
 Result: ACCESS DENIED
 Path: D:\PostgreSQL\9.3\bin\ssleay32.dll
 TID: 1964
 Duration: 0.270
 
 I do not know how to give someone permission to a particular registry entry.
 But I suspect that the inability to access system32 might be the cause of the
 failure to start the service.  When I tried to add the domain user to the
 permissions for system32 (READ & EXECUTE), Windows would not allow me to
 proceed.  Has anybody seen such issues?  Any help would be greatly
 appreciated.
 
 Thanks,
 John
 

I missed the beginning of this thread.
Is there a specific reason NOT to use a local account for the Postgres service?

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: Cannot start Postgresql 9.3 as a service in Windows 2012 Server with a domain account

2014-06-12 Thread Igor Neyman
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of boca2608
Sent: Thursday, June 12, 2014 11:05 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Re: Cannot start Postgresql 9.3 as a service in Windows 2012 
Server with a domain account

Igor, 
  
Our network security policy requires that such database services run under a 
dedicated domain account.  (Postgresql does run successfully under local system 
account and the default NETWORK SERVICE account.) 
  
Thanks,
John 
  

I see.
So, did you try to explicitly make this domain account a member of the local 
Users group?
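
For example, from an elevated command prompt (the account name is a placeholder):

net localgroup Users MYDOMAIN\pgservice /add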

Regards,
Igor  


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: Cannot start Postgresql 9.3 as a service in Windows 2012 Server with a domain account

2014-06-12 Thread Igor Neyman
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of boca2608
 Sent: Thursday, June 12, 2014 12:33 PM
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] Re: Cannot start Postgresql 9.3 as a service in Windows
 2012 Server with a domain account
 
 After adding the domain user account into the local users group, the
 postgresql service can be started successfully now.  We will do more testing
 to make sure that all postgresql functions are working.  But I want to give my
 big thanks to Krystian Bigaj, Igor Neyman and Raymond O'Donnell for
 offering timely help and making this user mailing list a great resource to the
 postgresql user community.
 
 Thanks,
 John
 

Just a heads-up:
These domain/network security people like to change account passwords on a 
regular basis, in which case your local Postgres service will stop working.
Pay attention.
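
When that happens, the stored service password has to be updated to match, e.g. (the service name is a placeholder; check yours with sc query):

sc config postgresql-x64-9.3 password= NewPassword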

Regards,
Igor


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] what does pg_activity mean when the database is stuck?

2014-06-11 Thread Igor Neyman
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Si Chen
Sent: Wednesday, June 11, 2014 4:34 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] what does pg_activity mean when the database is stuck?

I didn't see any from the log.  It was just a whole bunch of pretty standard 
looking SELECT queries.  There were no INSERT/COMMIT statements still active 
before the SELECTs, just a few that are waiting after a lot of SELECT 
statements.

Also, if the process just shows COMMIT, is there any way to see what it's 
trying to commit?  

On Wed, Jun 11, 2014 at 9:29 AM, Jeff Janes jeff.ja...@gmail.com wrote:
On Wed, Jun 11, 2014 at 8:59 AM, Si Chen sic...@opensourcestrategies.com 
wrote:
I have a problem where postgresql 9.3 got stuck, and the number of postgresql 
processes increased from about 15 to 225 in 10 minutes.

I ran the query:
select pid, query_start, waiting, state, query from pg_stat_activity order by 
query_start;

But it showed mostly select statements -- all of them the same one, with a 
couple of joins.  They are not in a waiting state but have been running for 
over 2 hours.  

I also checked for locks with the query on
http://wiki.postgresql.org/wiki/Lock_Monitoring

But it returned no locked tables.

So what does this mean?  Is the select query getting stuck?  

Do you have a huge chunk of newly inserted, not yet committed, rows?  This sounds 
like the issue where all of the processes fight with each other over the right 
to check uncommitted rows in order to verify that they are actually uncommitted.

Cheers,

Jeff


-- 
Si Chen
Open Source Strategies, Inc.
sic...@opensourcestrategies.com
http://www.OpenSourceStrategies.com
LinkedIn: http://www.linkedin.com/in/opentaps
Twitter: http://twitter.com/opentaps


When you query pg_stat_activity, what do you see in the state column, and how 
does state_change compare to query_start?
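
Something like this (a sketch using 9.3's pg_stat_activity columns):

SELECT pid, state, query_start, state_change,
       state_change - query_start AS time_in_state
FROM pg_stat_activity
ORDER BY query_start;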

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need help on triggers - postgres 9.1.2

2014-05-22 Thread Igor Neyman
  -Original Message-
  From: Khangelani Gama [mailto:kg...@argility.com]
  Sent: Thursday, May 22, 2014 9:38 AM
  To: 'Adrian Klaver'; 'pgsql-general@postgresql.org'
  Subject: RE: [GENERAL] Need help on triggers - postgres 9.1.2
 
  Hi all
 
  Something is tricky for me here; see the trigger I wrote below. What
  can I do to insert c_cde from the center table INTO the center_changed table
  with ONLY the c_cde where the update was made, or where an INSERT of a
  new entry INTO the center table was made?
 
 
  Let's say the center table has got the following values. When I try to
  change c_desc from KITWE to KIT where c_cde = '0204', the trigger should
  take code 0204 and insert it into the center_changed table with a time
  stamp. So the main problem I have is to populate the table called
 center_changed.
 
 
  c_cde |c_desc | c_active
  ++--
   0094   | GABORONE WAREHOUSE | f
   0204   | KITWE  | t
 
 
 
 
 
 
  CREATE TABLE center_changed (
  c_cde text      NOT NULL,
  stamp timestamp NOT NULL
  );
 
  CREATE OR REPLACE FUNCTION check_center_changes() RETURNS
 TRIGGER AS
  $center_changed$
 
  BEGIN
 
  IF (TG_OP = 'UPDATE') THEN
  INSERT INTO center_changed SELECT c_cde, now();
  RETURN NEW;
  ELSIF (TG_OP = 'INSERT') THEN
  INSERT INTO center_changed SELECT c_cde, now();
  RETURN NEW;
  END IF;
  RETURN NULL;
  END;
  $center_changed$ LANGUAGE plpgsql;
 
  CREATE TRIGGER center_changed
  AFTER INSERT OR UPDATE ON center
  FOR EACH ROW EXECUTE PROCEDURE check_center_changes();
 

This should work:

CREATE OR REPLACE FUNCTION check_center_changes() RETURNS TRIGGER AS
$center_changed$
BEGIN
    INSERT INTO center_changed VALUES (NEW.c_cde, now());
    RETURN NEW;
END;
$center_changed$ LANGUAGE plpgsql;

CREATE TRIGGER center_changed
AFTER INSERT OR UPDATE ON center
FOR EACH ROW EXECUTE PROCEDURE check_center_changes();
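
A quick way to verify, against the sample data above (a sketch):

UPDATE center SET c_desc = 'KIT' WHERE c_cde = '0204';
SELECT * FROM center_changed;
-- expected: one row ('0204', <current timestamp>)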

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can't delete role because of unknown object

2014-04-22 Thread Igor Neyman


From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Craig Libscomb
Sent: Tuesday, April 22, 2014 4:27 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Can't delete role because of unknown object

On Tue, Apr 22, 2014 at 3:06 PM, Raymond O'Donnell r...@iol.ie wrote:
On 22/04/2014 20:47, Craig Libscomb wrote:
 The following command:
 DROP USER IF EXISTS jpate;

 generates the following output:
 ERROR:  role "jpate" cannot be dropped because some objects depend on it
 DETAIL:  1 object in database products

 It would be most helpful to know what object in the products database
 depends on the jpate role, but I am unable to find anything that even
 begins to offer a clue. What command will show me this mysterious
 object, please?
I'd hazard a guess that there is another role which is a member of this
one. Connect to the database using psql, and then \du will give you a
list of all roles - in the output from \du, look at the column Member of.

All of the roles have {} under member of, so I assume that means no members?
 

HTH,

Ray.


You could try:

SELECT C.relname, C.reltype
FROM pg_class C, pg_authid O
WHERE O.rolname = 'jpate'
AND C.relowner = O.oid;
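
Note that pg_class is per-database, so run this while connected to the products database (the error says the dependent object lives there). And if the goal is simply to drop the role along with whatever it owns there, something like this works too (destructive, so only if that is really the intent):

\c products
REASSIGN OWNED BY jpate TO postgres;   -- or: DROP OWNED BY jpate;
DROP USER jpate;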

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Lock problem

2014-04-02 Thread Igor Neyman


From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Victor Sterpu
Sent: Wednesday, April 02, 2014 11:19 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Lock problem

Hello
 
I have a problem that seems to be very hard to debug.
The problem comes from some PostgreSQL locks. I use PostgreSQL 9.1.8.
 
I ran this query to find the locks:
SELECT bl.pid          AS blocked_pid,
       a.usename       AS blocked_user,
       kl.pid          AS blocking_pid,
       ka.usename      AS blocking_user,
       a.current_query AS blocked_statement
FROM pg_catalog.pg_locks bl
JOIN pg_catalog.pg_stat_activity a  ON a.procpid = bl.pid
JOIN pg_catalog.pg_locks kl         ON kl.transactionid = bl.transactionid AND kl.pid != bl.pid
JOIN pg_catalog.pg_stat_activity ka ON ka.procpid = kl.pid
WHERE NOT bl.granted;
The result is a recursive lock.
Pid 10665 is blocked by pid 9844 and pid 9844 is blocked by 10665.
These 2 inserts are in 2 separate transactions.
Can this be a postgresql bug?
 
blocked_pid | blocked_user | blocking_statement  | blocking_duration | blocking_pid | blocking_user | blocked_statement   | blocked_duration
------------+--------------+---------------------+-------------------+--------------+---------------+---------------------+-----------------
10665       | postgres     | [A]                 | 00:47:33.995919   | 9844         | postgres      | [B]                 | 00:37:36.175607
9844        | postgres     | [B]                 | 00:37:36.175607   | 10665        | postgres      | [A]                 | 00:47:33.995919
10665       | postgres     | IDLE in transaction | 00:55:42.876538   | 9830         | postgres      | [B]                 | 00:37:36.175607
10680       | postgres     | [B]                 | 00:37:36.175607   | 10665        | postgres      | [B]                 | 00:31:47.211123
9844        | postgres     | IDLE in transaction | 00:55:42.876538   | 9830         | postgres      | [A]                 | 00:47:33.995919
10706       | postgres     | [A]                 | 00:47:33.995919   | 9844         | postgres      | [A]                 | 00:18:45.763758

where
[A] = INSERT INTO paraclinic_results VALUES (DEFAULT,1074986, 10.4, 15,15,0, now(), now(), NULL, null, null, NULL, NULL)
[B] = INSERT INTO paraclinic_results VALUES (DEFAULT,1075033, 248.54, 10,17,0, now(), now(), NULL, null, null, NULL, NULL)
 
I never use the LOCK command in my application.
All locks are taken by PostgreSQL itself.
I use transactional support a lot.
Can someone give some advice about how I can prevent this locking?
 
Thank you.
 

So, did you check (in pg_stat_activity) what pid 9830 is doing? It looks like 
that session is holding up the other sessions.
I don't see a recursive lock in your query output.
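
For example (9.1 column names):

SELECT procpid, waiting, xact_start, query_start, current_query
FROM pg_stat_activity
WHERE procpid = 9830;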

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Lock problem

2014-04-02 Thread Igor Neyman


 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Victor Sterpu
 Sent: Wednesday, April 02, 2014 2:25 PM
 To: Victor Sterpu; Merlin Moncure
 Cc: PostgreSQL General
 Subject: Re: [GENERAL] Lock problem
 
 I'm sure it is not right, but is there a server-side solution for such 
 situations?
 A configuration timeout for idle transactions.
 

I don't think PG has such a configuration parameter.
But you could easily write a function (say, in PL/pgSQL) and run it on a schedule, 
in which you check for IDLE IN TRANSACTION sessions, compare their transaction 
start time to the system time, and then, based on your criteria, kill the 
suspect session/transaction.
But this could be dangerous; some long-running transactions could be perfectly 
valid.
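
A minimal sketch of the core of such a function (9.1 column names; the 30-minute threshold is an arbitrary example):

SELECT pg_terminate_backend(procpid)
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
  AND xact_start < now() - interval '30 minutes';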

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Unattended Installation

2014-04-01 Thread Igor Neyman
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of shetty65
 Sent: Tuesday, April 01, 2014 3:13 AM
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] Unattended Installation
 
 Hello
 
 I am using Postgres 9.3.3.1 on Windows (32-Bit Windows 7 Professional).
 I use the installer executable postgresql-9.3.3-1-windows.exe with the
 option --optionfile file
 
 The option file has the following content (the ${..} are replaced with correct
 values before execution):
  #mode=unattended
  datadir=${program.base}data
  prefix=${program.base}
  serverport=${postgres.port}
  superaccount=postgres
  superpassword=${postgres.passwd}
  unattendedmodeui=minimalWithDialogs
  servicename=${postgres.service.name}
 
 1) If the mode property is enabled to mode=unattended, the installation
 works silently, but the whole 'data' folder is missing after installation (the
 first error message appears when the server startup fails after installation).
 
 2) If the mode property is disabled, the Installer interface appears.
 Everything is correctly set, I need only to press the next button. The
 Installation completes correctly.
 
 Why does my unattended installation as described in 1) fail, but 2) succeeds?
 
 Thank you in advance for the help
 Peter.
 

Here are options that I use for unattended install (and it creates cluster with 
Postgres, template0, and template1 databases, which means data directory and 
all its subdirectories are created too):

SET INSTALLOPTIONS=--unattendedmodeui none
SET INSTALLOPTIONS=%INSTALLOPTIONS% --mode unattended

Also, you are missing settings for serviceaccount, servicepassword, and 
locale, e.g.:
SET INSTALLOPTIONS=%INSTALLOPTIONS% --locale C
SET INSTALLOPTIONS=%INSTALLOPTIONS% --serviceaccount postgres
SET INSTALLOPTIONS=%INSTALLOPTIONS% --servicepassword pg_password123
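
Then pass the accumulated options to the installer, e.g. (paths are placeholders):

postgresql-9.3.3-1-windows.exe %INSTALLOPTIONS% --prefix "C:\PostgreSQL\9.3" --datadir "C:\PostgreSQL\9.3\data"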

In general, to diagnose (silent or not) installation problems, find the bitrock 
installer log somewhere under Documents and Settings for the OS user that runs 
the installation.

Regards,
Igor Neyman



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] simple update query stuck

2014-04-01 Thread Igor Neyman


From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Si Chen
Sent: Tuesday, April 01, 2014 3:51 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] simple update query stuck

Hello,

I'm using postgresql 9.0.13, and I have a simple query that seems to be stuck.  
I did a
postgres=# select procpid, query_start, waiting, current_query from 
pg_stat_activity;

 procpid |          query_start          | waiting | current_query
---------+-------------------------------+---------+---------------
   32605 | 2014-04-01 12:39:46.957666-07 | t       | UPDATE public.GL_ACCOUNT_ORGANIZATION SET ROLE_TYPE_ID=$1, FROM_DATE=$2, THRU_DATE=$3, POSTED_BALANCE=$4, LAST_UPDATED_STAMP=$5, LAST_UPDATED_TX_STAMP=$6, CREATED_STAMP=$7, CREATED_TX_STAMP=$8 WHERE GL_ACCOUNT_ID=$9 AND ORGANIZATION_PARTY_ID=$10
   32685 | 2014-04-01 12:25:10.378481-07 | t       | UPDATE public.GL_ACCOUNT_ORGANIZATION SET ROLE_TYPE_ID=$1, FROM_DATE=$2, THRU_DATE=$3, POSTED_BALANCE=$4, LAST_UPDATED_STAMP=$5, LAST_UPDATED_TX_STAMP=$6, CREATED_STAMP=$7, CREATED_TX_STAMP=$8 WHERE GL_ACCOUNT_ID=$9 AND ORGANIZATION_PARTY_ID=$10

As you can see, this is a pretty simple update query, but it's been running 24 
minutes in one thread and nearly 10 minutes in the other.  What's also 
strange is that it's not triggering a transaction timeout either.

I've already re-started the database recently, and there are not too many threads 
executing -- just 38 threads total.  Is it possible that the table is corrupted 
or needs repair?

-- 
Si Chen
Open Source Strategies, Inc.
sic...@opensourcestrategies.com
http://www.OpenSourceStrategies.com
LinkedIn: http://www.linkedin.com/in/opentaps
Twitter: http://twitter.com/opentaps

Both queries are waiting.  Your table must be locked.
Check pg_locks.
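
For example, a sketch listing ungranted locks and the relations involved:

SELECT l.pid, l.locktype, l.mode, l.granted, c.relname
FROM pg_locks l
LEFT JOIN pg_class c ON c.oid = l.relation
WHERE NOT l.granted;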

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Complex query

2014-03-31 Thread Igor Neyman
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Leonardo M. Ramé
 Sent: Monday, March 31, 2014 2:38 PM
 To: PostgreSql-general
 Subject: [GENERAL] Complex query
 
 Hi, I'm looking for help with this query.
 
 Table Tasks:
 
 IdTask  StatusCode  StatusName
 --
 1   R   Registered
 1   S   Started
 1   D   Dictated
 1   F   Finished
 1   T   Transcribed
 --
 2   R   Registered
 2   S   Started
 2   T   Transcribed
 2   F   Finished
 
 As you can see, I have a table containing tasks and statuses. What I would
 like to get is the list of tasks, including all of their steps, for only those
 tasks where the StatusCode sequence was S followed by T.
 
 In this example, the query should only return task Nº 2:
 
 2   R   Registered
 2   S   Started
 2   T   Transcribed
 2   F   Finished
 
 Can anybody help me with this?.
 
 Regards,
 --
 Leonardo M. Ramé
 Medical IT - Griensu S.A.
 Av. Colón 636 - Piso 8 Of. A
 X5000EPT -- Córdoba
 Tel.: +54(351)4246924 +54(351)4247788 +54(351)4247979 int. 19
 Cel.: +54 9 (011) 40871877
 
 
 
 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general

Leonardo,

Unless you add one more column to your Tasks table, specifically 
StatusTimestamp, as in:

IdTask  StatusCode  StatusName  StatusTimestamp

you cannot tell which record in the table follows which, because the order in 
which records are returned from the database is not guaranteed unless you add an 
ORDER BY clause to your SELECT statement.

Regards,
Igor Neyman
 



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Complex query

2014-03-31 Thread Igor Neyman
 -Original Message-
 From: Leonardo M. Ramé [mailto:l.r...@griensu.com]
 Sent: Monday, March 31, 2014 2:56 PM
 To: Igor Neyman
 Cc: PostgreSql-general
 Subject: Re: [GENERAL] Complex query
 
 On 2014-03-31 18:48:58 +, Igor Neyman wrote:
   -Original Message-
   From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
   ow...@postgresql.org] On Behalf Of Leonardo M. Ramé
   Sent: Monday, March 31, 2014 2:38 PM
   To: PostgreSql-general
   Subject: [GENERAL] Complex query
  
   Hi, I'm looking for help with this query.
  
 
  Leonardo,
 
  Unless you add one more column to your Tasks table, specifically:
 StatusTimestamp as in:
 
  IdTask  StatusCode  StatusName StatusTimestamp
 
  You cannot find which record in the table follows which, because order in
 which records returned from the database is not guaranteed until you add
 ORDER BY clause to your SELECT statement.
 
  Regards,
  Igor Neyman
 
 
 
 You are right, let's add the Id column. This is just an example; the real
 table (a view) contains both the Id and a timestamp:
 
 Id IdTask  StatusCode  StatusName
 --
 1  1   R   Registered
 2  1   S   Started
 3  1   D   Dictated
 4  1   F   Finished
 5  1   T   Transcribed
 --
 6  2   R   Registered
 7  2   S   Started
 8  2   T   Transcribed
 9  2   F   Finished
 
 After adding the Id column, can I use a window function to get what I need?
 
 Regards,
 --
 Leonardo M. Ramé
 Medical IT - Griensu S.A.
 Av. Colón 636 - Piso 8 Of. A
 X5000EPT -- Córdoba
 Tel.: +54(351)4246924 +54(351)4247788 +54(351)4247979 int. 19
 Cel.: +54 9 (011) 40871877

D. Johnston showed how to use a window function in this case.
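
For reference, a sketch of that window-function approach (the table name tasks is an assumption; columns are from the example above):

SELECT *
FROM tasks
WHERE IdTask IN (
    SELECT IdTask
    FROM (SELECT IdTask, StatusCode,
                 lead(StatusCode) OVER (PARTITION BY IdTask ORDER BY Id) AS next_code
          FROM tasks) x
    WHERE StatusCode = 'S' AND next_code = 'T')
ORDER BY Id;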

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] getting the current query from pg_stat_activity

2014-03-31 Thread Igor Neyman
 didn't have this ability.

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help with exclusion constraint

2014-03-28 Thread Igor Neyman


From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Moshe Jacobson
Sent: Friday, March 28, 2014 10:31 AM
To: pgsql-general
Subject: [GENERAL] Help with exclusion constraint

Take the following table:
CREATE TABLE exclusion_example
(
 pk_col   integer primary key,
 fk_col   integer not null references other_table,
 bool_col boolean not null
);
I want to ensure that for any given value of fk_col there is a maximum of 
one row with bool_col = true. I wanted to write an exclusion constraint such as 
this:
alter table exclusion_example add exclude using btree ( fk_col with = , 
bool_col with and );
..
..
..
Moshe Jacobson
Manager of Systems Engineering, Nead Werx Inc.
2323 Cumberland Parkway * Suite 201 * Atlanta, GA 30339
Quality is not an act, it is a habit. - Aristotle 

For this requirement:

"for any given value of fk_col ... a maximum of one row with bool_col = true"

why don't you (instead) create a partial unique index:

CREATE UNIQUE INDEX ON exclusion_example (fk_col, bool_col) WHERE bool_col IS TRUE;
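
With that index in place, the behavior is (a quick sketch):

INSERT INTO exclusion_example VALUES (1, 10, true);   -- OK
INSERT INTO exclusion_example VALUES (2, 10, false);  -- OK, bool_col is false
INSERT INTO exclusion_example VALUES (3, 10, true);   -- fails with a duplicate-key error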

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Increase in max_connections

2014-03-11 Thread Igor Neyman

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Anand Kumar, Karthik
Sent: Monday, March 10, 2014 9:04 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Increase in max_connections

Hi all,

We're running postgres 9.3.2, server configuration below. 

Seemingly randomly, we will see the number of active queries in postgres go up 
until we hit max_connections. The DB will recover after a few minutes.

We had the issue a couple of times in Feb 2014. We then upgraded the postgres 
server from 9.1 to 9.3.2, and the occurrence has gone up significantly - to 
several times a day. 

The user CPU goes up as well, to 100%; there is no increase in I/O or system CPU.
We have slow query logging, and there is no dramatic change in the slow queries 
either. 
There is a corresponding spike in shared locks, but that seems to be an effect 
not a cause - it corresponds to an increase in the number of running processes 
at the time.

We had a similar issue in the past - that was solved by disabling 
transparent_huge_pages - but the difference there was that we'd see queries 
slow down dramatically. Currently, we don't. Also, transparent_huge_pages is 
still disabled.

I do realize the issue could be caused by a spurt in incoming connections - we 
do not yet have conclusive evidence on whether that's happening (active queries 
climb up; however, there is no conclusive proof on whether that's because of a 
slowdown or because of an increase in traffic). We are working on getting the 
information and will update with it as soon as we have it.

I thought I'd send a post out to the group before then, to see if anyone has 
run into anything similar.

Thanks,
Karthik

site=# SELECT version();
PostgreSQL 9.3.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 
20120305 (Red Hat 4.4.6-4), 64-bit

site=# SELECT name, current_setting(name), source
site-#   FROM pg_settings
site-#   WHERE source NOT IN ('default', 'override');
application_name|psql|client
archive_command|/usr/bin/archiver.sh %f %p|configuration file
archive_mode|on|configuration file
autovacuum_freeze_max_age|25000|configuration file
autovacuum_max_workers|6|configuration file
bgwriter_lru_maxpages|1000|configuration file
bgwriter_lru_multiplier|4|configuration file
checkpoint_completion_target|0.8|configuration file
checkpoint_segments|250|configuration file
checkpoint_timeout|15min|configuration file
checkpoint_warning|6min|configuration file
client_encoding|UTF8|client
commit_siblings|25|configuration file
cpu_tuple_cost|0.03|configuration file
DateStyle|ISO, MDY|configuration file
default_statistics_target|300|configuration file
default_text_search_config|pg_catalog.english|configuration file
effective_cache_size|568GB|configuration file
fsync|on|configuration file
lc_messages|en_US.UTF-8|configuration file
lc_monetary|en_US.UTF-8|configuration file
lc_numeric|en_US.UTF-8|configuration file
lc_time|en_US.UTF-8|configuration file
listen_addresses|*|configuration file
log_autovacuum_min_duration|0|configuration file
log_checkpoints|on|configuration file
log_connections|on|configuration file
log_destination|syslog|configuration file
log_directory|pg_log|configuration file
log_filename|postgresql-%a.log|configuration file
log_line_prefix|user=%u,db=%d,ip=%h |configuration file
log_min_duration_statement|100ms|configuration file
log_min_messages|debug1|configuration file
log_rotation_age|1d|configuration file
log_rotation_size|0|configuration file
log_timezone|US/Pacific|configuration file
log_truncate_on_rotation|on|configuration file
logging_collector|off|configuration file
maintenance_work_mem|1GB|configuration file
max_connections|1500|configuration file
max_locks_per_transaction|1000|configuration file
max_stack_depth|2MB|environment variable
max_wal_senders|5|configuration file
port|5432|command line
random_page_cost|2|configuration file
shared_buffers|8GB|configuration file
synchronous_commit|off|configuration file
syslog_facility|local0|configuration file
syslog_ident|postgres|configuration file
TimeZone|US/Pacific|configuration file
vacuum_freeze_table_age|0|configuration file
wal_buffers|32MB|configuration file
wal_keep_segments|250|configuration file
wal_level|hot_standby|configuration file
wal_sync_method|fsync|configuration file
work_mem|130MB|configuration file


You don't specify how many CPU cores you have, but I'm pretty sure there are not 
enough to support this:

max_connections|1500|configuration file

Try a connection pooler; it should help.
The simplest to install and configure would be PgBouncer, and it does the job 
very well.
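
A minimal pgbouncer.ini sketch for this case (database name, pool size, and paths are placeholders to adjust):

[databases]
site = host=127.0.0.1 port=5432 dbname=site

[pgbouncer]
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1500
default_pool_size = 40

Point the application at port 6432; PgBouncer then funnels the 1500 client connections into a few dozen real backends.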

Regards,
Igor Neyman






-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] execution time for plpgsql function and subfunction

2014-01-10 Thread Igor Neyman


From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Rémi Cura
Sent: Friday, January 10, 2014 4:10 AM
To: PostgreSQL General
Subject: [GENERAL] execution time for plpgsql function and subfunction

Hey List,
kind of a simple question :

I'm using the postgis_topology extension,
and I'm trying to figure out where the slowness comes from when importing data.
It involves plpgsql functions calling other plpgsql functions, inserts, updates, 
etc., etc.

I know I can use explain analyze for one query, but I don't know how to get 
details of how much time each sub-function called by a main function takes.

Thus it is very difficult to guess where the bottleneck is.
Thanks, cheers,

Rémi-C 


You could use the auto_explain module to get the info you are looking for.  It 
will be logged in pg_log.
Read about it in the docs:

http://www.postgresql.org/docs/9.2/static/auto-explain.html
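
The settings that matter for your nested-call case (a sketch; these can also go into postgresql.conf together with shared_preload_libraries):

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;
SET auto_explain.log_nested_statements = on;  -- also times statements inside the called plpgsql functions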


Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Creating an index alters the results returned

2014-01-03 Thread Igor Neyman
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Clemens Eisserer
 Sent: Friday, January 03, 2014 3:00 PM
 To: pgsql-general@postgresql.org; pgsql-j...@postgresql.org
 Subject: [GENERAL] Creating an index alters the results returned
 
 Hi,
 
 Running postgres 9.1.11 + postgresql-9.3-1100.jdbc41.jar, I experience a
 strange phenomenon using indexes.
 
 I have the following schema:
 CREATE TABLE wplog (id serial NOT NULL,ts timestamp without time zone,
 sensor1 real, sensor2 real, noiselevel smallint, CONSTRAINT wplog_pkey
 PRIMARY KEY (id))
 
 and execute the following query on it:
 
 SELECT sensor1, sensor2, EXTRACT(EPOCH from ts) AS epoche FROM wplog
 WHERE EXTRACT(EPOCH from ts) BETWEEN 1388712180::double precision
 AND 1388780572::double precision ORDER BY id
 
 However, the results differ, depending on whether I've created an index on
 ts or not:
 
 With index:
 ResultSet Size: 6651
 minTS: 1388730187145 maxTs: 1388796688388  txdiff: 66501243
 
 Without index:
 ResultSet Size: 6830
 minTS: 1388712182800 maxTs: 1388780567963  txdiff: 68385163
 
 The index looks like: CREATE INDEX ON wplog (CAST(EXTRACT(EPOCH from
 ts) AS double precision))
 
 Even more puzzling to me is the fact, that I can only observe this difference
 when using the JDBC driver, using pgadmin to execute the query I get
 consistent results.
 
 Is this behaviour expected?
 
 Thank you in advance, Clemens
 

The fact that it works as expected in pgadmin probably indicates that the 
problem is on the client/jdbc side.
Turn on backend logging (log_statement = 'all') for your jdbc connection, and 
after executing your query (through jdbc) look for it in pg_log.
You will probably find it slightly different from the original.  I'm not using 
jdbc, so I can't comment on why this could happen.

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: out of memory DETAIL: Failed on request of size ???

2013-11-22 Thread Igor Neyman


From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Brian Wong
Sent: Monday, November 18, 2013 11:30 PM
To: bricklen
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] ERROR: out of memory DETAIL: Failed on request of size 
???

I've tried every work_mem value from 1gb all the way up to 40gb, with no effect 
on the error.  I'd like to think of this problem as a server process memory 
(not the server's buffers) or client process memory issue, primarily because 
when we tested the error there was no other load whatsoever.  Unfortunately, 
the error doesn't say what kind of memory ran out.

-

You are testing with work_mem set to between 1GB and 40GB.
You were asked to lower this setting.
Even 1GB is too much; work_mem is allocated per sort/hash operation per backend, 
so large values multiply under concurrent load. Try something like work_mem=64MB.

Regards,
Igor Neyman



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: changing port numbers so pgbouncer can read geoserver and postgres

2013-11-01 Thread Igor Neyman
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Birta Levente
 Sent: Friday, November 01, 2013 9:50 AM
 To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Re: changing port numbers so pgbouncer can read
 geoserver and postgres
 
 On 01/11/2013 15:39, si24 wrote:
  have you specified in your pgbouncer.ini auth_type and auth_file ?
 
  my auth_type is md5 and my auth_file is D:\Program
  Files\PostgreSQL\etc\userlist.txt
 
 
  And I think you really need to check pgbouncer log !!!
 
  I get a lot of this in my pgbouncer log:
  2013-11-01 12:17:49.228 2860 LOG Stats: 0 req/s, in 0 b/s, out 0 b/s, query 0 us
  2013-11-01 14:38:38.490 2860 WARNING lookup failed: localhost: result=11001
  2013-11-01 14:38:38.491 2860 LOG S-0188d930: manifold/postgrest@(bad-af):0 closing because: server dns lookup failed (age=5)
 
 
 Looks like dns error.
 
 The postgresql server is on the same host with pgbouncer?
 
 How you specify the postgresql server host in pgbouncer.ini? as IP
 address or hostname?
 
 show the [databases] section in the pgbouncer.ini
 
 
 
  I don't know what this geoserver is, but I think this port 8080 is not
  relevant here.
 
  geoserver is my map webpage that uses the postgres database to make
  the map, in a way.
 


Obviously, your PgBouncer is not connecting to your Postgres.

I assume you run them both on the same machine.
So, do you have localhost mapped to 127.0.0.1 in your hosts file under the 
windows\system32\drivers\etc directory?
If not, change your pgbouncer.ini config file to use the IP address 127.0.0.1 
instead of localhost in the [databases] section.
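
For example (a sketch; the database name manifold is taken from your log, the rest are placeholders):

[databases]
manifold = host=127.0.0.1 port=5432 dbname=manifold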

Regards,
Igor Neyman



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Explanantion on pgbouncer please

2013-11-01 Thread Igor Neyman
Again, this output indicates that pgbouncer is not connecting to the postgres 
server.

Regards,
Igor Neyman


On Thu, Oct 31, 2013 at 11:02 AM, si24 smrcoutt...@gmail.com wrote:

 I don't know if this will help in terms of my problem?

 this came from the admin pgbouncer console.



 pgbouncer=# show servers;
  type | user | database | state | addr | port | local_addr | local_port | connect_time | request_time | ptr | link
 ------+------+----------+-------+------+------+------------+------------+--------------+--------------+-----+------
 (0 rows)




Re: [GENERAL] Explanantion on pgbouncer please

2013-10-31 Thread Igor Neyman
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of si24
 Sent: Thursday, October 31, 2013 10:25 AM
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] Explanantion on pgbouncer please
 
 Can someone please give me a bit more of an explanation of how
 exactly pgbouncer works, as I am now lost.
 
 I'm not sure if it is pooling the connections, cause surely if it's not being
 used the connections should go down, not up. I.e., I run the webpage which has
 my map running, which is an OpenLayers map reading geoserver; all my data on
 geoserver is from a database in postgres. When you start the web page it
 goes to 46 connections, and after moving around for a while and selecting the
 different overlays that I have on the map it goes up to 75 connections; after
 not touching it for a while nothing happens, the connections don't go up or
 down, but when I move the map around and zoom, the connections
 increase again to 84.
 
 Please help, I'm stuck
 

First, when you are describing your situation, please be specific about which 
connections you are talking about: client connections to PgBouncer, or pool 
connections from PgBouncer to the Postgres server.

Second, you can learn a lot about the status of your connections by connecting 
to PgBouncer as administrator and using commands such as SHOW POOLS, 
SHOW CLIENTS, etc.
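
For example (the port and admin user depend on your pgbouncer.ini):

psql -p 6432 -U pgbouncer pgbouncer
pgbouncer=# SHOW POOLS;
pgbouncer=# SHOW CLIENTS;
pgbouncer=# SHOW SERVERS;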

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connection pooling

2013-10-30 Thread Igor Neyman


 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of si24
 Sent: Wednesday, October 30, 2013 10:14 AM
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] Connection pooling
 
 I have geoserver connected to a postgres database, and so far I have been
 reading that I would need some sort of connection pooling, but how do I go
 about doing that when it seems most of the connection pooling
 applications/programs only run on linux? I have a windows machine.
 
 Otherwise, is there some other way that I can get the connections to
 close if they are not being used, so as not to use all the connections on
 postgresql, which is currently at 100? Even if I set the max connections to
 1 I don't think that's going to be enough, and I don't think that's a proper
 way to solve the problem I am having currently.
 

PgBouncer works fine on Windows, and does a pretty good job.

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] I need more specific instructions for switching to digest mode for this list

2013-10-09 Thread Igor Neyman
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Raymond O'Donnell
 Sent: Wednesday, October 09, 2013 11:54 AM
 To: Bob Futrelle
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] I need more specific instructions for switching to
 digest mode for this list
 
 On 09/10/2013 16:47, Bob Futrelle wrote:
  Trying to switch to the digest didn't work.
  How do I find more specific details about switching?
 
 
  On Wed, Oct 9, 2013 at 12:27 AM, Bob Futrelle bob.futre...@gmail.com
  mailto:bob.futre...@gmail.com wrote:
 
set pgsql-general digest
 
 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general

It says it all right at the bottom:

 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query - CPU issue

2013-09-18 Thread Igor Neyman
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Jayadevan M
Sent: Wednesday, September 18, 2013 9:08 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Query - CPU issue

Hi,
I have this query

explain analyze
select distinct geoip_city(src_ip), src_ip
from alert where timestamp >= 1378512000 and timestamp < 1378598400

The explain analyze takes forever, and CPU goes up to 100%, so I end up killing 
the query/explain.

This one, without the function call, comes back in under a second - 

explain analyze
select distinct
 src_ip
from alert where timestamp >= 1378512000 and timestamp < 1378598400

HashAggregate  (cost=493.94..494.40 rows=46 width=8) (actual time=38.669..38.684 rows=11 loops=1)
  ->  Index Scan using idx_alert_ts on alert  (cost=0.29..468.53 rows=10162 width=8) (actual time=0.033..20.436 rows=10515 loops=1)
        Index Cond: ((timestamp >= 1378512000) AND (timestamp < 1378598400))
Total runtime: 38.740 ms
The function doesn't do much; code given below -

CREATE OR REPLACE FUNCTION geoip_city(IN p_ip bigint, OUT loc_desc character varying)
  RETURNS character varying AS
$BODY$
SELECT l.id || l.country || l.region || l.city
FROM blocks b JOIN locations l ON (b.location_id = l.id)
WHERE $1 >= start_ip AND $1 <= end_ip LIMIT 1;
$BODY$
  LANGUAGE sql IMMUTABLE
  COST 100;
There are indexes on start_ip and end_ip, and an explain tells me the 
indexes are being used (if I execute the SELECT from the function using a valid 
value for the ip).
Regards,
Jayadevan

---

Did you try to do just EXPLAIN, not EXPLAIN ANALYZE, to see what's coming 
without actually executing the query?
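
That is, something like:

EXPLAIN
SELECT DISTINCT geoip_city(src_ip), src_ip
FROM alert
WHERE timestamp >= 1378512000 AND timestamp < 1378598400;

If the plain EXPLAIN is fast, the time is going into executing geoip_city() per row, not into planning.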


Regards,
Igor Neyman





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to compare the results of two queries?

2013-09-17 Thread Igor Neyman
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Juan Daniel Santana Rodés
 Sent: Tuesday, September 17, 2013 11:00 AM
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] How to compare the results of two queries?
 
 I am developing a task in which I need to know how to compare the results of
 two queries ...
 I thought about creating a procedure which receives both queries as
 parameters. Then, somehow, it would run the queries and return whether
 both have the same result. As a feature of the problem, both queries are
 SELECTs.
 Here is a piece of the code I want to write:
 
 create or replace function compare(sql1 character varying, sql2 character
 varying) returns boolean as
 $body$
 Declare
 Begin
   -- here, in some way, run both queries and then compare
 End;
 $body$ language 'plpgsql';
 
 I've been studying and I found that there is EXECUTE, but to use it, first you
 should have used PREPARE, and in this case the values of the parameters are
 already complete queries.
 For example, the execution of the function would be something like ...
 
 select compare('select * from table1', 'select * from table2');
 
 For this case the result is false, since the queries are executed on different
 tables.
 Thanks in advance.
 Best regards from Cuba.

EXECUTE in PL/pgSQL does not require PREPARE.
So, something like this:

create or replace function compare(sql1 character varying, sql2 character varying) returns boolean as
$body$
Declare
  lCount int := 0;
Begin

  EXECUTE 'SELECT COUNT(Res.*) FROM ( (' || sql1 || ' EXCEPT ' || sql2 || ') UNION (' || sql2 || ' EXCEPT ' || sql1 || ') ) Res' INTO lCount;
  IF (lCount = 0) THEN
    RETURN TRUE;
  ELSE
    RETURN FALSE;
  END IF;

End;
$body$ language 'plpgsql';

should work.  Be aware, I didn't test it.

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to compare the results of two queries?

2013-09-17 Thread Igor Neyman


 -Original Message-
 From: Juan Daniel Santana Rodés [mailto:jdsant...@estudiantes.uci.cu]
 Sent: Tuesday, September 17, 2013 11:54 AM
 To: Igor Neyman
 Subject: Re: [GENERAL] How to compare the results of two queries?
 
 On 17/09/13 11:27, Igor Neyman wrote:
  create or replace function compare(sql1 character varying, sql2
  character varying) returns boolean as $body$
  Declare lCount int := 0;
  Begin
 
  EXECUTE 'SELECT COUNT(Res.*) FROM ( (' || sql1 || ' EXCEPT ' || sql2
  || ') UNION (' || sql2 || ' EXCEPT ' || sql1 || ') ) Res' INTO lCount;
  IF (lCount = 0) THEN
    RETURN TRUE;
  ELSE
    RETURN FALSE;
  END IF;
 
  End;
  $body$ language 'plpgsql';
 
 Hi, thanks for your help...
 I'm trying to execute your code, but when I run the sentence it throws an
 exception.
 For example, I run this line...
 
 select compare('select * from point limit 2', 'select * from point');
 
 And postgres throws the following exception...
 
 ERROR:  syntax error at or near "EXCEPT"
 LINE 1: ...COUNT(Res.*) FROM (  (select * from point limit 2 EXCEPT sel...
                                                              ^
 QUERY:  SELECT COUNT(Res.*) FROM (  (select * from point limit 2 EXCEPT
 select * from point) UNION (select * from point EXCEPT select * from point
 limit 2) ) Res
 CONTEXT:  PL/pgSQL function compare line 5 at EXECUTE statement
 
 ** Error **
 
 ERROR: syntax error at or near "EXCEPT"
 SQL state: 42601
 Context: PL/pgSQL function compare line 5 at EXECUTE statement
 

limit 2 does not work with EXCEPT.

In the future reply to the list (Reply All) in order to keep the list in the 
conversation.

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to compare the results of two queries?

2013-09-17 Thread Igor Neyman


 -Original Message-
 From: Juan Daniel Santana Rodés [mailto:jdsant...@estudiantes.uci.cu]
 Sent: Tuesday, September 17, 2013 12:51 PM
 To: Igor Neyman
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] How to compare the results of two queries?
 
 
 I want to know if there is another way to compare the results of two queries,
 because the arguments will represent a query to execute, and it can use
 any SQL sentence.
 If not, I wish to know what the limitations of EXCEPT are.
 Greetings!!
 __


In the modified function I put both queries in parentheses, so this should 
allow pretty much anything in the query.

Igor Neyman 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to compare the results of two queries?

2013-09-17 Thread Igor Neyman


 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Igor Neyman
 Sent: Tuesday, September 17, 2013 12:02 PM
 To: Juan Daniel Santana Rodés; pgsql-general@postgresql.org
 Subject: Re: [GENERAL] How to compare the results of two queries?
 
 
 
  -Original Message-
  From: Juan Daniel Santana Rodés [mailto:jdsant...@estudiantes.uci.cu]
  Sent: Tuesday, September 17, 2013 11:54 AM
  To: Igor Neyman
  Subject: Re: [GENERAL] How to compare the results of two queries?
 
  On 17/09/13 11:27, Igor Neyman wrote:
   create or replace function compare(sql1 character varying, sql2
   character varying) returns boolean as $body$
   Declare lCount int := 0;
   Begin
  
   EXECUTE 'SELECT COUNT(Res.*) FROM ( (' || sql1 || ' EXCEPT ' ||
   sql2 || ') UNION (' || sql2 || ' EXCEPT ' || sql1 || ') ) Res'
   INTO lCount;
   IF (lCount = 0) THEN
     RETURN TRUE;
   ELSE
     RETURN FALSE;
   END IF;
  
   End;
   $body$ language 'plpgsql';
 
  Hi, thanks for your help...
  I'm trying to execute your code, but when I run the sentence it
  throws an exception.
  For example, I run this line...
 
  select compare('select * from point limit 2', 'select * from point');
 
  And postgres throws the following exception...
 
  ERROR:  syntax error at or near "EXCEPT"
  LINE 1: ...COUNT(Res.*) FROM (  (select * from point limit 2 EXCEPT sel...
                                                               ^
  QUERY:  SELECT COUNT(Res.*) FROM (  (select * from point limit 2
  EXCEPT select * from point) UNION (select * from point EXCEPT select *
  from point limit 2) ) Res
  CONTEXT:  PL/pgSQL function compare line 5 at EXECUTE statement
 
  ** Error **
 
  ERROR: syntax error at or near "EXCEPT"
  SQL state: 42601
  Context: PL/pgSQL function compare line 5 at EXECUTE statement
 
 
 limit 2 does not work with EXCEPT.
 
 In the future reply to the list (Reply All) in order to keep the list in the
 conversation.
 
 Regards,
 Igor Neyman
 

Well, if you really want to use a limit clause in your queries, the following 
should work (even with the limit):

create or replace function compare(sql1 character varying, sql2 character varying) returns boolean
as $body$
Declare
  lCount int := 0;
Begin

  EXECUTE 'SELECT COUNT(Res.*) FROM ( ((' || sql1 || ') EXCEPT (' || sql2 || ')) UNION ((' || sql2 || ') EXCEPT (' || sql1 || ')) ) Res' INTO lCount;
  IF (lCount = 0) THEN
    RETURN TRUE;
  ELSE
    RETURN FALSE;
  END IF;

End;
$body$ language 'plpgsql';
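
Usage, with the queries from upthread (a sketch):

select compare('select * from point limit 2', 'select * from point');
-- true only when the two result sets contain the same rows

Note that EXCEPT works on set semantics, so duplicate rows within one result set are not counted.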

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connect postgres to SQLSERVER

2013-09-17 Thread Igor Neyman
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Agustin Larreinegabe
Sent: Tuesday, September 17, 2013 2:06 PM
To: PostgreSQL mailing lists
Subject: [GENERAL] Connect postgres to SQLSERVER

HI,

Is there a way to connect to a sqlserver like dblink?
I just need to execute a procedure in sqlserver when something happens

-- 
Gracias
-
Agustín Larreinegabe 

-

One option is to use the Linked Server feature to connect from MS SQL Server to 
Postgres through the ODBC driver.
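
On the SQL Server side that looks roughly like this (a T-SQL sketch; the linked-server name and the ODBC DSN are placeholders):

EXEC sp_addlinkedserver
     @server     = 'PGLINK',
     @srvproduct = 'PostgreSQL',
     @provider   = 'MSDASQL',
     @datasrc    = 'PostgreSQL_DSN';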

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to compare the results of two queries?

2013-09-17 Thread Igor Neyman


 -Original Message-
 From: Juan Daniel Santana Rodés [mailto:jdsant...@estudiantes.uci.cu]
 Sent: Tuesday, September 17, 2013 1:38 PM
 To: Igor Neyman
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] How to compare the results of two queries?
 
 El 17/09/13 12:56, Igor Neyman escribió:
 
  -Original Message-
  From: Juan Daniel Santana Rodés [mailto:jdsant...@estudiantes.uci.cu]
  Sent: Tuesday, September 17, 2013 12:51 PM
  To: Igor Neyman
  Cc: pgsql-general@postgresql.org
  Subject: Re: [GENERAL] How to compare the results of two queries?
 
  I want to know if there is another way to compare the results of two
 queries,
  because the arguments will represent a query to execute and it can
  contain any SQL statement.
  If there is no other way, I would like to know what the limitations of EXCEPT are.
  Greetings!!
  __
 
  In the modified function I put both queries in parenthesis, so this should
 allow pretty much anything in the query.
 
  Igor Neyman
 Thanks.
 I tested your code and it worked fine.
 Now I only need to catch the exception thrown when the results of the queries have
 different numbers of columns.
 God bless you.
 __

And of course, not just number of columns in the result sets, but their types 
should match as well.
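
For instance (a hedged illustration), this call matches on column count but not 
on types, and fails inside EXECUTE with something like ERROR: EXCEPT types 
integer and text cannot be matched:

select compare('select 1', 'select ''a''::text');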

Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Major upgrade of PostgreSQL and MySQL

2013-09-13 Thread Igor Neyman


From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Patrick Dung
Sent: Friday, September 13, 2013 1:55 PM
To: Stephen Frost; pgsql-general@postgresql.org
Cc: Ivan Voras; Tom Lane
Subject: Re: [GENERAL] Major upgrade of PostgreSQL and MySQL

3. By the way, if a user is using Windows, does the link option still work?

Thanks,
Patrick


It definitely works.

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Major upgrade of PostgreSQL and MySQL

2013-09-13 Thread Igor Neyman


 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Stephen Frost
 Sent: Friday, September 13, 2013 2:06 PM
 To: Patrick Dung
 Cc: pgsql-general@postgresql.org; Ivan Voras; Tom Lane
 Subject: Re: [GENERAL] Major upgrade of PostgreSQL and MySQL
 
 
  3. By the way, if a user is using Windows, does the link option still work?
 
 Don't think so, but not sure.  pg_upgrade could be made to work in a truely
 in-place method if there's demand for it and someone wants to work on it.
 It'd clearly be a bit more *dangerous*, of course..
 
   Thanks,
 
   Stephen

Like I said in the other message, actually in-place upgrade using symbolic 
links work quite fine under Windows.
I tested it carefully many times, and used it even more upgrading production 
systems.
I don't feel it's *dangerous*, especially considering that my whole upgrade 
process always starts with backing up existing cluster.

Regards,
Igor Neyman 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Major upgrade of PostgreSQL and MySQL

2013-09-13 Thread Igor Neyman


From: Patrick Dung [mailto:patrick_...@yahoo.com.hk] 
Sent: Friday, September 13, 2013 3:50 PM
To: Igor Neyman; Stephen Frost
Cc: pgsql-general@postgresql.org; Ivan Voras; Tom Lane
Subject: Re: [GENERAL] Major upgrade of PostgreSQL and MySQL


For Windows, is it using symbolic links or hard links for the upgrade?
If symbolic links are used, would users have difficulty when deleting the old 
cluster?

Thanks,
Patrick

Symbolic links are being used.
pg_upgrade also creates a batch file that can be used to delete the old cluster after 
the upgrade.
It's all in the docs.

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is there any method to limit resource usage in PG?

2013-08-26 Thread Igor Neyman
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of ??
Sent: Monday, August 26, 2013 2:08 AM
To: pgsql-general
Subject: [GENERAL] Is there any method to limit resource usage in PG?

Hello:

Sorry for disturbing.

I am now encountering a serious problem: memory is not enough.

My customer reported that when they run a program, the total memory 
and disk I/O usage both reach the threshold value (80%).

The program is written in Java.
It uses JDBC to pull data out of the DB, while the query joins some tables 
together; it will return about 3,000,000 records.
Then the program uses JDBC again to write the records row by row, to 
insert them into another table in the DB.

My first question is:

Currently, my customer cannot update their applications, and they also have 
no budget to buy new hardware. 

In fact, my customer wants the program's JDBC-related part to be handled 
by PG in this way:  
    PG can control all of its processes to run under a usage threshold.
    If resource usage approaches the threshold, then PG will slow processes down 
to stay under the threshold.


I know that in Oracle there are resource plans etc. to approach this.
But in PG, I haven't found anything like it.

And I also want to know:

When there are so many records to be returned to the client, 
will PG use as much memory as it can?  
If the total amount of records is larger than shared_buffers, will PG hang, or 
just do its work and communicate with the client a few times until all the data 
is transferred?


Best Regards
  

No, PG does not have a feature similar to Oracle's Resource Manager.

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] system catalog to check if auto vacuum is disabled for a particular table

2013-08-08 Thread Igor Neyman
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Prabhjot Sheena
Sent: Thursday, August 08, 2013 2:36 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] system catalog to check if auto vacuum is disabled for a 
particular table

Guys, I am using PostgreSQL 9.2. How can I check whether a particular table has auto 
vacuum manually disabled or not? Which system catalog can get me this 
information?

Thanks



You can query reloptions (it is of type text[]) in pg_class for your relname.
If autovacuum was disabled, one of the elements of the array will be: 
'autovacuum_enabled=false'.
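
A minimal sketch (the table name my_table is hypothetical):

SELECT relname, reloptions
FROM pg_class
WHERE relname = 'my_table'
  AND 'autovacuum_enabled=false' = ANY (reloptions);

If the row comes back, autovacuum was manually disabled for that table.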

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Self referencing composite datatype

2013-08-07 Thread Igor Neyman
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of David Johnston
 Sent: Wednesday, August 07, 2013 10:35 AM
 To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Self referencing composite datatype
 
 Sameer Thakur wrote
  Hello,
  I wanted to create a composite datatype to represent a Node. So it
  would have a few attributes and an array of type Node which is the
  children of this node.
  create type Node as (r integer, s integer, children Node []); But i
  get error type Node[] does not exist. I understand that Node is not
  defined hence the error.
  But how do i get around this problem?
 
 In theory if you are using 9.1 or later you can first create the node type and
 then alter it to include a children attribute with the self-referencing type-
 array.
 
 begin;
 
 create type node as (r integer, s integer); alter type node add attribute
 children node[];
 
 end;
 
 I'm running 9.0 so cannot readily test this at the moment.
 
 David J.
 

Under 9.2.2 I'm getting an error:

ERROR:  composite type node cannot be made a member of itself

** Error **

ERROR: composite type node cannot be made a member of itself
SQL state: 42P16

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Different transaction log for database/schema

2013-07-22 Thread Igor Neyman
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of John R Pierce
 Sent: Monday, July 22, 2013 2:32 PM
 To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Different transaction log for database/schema
 
 On 7/22/2013 9:33 AM, Ondrej Chaloupka wrote:
  thank you for the idea. I didn't know about the function.
  It just probably won't work for my use case. I do not access
  different databases in one transaction. I access different databases
  simultaneously, each with its own XA transaction. As I understand it,
  switch_xlog serves to start using a new file for the transaction log, and
  I understand it is supposed to be used e.g. for cleanup.
 
 postgres servers don't really have a 'transaction log' in the same sense
 as Oracle etc.   they have a 'write-ahead log' which works quite
 differently.
 
 in the context of postgres's WAL logs, your question doesn't make any sense
 at all.
 
 
 
 --
 john r pierce  37N 122W
 somewhere on the middle of the left coast
 

Oracle doesn't have a transaction log either; MVCC is implemented through Redo Logs 
and Undo tablespaces, and those are per database, not per schema.
On the other hand, MS SQL Server has Transaction Logs, and they are per 
database.

Still, I don't quite understand what goal the original poster is trying to achieve.

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] installer woes, 9.1 on windows 2008 R2

2013-06-27 Thread Igor Neyman


 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of John R Pierce
 Sent: Wednesday, June 26, 2013 4:13 PM
 To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] installer woes, 9.1 on windows 2008 R2
 
 On 6/26/2013 12:49 PM, Igor Neyman wrote:
  Look for bitrock_installer.log in the
 \Users\account_you_are_using\AppData\Local\Temp.
  Or just do search for bitrock_installer.log file.
 
 ahhh. two weirdnesses below...
 
 A) why is it using 'myusername' (the account I ran the installer from) instead
 of the postgres service account?

This is correct.  User 'myusername' running Install should have permissions to 
the whole D:\PostgreSQL (with subdirectories).
User Postgres needs permissions only to D:\PostgreSQL\9.1\data.
So, according to this:

 fixing permissions on existing directory D:/PostgreSQL/9.1/data ...
 initdb: could not change permissions of directory
 D:/PostgreSQL/9.1/data: Permission denied

'myusername' couldn't grant permissions on this folder to postgres user.

 B) why can't it change the permissions?

Try to look at the Windows Event Log; maybe there will be some useful info there.
Maybe you need to run the installer under a local (not corporate Active Directory) account 
that is still a member of the local Administrators group.

Also, from your original posting:
 I added the LOCAL\postgres user to the permissions on the D:\postgresql\9.1 
directory and gave it full control

You don't have to create the postgres account manually and grant it privileges - 
the installer does that for you.  Could this be your problem?

Regards,
Igor Neyman


 
 ..
 [11:35:51] Running the post-installation/upgrade actions:
 [11:35:51] Delete the temporary scripts directory...
 [11:35:51] Write the base directory to the ini file...
 [11:35:51] Write the version number to the ini file...
 Initialising the database cluster (this may take a few minutes)...
 Executing cscript //NoLogo
 C:\PostgreSQL\9.1/installer/server/initcluster.vbs postgres
 postgres  C:\PostgreSQL\9.1 D:\PostgreSQL\9.1\data 5432
 DEFAULT
 Script exit code: 1
 
 Script output:
   WScript.Shell Initialized...
 Scripting.FileSystemObject initialized...
 
 Called CreateDirectory(D:\PostgreSQL\9.1\data)...
 Called CreateDirectory(D:\PostgreSQL\9.1)...
 Called CreateDirectory(D:\PostgreSQL)...
 Called CreateDirectory(D:\)...
 WScript.Network initialized...
 Called IsVistaOrNewer()...
  'winmgmts' object initialized...
  Version:6.1
  MajorVersion:6
 Ensuring we can read the path D: (using icacls) to myusername:
  Executing batch file 'rad72A5A.bat'...
  processed file: D:\
 Successfully processed 1 files; Failed processing 0 files
 
 Called IsVistaOrNewer()...
  'winmgmts' object initialized...
  Version:6.1
  MajorVersion:6
 Ensuring we can read the path D:\PostgreSQL (using icacls) to myusername:
  Executing batch file 'rad72A5A.bat'...
  processed file: D:\PostgreSQL
 Successfully processed 1 files; Failed processing 0 files
 
 Called IsVistaOrNewer()...
  'winmgmts' object initialized...
  Version:6.1
  MajorVersion:6
 Ensuring we can read the path D:\PostgreSQL\9.1 (using icacls) to
 myusername:
  Executing batch file 'rad72A5A.bat'...
  processed file: D:\PostgreSQL\9.1
 Successfully processed 1 files; Failed processing 0 files
 
 Called IsVistaOrNewer()...
  'winmgmts' object initialized...
  Version:6.1
  MajorVersion:6
 Ensuring we can read the path D:\PostgreSQL\9.1\data (using icacls) to
 myusername:
  Executing batch file 'rad72A5A.bat'...
  processed file: D:\PostgreSQL\9.1\data Successfully processed 1 files;
 Failed processing 0 files
 
 Called IsVistaOrNewer()...
  'winmgmts' object initialized...
  Version:6.1
  MajorVersion:6
 Ensuring we can write to the data directory (using icacls) to myusername:
  Executing batch file 'rad72A5A.bat'...
  processed file: D:\PostgreSQL\9.1\data Successfully processed 1 files;
 Failed processing 0 files
 
  Executing batch file 'rad72A5A.bat'...
  The files belonging to this database system will be owned by user
 myusername.
 This user must also own the server process.
 
 The database cluster will be initialized with locale English_United 
 States.1252.
 The default text search configuration will be set to english.
 
 fixing permissions on existing directory D:/PostgreSQL/9.1/data ...
 initdb: could not change permissions of directory
 D:/PostgreSQL/9.1/data: Permission denied
 
 Called Die(Failed to initialise the database cluster with initdb)...
 Failed to initialise the database cluster with initdb
 
 Script stderr:
   Program ended with an error exit code
 
 --
 john r pierce  37N 122W
 somewhere on the middle of the left coast
 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] installer woes, 9.1 on windows 2008 R2

2013-06-26 Thread Igor Neyman


From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
Sent: Wednesday, June 26, 2013 3:38 PM
To: PostgreSQL
Subject: [GENERAL] installer woes, 9.1 on windows 2008 R2

trying to install 9.1.9 64bit on a win2008 r2 server and getting a failure of 
the initdb phase, with no clue why.

I've installed the application on C:\postgresql\9.1\  and the data on 
D:\postgresql\9.1\data  ... after the first try, I added the LOCAL\postgres 
user to the permissions on the D:\postgresql\9.1 directory and gave it full 
control, this propagated to the .\data dir too, then I tried running the 
installer again (since there's no clues how to manually invoke the initdb 
process properly).   same error, no clues.
'problem running post-install step.   
The database cluster initialization failed'.

is there an installation log somewhere?  I haven't found it in either 
\postgresql\ path.

if it matters, this server is a member of the corporate active directory, as is 
my user account that I ran the installer from (but said user account is a 
member of the local Administrators, and the PG installer invoked the privilege 
elevation thing when it started)


-- 
john r pierce  37N 122W
somewhere on the middle of the left coast

Look for bitrock_installer.log in the 
\Users\account_you_are_using\AppData\Local\Temp.
Or just do search for bitrock_installer.log file.

Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade link mode

2013-05-16 Thread Igor Neyman

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of AI Rumman
Sent: Thursday, May 16, 2013 1:56 PM
To: Fabio Rueda Carrascosa
Cc: pgsql-general
Subject: Re: [GENERAL] pg_upgrade link mode

I always think it's a bit risky to use link instead of copying. However, I'd 
suggest trying pg_upgrade's --check option first.

--

Why?
Do you have specific experience, when link mode caused any problems?
Could you share?

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade link mode

2013-05-16 Thread Igor Neyman
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Lonni J Friedman
 Sent: Thursday, May 16, 2013 2:23 PM
 To: Igor Neyman
 Cc: AI Rumman; Fabio Rueda Carrascosa; pgsql-general
 Subject: Re: [GENERAL] pg_upgrade link mode
 
 On Thu, May 16, 2013 at 11:03 AM, Igor Neyman iney...@perceptron.com
 wrote:
 
  From: pgsql-general-ow...@postgresql.org
  [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of AI Rumman
  Sent: Thursday, May 16, 2013 1:56 PM
  To: Fabio Rueda Carrascosa
  Cc: pgsql-general
  Subject: Re: [GENERAL] pg_upgrade link mode
 
  I always think its a bit risky to use link instead of copying.
 However, I'd suggest to try the  --check at first of pg_upgrade.
 
  --
 
  Why?
  Do you have specific experience, when link mode caused any problems?
  Could you share?
 
 I assume what he's referring to is if the upgrade gets partially
 completed and fails for any reason, then you have a broken mess, with
 no simple rollback path.  Since your database is only about 1GB in
 size, it shouldn't take very long to run a base backup before doing the
 upgrade.  You can send that backup over the network to a remote system,
 so that you have a fallback solution if the upgrade fails.
 
 
 --

Well, you should have current (and tested in regards to restore procedure) 
database backup regardless whether you upgrade your Postgres or just run it in 
normal everyday mode.
And, if you plan to upgrade, you double check that your backup is absolutely 
valid and restorable.

Now, pg_upgrade in link mode checks a lot of things before getting to the point 
of irreversible changes, and if anything's wrong you can start the older cluster, 
fix any problems pg_upgrade found, and re-run pg_upgrade.
And if anything still breaks during pg_upgrade, you should have your database 
backup (that I talked about before) as a last resort.
Also, no one should do such things as upgrading Postgres (or any software for 
that matter) on production system, before ironing out all kinks on the test 
system.

I tested my upgrade process (which includes pg_upgrade in link mode) on test 
system, and now I run it on production systems (note plural) without fear of 
breaking anything, besides the fact that every production system of course has 
restorable backup.

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running out of memory on vacuum

2013-05-14 Thread Igor Neyman
 Subject: [GENERAL] Running out of memory on vacuum
 
 Hi all,
 
 I have a production database that sometimes runs out of memory at
 nightly vacuum.
 
 The application runs typically with around 40 postgres connections but
 there are times when the connections increase because of some queries
 going on. The reason is that the operations are slow, the terminals
 time out and try to reconnect using new connections. Some time ago I
 started to have problems with too many connections being open so I
 lowered the limit to
 300 connections. It was all good until recently when even with 180
 connections I am running out of memory on vacuum... So the connections
 increase to 180 and the system still runs properly for another 2 days
 but then the nightly vacuum runs out of memory.
 The fix is to restart postgres ... If I only close the connections the
 problem is still there so I need to restart postgres.
 If I don't restart postgres then the system will run out of memory on
 queries at some point...
 Another important thing is that during vacuum at 1am nothing else is
 going on on that server so all the connections are idle.
 
 2013-05-14 06:53:51.449 CST,postgres,abrazo,8=41,[local],519233dc.2225,3,
 VACUUM,2013-05-14 06:53:48 CST,174/67143,0,ERROR,53200,out of memory,
 Failed on request of size 668141820.,vacuum;,,,psql
 
 OS:
 
 SUSE Linux Enterprise Server 11 (x86_64) VERSION = 11 PATCHLEVEL = 2
 
 Suse compiled postgres version:
 
     version
 ------------------------------------------------------------
  PostgreSQL 9.0.3 on x86_64-suse-linux-gnu, compiled by GCC gcc (SUSE
 Linux) 4.3.4 [gcc-4_3-branch revision 152973], 64-bit
 
 Postgres configuration parameters:
 
 max_connections = 300
 shared_buffers = 2GB
 maintenance_work_mem = 2GB
 effective_cache_size = 4GB
 work_mem = 2MB
 wal_buffers = 1MB
 
 checkpoint_segments = 16
 autovacuum = on
 
 wal_level = hot_standby
 archive_mode = on
 archive_command = 'test -f /cbnDBscripts/tmp/PITR_primarydb_stop_backup ||
 rsync --timeout=30 -atz %p spsnidb1:/data01/wal_files/%f'
 archive_timeout = 60
 
 free
 
              total       used       free     shared    buffers     cached
 Mem:       7880512    7825136      55376          0      72376    4537644
 -/+ buffers/cache:    3215116    4665396
 Swap:      2097148          0    2097148
 
 /etc/sysctl.conf
 
 kernel.shmmax=2684354560
 kernel.shmall=2684354560
 vm.swappiness=0
 vm.overcommit_memory=2
 
 ...
 
 Thanks,
 ioana


1. You have autovacuum running.  Is there a specific reason to run manual 
vacuum nightly?
2. Reduce maintenance_work_mem; you may have manual vacuum and autovacuum 
running at the same time, each requesting 2GB (your current setting) - see the 
sketch below.
3. Use connection pooling software (e.g. PgBouncer) and reduce max_connections.
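
A minimal sketch, assuming the nightly vacuum is driven by a psql script: cap 
the memory the manual VACUUM may request for that session only, without 
touching the server-wide setting in postgresql.conf.

SET maintenance_work_mem = '256MB';  -- session-local override
VACUUM ANALYZE;
RESET maintenance_work_mem;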

Igor Neyman




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running out of memory on vacuum

2013-05-14 Thread Igor Neyman


 -Original Message-
 From: Ioana Danes [mailto:ioanasoftw...@yahoo.ca]
 Sent: Tuesday, May 14, 2013 10:30 AM
 To: Igor Neyman; PostgreSQL General
 Subject: Re: [GENERAL] Running out of memory on vacuum
 
 Hi Igor,
 
 1. I could remove the nightly vacuum but I think that is  not the
 cause. The vacuum is only catching the problem. If I ignore the  vacuum
 message for few days the system is gonna run out of memory on
 queries...
 2. There is no autovacuum running in the same time. I tried to run
 vacuum verbose manually and checked what else was going on on the
 server.
 I also reduced the maintenance work mem to 1 GB but I get the same
 error.
 3.
  I do use connection pooling. I have 1500 terminals selling and at busy
 times I might need more than 100 active connections but just
 occasionally...
 
 Thanks for quick response,
 
 

You still didn't explain why you need manual vacuuming.
You have autovacuum set on, so it'll wake up every so often and do its job based 
on the other autovacuum config parameters.

What kind of connection pooling are you using?
Is it set for session pooling, or transaction, or statement pooling?

Having more than 100 active connections at a time does not mean that all of 
them are executing queries at the same time.
Unless you have a lot of processor cores (and I mean a LOT!), it does not make 
sense to set the pool at 180, even less so at 300.
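
A quick way to check how many connections are actually busy (a sketch against 
the 9.0/9.1 catalogs, where the column is still called current_query):

SELECT count(*) AS busy
FROM pg_stat_activity
WHERE current_query <> '<IDLE>';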

Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running out of memory on vacuum

2013-05-14 Thread Igor Neyman


 -Original Message-
 From: Ioana Danes [mailto:ioanasoftw...@yahoo.ca]
 Sent: Tuesday, May 14, 2013 11:29 AM
 To: Igor Neyman; PostgreSQL General
 Subject: Re: [GENERAL] Running out of memory on vacuum
 
 
 
 Hi Igor,
 
 I don't need the manual vacuum. I just don't want to remove it now
 because it gives me a clue that something is wrong and I need to
 restart postgres to free up the memory. Otherwise I run out of memory
 later in the day and that is something I want to avoid.
 
 Even if I don't need it I don't think it is normal to run out of
 memory in these conditions. (I had it off for a while and I did run out
 of memory on pg_dump).
 
 
 We use jdbc connection pooling and it does make sense to have it higher
 than 100 at some points during the day. Anyhow at the time of the
 vacuum there is nothing else going on on the database. Sales are off.
 
 Thanks,
 

Ioana,

You are probably correct that manual vacuum just highlights the problem, 
causing it to occur sooner rather than later.

The real problem is the number of connections: your hardware isn't good enough 
to support over 100 connections.
Connection pooler is needed, but not JDBC connection pooling, which probably 
does nothing for you.

I'd suggest that server-side connection pooler would fix your problem, if set 
up properly.
Take a look at PgBouncer.  It is lightweight, very easy to install and 
configure.  I've been using it for years and have nothing but praise for it.

Igor Neyman 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Update from select

2013-05-13 Thread Igor Neyman
 PG 8.4
 
 Having trouble putting together an update query to update multiple
 columns in tbl1 from columns in tbl2.
 
 update tbl1
 set col3,col4,col5
 from
 (select col3, col4,col5 from tbl2 where col1=criteria)
 
 
 
 Can someone add to the Postgres Docs (shown below) to help me with
 this.
 
 UPDATE employees SET sales_count = sales_count + 1 WHERE id =
   (SELECT sales_person FROM accounts WHERE name = 'Acme Corporation');
 
 
 Many thanks
 
 Bret Stern


Your question isn't very clear.
Are you updating all records in tbl1? 
Or should col1 in tbl1 also match col1 in tbl2?

If that's the case:

UPDATE tbl1 t1 
  SET col3 = t2.col3, col4 = t2.col4, col5 = t2.col5
  FROM tbl2 t2
  WHERE t1.col1 = t2.col1
AND t2.col1 = criteria;

Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade fails, mismatch of relation OID - 9.1.9 to 9.2.4

2013-05-08 Thread Igor Neyman

 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Evan D. Hoffman
 Sent: Wednesday, May 08, 2013 2:27 PM
 To: Postgresql Mailing List
 Subject: [GENERAL] pg_upgrade fails, mismatch of relation OID - 9.1.9
 to 9.2.4
 
 I've tried several times to upgrade a test database (with real data,
 ~500 GB) from 9.1 to 9.2 using pg_upgrade and every time it fails with
 the same error.  I've tried a few different options to pg_upgrade but
 always the same result.  Nothing really useful has turned up in Google.
 Any thoughts?  Complete output is below:
 
 
 
 Linking user relation files
   /var/lib/pgsql/9.1/data/base/16406/3016054
 Mismatch of relation OID in database dbname: old OID 2938685, new OID
 299721 Failure, exiting
 
 

Is it always the same file, same OIDs (old/new)?
If it's the same file, did you try to find out what relation it belongs to?

Igor Neyman
 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade fails, mismatch of relation OID - 9.1.9 to 9.2.4

2013-05-08 Thread Igor Neyman


 -Original Message-
 From: Evan D. Hoffman [mailto:evandhoff...@gmail.com]
 Sent: Wednesday, May 08, 2013 3:35 PM
 To: Igor Neyman
 Subject: Re: [GENERAL] pg_upgrade fails, mismatch of relation OID -
 9.1.9 to 9.2.4
 
 Looks like it IS the same OID every time, referencing an index.  I
 already reindexed the entire DB in case it was some problem with a
 corrupt index.  Here's the index info, if it's of any use.
 
 
 Interestingly, if I query which file that relation is in, it's not the one
 that it complained about:
 
 db=# select pg_relation_filepath(2938685);  pg_relation_filepath
 --
  base/16407/21446253
 (1 row)
 
 db=#
 
 (The file referenced in the error was
 /var/lib/pgsql/9.1/data/base/16406/3016054)
 
 On Wed, May 8, 2013 at 2:35 PM, Igor Neyman iney...@perceptron.com
 wrote:
 
  -Original Message-
  From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
  ow...@postgresql.org] On Behalf Of Evan D. Hoffman
  Sent: Wednesday, May 08, 2013 2:27 PM
  To: Postgresql Mailing List
  Subject: [GENERAL] pg_upgrade fails, mismatch of relation OID -
  9.1.9 to 9.2.4
 
 
  Linking user relation files
/var/lib/pgsql/9.1/data/base/16406/3016054
  Mismatch of relation OID in database dbname: old OID 2938685, new
  OID
  299721 Failure, exiting
 
 
 
  Is it always the same file, same OIDs (old/new)?
  If it's the same file, did you try to find out what relation it
 belongs to?
 
  Igor Neyman
 

Is it the same file though?
And, if it is what do you get when you run:

Select relname from pg_class where relfilenode = 3016054::oid;

Please reply to the list (reply to all), so that other people who may have 
better ideas/solutions could see it.

Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade fails, mismatch of relation OID - 9.1.9 to 9.2.4

2013-05-08 Thread Igor Neyman


 -Original Message-
 From: Evan D. Hoffman [mailto:evandhoff...@gmail.com]
 Sent: Wednesday, May 08, 2013 4:22 PM
 To: Igor Neyman
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] pg_upgrade fails, mismatch of relation OID -
 9.1.9 to 9.2.4
 
 Well, each time it fails it refers to the file
 /var/lib/pgsql/9.1/data/base/16406/3016054, but that's not the file
 associated with OID 2938685.
 
 Here's the output of that query:
 
 db=# Select relname from pg_class where relfilenode = 3016054::oid;
 relname
 -
 (0 rows)
 
 db=#
 
 

And that is before running pg_upgrade, right?

Seems like some kind of pg_catalog corruption.
I guess Bruce Momjian would know better what's going on here.

Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to INSERT INTO one table from another table, WHERE

2013-05-03 Thread Igor Neyman
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Kirk Wythers
 Sent: Friday, May 03, 2013 1:51 PM
 To: POSTGRES
 Subject: [GENERAL] How to INSERT INTO one table from another table,
 WHERE
 
 I am trying to insert data from 2 columns in tableB (colX and colY)
 into the same two columns of tableA, with a join-like where clause. Is
 this possible?
 
 For example:
 
 INSERT INTO tableA (colX, colY)
 (SELECT colX, colY
 FROM tableB
 WHERE
   tableA.blockname = tableB.block_name
   AND tableA.timestamp = tableB.timestamp) ;
 

If it's not the whole record but just some columns, you UPDATE them, not INSERT:

UPDATE tableA A 
SET colX = B.colX, colY = B.colY
FROM tableB B
WHERE A.blockname = B.block_name
AND A.timestamp = B.timestamp;

Note the use of aliases (A, B).

B.t.w., timestamp isn't a good choice for a column name: being a data type name, 
it's on the keyword list.
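
If the column has to keep that name, double-quoting the identifier avoids any 
surprises (a hedged illustration using the alias from the query above):

SELECT A."timestamp" FROM tableA A;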

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Windows query weird result

2013-04-30 Thread Igor Neyman
 
 
 Seems like REINDEX INDEX rescored; did the trick.
 Any idea how to find out whether a index is corrupted?
 
 --
 Lutz Fischer
 lfisc...@staffmail.ed.ac.uk
 +44 131 6517057
 
 
 The University of Edinburgh is a charitable body, registered in
 Scotland, with registration number SC005336.

The only way I know is to execute a SELECT on the table that will cause a full 
index scan.
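
One way to do that (a sketch, assuming an index on spectrum_match.id as in the 
earlier thread):

SET enable_seqscan = off;
SELECT count(*) FROM spectrum_match WHERE id IS NOT NULL;  -- forces index use
RESET enable_seqscan;
SELECT count(*) FROM spectrum_match WHERE id IS NOT NULL;  -- sequential scan

If the two counts differ, the index is suspect.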

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Windows query weird result

2013-04-30 Thread Igor Neyman


 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Igor Neyman
 Sent: Tuesday, April 30, 2013 9:30 AM
 To: Lutz Fischer; pgsql-general@postgresql.org
 Cc: Aaron Abreu
 Subject: Re: [GENERAL] Windows query weird result
 
  
 
  Seems like REINDEX INDEX rescored; did the trick.
  Any idea how to find out whether a index is corrupted?
 
  --
  Lutz Fischer
  lfisc...@staffmail.ed.ac.uk
  +44 131 6517057
 
 
  The University of Edinburgh is a charitable body, registered in
  Scotland, with registration number SC005336.
 
 The only way I know is to execute SELECT on the table that will cause
 full index scan.
 
 Regards,
 Igor Neyman
 
 
 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To
 make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general

As a side note, one more thing.
Since you mentioned that the corruption happened in a Windows environment, 
I'd suggest that the Postgres data directory should be excluded from being scanned 
by whatever anti-virus software you have running on that system (I suspect 
there is one).  Scanning the PG data directory with anti-virus software could reduce 
performance and, in severe cases, cause corruption.

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Windows query weird result

2013-04-29 Thread Igor Neyman


 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Lutz Fischer
 Sent: Monday, April 29, 2013 1:52 PM
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] Windows query weird result
 
 Hi,
 
 had a bit of a weird result for a query:
 SELECT id FROM spectrum_match WHERE search_id in (788,694,693,685) AND
 rescored IS NOT NULL and dynamic_rank = true ORDER BY ID;
 
 returns (among some 127K other lines):
 ...
 32694548
 32694860
 ...
 
 But if I change the query to:
 SELECT id FROM spectrum_match WHERE search_id in (788,694,693,685) AND
 rescored IS NOT NULL and dynamic_rank = true and id= 32694801; I get
 
 32694801
 
 which is omitted from the previous result.
 
 The database is running under windows (I know that's bad - but we had
 reasons...).
 It only became apparent after we made a copy of the database and ran it
 under Linux (Debian wheezy).
 There the first query returned 136k lines and this id was the first
 difference.
 
 Does anybody have an idea what is going on?
 
 It's postgresql 9.2.1 running under a windows 2008 R2 server
 
 
 Lutz
 
 --
 Lutz Fischer
 lfisc...@staffmail.ed.ac.uk
 +44 131 6517057
 
 
 The University of Edinburgh is a charitable body, registered in
 Scotland, with registration number SC005336.
 
 

 ORDER BY ID - do you have an index on this column (ID)?
Is it being used? What does explain analyze say?
Maybe the index is corrupt. Try to rebuild it and see if this fixes the problem.
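
A minimal sketch (the index name here is hypothetical - look up the real one 
with \d spectrum_match first):

REINDEX INDEX spectrum_match_id_idx;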

Regards,
Igor Neyman



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_restore from split files

2013-04-22 Thread Igor Neyman
How exactly did you create the split dump?

Igor Neyman

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of akp geek
Sent: Monday, April 22, 2013 3:36 PM
To: pgsql-general
Subject: [GENERAL] pg_restore from split files

Hi All -

I have created a dump of a big table into 5 split files.  What is the 
procedure to restore them using pg_restore?

I am getting the following error

pg_restore: [custom archiver] could not read from input file: end of file
pg_restore: *** aborted because of error

Appreciate your help.

Regards


Re: [GENERAL] pg_restore from split files

2013-04-22 Thread Igor Neyman
I don't think pg_restore is able to deal with your split files.
What was the reasoning for doing this?  Did you try first on smaller 
table/files to see if your dump/split/restore procedure would work?

Igor Neyman


From: akp geek [mailto:akpg...@gmail.com]
Sent: Monday, April 22, 2013 3:47 PM
To: Igor Neyman
Cc: pgsql-general
Subject: Re: [GENERAL] pg_restore from split files

pg_dump dbname -n schemaname -t table_name -Fc | split -b 500m -t table.dump

On Mon, Apr 22, 2013 at 3:41 PM, Igor Neyman 
iney...@perceptron.com wrote:
How exactly did you create the split dump?

Igor Neyman

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org]
 On Behalf Of akp geek
Sent: Monday, April 22, 2013 3:36 PM
To: pgsql-general
Subject: [GENERAL] pg_restore from split files

Hi All -

I have created a dump of a big table into 5 split files.  What is the 
procedure to restore them using pg_restore?

I am getting the following error

pg_restore: [custom archiver] could not read from input file: end of file
pg_restore: *** aborted because of error

Appreciate your help.

Regards



Re: [GENERAL] Can you spot the difference?

2013-04-16 Thread Igor Neyman
Statistics on your original and copy databases must be different.
For the same condition (li.tracking_number = '10137378459') the optimizer expects 
to find 7915 rows in the tb_line_item table on the copy database, while only 54 
rows on the original database.
Also, the other table (tb_order_location) could have bad statistics as well.
That could cause different execution plans.
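
A minimal first step worth trying on the copy, assuming its statistics are 
simply stale:

ANALYZE tb_line_item;
ANALYZE tb_order_location;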

Regards,
Igor Neyman

From: Moshe Jacobson [mailto:mo...@neadwerx.com]
Sent: Tuesday, April 16, 2013 3:07 PM
To: pgsql-general
Subject: Can you spot the difference?

Hi PostgreSQL friends,

I have two databases in the same cluster that are almost identical. One is a 
copy of the other as we are developing some new features in the copy.

My problem is that the exact same simple query performs great in the original 
database (ises) and dismally in the copy database (ises_coelacanth). The 
problem is that in ises, it uses an index scan, but in ises_coelacanth it uses 
a sequential scan:

postgres@moshe=devmain:ises=# explain analyze SELECT count(*)  FROM 
tb_order_location ol   JOIN tb_line_item li on li.order_location = 
ol.order_location   WHERE  li.tracking_number = '10137378459';

QUERY PLAN
--
 Aggregate  (cost=671.48..671.49 rows=1 width=0) (actual 
time=0.272..0.272 rows=1 loops=1)
   -  Nested Loop  (cost=0.00..671.34 rows=54 width=0) (actual 
time=0.124..0.265 rows=16 loops=1)
 -  Index Scan using tb_line_item_tracking_number_key on tb_line_item 
li  (cost=0.00..219.17 rows=54 width=4) (actual time=0.087..0.161 rows=16 
loops=1)
   Index Cond: ((tracking_number)::text = '10137378459'::text)
 -  Index Scan using tb_order_location_pkey on tb_order_location ol  
(cost=0.00..8.36 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=16)
   Index Cond: (order_location = li.order_location)
 Total runtime: 0.343 ms
(7 rows)

postgres@moshe=devmain:ises_coelacanth=# explain analyze SELECT count(*)  FROM 
tb_order_location ol   JOIN tb_line_item li on li.order_location = 
ol.order_location   WHERE  li.tracking_number = '10137378459';
QUERY 
PLAN
---
 Aggregate  (cost=50467.40..50467.41 rows=1 width=0) (actual 
time=333.490..333.491 rows=1 loops=1)
   -  Hash Join  (cost=26551.11..50447.62 rows=7915 width=0) (actual 
time=332.045..333.481 rows=16 loops=1)
 Hash Cond: (li.order_location = ol.order_location)
 -  Bitmap Heap Scan on tb_line_item li  (cost=177.82..20715.03 
rows=7915 width=4) (actual time=0.128..0.209 rows=16 loops=1)
   Recheck Cond: ((tracking_number)::text = '10137378459'::text)
   -  Bitmap Index Scan on tb_line_item_tracking_number_key  
(cost=0.00..175.84 rows=7915 width=0) (actual time=0.108..0.108 rows=16 loops=1)
 Index Cond: ((tracking_number)::text = '10137378459'::text)
 -  Hash  (cost=13190.24..13190.24 rows=803524 width=4) (actual 
time=324.114..324.114 rows=803553 loops=1)
   Buckets: 4096  Batches: 32  Memory Usage: 887kB
   -  Seq Scan on tb_order_location ol  (cost=0.00..13190.24 
rows=803524 width=4) (actual time=0.024..144.581 rows=803553 loops=1)
 Total runtime: 333.766 ms
(11 rows)

Both of these queries return 16 rows, as you can see.

Below I've included the information on each of these tables. They have the same 
indexes and are identical for the purposes of this query.

Can you help me figure out what is going on here?? Thank you!

postgres@moshe=devmain:ises=# \d tb_line_item
   Table public.tb_line_item
   Column   |Type | 
  Modifiers
+-+---
 line_item  | integer | not null default 
nextval('sq_pk_line_item'::regclass)
(...)
 order_location | integer | not null
(...)
 tracking_number| character varying(512)  |
(...)
Indexes:
tb_line_item_pkey PRIMARY KEY, btree (line_item)
tb_line_item_order_catalog_article_key UNIQUE CONSTRAINT, btree 
(order_catalog_article, order_location, project, creator)
tb_line_item_order_vendor_article_key UNIQUE CONSTRAINT, btree 
(order_vendor_article, order_location, project, creator)
idx_line_item_canceled btree (canceled)
ix_line_item_project btree (project)
ix_line_item_reset btree (reset)
tb_line_item_order_location_key btree (order_location)

Re: [GENERAL] PostgreSQL Backup Booklet

2013-04-03 Thread Igor Neyman


 -Original Message-
 From: Shaun Thomas [mailto:stho...@optionshouse.com]
 Sent: Wednesday, April 03, 2013 10:09 AM
 To: PostgreSQL General
 Subject: PostgreSQL Backup Booklet
 
 Hey!
 
 So, Packt approached me a few months ago and asked me to put together a
 very basic series of short step-by-step instructions on backing up
 PostgreSQL. The title is Instant PostgreSQL Backup and Restore How-
 to.
 
 I tried to cover all of the basic approaches used by most admins, and I
 tested it on a guy at work who's a SQL Server DBA but recently took
 over care and feeding of a PG database. He said it was easier to
 understand than the documentation, at least for just getting everything
 working and sorting out more advanced details later.
 
 I know it's not exactly Greg Smith's performance book, but I'm glad to
 contribute how I can. I'm not entirely sure it's worth adding to the
 book page:
 
 http://www.postgresql.org/docs/books/
 
 But if it is, I'll provide any help or extra information necessary. If
 anyone has questions, I'm here to answer them. :)
 
 --
 Shaun Thomas
 OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
 312-676-8870
 stho...@optionshouse.com
 
 __
 
 See http://www.peak6.com/email_disclaimer/ for terms and conditions
 related to this email

Shaun,

Are there plans for an e-book edition?

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL Backup Booklet

2013-04-03 Thread Igor Neyman


 -Original Message-
 From: Shaun Thomas [mailto:stho...@optionshouse.com]
 Sent: Wednesday, April 03, 2013 1:51 PM
 To: Igor Neyman
 Cc: PostgreSQL General
 Subject: Re: PostgreSQL Backup Booklet
 
 On 04/03/2013 12:49 PM, Igor Neyman wrote:
 
  Is there plans for e-book edition?
 
 I believe there already is one. Packt sells it directly, and it's also
 listed on Amazon. Way cheaper than the print version, I think. About
 five bucks, as opposed to 20.
 
 --
 Shaun Thomas
 OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
 312-676-8870
 stho...@optionshouse.com
 

Just got it from Packt for $5.09.
Amazon has only the paperback edition.

Thank you,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] configuring timezone

2013-02-07 Thread Igor Neyman
Terence,

Thank you for the offer.
But, I will probably be creating custom install scripts to run at the destination 
location to modify the parameter in postgresql.conf.

Regards,
Igor Neyman

From: Terence Ferraro [mailto:terencejferr...@gmail.com]
Sent: Wednesday, February 06, 2013 6:47 PM
To: Igor Neyman
Cc: pgsql-general@postgresql.org
Subject: Re: configuring timezone

9.2.1 was the version standard when I was building and deploying...so no, I 
probably will not (personally) be updating anytime soon...

However, if you're interested, I'll see if I can find a place tonight or 
tomorrow to put these binaries (they are 32-bit as well), source, etc 
(sourceforge maybe?). I can also include the Inno Setup script that builds an 
installer similar to the EnterpriseDB version; that is, it automates the 
service setup, creates a postgres user, etc. Hell, I may as well include the 
pre-built installer, too, if you don't want to customize anything..

In addition to the timezone fix, I (originally) wanted to build my own Windows 
installer because the EnterpriseDB version does NOT link against zlib with 
respect to openssl. In other words, no compressed ssl connections are possible 
with the currently distributed windows version. This one is linked against zlib 
(and the speed increase is quite significant).

T.J.
On Wed, Feb 6, 2013 at 3:23 PM, Igor Neyman 
iney...@perceptron.com wrote:
I am on Windows (both 32 and 64 bit)  using 32-bit Postgres.
So, your binaries are for 9.2.1, you aren't planning to go to 9.2.2?


From: Terence Ferraro 
[mailto:terencejferr...@gmail.com]
Sent: Wednesday, February 06, 2013 3:07 PM
To: Igor Neyman
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] configuring timezone

Sorry, but from what I understand the change is permanent. If recompile is not 
an option but you're on Windows let me know; I do have binaries available..
On Wed, Feb 6, 2013 at 2:05 PM, Igor Neyman 
iney...@perceptron.com wrote:
Terence,

Thanks for quick reply, I read your thread (Dec, 2012) before posting my 
question.
But, recompile is not an option for me.  Was hoping, that something regarding 
this issue changed since...

Igor Neyman

From: Terence Ferraro 
[mailto:terencejferr...@gmail.com]
Sent: Wednesday, February 06, 2013 1:45 PM
To: Igor Neyman
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] configuring timezone

See the archived thread here: 
http://www.postgresql.org/message-id/CAEghcWD8DXjroBYCZsdGrx+cHTCbCbW9es2uQ+o7a8NZ61JT=q...@mail.gmail.com

Short version: Sorry, but you're going to need to recompile if you want that 
behavior. Here's a diff applied against 9.2.1 http://pastebin.com/5AyaX2RF. 
I've deployed the patched version a couple dozen times now and it is working 
flawlessly.

T.J.
On Wed, Feb 6, 2013 at 1:32 PM, Igor Neyman 
iney...@perceptron.com wrote:
Timezone configuration parameter (defaulting to system timezone)  worked fine 
for us before upgrading from 8.4 to 9.2.

Now we've got a problem.
9.2 Release Notes says:

*  Identify the server time zone during initdb, and set postgresql.conf entries 
timezone (http://www.postgresql.org/docs/9.2/static/runtime-config-client.html#GUC-TIMEZONE) 
 and 
log_timezone (http://www.postgresql.org/docs/9.2/static/runtime-config-logging.html#GUC-LOG-TIMEZONE)
 accordingly (Tom Lane)

This avoids expensive time zone probes during server start.

Question: is there any way to revert back to old behavior so that server will 
probe system's timezone on startup (default to OS timezone on startup) instead of 
setting it during initdb?
Obviously, without recompiling/rebuilding Postgres.

I'm dealing with the situation, where system is being built in one timezone 
(could be anywhere around the globe), and then moved to other (not known during 
system build) location with different timezone.
After relocation, OS timezone will change, but we can't allow user to edit 
timezone parameter in  Postgresql.conf.

Regards,
Igor Neyman





[GENERAL] configuring timezone

2013-02-06 Thread Igor Neyman
Timezone configuration parameter (defaulting to system timezone)  worked fine 
for us before upgrading from 8.4 to 9.2.

Now we've got a problem.
9.2 Release Notes says:

*  Identify the server time zone during initdb, and set postgresql.conf entries 
timezone (http://www.postgresql.org/docs/9.2/static/runtime-config-client.html#GUC-TIMEZONE) 
 and 
log_timezone (http://www.postgresql.org/docs/9.2/static/runtime-config-logging.html#GUC-LOG-TIMEZONE)
 accordingly (Tom Lane)

This avoids expensive time zone probes during server start.

Question: is there any way to revert back to old behavior so that server will 
probe system's timezone on startup (default to OS timezone on startup) instead of 
setting it during initdb?
Obviously, without recompiling/rebuilding Postgres.

I'm dealing with the situation, where system is being built in one timezone 
(could be anywhere around the globe), and then moved to other (not known during 
system build) location with different timezone.
After relocation, OS timezone will change, but we can't allow user to edit 
timezone parameter in  Postgresql.conf.

Regards,
Igor Neyman


Re: [GENERAL] configuring timezone

2013-02-06 Thread Igor Neyman
Terence,

Thanks for quick reply, I read your thread (Dec, 2012) before posting my 
question.
But, recompile is not an option for me.  Was hoping, that something regarding 
this issue changed since...

Igor Neyman

From: Terence Ferraro [mailto:terencejferr...@gmail.com]
Sent: Wednesday, February 06, 2013 1:45 PM
To: Igor Neyman
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] configuring timezone

See the archived thread here: 
http://www.postgresql.org/message-id/CAEghcWD8DXjroBYCZsdGrx+cHTCbCbW9es2uQ+o7a8NZ61JT=q...@mail.gmail.com

Short version: Sorry, but you're going to need to recompile if you want that 
behavior. Here's a diff applied against 9.2.1 http://pastebin.com/5AyaX2RF. 
I've deployed the patched version a couple dozen times now and it is working 
flawlessly.

T.J.
On Wed, Feb 6, 2013 at 1:32 PM, Igor Neyman 
iney...@perceptron.com wrote:
Timezone configuration parameter (defaulting to system timezone)  worked fine 
for us before upgrading from 8.4 to 9.2.

Now we've got a problem.
9.2 Release Notes says:

*  Identify the server time zone during initdb, and set postgresql.conf entries 
timezone (http://www.postgresql.org/docs/9.2/static/runtime-config-client.html#GUC-TIMEZONE) 
 and 
log_timezone (http://www.postgresql.org/docs/9.2/static/runtime-config-logging.html#GUC-LOG-TIMEZONE)
 accordingly (Tom Lane)

This avoids expensive time zone probes during server start.

Question: is there any way to revert back to old behavior so that server will 
probe system's timezone on startup (default to OS timezone on startup) instead of 
setting it during initdb?
Obviously, without recompiling/rebuilding Postgres.

I'm dealing with the situation, where system is being built in one timezone 
(could be anywhere around the globe), and then moved to other (not known during 
system build) location with different timezone.
After relocation, OS timezone will change, but we can't allow user to edit 
timezone parameter in  Postgresql.conf.

Regards,
Igor Neyman



Re: [GENERAL] configuring timezone

2013-02-06 Thread Igor Neyman
I am on Windows (both 32 and 64 bit)  using 32-bit Postgres.
So, your binaries are for 9.2.1, you aren't planning to go to 9.2.2?


From: Terence Ferraro [mailto:terencejferr...@gmail.com]
Sent: Wednesday, February 06, 2013 3:07 PM
To: Igor Neyman
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] configuring timezone

Sorry, but from what I understand the change is permanent. If recompile is not 
an option but you're on Windows let me know; I do have binaries available..
On Wed, Feb 6, 2013 at 2:05 PM, Igor Neyman 
iney...@perceptron.com wrote:
Terence,

Thanks for quick reply, I read your thread (Dec, 2012) before posting my 
question.
But, recompile is not an option for me.  Was hoping, that something regarding 
this issue changed since...

Igor Neyman

From: Terence Ferraro 
[mailto:terencejferr...@gmail.com]
Sent: Wednesday, February 06, 2013 1:45 PM
To: Igor Neyman
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] configuring timezone

See the archived thread here: 
http://www.postgresql.org/message-id/CAEghcWD8DXjroBYCZsdGrx+cHTCbCbW9es2uQ+o7a8NZ61JT=q...@mail.gmail.com

Short version: Sorry, but you're going to need to recompile if you want that 
behavior. Here's a diff applied against 9.2.1 http://pastebin.com/5AyaX2RF. 
I've deployed the patched version a couple dozen times now and it is working 
flawlessly.

T.J.
On Wed, Feb 6, 2013 at 1:32 PM, Igor Neyman 
iney...@perceptron.com wrote:
Timezone configuration parameter (defaulting to system timezone)  worked fine 
for us before upgrading from 8.4 to 9.2.

Now we've got a problem.
9.2 Release Notes says:

*  Identify the server time zone during initdb, and set postgresql.conf entries 
timezone (http://www.postgresql.org/docs/9.2/static/runtime-config-client.html#GUC-TIMEZONE) 
 and 
log_timezone (http://www.postgresql.org/docs/9.2/static/runtime-config-logging.html#GUC-LOG-TIMEZONE)
 accordingly (Tom Lane)

This avoids expensive time zone probes during server start.

Question: is there any way to revert back to old behavior so that server will 
probe system's timezone on startup (default to OS timezone on startup) instead of 
setting it during initdb?
Obviously, without recompiling/rebuilding Postgres.

I'm dealing with the situation, where system is being built in one timezone 
(could be anywhere around the globe), and then moved to other (not known during 
system build) location with different timezone.
After relocation, OS timezone will change, but we can't allow user to edit 
timezone parameter in  Postgresql.conf.

Regards,
Igor Neyman




Re: [GENERAL] configuring timezone

2013-02-06 Thread Igor Neyman

 -Original Message-
 From: Adrian Klaver [mailto:adrian.kla...@gmail.com]
 Sent: Wednesday, February 06, 2013 4:40 PM
 To: Igor Neyman
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] configuring timezone
 
 On 02/06/2013 10:32 AM, Igor Neyman wrote:
  Timezone configuration parameter (defaulting to system timezone)
worked fine for us before upgrading from 8.4 to 9.2.
 
  Now we've got a problem.
 
  9.2 Release Notes says:
 
  *  Identify the server time zone during initdb, and set
  postgresql.conf entries timezone
  (http://www.postgresql.org/docs/9.2/static/runtime-config-client.html#GUC-TIMEZONE)
  and log_timezone
  (http://www.postgresql.org/docs/9.2/static/runtime-config-logging.html#GUC-LOG-TIMEZONE)
  accordingly (Tom Lane)
 
  This avoids expensive time zone probes during server start.
 
  Question: is there any way to revert back to old behavior so that
  server will probe system's timezone on startup (default to OS
 timezone
  on
  startup) instead of setting it during initdb?
 
  Obviously, without recompiling/rebuilding Postgres.
 
  I'm dealing with the situation, where system is being built in one
  timezone (could be anywhere around the globe), and then moved to
 other
  (not known during system build) location with different timezone.
 
  After relocation, OS timezone will change, but we can't allow user to
  edit timezone parameter in  Postgresql.conf.
 
 
 It is not possible to change the postgresql.conf just before the
 relocate? In other words do you have no idea where the server will end
 up?
 
 
  Regards,
 
  Igor Neyman
 
 
 
 --
 Adrian Klaver
 adrian.kla...@gmail.com

Sometimes, but not always.

Going back to the reason for this change in Release Notes:

This avoids expensive time zone probes during server start.

How expensive? How often is Postgres restarted?
We don't restart Postgres for months at a time.  Doesn't seem to be a very valid 
reason, at least not for us :)

Igor Neyman




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] configuring timezone

2013-02-06 Thread Igor Neyman
Thank you for explaining.

Regards,
Igor Neyman


From: Tom Lane [t...@sss.pgh.pa.us]
Sent: Wednesday, February 06, 2013 5:11 PM
To: Igor Neyman
Cc: Adrian Klaver; pgsql-general@postgresql.org
Subject: Re: [GENERAL] configuring timezone

Igor Neyman iney...@perceptron.com writes:
 Going back to the reason for this change in Release Notes:
 This avoids expensive time zone probes during server start.
 How expensive?

The time zone probe logic involves reading every file under
/usr/share/zoneinfo (or wherever you have the Olson tz database
installed).  There are a couple thousand of those in a typical Linux
installation.  In a cold-boot situation where none of that data is
already swapped in, it's not unusual for this to take five seconds or
more.  Now that may or may not seem like a lot, but it's more than
enough to cause many startup scripts to conclude that the postmaster has
failed.  The hacks we'd built up to deal with this eventually became
insupportable.

We're not going back.  I suggest you consider ways to adjust your
server-migration process.

regards, tom lane
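
One way to adjust the migration process, sketched below: have the relocation 
procedure rewrite the timezone line before the server is first started at the 
new site. This is only a sketch, not from the thread; it assumes a Linux 
target where /etc/timezone holds the zone name, and the Debian-style config 
path seen earlier in this thread.

# run once after relocation, before starting PostgreSQL
TZNAME=$(cat /etc/timezone)
# initdb writes an uncommented timezone = '...' line, which this rewrites
sed -i "s|^timezone = .*|timezone = '$TZNAME'|" /etc/postgresql/9.2/main/postgresql.conf
# on 9.4+, ALTER SYSTEM SET timezone could replace the sed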


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] speeding up a join query that utilizes a view

2013-01-18 Thread Igor Neyman
Yes, my mistake, I meant to ask about the fifteen_min_stacked_view definition, and 
the Postgres parameters from the postgresql.conf configuration file, at least those 
modified from their default settings and related to resource consumption and query 
tuning.

Regards,
Igor Neyman


 -Original Message-
 From: Kirk Wythers [mailto:kwyth...@umn.edu]
 Sent: Thursday, January 17, 2013 5:05 PM
 To: Igor Neyman
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] speeding up a join query that utilizes a view
 
 
 On Jan 17, 2013, at 3:51 PM, Igor Neyman iney...@perceptron.com
 wrote:
 
  What about index definition, Postgres version, config parameters?
  Hardware configuration would be helpful too.
 
 
 Sorry
 
 pg 9.1
 
 OS X 10.8 server.
 32 G ram 8 cores
 
 I thought what you meant by index definition is what appears at the bottom of
 the \d table-name output. For example:
  Indexes:
 data_key_pkey PRIMARY KEY, btree (variable_id)
 data_key_lower_idx btree (lower(block_name::text))
 data_key_lower_idx1 btree (lower(variable_channel::text))
 
 on data_key.
 
 I'm not sure what you mean by config parameters? Output from pg_config?
 
 ~$ pg_config
 BINDIR = /usr/bin
 DOCDIR = /usr/share/doc/postgresql
 HTMLDIR = /usr/share/postgresql
 INCLUDEDIR = /usr/include
 PKGINCLUDEDIR = /usr/include/postgresql
 INCLUDEDIR-SERVER = /usr/include/postgresql/server
 LIBDIR = /usr/lib
 PKGLIBDIR = /usr/lib/postgresql
 LOCALEDIR = /usr/share/locale
 MANDIR = /usr/share/man
 SHAREDIR = /usr/share/postgresql
 SYSCONFDIR = /private/etc/postgresql
 PGXS = /usr/lib/postgresql/pgxs/src/makefiles/pgxs.mk
 CONFIGURE = '--infodir=/usr/share/info' '--disable-dependency-tracking' '--prefix=/usr' '--sbindir=/usr/libexec' '--sysconfdir=/private/etc' '--mandir=/usr/share/man' '--localstatedir=/private/var/pgsql' '--htmldir=/usr/share/postgresql' '--enable-thread-safety' '--enable-dtrace' '--with-tcl' '--with-perl' '--with-python' '--with-gssapi' '--with-krb5' '--with-pam' '--with-ldap' '--with-bonjour' '--with-openssl' '--with-libxml' '--with-libxslt' '--with-system-tzdata=/usr/share/zoneinfo' 'CC=/Applications/Xcode.app/Contents/Developer/Toolchains/OSX10.8.xctoolchain/usr/bin/cc' 'CFLAGS=-arch x86_64 -pipe -Os -g -Wall -Wno-deprecated-declarations' 'LDFLAGS=-arch x86_64 -pipe -Os -g -Wall -Wno-deprecated-declarations' 'LDFLAGS_EX=-mdynamic-no-pic'
 CC = /Applications/Xcode.app/Contents/Developer/Toolchains/OSX10.8.xctoolchain/usr/bin/cc
 CPPFLAGS = -I/usr/include/libxml2
 CFLAGS = -arch x86_64 -pipe -Os -g -Wall -Wno-deprecated-declarations -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wformat-security -fno-strict-aliasing -fwrapv
 CFLAGS_SL =
 LDFLAGS = -arch x86_64 -pipe -Os -g -Wall -Wno-deprecated-declarations -Wl,-dead_strip_dylibs
 LDFLAGS_EX = -mdynamic-no-pic
 LDFLAGS_SL =
 LIBS = -lpgport -lxslt -lxml2 -lpam -lssl -lcrypto -lgssapi_krb5 -lz -lreadline -lm
 VERSION = PostgreSQL 9.1.4
 
 Does that help?
 
 
  -Original Message-
  From: Kirk Wythers [mailto:kwyth...@umn.edu]
  Sent: Thursday, January 17, 2013 3:59 PM
  To: Igor Neyman
  Cc: Kirk Wythers; pgsql-general@postgresql.org
  Subject: Re: [GENERAL] speeding up a join query that utilizes a view
 
 
  Not enough information:
 
  Postgres version?
  OS?
  Some Postgres configuration parameters, specifically related to
  RESOURCE USAGE and  QUERY TUNING?
  Table structures (including indexes) for:
  fifteen_min_stacked_propper, fifteen_min, and data_key?
  View definition for fifteen_min_stacked_view?
 
 
 
  Here is some additional information:
 
  b4warmed3=# \d data_key
  Table public.data_key
          Column        |         Type          |               Modifiers
   ----------------------+-----------------------+------------------------------------------
  site | character varying(6)  |
  canopy   | character varying(24) |
  block| character(2)  |
  plot | character(2)  |
  measurement_interval | interval  |
  warming_treatment| character varying(24) |
  treatment_code   | character varying(24) |
  treatment_abbr   | character varying(24) |
  water_treatment  | character varying(24) |
  block_name   | character varying(24) |
  variable_name| character varying(24) |
  variable_channel | character varying(24) |
  variable_id  | character varying(24) | not null default
  NULL::character varying
  Indexes:
 data_key_pkey PRIMARY KEY, btree (variable_id)
 data_key_lower_idx btree (lower(block_name::text))
 data_key_lower_idx1 btree (lower(variable_channel::text))
 
  b4warmed3=# SELECT COUNT(*) FROM data_key;
   count
  -------
    4728
  (1 row)
 
  b4warmed3=# \d fifteen_min
   Table public.fifteen_min
Column

Re: [GENERAL] speeding up a join query that utilizes a view

2013-01-18 Thread Igor Neyman
Kirk,

Are you doing un-pivoting in most of your queries?
Did you try a normalized design for the fifteen_min table?
Is there a specific reason for the de-normalization?

Regards,
Igor Neyman

 -Original Message-
 From: Kirk Wythers [mailto:kwyth...@umn.edu]
 Sent: Friday, January 18, 2013 10:50 AM
 To: Igor Neyman
 Cc: Kirk Wythers; pgsql-general@postgresql.org
 Subject: Re: [GENERAL] speeding up a join query that utilizes a view
 
 
 On Jan 18, 2013, at 8:10 AM, Igor Neyman iney...@perceptron.com
 wrote:
 
  Yes, my mistake, I meant to ask about the fifteen_min_stacked_view
 definition, and the Postgres parameters from the postgresql.conf configuration
 file, at least those modified from their default settings and related to
 resource consumption and query tuning.
 
  Regards,
  Igor Neyman
 
 Here are some extra bits from the postgresql.conf file.  As you can see, I
 have not changed much from the default settings.
 
 #------------------------------------------------------------------------------
 # RESOURCE USAGE (except WAL)
 #------------------------------------------------------------------------------
 
 
 # - Memory -
 
 shared_buffers = 3GB # 7GB  # min 128kB
 # (change requires restart)
 temp_buffers = 80MB # 8MB   # min 800kB
 #max_prepared_transactions = 0  # zero disables the feature
 # (change requires restart) #
 Note:  Increasing max_prepared_transactions costs ~600 bytes of shared
 memory # per transaction slot, plus lock space (see
 max_locks_per_transaction).
 # It is not advisable to set max_prepared_transactions nonzero unless
 you # actively intend to use prepared transactions.
 work_mem = 64MB #8MB# min 64kB
 maintenance_work_mem = 128MB# min 1MB
 #max_stack_depth = 2MB  # min 100kB
 
 # - Kernel Resource Usage -
 
 #max_files_per_process = 1000   # min 25
 # (change requires restart)
 #shared_preload_libraries = ''  # (change requires restart)
 
 # - Cost-Based Vacuum Delay -
 
 #vacuum_cost_delay = 0ms# 0-100 milliseconds
 #vacuum_cost_page_hit = 1   # 0-10000 credits
 #vacuum_cost_page_miss = 10 # 0-10000 credits
 #vacuum_cost_page_dirty = 20# 0-10000 credits
 #vacuum_cost_limit = 200# 1-10000 credits
 
 # - Background Writer -
 
 #bgwriter_delay = 200ms # 10-10000ms between rounds
 #bgwriter_lru_maxpages = 100# 0-1000 max buffers
 written/round
 #bgwriter_lru_multiplier = 2.0  # 0-10.0 multipler on buffers
 scanned/round
 
 # - Asynchronous Behavior -
 
 #effective_io_concurrency = 1   # 1-1000. 0 disables
 prefetching
 
 #------------------------------------------------------------------------------
 # QUERY TUNING
 #------------------------------------------------------------------------------
 
 
 # - Planner Method Configuration -
 
 #enable_bitmapscan = on
 #enable_hashagg = on
 #enable_hashjoin = on
 #enable_indexscan = on
 #enable_material = on
 #enable_mergejoin = on
 #enable_nestloop = on
 #enable_seqscan = on
 #enable_sort = on
 #enable_tidscan = on
 
 # - Planner Cost Constants -
 
 #seq_page_cost = 1.0# measured on an arbitrary
 scale
 #random_page_cost = 4.0 # same scale as above
 #cpu_tuple_cost = 0.01  # same scale as above
 #cpu_index_tuple_cost = 0.005   # same scale as above
 #cpu_operator_cost = 0.0025 # same scale as above
 effective_cache_size = 6GB #13GB
 
 # - Genetic Query Optimizer -
 
 #geqo = on
 #geqo_threshold = 12
 #geqo_effort = 5# range 1-10
 #geqo_pool_size = 0 # selects default based on
 effort
 #geqo_generations = 0   # selects default based on
 effort
 #geqo_selection_bias = 2.0  # range 1.5-2.0
 #geqo_seed = 0.0# range 0.0-1.0
 
 # - Other Planner Options -
 
 #default_statistics_target = 100# range 1-10000
 #constraint_exclusion = partition   # on, off, or partition
 #cursor_tuple_fraction = 0.1# range 0.0-1.0
 #from_collapse_limit = 8
 #join_collapse_limit = 8# 1 disables collapsing of
 explicit
 # JOIN clauses
 
 
 Here is a snip from earlier that includes info about both the table
 that is used to build the view and the view. In short, I use the UNNEST
 function to un-pivot all the variables of interest in the fifteen_min
 table into the columns variable and value in the
 fifteen_min_stacked_proper view.
 
 Thanks again.
 
 Kirk
 
 
 b4warmed3=# \d fifteen_min
 Table public.fifteen_min
       Column       |            Type             | Modifiers
  ------------------+-----------------------------+-----------
 rowid   | character varying

Re: [GENERAL] speeding up a join query that utilizes a view

2013-01-18 Thread Igor Neyman
Kirk,

Are you limited to pure SQL, or is a procedural language (PL/pgSQL) allowed?
If PL/pgSQL is allowed, you could normalize the fifteen_min table: break it into 
several tables (one for a_dc, another for a_dif, another for a_targettemp, and 
so on...) and use dynamic SQL inside a PL/pgSQL function to join with the proper 
table, as in the sketch below.
In that case you could index the normalized tables properly; not having table 
rows as wide as they are now helps as well.
Thus you'll avoid a sequential scan on a big and wide table.
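
A minimal sketch of that idea (the per-variable table layout here is 
hypothetical, assuming one narrow table per variable group, e.g. 
fifteen_min_tsoil(time2, block_name, value)):

CREATE OR REPLACE FUNCTION fifteen_min_values(p_variable text)
RETURNS TABLE(time2 timestamp, block_name varchar, value real) AS $$
BEGIN
    -- quote_ident() protects the dynamically assembled table name
    RETURN QUERY EXECUTE
        'SELECT time2, block_name, value FROM '
        || quote_ident('fifteen_min_' || p_variable)
        || ' ORDER BY time2';
END;
$$ LANGUAGE plpgsql;

-- e.g.: SELECT * FROM fifteen_min_values('tsoil');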

Also, increasing default_statistics_target may help. This:

Index Scan using fifteen_min_pkey on fifteen_min  (cost=0.00..525136.58 
rows=1798711 width=1072) (actual time=0.034..96077.588 rows=428093218 loops=1)

shows too big a difference between estimated and actual row counts. Are these 
tables analyzed often enough?
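
For example (a sketch; 500 is just an illustration value, and ANALYZE has to 
be re-run for it to take effect):

ALTER TABLE fifteen_min ALTER COLUMN block_name SET STATISTICS 500;
ANALYZE fifteen_min;
ANALYZE data_key;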

Regards,
Igor Neyman

 -Original Message-
 From: Kirk Wythers [mailto:kwyth...@umn.edu]
 Sent: Friday, January 18, 2013 11:15 AM
 To: Igor Neyman
 Cc: Kirk Wythers; pgsql-general@postgresql.org
 Subject: Re: [GENERAL] speeding up a join query that utilizes a view
 
 
 On Jan 18, 2013, at 10:05 AM, Igor Neyman iney...@perceptron.com
 wrote:
 
  Kirk,
 
  Are you doing un-pivoting in most of your queries?
  Did you try a normalized design for the fifteen_min table?
  Is there a specific reason for the de-normalization?
 
  Regards,
  Igor Neyman
 
 Thanks Igor. The only reason I'm de-normalizing with unnest is so I
 can perform a join on variable_name with the table data_key. It's kind
 of a crazy design, but it is what I was given to work with. Here is the
 join that takes so dang long to perform:
 
 SELECT
   data_key.site,
   data_key.canopy,
   data_key.measurement_interval,
   data_key.treatment_code,
   data_key.treatment_abbr,
   data_key.plot,
   fifteen_min_stacked_propper.*
 FROM
   data_key,
   fifteen_min_stacked_propper
 WHERE
   data_key.variable_channel = fifteen_min_stacked_propper.variable
 AND data_key.block_name = fifteen_min_stacked_propper.block_name
 --AND 2012 = EXTRACT(YEAR FROM time2)
 --AND fifteen_min_stacked_propper.block_name ~ 'b4warm_[ace]'
 --AND fifteen_min_stacked_propper.value IS NOT NULL AND
 fifteen_min_stacked_propper.variable ~ 'tsoil'
 
 The whole point of the de-normalized table
 fifteen_min_stacked_propper is so that variable names in
 fifteen_min_stacked_propper.variable can be used to join on
 data_key.variable_channel.
 
 Does that make sense?
 
 Kirk
 
 
 
  -Original Message-
  From: Kirk Wythers [mailto:kwyth...@umn.edu]
  Sent: Friday, January 18, 2013 10:50 AM
  To: Igor Neyman
  Cc: Kirk Wythers; pgsql-general@postgresql.org
  Subject: Re: [GENERAL] speeding up a join query that utilizes a view
 
 
  On Jan 18, 2013, at 8:10 AM, Igor Neyman iney...@perceptron.com
  wrote:
 
   Yes, my mistake, I meant to ask about the fifteen_min_stacked_view
   definition, and the Postgres parameters from the postgresql.conf
   configuration file, at least those modified from their default settings
   and related to resource consumption and query tuning.
 
  Regards,
  Igor Neyman
 
  Here are some extra bits from the postgresql.conf file.  As you can see,
  I have not changed much from the default settings.
 
  #------------------------------------------------------------------------------
  # RESOURCE USAGE (except WAL)
  #------------------------------------------------------------------------------
 
  # - Memory -
 
  shared_buffers = 3GB # 7GB  # min 128kB
 # (change requires restart)
  temp_buffers = 80MB # 8MB   # min 800kB
  #max_prepared_transactions = 0  # zero disables the feature
 # (change requires restart) #
  Note:  Increasing max_prepared_transactions costs ~600 bytes of
  shared memory # per transaction slot, plus lock space (see
  max_locks_per_transaction).
  # It is not advisable to set max_prepared_transactions nonzero
 unless
  you # actively intend to use prepared transactions.
  work_mem = 64MB #8MB# min 64kB
  maintenance_work_mem = 128MB# min 1MB
  #max_stack_depth = 2MB  # min 100kB
 
  # - Kernel Resource Usage -
 
  #max_files_per_process = 1000   # min 25
 # (change requires restart)
  #shared_preload_libraries = ''  # (change requires restart)
 
  # - Cost-Based Vacuum Delay -
 
  #vacuum_cost_delay = 0ms# 0-100 milliseconds
  #vacuum_cost_page_hit = 1   # 0-10000 credits
  #vacuum_cost_page_miss = 10 # 0-10000 credits
  #vacuum_cost_page_dirty = 20# 0-10000 credits
  #vacuum_cost_limit = 200# 1-10000 credits
 
  # - Background Writer -
 
  #bgwriter_delay = 200ms # 10-10000ms between rounds
  #bgwriter_lru_maxpages = 100# 0-1000 max buffers
  written/round

Re: [GENERAL] speeding up a join query that utilizes a view

2013-01-17 Thread Igor Neyman
 -Original Message-
 From: Kirk Wythers [mailto:kwyth...@umn.edu]
 Sent: Thursday, January 17, 2013 12:16 AM
 To: pgsql-general@postgresql.org
 Subject: speeding up a join query that utilizes a view
 
 I am looking for advice on a performance problem. I'm pretty sure that
 the culprit of my slow performance is a view that is several hundred
 million records in size. Because it is a view, I can only index the
 underlying table, but because the view generates an un-pivoted
 version of the underlying table with an unnest function, I can't index
 the important column in the underlying table, because it doesn't exist
 until after the un-pivot (or stacking) function of the view... I know... this
 is all very circular.
 
 Here is the join query that uses the view:
 
 SELECT
   data_key.site,
   data_key.canopy,
   data_key.measurement_interval,
   data_key.treatment_code,
   data_key.treatment_abbr,
   data_key.plot,
   fifteen_min_stacked_view.*
 FROM
   data_key,
   fifteen_min_stacked_view
 WHERE
   data_key.variable_channel =
 fifteen_min_stacked_view.variable AND data_key.block_name =
 fifteen_min_stacked_view.block_name
   AND fifteen_min_stacked_view.variable ~ 'tsoil'
 
 I have tried adding indexes where I can on the join columns in the
 data_key table. Here is the EXPLAIN.
 
 
 QUERY PLAN
 ------------------------------------------------------------------------------
  Hash Join  (cost=195.20..548004.70 rows=196 width=192) (actual time=3.295..443523.222 rows=28779376 loops=1)
    Hash Cond: ((fifteen_min_stacked_propper.variable = (data_key.variable_channel)::text) AND ((fifteen_min_stacked_propper.block_name)::text = (data_key.block_name)::text))
    ->  Subquery Scan on fifteen_min_stacked_propper  (cost=0.00..547620.47 rows=2878 width=156) (actual time=0.247..424911.643 rows=28779376 loops=1)
          Filter: (fifteen_min_stacked_propper.variable ~ 'tsoil'::text)
          ->  Index Scan using fifteen_min_pkey on fifteen_min  (cost=0.00..525136.58 rows=1798711 width=1072) (actual time=0.034..96077.588 rows=428093218 loops=1)
    ->  Hash  (cost=124.28..124.28 rows=4728 width=55) (actual time=3.036..3.036 rows=4728 loops=1)
          Buckets: 1024  Batches: 1  Memory Usage: 437kB
          ->  Seq Scan on data_key  (cost=0.00..124.28 rows=4728 width=55) (actual time=0.007..1.277 rows=4728 loops=1)
  Total runtime: 444912.792 ms
 (9 rows)
 
 
 Any ideas would be much appreciated

Not enough information:

Postgres version?
OS?
Some Postgres configuration parameters, specifically related to RESOURCE 
USAGE and  QUERY TUNING?
Table structures (including indexes) for: fifteen_min_stacked_propper, 
fifteen_min, and data_key?
View definition for fifteen_min_stacked_view?
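
For the configuration part, a quick way to pull only the settings changed from 
their built-in defaults (a sketch):

SELECT name, setting
FROM pg_settings
WHERE source <> 'default'
ORDER BY name;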


Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] speeding up a join query that utilizes a view

2013-01-17 Thread Igor Neyman
What about index definition, Postgres version, config parameters?
Hardware configuration would be helpful too.


 -Original Message-
 From: Kirk Wythers [mailto:kwyth...@umn.edu]
 Sent: Thursday, January 17, 2013 3:59 PM
 To: Igor Neyman
 Cc: Kirk Wythers; pgsql-general@postgresql.org
 Subject: Re: [GENERAL] speeding up a join query that utilizes a view
 
 
  Not enough information:
 
  Postgres version?
  OS?
  Some Postgres configuration parameters, specifically related to
 RESOURCE USAGE and  QUERY TUNING?
  Table structures (including indexes) for:
 fifteen_min_stacked_propper, fifteen_min, and data_key?
  View definition for fifteen_min_stacked_view?
 
 
 
 Here is some additional information:
 
 b4warmed3=# \d data_key
  Table public.data_key
         Column        |         Type          |               Modifiers
  ----------------------+-----------------------+------------------------------------------
  site | character varying(6)  |
  canopy   | character varying(24) |
  block| character(2)  |
  plot | character(2)  |
  measurement_interval | interval  |
  warming_treatment| character varying(24) |
  treatment_code   | character varying(24) |
  treatment_abbr   | character varying(24) |
  water_treatment  | character varying(24) |
  block_name   | character varying(24) |
  variable_name| character varying(24) |
  variable_channel | character varying(24) |
  variable_id  | character varying(24) | not null default
 NULL::character varying
 Indexes:
 data_key_pkey PRIMARY KEY, btree (variable_id)
 data_key_lower_idx btree (lower(block_name::text))
 data_key_lower_idx1 btree (lower(variable_channel::text))
 
 b4warmed3=# SELECT COUNT(*) FROM data_key;
  count
 -------
   4728
 (1 row)
 
 b4warmed3=# \d fifteen_min
   Table public.fifteen_min
       Column       |            Type             | Modifiers
 -------------------+-----------------------------+-----------
  rowid   | character varying(48)   | not null
  time2   | timestamp without time zone |
  timestamp   | timestamp without time zone |
  block_name  | character varying(8)|
  stat_name   | character varying(8)|
  table_name  | character varying(10)   |
  program | character varying(48)   |
  a_dc_avg1   | real|
  a_dc_avg2   | real|
  a_dc_avg3   | real|
  a_dc_avg4   | real|
  a_dif_avg1  | real|
  a_dif_avg2  | real|
  a_dif_avg3  | real|
  a_dif_avg4  | real|
  a_targettemp_avg1   | real|
  a_targettemp_avg2   | real|
  a_targettemp_avg3   | real|
  a_targettemp_avg4   | real|
  a_targettemp_avg5   | real|
  a_targettemp_avg6   | real|
  a_targettemp_avg7   | real|
  a_targettemp_avg8   | real|
  a_tc_avg1   | real|
  a_tc_avg10  | real|
  a_tc_avg11  | real|
  a_tc_avg12  | real|
  a_tc_avg2   | real|
  a_tc_avg3   | real|
  a_tc_avg4   | real|
  a_tc_avg5   | real|
  a_tc_avg6   | real|
  a_tc_avg7   | real|
  a_tc_avg8   | real|
  a_tc_avg9   | real|
  a_tc_std1   | real|
  a_tc_std10  | real|
  a_tc_std11  | real|
  a_tc_std12  | real|
  a_tc_std2   | real|
  a_tc_std3   | real|
  a_tc_std4   | real|
  a_tc_std5   | real|
  a_tc_std6   | real|
  a_tc_std7   | real|
  a_tc_std8   | real|
  a_tc_std9   | real|
  airtc_avg   | real|
  airtemp_avg | real|
  airtemp_max | real|
  airtemp_min | real|
  all_avgt| real|
  am25tref1   | real

Re: [GENERAL] Large number of rows in pg_type and slow gui (pgadmin) refresh

2013-01-04 Thread Igor Neyman


 -Original Message-
 From: Thomas Kellerer [mailto:spam_ea...@gmx.net]
 Sent: Thursday, January 03, 2013 12:31 PM
 To: pgsql-general@postgresql.org
 Subject: Re: Large number of rows in pg_type and slow gui (pgadmin)
 refresh
 
 Robert Klaus wrote on 03.01.2013 16:50:
  We have 36,000+ rows returned by  SELECT oid, format_type(oid,
  typtypmod) AS typname FROM pg_type.
 
  My manager says this is only a small number compared to what is
  expected by next summer.
 
 Why do you need so many types?
 That sounds like something in your design is not right.
 
 Thomas
 

Probably those are not types that Robert created explicitly.
There must be lots of tables/views (maybe lots of partitions) in the database.
Every table/view adds a couple of records to pg_type: one composite type for the 
table/view row, and one for the array of that row type.
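
That is easy to check (a sketch):

-- pg_type rows that are the row types of relations (tables, views, ...):
SELECT count(*) FROM pg_type t JOIN pg_class c ON c.reltype = t.oid;

-- breakdown by kind ('c' = composite; array types show up under 'b'):
SELECT typtype, count(*) FROM pg_type GROUP BY typtype ORDER BY 2 DESC;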

Regards,
Igor Neyman




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] unlogged tables

2012-12-07 Thread Igor Neyman
 -Original Message-
 From: Sergey Konoplev [mailto:gray...@gmail.com]
 Sent: Thursday, December 06, 2012 4:52 PM
 To: Igor Neyman
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] unlogged tables
 
 Hi,
 
 On Thu, Dec 6, 2012 at 7:08 AM, Igor Neyman iney...@perceptron.com
 wrote:
  Is there any way to change regular table to unlogged without
  dropping said table and recreating it as unlogged?
 
 AFAIK it is impossible currently.
 
 The best way to do such transformation that comes to my mind is:
 
 CREATE TABLE table1 (
 id bigserial PRIMARY KEY,
 data text
 );
 
 INSERT INTO table1 (data)
 SELECT 'bla' || i::text
 FROM generate_series(1, 10) AS i;
 
 SELECT * FROM table1;
 
 CREATE UNLOGGED TABLE tmp (LIKE table1 INCLUDING ALL);
 ALTER TABLE table1 INHERIT tmp;
 
 BEGIN;
 ALTER TABLE table1 RENAME TO table1_old;
 ALTER TABLE tmp RENAME TO table1;
 END;
 
 So new rows will be inserted into the new unlogged table and old rows
 will be available from the old one.
 
 INSERT INTO table1 (data)
 SELECT 'bla' || i::text
 FROM generate_series(11, 15) AS i;
 
 UPDATE table1 SET data = 'mla' || id::text WHERE id = 5;
 
 SELECT * FROM table1;
 
 And then all we need is move the data to the new table and finish with
 the old one.
 
 ALTER SEQUENCE table1_id_seq OWNED BY table1.id;
 
 BEGIN;
 INSERT INTO table1 SELECT * FROM table1_old;
 DROP TABLE table1_old CASCADE;
 END;
 
 SELECT * FROM table1;
 
 Correct me if I misunderstand something, please.
 
 
  Didn't find the answer in the docs.  Looks like alter table ... does
  not support unlogged.
 
 
 
  TIA,
 
  Igor Neyman
 
 
 
 --
 Sergey Konoplev
 Database and Software Architect
 http://www.linkedin.com/in/grayhemp

I was hoping that maybe hacking pg_catalog, like setting pg_class.relpersistence 
to 'u', would do the trick (or something like that).

BTW, there will be no other active connections, so there is no risk of 
needing to add/update/delete records in the table while changing it to unlogged.
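
For what it's worth, later releases added this directly, so on 9.5 and up no 
catalog hacking is needed at all:

-- PostgreSQL 9.5+:
ALTER TABLE table1 SET UNLOGGED;
-- and back again:
ALTER TABLE table1 SET LOGGED;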

Regards,
Igor Neyman




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] unlogged tables

2012-12-06 Thread Igor Neyman
Hello,

Is there any way to change regular table to unlogged without dropping said 
table and recreating it as unlogged?
Didn't find the answer in the docs.  Looks like alter table ... does not 
support unlogged.

TIA,
Igor Neyman


Re: [GENERAL] pg_listening_channels()

2012-11-30 Thread Igor Neyman
 -Original Message-
 From: Greg Sabino Mullane [mailto:g...@turnstep.com]
 Sent: Thursday, November 29, 2012 11:34 PM
 To: pgsql-general@postgresql.org
 Subject: Re: pg_listening_channels()
 
 
 On the contrary, it was very well discussed and designed. Why do you
 even care if the anyone is listening or not? Simply remove the check
 if anyone listens step and send the NOTIFY.
 

Well, I guess we disagree on this.

Why trash the system with NOTIFYs no one listens to?
Of course, like Tom Lane suggested, I could create a table similar to the now 
obsolete pg_listener and manage it from the client that LISTENs and gets 
notifications.

Also, what sense does the pg_listening_channels() function make if it only returns 
channels that I created (in my current session/connection)?
I don't need this function to know whether I issued LISTEN my_channel or not.

Regards,
Igor Neyman 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_listening_channels()

2012-11-29 Thread Igor Neyman
 -Original Message-
 From: Tom Lane [mailto:t...@sss.pgh.pa.us]
 Sent: Wednesday, November 28, 2012 6:13 PM
 To: Igor Neyman
 Cc: Greg Sabino Mullane; pgsql-general@postgresql.org
 Subject: Re: [GENERAL] pg_listening_channels()
 
 Igor Neyman iney...@perceptron.com writes:
  With PG 9.0 changes I lost ability to check if anyone is interested
 in the NOTIFY signal and payload I'm about to send.
  Seems like this change was not thought through completely.
 
 [ shrug... ] It was debated extensively and the advantages of the new
 implementation were deemed to outweigh the disadvantages.
 
   regards, tom lane

Tom,

Are you saying that these two features: attached payload and being able to find 
which channels are being listened to - are incompatible?  That they cannot 
coexist?

Regards,
Igor Neyman



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] set value var via execute

2012-11-29 Thread Igor Neyman
From: Peter Kroon [mailto:plakr...@gmail.com] 
Sent: Thursday, November 29, 2012 11:01 AM
To: pgsql-general@postgresql.org
Subject: set value var via execute

Is it possible to set the value of a var via execute?

drop table if exists __test;
create unlogged table __test(
id int
);

DO $$

DECLARE
v_holder int;
v_table text = 'table';
v_record_0 text[];
v_id int;

BEGIN

execute '
insert into __test(id)
select id from '||v_table||' order by random() limit 2
';
v_id = (select id from __test limit 1);

        -- begin: this fails --
        v_holder = execute 'select id from '||v_table||' order by random() limit 1';
        -- end: this fails --

v_record_0 := array(
SELECT id FROM table order by random() --limit 2
);

raise notice '%', v_record_0;
END;

$$ LANGUAGE plpgsql;


Peter,

Instead of:

v_holder = execute 'select id from '||v_table||' order by random() limit 1';

do this:

execute 'select id from '||v_table||' order by random() limit 1' INTO v_holder;
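
On 9.1 and up, format() with %I does the same while also quoting the 
identifier (a sketch):

execute format('select id from %I order by random() limit 1', v_table) INTO v_holder;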

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] When does CLUSTER occur?

2012-11-29 Thread Igor Neyman

From: Mike Christensen [mailto:m...@kitchenpc.com] 
Sent: Thursday, November 29, 2012 1:35 PM
To: stho...@optionshouse.com
Cc: Jeff Janes; Schnabel, Robert D.; pgsql-general@postgresql.org
Subject: Re: When does CLUSTER occur?


On Thu, Nov 29, 2012 at 10:28 AM, Shaun Thomas stho...@optionshouse.com wrote:
On 11/29/2012 12:20 PM, Jeff Janes wrote:
It would maintain an imperfect clustering, but still much better than
current behavior.

I thought about that, too. The imperfect clustering made me erase everything 
I'd written. If the clustering is imperfect, it's not really clustering. It 
would mean less random reads to restart the seek chain though, so it would be a 
perceptible gain. But it's still not real clustering until the order is 
maintained indefinitely.

So far as I know, that simply can't be done with MVCC. Especially on an 
insert-only table that's clustered on a column unrelated to insert order.


How is this implemented in MS SQL then?  Obviously, MS SQL supports real 
clustering and has MVCC..

--

The way MVCC is implemented in Postgres (not allowing in-place updates) obviously 
complicates things in regard to clustering.

As for MS SQL, here is how it implements clustering:
the index on which the table is clustered actually becomes a physical holder 
(for lack of a better term) of the table, meaning the complete table tuples 
are stored in the leaf blocks of the index it is clustered on.  There is no 
other place in the database where they are stored physically.

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_listening_channels()

2012-11-28 Thread Igor Neyman


From: Greg Sabino Mullane [g...@turnstep.com]
Sent: Wednesday, November 28, 2012 10:26 AM
To: pgsql-general@postgresql.org
Subject: Re: pg_listening_channels()

-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160


Igor Neyman asked:

[Postgres 9.2]
 How (if it's even possible) can I get listening channels
 for all sessions/ connections in the database?

No, this is not possible. You can only see items
from your own session.

- --

IMHO, that's a step back compared to the older implementation of LISTEN/NOTIFY.

Of course, allowing a payload to be attached to the NOTIFY signal (instead of being 
passed through some user-defined table) is a good thing.
But I (and probably many others) use the LISTEN/NOTIFY mechanism to notify a client 
program connected to the postgres database about changes made in the database.
The implementation prior to PG 9.0:

INTERESTED_CLIENT: LISTEN my_alert;
SOME_OTHER_PROGRAM: INSERTs/UPDATEs/DELETEs table_client_interested_in;
ON INSERT/UPDATE/DELETE TRIGGER: checks if anyone listens on the my_alert channel 
by querying the pg_listener table, sends the NOTIFY my_alert signal and inserts a 
message into user_message_table;
INTERESTED_CLIENT: upon receiving the NOTIFY signal, reads the message from 
user_message_table;
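
In trigger form, that check looked roughly like the sketch below 
(user_message_table and its msg column are hypothetical names; pg_listener 
only exists before 9.0):

CREATE OR REPLACE FUNCTION notify_if_listened() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM pg_listener WHERE relname = 'my_alert') THEN
        INSERT INTO user_message_table(msg) VALUES (TG_TABLE_NAME || ' changed');
        NOTIFY my_alert;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;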

With the PG 9.0 changes I lost the ability to check whether anyone is interested 
in the NOTIFY signal and payload I'm about to send.
It seems this change was not thought through completely.

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pg_listening_channels()

2012-11-27 Thread Igor Neyman
Hello,

In Postgresql 9.2 this function pg_listening_channels() returns the channels 
that the current session/connection listens to.

How (if it's even possible) can I get listening channels for all sessions/ 
connections in the database?
With older versions I could just query pg_listener WHERE relname = 
'listener_name'.  Not anymore; the table doesn't exist in newer versions.

Regards,
Igor Neyman


Re: [GENERAL] How to verify pg_dump files

2012-11-07 Thread Igor Neyman


 -Original Message-
 From: Gary [mailto:listgj...@yahoo.co.uk]
 Sent: Wednesday, November 07, 2012 8:02 AM
 To: pgsql-general@postgresql.org
 Subject: How to verify pg_dump files
 
 Can anyone suggest how I could verify that the files created by pg_dump
 are okay? They are being created for backup purposes, and the last
 thing I want to do is find out that the backups themselves are in some
 way corrupt.
 
 I know I can check the output of the command itself, but what if.. I
 don't know... if there are problems with the disc it writes to, or
 something like that. Is there any way to check whether the output file
 is valid in the sense that it is complete and syntactically correct?
 
 --
 GaryPlease do NOT send me 'courtesy' replies off-list.
 


The only 100% fool-proof test would be to restore from your backup files.
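
Short of a full restore, for custom- or tar-format dumps pg_restore can at 
least confirm that the archive's table of contents is readable; a partial 
sanity check, not proof the data is intact (the file name is hypothetical):

$ pg_restore --list mydb.dump > /dev/null && echo "archive TOC is readable"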

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres will not start due to corrupt index

2012-10-03 Thread Igor Neyman

From: Robert Sosinski [mailto:rsosin...@ticketevolution.com] 
Sent: Wednesday, October 03, 2012 10:54 AM
To: Merlin Moncure
Cc: pgsql-general@postgresql.org; Spike Grobstein
Subject: Re: Postgres will not start due to corrupt index

Hey Merlin,

Thanks.  Starting postgres with -P was something that I did not try.  Does 
postgres have any GIN or GIST system indexes though?

I would love to try it out, but the database has already been restored.  Will 
definitely keep this in mind for the future though.

Thanks again for the help,

-- 
Robert Sosinski


I wonder if there is a column in the pg catalog that indicates the type of an 
index.  I couldn't find one.
So, I ran the following SQL trying to find system indexes of gin or gist type:

select * from pg_indexes where schemaname = 'pg_catalog' and (indexdef like 
'%USING gist%' OR indexdef like '%USING gin%');

and it returned zero rows.

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres will not start due to corrupt index

2012-10-03 Thread Igor Neyman
 -Original Message-
 From: Tom Lane [mailto:t...@sss.pgh.pa.us]
 Sent: Wednesday, October 03, 2012 2:47 PM
 To: Igor Neyman
 Cc: Robert Sosinski; Merlin Moncure; pgsql-general@postgresql.org;
 Spike Grobstein
 Subject: Re: [GENERAL] Postgres will not start due to corrupt index
 
 Igor Neyman iney...@perceptron.com writes:
  I wonder if there is a column in pg catalog, that indicates the type
 of the index.  I couldn't find one.
 
 join relam to pg_am.oid
 
  So, I ran the following sql trying to find system indexes of gin or
 gist type:
 
 There aren't any.
 
   regards, tom lane

Tom, thank you.

In that case, this query does it:

select i.indexname, a.amname, i.tablename from pg_indexes i JOIN
(pg_class c join pg_am a ON (c.relam = a.oid) ) ON (i.indexname = c.relname)
WHERE i.schemaname = 'pg_catalog';

Regards, 
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running CREATE only on certain Postgres versions

2012-09-25 Thread Igor Neyman
 -Original Message-
 From: Daniele Varrazzo [mailto:daniele.varra...@gmail.com]
 Sent: Tuesday, September 25, 2012 11:26 AM
 To: Adrian Klaver
 Cc: David Johnston; Robert James; Igor Neyman; Postgres General
 Subject: Re: [GENERAL] Running CREATE only on certain Postgres versions
 
 On Tue, Sep 25, 2012 at 3:47 PM, Adrian Klaver
 adrian.kla...@gmail.com wrote:
 
  To elaborate:
  test= SELECT current_setting('server_version_num');
   current_setting
  -
   90009
 
 Yes, but knowing that, how does he run a statement only if version is e.g.
 >= 80400? Is there a better way than the proposed create/call/drop
 function before PG 9.0? (since 9.0 there is the DO statement).
 
 -- Daniele

For PG versions prior to 9.0 (without the DO statement) I wrote, and use 
extensively, this little function:

CREATE OR REPLACE FUNCTION exec_pgplsql_block(exec_string text)
RETURNS BOOLEAN
AS $THIS$
DECLARE
    lRet BOOLEAN;
BEGIN
    EXECUTE 'CREATE OR REPLACE FUNCTION any_block()
             RETURNS VOID
             AS $BODY$ ' || exec_string || ' $BODY$ LANGUAGE PLPGSQL;';
    PERFORM any_block();
    RETURN TRUE;
END;
$THIS$ LANGUAGE PLPGSQL;

which accepts as a parameter (exec_string) any anonymous PL/pgSQL block 
(what DO does in later versions),
creates a function with that block as its body, and executes it.
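
Usage looks like this (a sketch; the argument is any PL/pgSQL block, exactly 
what DO takes on 9.0+):

SELECT exec_pgplsql_block($$
BEGIN
    IF current_setting('server_version_num')::int < 80400 THEN
        RAISE NOTICE 'pre-8.4 server: run compatibility DDL here';
    END IF;
END;
$$);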

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Running CREATE only on certain Postgres versions

2012-09-24 Thread Igor Neyman
 -Original Message-
 From: Robert James [mailto:srobertja...@gmail.com]
 Sent: Monday, September 24, 2012 9:33 AM
 To: Postgres General
 Subject: Running CREATE only on certain Postgres versions
 
 I have some code which creates a function in Postgres, taken from
 http://wiki.postgresql.org/wiki/Array_agg .
 
 DROP AGGREGATE IF EXISTS array_agg(anyelement);
 CREATE AGGREGATE array_agg(anyelement) (
     SFUNC=array_append,
     STYPE=anyarray,
     INITCOND='{}'
 );
 
 The function was added in 8.4, and so the code fails when run on 8.4 or
 higher.
 
 How can I make the code cross-version compatible? For instance, how
 can I tell it to check the version, and only run if 8.3 or lower?   Or
 another way to make it cross-version?

Find your PG version with: 
SELECT version();

and continue accordingly...
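
On pre-9.0 servers there is no DO statement, so the branching usually has to 
happen client-side; e.g. from a shell script (a sketch):

VER=$(psql -At -c "SHOW server_version_num")
if [ "$VER" -lt 80400 ]; then
    psql -c "CREATE AGGREGATE array_agg(anyelement) (SFUNC=array_append, STYPE=anyarray, INITCOND='{}')"
fi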

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best free tool for relationship extraction

2012-09-13 Thread Igor Neyman
From: Alexander Gataric [mailto:gata...@usa.net] 
Sent: Thursday, September 13, 2012 12:52 PM
To: pgsql-general@postgresql.org
Subject: Best free tool for relationship extraction

I need to determine relationships between several tables. Is there a free tool 
to extract these from the catalog tables? Is there SQL that does this?

Thanks
Alex


Try this SQL:

WITH RECURSIVE FK_recursive(distance, child_table, parent_table, 
FK_constraint_name, unique_constraint_name,
  ON_DELETE, ON_UPDATE, is_deferrable, 
FK_path) AS (
SELECT 1, tc.table_name, ctu.table_name, ctu.constraint_name, 
rc.unique_constraint_name,
   rc.delete_rule, rc.update_rule, tc.is_deferrable, 
quote_ident(ctu.table_name)
FROM information_schema.constraint_table_usage ctu, 
information_schema.table_constraints tc, 
information_schema.referential_constraints rc
WHERE ctu.table_name = 'gp_part_space' and ctu.table_catalog = 'vector'
  and tc.constraint_name = ctu.constraint_name and tc.constraint_type = 
'FOREIGN KEY' and tc.constraint_catalog = 'vector'
  and ctu.constraint_name = rc.constraint_name
  UNION ALL
SELECT er.distance + 1, tc.table_name, ctu.table_name, ctu.constraint_name, 
rc.unique_constraint_name, 
   rc.delete_rule, rc.update_rule, tc.is_deferrable, er.FK_path || ' -> ' || 
quote_ident(ctu.table_name)
FROM FK_recursive er, information_schema.constraint_table_usage ctu, 
information_schema.table_constraints tc, 
information_schema.referential_constraints rc
WHERE er.child_table = ctu.table_name and ctu.table_catalog = 'vector'
  and tc.constraint_name = ctu.constraint_name and tc.constraint_type = 
'FOREIGN KEY' and tc.constraint_catalog = 'vector'
  and ctu.constraint_name = rc.constraint_name
  )
SELECT distance, child_table, parent_table, FK_constraint_name, 
unique_constraint_name, 
   ON_DELETE, ON_UPDATE, is_deferrable, FK_path || ' -> ' || 
quote_ident(child_table) AS FK_path
  FROM FK_recursive ORDER BY distance, parent_table;

If you get an error like this (possible on 8.4.5):

ERROR:  operator is not unique: smallint[] <@ smallint[]
LINE 1: select $1 <@ $2 and $2 <@ $1
                  ^
HINT:  Could not choose a best candidate operator. You might need to add 
explicit type casts.
QUERY:  select $1 <@ $2 and $2 <@ $1
CONTEXT:  SQL function _pg_keysequal during inlining

Then recompile the function:

SET search_path TO information_schema;
CREATE OR REPLACE FUNCTION _pg_keysequal(smallint[], smallint[]) RETURNS boolean
LANGUAGE sql IMMUTABLE
AS 'select $1 operator(pg_catalog.<@) $2 and $2 operator(pg_catalog.<@) $1';
SET search_path TO public;


And then re-run original recursive query.

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] information_schema.referential_constraints broken?

2012-07-27 Thread Igor Neyman
This query:

select * from information_schema.referential_constraints;

under PG 8.4.5 (Windows platform) produces this error message:

ERROR:  operator is not unique: smallint[] <@ smallint[]
LINE 1: select $1 <@ $2 and $2 <@ $1
                  ^
HINT:  Could not choose a best candidate operator. You might need to add 
explicit type casts.
QUERY:  select $1 <@ $2 and $2 <@ $1
CONTEXT:  SQL function _pg_keysequal during inlining

** Error **

ERROR: operator is not unique: smallint[] <@ smallint[]
SQL state: 42725
Hint: Could not choose a best candidate operator. You might need to add 
explicit type casts.
Context: SQL function _pg_keysequal during inlining


I don't have more recent 8.4 releases to try it on.
It works fine on PG 9.1.3.

Did anyone else experienced this problem?

Regards,
Igor Neyman


Re: [GENERAL] information_schema.referential_constraints broken?

2012-07-27 Thread Igor Neyman
Answering my own question.
Replacing original definition of _pg_keysequal (in information_schema):

CREATE FUNCTION _pg_keysequal(smallint[], smallint[]) RETURNS boolean
LANGUAGE sql IMMUTABLE  -- intentionally not STRICT, to allow inlining
AS 'select $1 <@ $2 and $2 <@ $1';

with this one:

CREATE FUNCTION _pg_keysequal(smallint[], smallint[]) RETURNS boolean
LANGUAGE sql IMMUTABLE  -- intentionally not STRICT, to allow inlining
AS 'select $1 operator(pg_catalog.<@) $2 and $2 operator(pg_catalog.<@) $1';

makes queries like:

select * from information_schema.referential_constraints;

against information_schema.referential_constraints work without errors.

Sorry, for the noise.
Igor Neyman



From: Igor Neyman
Sent: Friday, July 27, 2012 9:27 AM
To: pgsql-general@postgresql.org
Subject: information_schema.referential_constraints broken?

This query:

select * from information_schema.referential_constraints;

under PG 8.4.5 (Windows platform) produces this error message:

ERROR:  operator is not unique: smallint[] <@ smallint[]
LINE 1: select $1 <@ $2 and $2 <@ $1
                  ^
HINT:  Could not choose a best candidate operator. You might need to add 
explicit type casts.
QUERY:  select $1 <@ $2 and $2 <@ $1
CONTEXT:  SQL function _pg_keysequal during inlining

** Error **

ERROR: operator is not unique: smallint[] <@ smallint[]
SQL state: 42725
Hint: Could not choose a best candidate operator. You might need to add 
explicit type casts.
Context: SQL function _pg_keysequal during inlining


I don't have more recent 8.4 releases to try it on.
It works fine on PG 9.1.3.

Did anyone else experienced this problem?

Regards,
Igor Neyman


Re: [GENERAL] TYPE TABLE OF NUMBER

2012-06-05 Thread Igor Neyman


 -Original Message-
 From: utsav [mailto:utsav.ps...@tcs.com]
 Sent: Monday, June 04, 2012 1:00 AM
 To: pgsql-general@postgresql.org
 Subject: Re: TYPE TABLE OF NUMBER
 
 No. With that approach we would have to define a domain and then use it. Our
 requirement is that the application passes an array of integers, which we then
 have to use in PostgreSQL functions.
 
 --
 View this message in context:
 http://postgresql.1045698.n5.nabble.com/TYPE-TABLE-OF-NUMBER-
 tp5710983p576.html
 Sent from the PostgreSQL - general mailing list archive at Nabble.com.

PostgreSQL already has integer array types: int4[] or int2[] (or whatever 
integer length you need; internally _int4 and _int2).
No need to create a new type.
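
For example (a sketch; unnest() requires 8.4+):

-- the application can pass int[] directly:
CREATE OR REPLACE FUNCTION sum_ids(int[]) RETURNS bigint AS $$
    SELECT COALESCE(sum(x), 0) FROM unnest($1) AS t(x);
$$ LANGUAGE sql;

SELECT sum_ids(ARRAY[1, 2, 3]);  -- returns 6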

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Stability in Windows?

2012-02-24 Thread Igor Neyman
The 125_connections issue is a problem related to older PG versions
(and in my own experience, the number is closer to 230, not 125), and I
believe it was fixed in newer versions.

Besides, it's much more efficient to use connection pooling (PgBouncer,
PgPool) when dealing with that many connections.

 

For the last 5 years we have been running PG on many, many (hundreds of) servers
under Windows, and we have no complaints regarding PG stability.

 

Regards,

Igor Neyman

 

From: Serge Fonville [mailto:serge.fonvi...@gmail.com] 
Sent: Friday, February 24, 2012 11:07 AM
To: mgo...@isstrucksoftware.net
Cc: Durumdara; pgsql-general@postgresql.org
Subject: Re: Stability in Windows?

 

Doesn't
http://wiki.postgresql.org/wiki/Running_%26_Installing_PostgreSQL_On_Native_Windows#I_cannot_run_with_more_than_about_125_connections_at_once.2C_despite_having_capable_hardware
apply?

 

Kind regards/met vriendelijke groet,

 

Serge Fonville

 

http://www.sergefonville.nl

Convince Google!!
They need to add GAL support on Android (star to agree)
http://code.google.com/p/android/issues/detail?id=4602



2012/2/24 Serge Fonville serge.fonvi...@gmail.com




Kind regards/met vriendelijke groet,

 

Serge Fonville

 

http://www.sergefonville.nl

Convince Google!!
They need to add GAL support on Android (star to agree)
http://code.google.com/p/android/issues/detail?id=4602



2012/2/24 mgo...@isstrucksoftware.net

 

We are using it on a rather beefy server with no problems with a Win32
client/server app.  There are additional things you can do to tune the
database.  I've not seen any stability problems.  Remember it's been
several years since version 8 came out and the current version is 9.1.2.

Michael Gould
Intermodal Software Solutions, LLC
904-226-0978



 Original Message 
Subject: [GENERAL] Stability in Windows?
From: Durumdara durumd...@gmail.com
Date: Fri, February 24, 2012 8:39 am
To: pgsql-general@postgresql.org

Hi!



We planned to port some very old DBASE databases into PGSQL.


But somebody said in a developer list that he tried PGSQL (8.x) and
it was very unstable in Windows (and that it had problems when many
users used it).


Other people also said that they used PGSQL only on Linux - and
there were no problems with it, only some patches needed for
speeding up writes...


What is your experience in this area?


Do you have the same experience on Windows?


The number of users is from 20 up to 100 (concurrent).


Thanks for your every idea, help, link, information about this.


Regards:
  dd



--
Sent via pgsql-general mailing list
(pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

 

 



Re: [GENERAL] Vacuum and Large Objects

2012-01-06 Thread Igor Neyman
 -Original Message-
 From: Stefan Keller [mailto:sfkel...@gmail.com]
 Sent: Friday, January 06, 2012 1:12 AM
 To: Igor Neyman
 Cc: Simon Windsor; pgsql-general@postgresql.org
 Subject: Re: Vacuum and Large Objects
 
 Hi Igor
 2011/12/16 Igor Neyman iney...@perceptron.com wrote:  But I think,
 your problem is right here:
 
   running VACUUM FULL  pg_largeobject
 
  If you are running VACUUM FULL ... on the table, you should follow
 it with the REINDEX TABLE ..., at least on PG versions prior to 9.0.
 
 I'm pretty sure that VACUUM FULL builds new indexes. That's at least how
 I understand the docs, especially the first tip here:
 http://www.postgresql.org/docs/9.0/static/routine-vacuuming.html
 
 Yours, Stefan
 
 
 2011/12/16 Igor Neyman iney...@perceptron.com:
  From: Simon Windsor [mailto:simon.wind...@cornfield.me.uk]
  Sent: Wednesday, December 14, 2011 3:02 PM
  To: pgsql-general@postgresql.org
  Subject: Vacuum and Large Objects
 
  Hi
 
  I am having problems recovering storage from a Postgres 9.05 database
 that is used to hold large XML blocks for a week, before they are
 archived off line.
 
  The main tables are partitioned in daily partitions, and these are
 easy to manage, however the DB keeps growing despite using Vacuum
 (daily at 0700) and autovacuum (this does not seem to run, although the
 process is running). The system is insert only, and partitions are
 dropped when over 7 days of age.
 
  I believe the issue lies with pg_largeobject, it is split between 88
 files of approx. 1G each.
 
  The Postgres settings are default, EXCEPT
 
  grep ^[a-z] postgresql.conf
  listen_addresses = '*'  # what IP address(es) to listen on;
  port = 5432 # (change requires restart)
  max_connections = 1000  # (change requires restart)
  shared_buffers = 256MB  # min 128kB work_mem =
 4MB
  # min 64kB maintenance_work_mem = 256MB    # min 1MB
  vacuum_cost_delay = 20ms    # 0-100 milliseconds
  checkpoint_segments = 32    # in logfile segments, min 1,
  16MB each checkpoint_completion_target = 0.9  # checkpoint target
  duration, 0.0 - 1.0 checkpoint_warning = 60s    # 0
  disables archive_mode = off  # allows archiving to be
 done
  constraint_exclusion = partition    # on, off, or partition
  log_destination = 'stderr'      # Valid values are
  combinations of logging_collector = on  # Enable capturing of
  stderr and csvlog silent_mode = on    # Run
 server silently.
  log_checkpoints = on
  log_line_prefix = '%t %d %u '   # special values:
  log_statement = 'none'  # none, ddl, mod, all
  track_activities = on track_counts = on autovacuum =
 on
  # Enable autovacuum subprocess?  'on'
  log_autovacuum_min_duration = 250   # -1 disables, 0 logs all
  actions and autovacuum_max_workers = 3  # max number of
  autovacuum subprocesses autovacuum_naptime = 3min   #
 time
  between autovacuum runs autovacuum_vacuum_threshold = 500   # min
  number of row updates before autovacuum_analyze_threshold = 100
 #
  min number of row updates before autovacuum_vacuum_scale_factor =
 0.1
  # fraction of table size before vacuum
 autovacuum_analyze_scale_factor
  = 0.05  # fraction of table size before analyze
  autovacuum_vacuum_cost_delay = 5ms  # default vacuum cost delay
  for autovacuum_vacuum_cost_limit = 200  # default vacuum cost
  limit for statement_timeout = 0   # in milliseconds,
 0 is disabled datestyle = 'iso, dmy'
  lc_messages = 'en_GB.UTF-8' # locale for system
  error message lc_monetary = 'en_GB.UTF-8' #
 locale
  for monetary formatting lc_numeric = 'en_GB.UTF-
 8'
  # locale for number formatting lc_time = 'en_GB.UTF-
 8'
  # locale for time formatting default_text_search_config =
 'pg_catalog.english'
 
  Besides running VACUUM FULL  pg_largeobject;, is there a way I can
 get autovacuum to start and clear this up?
 
  All the best
 
  Simon
 
  Simon Windsor
  Eml: simon.wind...@cornfield.org.uk
  Tel: 01454 617689
  Mob: 07590 324560
 
  There is nothing in the world that some man cannot make a little
 worse and sell a little cheaper, and he who considers price only is
 that man's lawful prey.
 
 
 
 
  I might be a bit late in this discussion.
  But I think, your problem is right here:
 
   running VACUUM FULL  pg_largeobject
 
  If you are running VACUUM FULL ... on the table, you should follow
 it with the REINDEX TABLE ..., at least on PG versions prior to 9.0.
 
  Regards,
  Igor Neyman
 
  --
  Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To
  make changes to your subscription:
  http://www.postgresql.org/mailpref/pgsql-general

Please disregard my previous message.
I missed the Postgres 9.0.5 mention in the original post.

Regards,
Igor Neyman

-- 
Sent via pgsql-general

Re: [GENERAL] Vacuum and Large Objects

2012-01-06 Thread Igor Neyman
Stefan,

 -Original Message-
 From: Stefan Keller [mailto:sfkel...@gmail.com]
 Sent: Friday, January 06, 2012 1:12 AM
 To: Igor Neyman
 Cc: Simon Windsor; pgsql-general@postgresql.org
 Subject: Re: Vacuum and Large Objects
 
 Hi Igor
 2011/12/16 Igor Neyman iney...@perceptron.com wrote:  But I think,
 your problem is right here:
 
   running VACUUM FULL  pg_largeobject
 
  If you are running VACUUM FULL ... on the table, you should follow
 it with the REINDEX TABLE ..., at least on PG versions prior to 9.0.
 
 I'm pretty sure that VACUUM FULL builds new indexes. That's at least how
 I understand the docs, especially the first tip here:
 http://www.postgresql.org/docs/9.0/static/routine-vacuuming.html
 
 Yours, Stefan
 
 
 2011/12/16 Igor Neyman iney...@perceptron.com:
  From: Simon Windsor [mailto:simon.wind...@cornfield.me.uk]
  Sent: Wednesday, December 14, 2011 3:02 PM
  To: pgsql-general@postgresql.org
  Subject: Vacuum and Large Objects
 
  Hi
 
  I am having problems recovering storage from a Postgres 9.05 database
 that is used to hold large XML blocks for a week, before they are
 archived off line.
 
  The main tables are partitioned in daily partitions, and these are
 easy to manage, however the DB keeps growing despite using Vacuum
 (daily at 0700) and autovacuum (this does not seem to run, although the
 process is running). The system is insert only, and partitions are
 dropped when over 7 days of age.
 
  I believe the issue lies with pg_largeobject, it is split between 88
 files of approx. 1G each.
 
  The Postgres settings are default, EXCEPT
 
  grep ^[a-z] postgresql.conf
  listen_addresses = '*'  # what IP address(es) to listen on;
  port = 5432 # (change requires restart)
  max_connections = 1000  # (change requires restart)
  shared_buffers = 256MB  # min 128kB work_mem =
 4MB
  # min 64kB maintenance_work_mem = 256MB    # min 1MB
  vacuum_cost_delay = 20ms    # 0-100 milliseconds
  checkpoint_segments = 32    # in logfile segments, min 1,
  16MB each checkpoint_completion_target = 0.9  # checkpoint target
  duration, 0.0 - 1.0 checkpoint_warning = 60s    # 0
  disables archive_mode = off  # allows archiving to be
 done
  constraint_exclusion = partition    # on, off, or partition
  log_destination = 'stderr'      # Valid values are
  combinations of logging_collector = on  # Enable capturing of
  stderr and csvlog silent_mode = on    # Run
 server silently.
  log_checkpoints = on
  log_line_prefix = '%t %d %u '   # special values:
  log_statement = 'none'  # none, ddl, mod, all
  track_activities = on track_counts = on autovacuum =
 on
  # Enable autovacuum subprocess?  'on'
  log_autovacuum_min_duration = 250   # -1 disables, 0 logs all
  actions and autovacuum_max_workers = 3  # max number of
  autovacuum subprocesses autovacuum_naptime = 3min   #
 time
  between autovacuum runs autovacuum_vacuum_threshold = 500   # min
  number of row updates before autovacuum_analyze_threshold = 100
 #
  min number of row updates before autovacuum_vacuum_scale_factor =
 0.1
  # fraction of table size before vacuum
 autovacuum_analyze_scale_factor
  = 0.05  # fraction of table size before analyze
  autovacuum_vacuum_cost_delay = 5ms  # default vacuum cost delay
  for autovacuum_vacuum_cost_limit = 200  # default vacuum cost
  limit for statement_timeout = 0   # in milliseconds,
 0 is disabled datestyle = 'iso, dmy'
  lc_messages = 'en_GB.UTF-8' # locale for system
  error message lc_monetary = 'en_GB.UTF-8' #
 locale
  for monetary formatting lc_numeric = 'en_GB.UTF-
 8'
  # locale for number formatting lc_time = 'en_GB.UTF-
 8'
  # locale for time formatting default_text_search_config =
 'pg_catalog.english'
 
  Besides running VACUUM FULL  pg_largeobject;, is there a way I can
 get autovacuum to start and clear this up?
 
  All the best
 
  Simon
 
  Simon Windsor
  Eml: simon.wind...@cornfield.org.uk
  Tel: 01454 617689
  Mob: 07590 324560
 
  There is nothing in the world that some man cannot make a little
 worse and sell a little cheaper, and he who considers price only is
 that man's lawful prey.
 
 
 
 
  I might be a bit late in this discussion.
  But I think, your problem is right here:
 
   running VACUUM FULL  pg_largeobject
 
  If you are running VACUUM FULL ... on the table, you should follow
 it with the REINDEX TABLE ..., at least on PG versions prior to 9.0.
 
  Regards,
  Igor Neyman
 
  --
  Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To
  make changes to your subscription:
  http://www.postgresql.org/mailpref/pgsql-general

I qualified my reply with PG version specifics, since the original poster didn't 
clearly state which Postgres version he's using.

Regards,
Igor Neyman

Re: [GENERAL] Vacuum and Large Objects

2011-12-16 Thread Igor Neyman
From: Simon Windsor [mailto:simon.wind...@cornfield.me.uk] 
Sent: Wednesday, December 14, 2011 3:02 PM
To: pgsql-general@postgresql.org
Subject: Vacuum and Large Objects

Hi

I am having problems recovering storage from a Postgres 9.0.5 database that is 
used to hold large XML blocks for a week, before they are archived off line.

The main tables are partitioned into daily partitions, and these are easy to 
manage; however, the DB keeps growing despite a daily VACUUM (at 07:00) and 
autovacuum (which does not seem to run, although the autovacuum process is 
running). The system is insert-only, and partitions are dropped when over 7 
days old.

I believe the issue lies with pg_largeobject; it is split across 88 files of 
approx. 1 GB each.

The Postgres settings are default, EXCEPT

grep ^[a-z] postgresql.conf
listen_addresses = '*'  # what IP address(es) to listen on;
port = 5432 # (change requires restart)
max_connections = 1000  # (change requires restart)
shared_buffers = 256MB  # min 128kB
work_mem = 4MB      # min 64kB
maintenance_work_mem = 256MB    # min 1MB
vacuum_cost_delay = 20ms    # 0-100 milliseconds
checkpoint_segments = 32    # in logfile segments, min 1, 16MB each
checkpoint_completion_target = 0.9  # checkpoint target duration, 0.0 - 1.0
checkpoint_warning = 60s    # 0 disables
archive_mode = off  # allows archiving to be done
constraint_exclusion = partition    # on, off, or partition
log_destination = 'stderr'      # Valid values are combinations of
logging_collector = on  # Enable capturing of stderr and csvlog
silent_mode = on    # Run server silently.
log_checkpoints = on
log_line_prefix = '%t %d %u '   # special values:
log_statement = 'none'  # none, ddl, mod, all
track_activities = on
track_counts = on
autovacuum = on # Enable autovacuum subprocess?  'on'
log_autovacuum_min_duration = 250   # -1 disables, 0 logs all actions and
autovacuum_max_workers = 3  # max number of autovacuum subprocesses
autovacuum_naptime = 3min   # time between autovacuum runs
autovacuum_vacuum_threshold = 500   # min number of row updates before
autovacuum_analyze_threshold = 100  # min number of row updates before
autovacuum_vacuum_scale_factor = 0.1    # fraction of table size before vacuum
autovacuum_analyze_scale_factor = 0.05  # fraction of table size before analyze
autovacuum_vacuum_cost_delay = 5ms  # default vacuum cost delay for
autovacuum_vacuum_cost_limit = 200  # default vacuum cost limit for
statement_timeout = 0   # in milliseconds, 0 is disabled
datestyle = 'iso, dmy'
lc_messages = 'en_GB.UTF-8' # locale for system error 
message
lc_monetary = 'en_GB.UTF-8' # locale for monetary formatting
lc_numeric = 'en_GB.UTF-8'  # locale for number formatting
lc_time = 'en_GB.UTF-8' # locale for time formatting
default_text_search_config = 'pg_catalog.english'

Besides running VACUUM FULL pg_largeobject;, is there a way I can get 
autovacuum to start and clear this up?

All the best

Simon 

Simon Windsor
Eml: simon.wind...@cornfield.org.uk
Tel: 01454 617689
Mob: 07590 324560

There is nothing in the world that some man cannot make a little worse and 
sell a little cheaper, and he who considers price only is that man's lawful 
prey.




I might be a bit late in this discussion.
But I think, your problem is right here:

 running VACUUM FULL pg_largeobject

If you are running VACUUM FULL ... on the table, you should follow it with 
the REINDEX TABLE ..., at least on PG versions prior to 9.0.
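
For illustration, the maintenance sequence would look like this (a sketch; on 
9.0 and later, VACUUM FULL rewrites the table and its indexes, so the REINDEX 
step is only needed on the older versions):

VACUUM FULL pg_largeobject;
REINDEX TABLE pg_largeobject;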

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Rules going away

2011-09-30 Thread Igor Neyman
 -Original Message-
 From: Gregg Jaskiewicz [mailto:gryz...@gmail.com]
 Sent: Friday, September 30, 2011 5:18 AM
 To: Igor Neyman
 Cc: Ondrej Ivanič; pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Rules going away
 
 speaking of DO INSTEAD, for insert/update case. Try using RETURNING
 with that and rules ;) Good luck

Well, I designed it for a specific case, and in my application I had no need 
for the RETURNING clause.
That's why I'm saying it works perfectly for me, but I'm not trying to 
generalize, unlike those who say Rules are evil and should never be used.

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Rules going away

2011-09-29 Thread Igor Neyman
 -Original Message-
 From: Ondrej Ivanič [mailto:ondrej.iva...@gmail.com]
 Sent: Wednesday, September 28, 2011 6:47 PM
 To: pgsql-general@postgresql.org
 Subject: Re: Rules going away
 
 Hi,
 
  folks, don't use RULES! use triggers -- and as much as possible, keep
  triggers simple, short, and to the point (simple validation, custom
  RI, auditing/logging, etc).
 
 I like them :). 'DO INSTEAD' rules are great for partitioning so you
 can insert (or update) to parent table and 'DO INSTEAD' rule takes
 care about the rest.
 
 --
 Ondrej Ivanic
 (ondrej.iva...@gmail.com)

Years ago (PG version 8.2), I was working on implementing partitioned tables, 
and evaluated whether to use triggers or rules to automagically redirect 
inserts/updates/deletes from the main table to the appropriate partition based 
on the value of the partitioning key.

Unfortunately, I didn't document my research, but the result was that triggers 
were unusable for my purposes, and I ended up with rules calling functions in 
which dynamic SQL decides which partition to use for each insert/update/delete 
(on selects, at least, PG takes care of choosing the proper partition, 
provided the constraint_exclusion configuration parameter is set properly).
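
A minimal sketch of that approach, with hypothetical table and column names 
(the real schema was never posted). Note that on 8.2 EXECUTE ... USING is not 
available, so the values would have to be inlined with quote_literal() instead:

CREATE OR REPLACE FUNCTION route_insert(p_id integer, p_logged date,
                                        p_payload text)
RETURNS void AS $$
BEGIN
    -- Build the target partition name from the partitioning key,
    -- then insert the row there via dynamic SQL.
    EXECUTE 'INSERT INTO measurements_' || to_char(p_logged, 'YYYYMMDD')
         || ' VALUES ($1, $2, $3)'
      USING p_id, p_logged, p_payload;
END;
$$ LANGUAGE plpgsql;

CREATE RULE measurements_insert AS
    ON INSERT TO measurements
    DO INSTEAD SELECT route_insert(NEW.id, NEW.logged, NEW.payload);

As noted elsewhere in the thread, one price of this pattern is that RETURNING 
does not work with such rules.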

These rules (and functions they call) work perfectly for me all these years in 
many, many, many installations.
So, until PG takes care of the problem I'm solving with the rules (implementing 
complete partitioning feature, and not just on selects), please leave rules 
where they are, they definitely have their use when properly implemented with 
specific purpose.

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql problem with update double precision

2011-08-05 Thread Igor Neyman


 -Original Message-
 From: Condor [mailto:con...@stz-bg.com]
 Sent: Friday, August 05, 2011 6:49 AM
 To: pgsql-general@postgresql.org
 Subject: Postgresql problem with update double precision
 
 
 Hello ppl,
 for a few years I have had a problem when updating double precision fields. I
 have a table and a few double precision columns; here is an example:
 
 sumall double precision,
 sumin double precision,
 
 My PHP script does:
 
 $get = 2.40
 
 and sql code is:
 
 UPDATE table1 SET sumall = sumall + $get WHERE id = 1 AND rd =
 CURRENT_DATE;
 
 
 When I browse the table, I sometimes see incorrect values like:
 
 955.5998
 
 it should be 955.60 after these updates ... some days it's fine, some
 days the value is incorrect.
 
 I have had this problem since version 7 of PostgreSQL; I'm now using 9.0.3.
 
 Does anyone know what the problem might be, and why the records are sometimes
 fine and sometimes not?
 
 --
 Regards,
 Condor

double precision is an imprecise, binary floating-point data type; that's why 
you see what you see.

If you want to avoid this problem, switch to NUMERIC(precision, scale), which 
is an exact data type.
Alter the type of your double precision columns.
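
For example, a minimal sketch using the column names from the post (the 
precision and scale here are arbitrary and should be chosen to fit your data):

ALTER TABLE table1
    ALTER COLUMN sumall TYPE numeric(12,2),
    ALTER COLUMN sumin  TYPE numeric(12,2);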

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] query to get the list of key (reserved) words?

2011-08-04 Thread Igor Neyman


 -Original Message-
 From: Bill Moran [mailto:wmo...@potentialtech.com]
 Sent: Thursday, August 04, 2011 8:53 AM
 To: pgsql-general@postgresql.org
 Subject: query to get the list of key (reserved) words?
 
 
 I'm in the unenviable position of needing to check various input to
 ensure that it doesn't contain any PostgreSQL/SQL key words/reserved
 words.
 
 The initial implementation simply made a copy of this table:
 http://www.postgresql.org/docs/8.3/static/sql-keywords-
 appendix.html#KEYWORDS-TABLE
 into a static array in the code.  Obviously, this is non-optimal
 because it becomes a manual chore to ensure the list is up to date
 any time new PG releases are made.
 
 Is there a pg_* or other table in the database that I can query for
 this
 list?  Or some other automated method that can be employed?
 
 --
 Bill Moran
 http://www.potentialtech.com
 http://people.collaborativefusion.com/~wmoran/

Use pg_get_keywords(OUT word text, OUT catcode char, OUT catdesc text)
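
For example (pg_get_keywords() is available in 8.4 and later; catcode 'R' 
marks fully reserved words):

SELECT word, catdesc
  FROM pg_get_keywords()
 WHERE catcode = 'R'
 ORDER BY word;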

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Constraint to ensure value does NOT exist in another table?

2011-06-16 Thread Igor Neyman


-Original Message-
From: Mike Christensen [mailto:m...@kitchenpc.com] 
Sent: Thursday, June 16, 2011 1:05 AM
To: pgsql-general@postgresql.org
Subject: Constraint to ensure value does NOT exist in another table?

I know I can setup a FK constraint to make sure Table1.ColA exists in
Table2.Key, however what if I want to do the reverse?

I want to ensure Table1.ColA does NOT exist in Table2.Key..  Can I do
this with any sort of CHECK constraint, trigger, custom function, etc?
 Thanks!

Mike


A trigger (with a corresponding trigger function) will definitely do the
job.
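
A minimal sketch, with hypothetical table and column names matching your 
description. Note that without additional locking such a check can race with 
concurrent inserts into Table2:

CREATE OR REPLACE FUNCTION table1_check_cola() RETURNS trigger AS $$
BEGIN
    -- Reject the row if the value already exists in table2.key.
    IF EXISTS (SELECT 1 FROM table2 WHERE key = NEW.cola) THEN
        RAISE EXCEPTION 'value % already exists in table2.key', NEW.cola;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table1_cola_not_in_table2
    BEFORE INSERT OR UPDATE ON table1
    FOR EACH ROW EXECUTE PROCEDURE table1_check_cola();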

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Rearranging simple where clauses

2011-05-05 Thread Igor Neyman


-Original Message-
From: Michael Graham [mailto:mgra...@bloxx.com] 
Sent: Wednesday, May 04, 2011 11:59 AM
To: pgsql-general@postgresql.org
Subject: Re: Rearranging simple where clauses

On Wed, 2011-05-04 at 11:49 -0400, Tom Lane wrote:
 Well, you failed to show us any concrete examples of the cases you
 were looking at, but no I don't think the planner necessarily likes
 all the constants on one side.  Most likely the win cases are where
 one side of a WHERE-condition operator exactly matches an index, so
 you'd need to be looking for places where rearrangement could make
 that happen. 

The reason I never showed you any is that I don't have any; I was just 
curious. But yeah, making one side match an index exactly is probably the 
biggest win.

I.N.

I think it will probably be the only win, not just the biggest: sometimes big, 
sometimes small.

But what if there is more than one index based on the column in question? 
Which one is the optimizer supposed to satisfy by rearranging the where clause?
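
An illustrative example of the kind of rewrite being discussed, with a 
hypothetical table t indexed on its ts column:

-- Cannot use a plain index on ts: the column is buried inside an expression.
SELECT * FROM t WHERE ts - interval '1 day' > now();

-- Can use the index: the bare column sits alone on one side of the operator.
SELECT * FROM t WHERE ts > now() + interval '1 day';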

Regards,
Igor Neyman


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] A join of 2 tables with sum(column) > 30

2011-03-16 Thread Igor Neyman
 

 -Original Message-
 From: Alexander Farber [mailto:alexander.far...@gmail.com] 
 Sent: Tuesday, March 15, 2011 5:45 PM
 To: pgsql-general@postgresql.org
 Subject: Re: A join of 2 tables with sum(column) > 30
 
 And same for a simple select query from 1 table (w/o join):
 
 # select id from pref_match where sum(completed) > 30 group by id;
 ERROR:  aggregates not allowed in WHERE clause
 LINE 1: select id from pref_match where sum(completed) > 30 group by...
                                          ^
 

You should use a HAVING clause (not WHERE) for an aggregate as a condition:

Select id, sum(col1) from tab
Where id > 10
Group by id
Having sum(col1) > 30;

Spend some time reading basic SQL docs/books - it'll help you
tremendously.

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


<    1   2   3   >