Re: [GENERAL] Exclusive row locks not released

2012-01-19 Thread Mark van Leeuwen

On 20/01/2012 4:40 PM, Tom Lane wrote:

Mark van Leeuwen  writes:

I have a case where exclusive row locks have been placed on a table and
I don't know what process holds the locks or how they might be released.
The locks are still there even after I have restarted the database.

Uncommitted prepared transaction, perhaps?  Look into pg_prepared_xacts.

regards, tom lane

Yes, that was it. I don't have much experience with PostgreSQL - I had 
expected all locks to be released by stopping the database.


I used ROLLBACK PREPARED transaction_id to remove the locks.
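For readers hitting the same issue, the steps look roughly like this (a sketch; the gid value is whatever pg_prepared_xacts actually reports):

```sql
-- List prepared (two-phase) transactions that were never committed;
-- these survive server restarts and keep holding their locks.
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts;

-- Roll back the offending one by its global identifier (gid).
ROLLBACK PREPARED 'the-gid-reported-above';
```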

Thanks for your help,
Mark

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Exclusive row locks not released

2012-01-19 Thread Tom Lane
Mark van Leeuwen  writes:
> I have a case where exclusive row locks have been placed on a table and 
> I don't know what process holds the locks or how they might be released.

> The locks are still there even after I have restarted the database. 

Uncommitted prepared transaction, perhaps?  Look into pg_prepared_xacts.

regards, tom lane



[GENERAL] Exclusive row locks not released

2012-01-19 Thread Mark van Leeuwen

Hi,

I have a case where exclusive row locks have been placed on a table and 
I don't know what process holds the locks or how they might be released.


The locks are still there even after I have restarted the database. 
Rebooting the server also made no difference.


I am running the latest pg version 9.1.2-1.

Here is the query I used to show the locks:
select t.relname,l.locktype,page,virtualtransaction,pid,mode,granted 
from pg_locks l, pg_stat_all_tables t where l.relation=t.relid order by 
relation asc;


Here are the locks (excuse formatting), table name is EJB__TIMER__TBL:
     relname     | locktype | page | virtualtransaction | pid |       mode       | granted
-----------------+----------+------+--------------------+-----+------------------+---------
 pg_class        | relation |      | 2/633961           |     | AccessShareLock  | true
 pg_index        | relation |      | 2/633961           |     | AccessShareLock  | true
 pg_namespace    | relation |      | 2/633961           |     | AccessShareLock  | true
 EJB__TIMER__TBL | relation |      | -1/1761142         |     | RowExclusiveLock | true
 EJB__TIMER__TBL | relation |      | -1/1758118         |     | RowExclusiveLock | true


According to pg_catalog.pg_stat_activity, there are no other connections 
to the database.


Suggestions?

Thanks
Mark




Re: [GENERAL] planner, newly added records and most common values

2012-01-19 Thread Andrew Sullivan
On Thu, Jan 19, 2012 at 05:36:26PM -0800, Jeff Amiel wrote:

> I COULD run ANALYZE after loading the file... but there is no guarantee that 
> the file I just loaded will end up in the most-common-values list, and I end 
> up with a bad plan.
> 

Sounds like you need to SET STATISTICS higher for that column and then
run ANALYZE.  Have you tried that?
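Concretely, a sketch (the table and column names are placeholders for the OP's schema, and 1000 is just an illustrative target):

```sql
-- Raise the per-column statistics target so more values make it into
-- the most-common-values list, then re-gather statistics.
ALTER TABLE loads ALTER COLUMN file_id SET STATISTICS 1000;
ANALYZE loads;
```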

A

-- 
Andrew Sullivan
a...@crankycanuck.ca



[GENERAL] planner, newly added records and most common values

2012-01-19 Thread Jeff Amiel
I've got a scenario with a 2-million-row table.  Data from inbound 
files gets processed into it.  
A new file might yield 10 rows in this table, or 40K - it depends on the 
source, day of month, etc.

I've got a process that parses the file and loads the records into the 
table, giving the overall load a unique file_id and stamping that value 
on each record.
Another process then performs a series of queries joining against that table 
(for only the records with that file_id).  
The problem is that the planner has no idea how many records might exist for 
a given file_id.  
If I throw a file_id at the planner that is not in the most-common-values 
list, it picks a nice number like 384 as its row-count estimate.  
So when I reference a new file_id (which obviously isn't IN the most-common-values 
list yet, regardless of how many records I just loaded, because I haven't run 
ANALYZE), the planner dutifully estimates that I will get only 384 rows.  
For large files, this is off by 2 (or, God forbid, 3) orders of magnitude.  
That yields very bad overall plans (regardless of the fact that I have indexes 
on the file_id column).

It seems like I am in a no-win situation.  The query I am executing is fairly 
complex, and when the planner is off by multiple orders of magnitude on a 
row count, it goes way off the tracks in terms of planning.

I COULD run ANALYZE after loading the file, but there is no guarantee that 
the file I just loaded will end up in the most-common-values list, and I'd 
still end up with a bad plan.

Any thoughts?




Re: [GENERAL] How to improve my slow query for a table that has child tables?

2012-01-19 Thread David Johnston
-Original Message-
From: Alex Lai [mailto:a...@sesda2.com] 
Sent: Thursday, January 19, 2012 3:12 PM
To: David Johnston
Cc: 'postgres general support'
Subject: Re: [GENERAL] How to improve my slow query for a table that has
child tables?

David Johnston wrote:
> -Original Message-
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Alex Lai
> Sent: Thursday, January 19, 2012 1:56 PM
> To: postgres general support
> Subject: [GENERAL] How to improve my slow query for a table that has 
> child tables?
>
> Dear All,
>
> It looks to me like Postgres still seq-scans its child tables.
> Normally, a simple query on the 30 million rows with proper indexing 
> will only take about 1 second.
> Any ideas are welcome.
>
> --
>
> You need to create indexes on the children before you can expect an 
> index to be used.
>
> http://www.postgresql.org/docs/9.0/interactive/ddl-inherit.html 
> [Section 5.8.1]
>
> "A serious limitation of the inheritance feature is that indexes 
> (including unique constraints) and foreign key constraints only apply 
> to single tables, not to their inheritance children. This is true on 
> both the referencing and referenced sides of a foreign key constraint."
>


Hi David,

I created a new table and copied all 30 million rows from the filemeta table
into it; the new table has no inherited child tables. I ran the same query
and got the time down to 2.8 seconds from 4.5 seconds, so inheritance
accounts for roughly 1.7 seconds of the cost.

I also noticed that the column 'key' I search on had no index, although 25%
of its rows are NULL. After creating an index on 'key', the same query came
down to 1.9 seconds from 2.8 seconds.

I wonder whether there is any workaround for the inheritance issue that does
not require restructuring the schema.


--

Alex,

I do not see a scenario where you added indexes for "esdt" to the child
tables and then ran the query.

Inheritance works best when you are able to determine that the desired
values will appear in only a single child table, so that the data from the
other tables (index or scan) can be completely ignored.  Since you are
dealing with multiple-table results, the performance on a single table is
likely to be better than the performance of the partition group, regardless
of indexes.
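As a sketch of what adding the missing child-table indexes would look like (the index names are made up; the table and column names come from the thread):

```sql
-- Indexes apply only to the table they are created on, so each child
-- table needs its own; shown for one child, repeat for the rest.
CREATE INDEX filemeta_anc_esdt_idx ON filemeta_anc (esdt);
CREATE INDEX filemeta_anc_key_idx  ON filemeta_anc (key);
```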

Also, you cannot take a single run of a query against two scenarios and
directly compare them and expect meaningful results.  Even if you are doing
things manually you should probably execute each query 5-10 times and then
throw out the first couple of times for each set.  Then provide the simple
average of the remaining attempts and possibly just include all of the
response times for completeness.

David J.








Re: [GENERAL] Cannot connect remotely to postgresql

2012-01-19 Thread Raymond O'Donnell
On 19/01/2012 20:40, Willem Buitendyk wrote:
> I tried manually starting without the service automatically running
> using pg_ctl start -D "c:\program files (x86)\etc etc"  which
> reported back that i might have another postmaster running.  I then
> did pg_ctl reload -D "c:\program files (x86)\etc etc" and it sent a
> signal and voila it worked.  I have since put everything back to
> having the postgresql service start automatically upon machine
> startup and its back to not working.  In fact, when I run pg_ctl
> status from a fresh boot with the postgresql service automatically
> starting I get the return message of: pg_ctl: no server running.

So are you saying that the PostgreSQL service isn't starting up
automatically on system boot, even though it's set to? If so, you need
to check the Windows event log and the Postgres logs to find the reason.

The fact that it works for you when logged in, but not at system boot,
smells to me like a permissions problem... but I'm not an expert.

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



Re: [GENERAL] Cannot connect remotely to postgresql

2012-01-19 Thread Willem Buitendyk
I tried starting manually, without the service running automatically, using 
pg_ctl start -D "c:\program files (x86)\etc etc", which reported back that I 
might have another postmaster running.  I then did pg_ctl reload -D "c:\program 
files (x86)\etc etc"; it sent a signal and, voila, it worked.  I have since 
put everything back to having the PostgreSQL service start automatically upon 
machine startup, and it's back to not working.  In fact, when I run pg_ctl status 
from a fresh boot with the PostgreSQL service starting automatically, I get the 
message: pg_ctl: no server running.

So perhaps there is something with 8.3 on 64-bit Windows specifically whereby 
the configuration files are loaded from somewhere else.  Very peculiar behaviour.  
I have some relief from my madness: at least I can manually start the service 
and have it running properly.


On 2012-01-19, at 10:10 AM, Raymond O'Donnell wrote:

> On 19/01/2012 17:27, Willem Buitendyk wrote:
>> I have 8.2 installed on 64bit windows 7.  I have no problem making a
>> local connection.  However, when I make changes to pg_hba.conf such
>> as add:
>> 
>> local all all trust
> 
> What is the exact error message you're getting?
> 
> Did you restart the server after changing pg_hba.conf?
> 
> Also, I don't think "local" rules do anything on windows - you need to
> add a "host" rule as the connections are over TCP/IP (though I could be
> wrong).
> 
>> I still cannot connect through a VPN.  On a hunch that my pg server
>> was not using the config files in "C:\Program Files
>> (x86)\PostgreSQL\8.3\data"  I changed the port in postgresql.conf to
>> 5433 and restarted the server.  After doing this I am still able to
>> connect the server using "psql -h localhost -U postgres -d xxx"  I am
>> assuming (perhaps incorrectly) that I shouldn't be able to do this.
> 
> That does seem odd - you should need the -p option for anything other
> than the standard port.
> 
> Is there any chance that you have more than one installation running on
> the machine, and the other one is listening on port 5432?
> 
> Ray.
> 
> -- 
> Raymond O'Donnell :: Galway :: Ireland
> r...@iol.ie




Re: [GENERAL] schema question

2012-01-19 Thread Raymond O'Donnell
On 19/01/2012 20:14, Heine Ferreira wrote:
> Hi
> 
> I saw with PGAdmin that there is a public schema in the default postgres
> database.
> Does every database have a public schema?

Yes.

> What is a schema and can you create your own?

A schema is a means of making logical divisions within your database.
Read all about it here:

http://www.postgresql.org/docs/9.1/static/sql-createschema.html
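A minimal example of creating and using one (schema and table names are illustrative):

```sql
CREATE SCHEMA accounting;

CREATE TABLE accounting.invoices (
    id    serial PRIMARY KEY,
    total numeric(10,2)
);

-- Objects in a schema are referenced with a qualified name
-- (or found via search_path).
SELECT * FROM accounting.invoices;
```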

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



[GENERAL] schema question

2012-01-19 Thread Heine Ferreira
Hi

I saw with PGAdmin that there is a public schema in the default postgres
database.
Does every database have a public schema?
What is a schema and can you create your own?

Thanks

H.F.


Re: [GENERAL] 2 very newbie questions

2012-01-19 Thread Adrian Klaver

On 01/19/2012 12:04 PM, Heine Ferreira wrote:

Hi

The Postgresql manual is rather huge so I would appreciate it if you
could help me with the following 2 questions:

What is an oid when creating tables and what is the purpose of it?


oid stands for Object ID. A long time ago they were used as a quick and 
dirty way to generate unique identifiers for a table's rows. The use of 
oids in user tables is now deprecated. If you want an auto-generated 
unique value, use the serial type.
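For example (a sketch; the table name is illustrative):

```sql
-- A serial column creates a backing sequence and defaults the column
-- to its next value, giving each row a unique identifier.
CREATE TABLE items (
    id   serial PRIMARY KEY,
    name text
);

INSERT INTO items (name) VALUES ('widget');  -- id is assigned automatically
```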




What is a toast table?


It is an auxiliary table that stores a field's data out of line when the 
size of the data in the field exceeds certain limits. This does not 
apply to all data types.

Better description here:
http://www.postgresql.org/docs/9.0/static/storage-toast.html



Thanks

H.F.




--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] How to improve my slow query for a table that has child tables?

2012-01-19 Thread Alex Lai

David Johnston wrote:

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Alex Lai
Sent: Thursday, January 19, 2012 1:56 PM
To: postgres general support
Subject: [GENERAL] How to improve my slow query for a table that has child
tables?

Dear All,

I have a large table that has 8 child tables.
The table has 30 million rows, with the necessary indexes in place.

Table filemeta
 Column |   Type| Modifiers
+---+---
 fileid | integer   | not null
 esdt   | character varying |
 key| character varying |
 source | character varying |
Indexes:
"pk_filemeta" PRIMARY KEY, btree (fileid)
"ak_filemeta_esdt" btree (esdt)
"ak_filemeta_fileid" btree (fileid)
"ak_filemeta_source" btree (source)
Foreign-key constraints:
"fk_filemeta_esdt" FOREIGN KEY (esdt) REFERENCES esdt_def(esdt)
DEFERRABLE
"fk_filemeta_file" FOREIGN KEY (fileid) REFERENCES file(fileid) ON
DELETE CASCADE
"fk_filemeta_source" FOREIGN KEY (source) REFERENCES source_def(source)
Child tables: filemeta_anc,
  filemeta_app,
  filemeta_l0,
  filemeta_l0r,
  filemeta_mdkey,
  filemeta_ompslookup,
  filemeta_orbital,
  filemeta_timerange

 Explain analyse select * from filemeta where esdt = 'MET' and key =
'2011-10-08 07:09:47-04';

                                   QUERY PLAN
--------------------------------------------------------------------------------
 Result  (cost=53295.97..558304.84 rows=42 width=37) (actual
time=1063.016..3770.361 rows=5 loops=1)
   ->  Append  (cost=53295.97..558304.84 rows=42 width=37) (actual
time=1063.013..3770.348 rows=5 loops=1)
 ->  Bitmap Heap Scan on filemeta  (cost=53295.97..370366.99
rows=34 width=35) (actual time=1063.011..2020.002 rows=5 loops=1)
   Recheck Cond: ((esdt)::text = 'MET'::text)
   Filter: ((key)::text = '2011-10-08 07:09:47-04'::text)
   ->  Bitmap Index Scan on ak_filemeta_esdt
(cost=0.00..53295.96 rows=3245468 width=0) (actual time=910.287..910.287
rows=3216226 loops=1)
 Index Cond: ((esdt)::text = 'MET'::text)
 ->  Seq Scan on filemeta_anc filemeta  (cost=0.00..574.01
rows=1 width=59) (actual time=5.740..5.740 rows=0 loops=1)
   Filter: (((esdt)::text = 'MET'::text) AND ((key)::text =
'2011-10-08 07:09:47-04'::text))
 ->  Seq Scan on filemeta_app filemeta  (cost=0.00..16.30 rows=1
width=100) (actual time=0.001..0.001 rows=0 loops=1)
   Filter: (((esdt)::text = 'MET'::text) AND ((key)::text =
'2011-10-08 07:09:47-04'::text))
 ->  Seq Scan on filemeta_l0 filemeta  (cost=0.00..7483.35
rows=1 width=39) (actual time=52.905..52.905 rows=0 loops=1)
   Filter: (((esdt)::text = 'MET'::text) AND ((key)::text =
'2011-10-08 07:09:47-04'::text))
 ->  Seq Scan on filemeta_l0r filemeta  (cost=0.00..123.87
rows=1 width=40) (actual time=1.353..1.353 rows=0 loops=1)
   Filter: (((esdt)::text = 'MET'::text) AND ((key)::text =
'2011-10-08 07:09:47-04'::text))
 ->  Seq Scan on filemeta_mdkey filemeta  (cost=0.00..29707.58
rows=1 width=28) (actual time=273.616..273.616 rows=0 loops=1)
   Filter: (((esdt)::text = 'MET'::text) AND ((key)::text =
'2011-10-08 07:09:47-04'::text))
 ->  Index Scan using pk_filemeta_ompslookup on filemeta_ompslookup
filemeta  (cost=0.00..29.84 rows=1 width=45) (actual
time=0.084..0.084 rows=0 loops=1)
   Index Cond: ((esdt)::text = 'MET'::text)
   Filter: ((key)::text = '2011-10-08 07:09:47-04'::text)
 ->  Bitmap Heap Scan on filemeta_orbital filemeta
(cost=95.13..5674.40 rows=1 width=22) (actual time=0.021..0.021 rows=0
loops=1)
   Recheck Cond: ((esdt)::text = 'MET'::text)
   Filter: ((key)::text = '2011-10-08 07:09:47-04'::text)
   ->  Bitmap Index Scan on id_filemeta_orbital
(cost=0.00..95.13 rows=4173 width=0) (actual time=0.018..0.018 rows=0
loops=1)
 Index Cond: ((esdt)::text = 'MET'::text)
 ->  Seq Scan on filemeta_timerange filemeta
(cost=0.00..144328.49 rows=1 width=44) (actual time=1416.605..1416.605
rows=0 loops=1)
   Filter: (((esdt)::text = 'MET'::text) AND ((key)::text =
'2011-10-08 07:09:47-04'::text))
 Total runtime: 3770.529 ms
(28 rows)


I did
set enable_seqscan = off;
but it does not improve much.

Explain analyse select * from filemeta where esdt = 'MET' and key =
'2011-10-08 07:09:47-04';

                                   QUERY PLAN
--------------------------------------------------------------------------------
[GENERAL] 2 very newbie questions

2012-01-19 Thread Heine Ferreira
Hi

The Postgresql manual is rather huge so I would appreciate it if you could
help me with the following 2 questions:

What is an oid when creating tables and what is the purpose of it?

What is a toast table?

Thanks

H.F.


Re: [GENERAL] Cannot connect remotely to postgresql

2012-01-19 Thread Willem Buitendyk

On 2012-01-19, at 10:10 AM, Raymond O'Donnell wrote:

> On 19/01/2012 17:27, Willem Buitendyk wrote:
>> I have 8.2 installed on 64bit windows 7.  I have no problem making a
>> local connection.  However, when I make changes to pg_hba.conf such
>> as add:
>> 
>> local all all trust
> 
> What is the exact error message you're getting?
> 
> Did you restart the server after changing pg_hba.conf?
> 
> Also, I don't think "local" rules do anything on windows - you need to
> add a "host" rule as the connections are over TCP/IP (though I could be
> wrong).
> 
>> I still cannot connect through a VPN.  On a hunch that my pg server
>> was not using the config files in "C:\Program Files
>> (x86)\PostgreSQL\8.3\data"  I changed the port in postgresql.conf to
>> 5433 and restarted the server.  After doing this I am still able to
>> connect the server using "psql -h localhost -U postgres -d xxx"  I am
>> assuming (perhaps incorrectly) that I shouldn't be able to do this.
> 
> That does seem odd - you should need the -p option for anything other
> than the standard port.
> 
> Is there any chance that you have more than one installation running on
> the machine, and the other one is listening on port 5432?

There is only one service listed.  If I try the following:

C:\Users\Willem>postgres -D "C:\Program Files (x86)\PostgreSQL\8.3\data"

I get:

2012-01-19 10:48:06 PST LOG:  loaded library "$libdir/plugins/plugin_debugger.dll"
2012-01-19 10:48:06 PST LOG:  could not bind IPv4 socket: No error
2012-01-19 10:48:06 PST HINT:  Is another postmaster already running on port 5433? If not, wait a few seconds and retry.
2012-01-19 10:48:06 PST WARNING:  could not create listen socket for "10.0.1.7"

There appears to be no other instance of PostgreSQL running on my system other 
than the one.  
I will try a restart without the service starting automatically and try a 
manual start next.

> 
> Ray.
> 
> -- 
> Raymond O'Donnell :: Galway :: Ireland
> r...@iol.ie




[GENERAL] how to make text fields accent insensitive?

2012-01-19 Thread Heine Ferreira
Hi

Recently you answered my question on how to make a field case-insensitive;
you showed me how to do it with the citext extension and data type.

How do I make the same field accent-insensitive as well?
I managed to install the "unaccent" extension in Postgres 9.1.2
using CREATE EXTENSION. Unfortunately, I could only use it in the WHERE
clauses
of SQL statements. I want to make a field behave accent-insensitively
during comparisons, just as citext does for case-insensitivity.

The reason I want to do this is that it's one of the options I am used
to in
Microsoft SQL Server. I can do something like this:

select * from table1 where unaccent(field1)='John';

I can also do this:

select * from table1 where unaccent(field1)::citext='JohN';

But I want field1 to have this as its default behaviour.

Thanks

H.F.
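One commonly suggested workaround (not from this thread; it assumes the unaccent extension is installed, and the wrapper name is made up) is to wrap unaccent in an IMMUTABLE function so it can back an expression index:

```sql
-- unaccent() itself is not IMMUTABLE, so wrap it; the two-argument
-- form pins the dictionary, making the wrapper safe to index.
CREATE OR REPLACE FUNCTION f_unaccent(text)
  RETURNS text
  LANGUAGE sql IMMUTABLE AS
$$ SELECT unaccent('unaccent', $1) $$;

CREATE INDEX table1_field1_unaccent_idx ON table1 (f_unaccent(field1));

-- Queries must use the same expression to hit the index:
SELECT * FROM table1 WHERE f_unaccent(field1) = f_unaccent('John');
```

Note this still requires writing the expression in each query; it does not change the column's default comparison behaviour the way citext does.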


Re: [GENERAL] On duplicate ignore

2012-01-19 Thread Lincoln Yeoh

At 10:54 PM 1/19/2012, Florian Weimer wrote:

* Gnanakumar:

>> Just create a unique index on EMAIL column and handle error if it comes
>
> Thanks for your suggestion.  Of course, I do understand that this could be
> enforced/imposed at the database-level at any time.  But I'm trying to find
> out whether this could be solved at the application layer itself.  Any
> thoughts/ideas?

If you use serializable transactions in PostgreSQL 9.1, you can
implement such constraints in the application without additional
locking.  However, with concurrent writes and without an index, the rate
of detected serialization violations and resulting transactions aborts
will be high.


Would writing application-side code to handle those transaction 
aborts in 9.1 be much easier than writing code to handle transaction 
aborts/DB exceptions due to unique constraint violations? These 
transaction aborts have to be handled differently (e.g. retried for X 
seconds/Y tries) from other sorts of transaction aborts (not retried).


Otherwise I don't see the benefit of this feature for this scenario. 
Unless of course you get significantly better performance by not 
having a unique constraint.


If insert performance is not an issue and code simplicity is 
preferred, one could lock the table (with an exclusive lock mode), 
then do the selects and inserts, that way your code can assume that 
any transaction aborts are due to actual problems rather than 
concurrency. Which often means less code to write :).
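A sketch of that locking approach (the table, column, and email value are illustrative placeholders):

```sql
BEGIN;
-- Block concurrent writers so the existence check below cannot race
-- with another session inserting the same email.
LOCK TABLE users IN SHARE ROW EXCLUSIVE MODE;

INSERT INTO users (email)
SELECT 'newuser@example.com'
WHERE NOT EXISTS (
    SELECT 1 FROM users WHERE email = 'newuser@example.com'
);
COMMIT;
```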


Regards,
Link.







Re: [GENERAL] How to improve my slow query for a table that has child tables?

2012-01-19 Thread David Johnston
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Alex Lai
Sent: Thursday, January 19, 2012 1:56 PM
To: postgres general support
Subject: [GENERAL] How to improve my slow query for a table that has child
tables?

Dear All,

I have a large table that has 8 child tables.
The table has 30 million rows, with the necessary indexes in place.

Table filemeta
 Column |   Type| Modifiers
+---+---
 fileid | integer   | not null
 esdt   | character varying |
 key| character varying |
 source | character varying |
Indexes:
"pk_filemeta" PRIMARY KEY, btree (fileid)
"ak_filemeta_esdt" btree (esdt)
"ak_filemeta_fileid" btree (fileid)
"ak_filemeta_source" btree (source)
Foreign-key constraints:
"fk_filemeta_esdt" FOREIGN KEY (esdt) REFERENCES esdt_def(esdt)
DEFERRABLE
"fk_filemeta_file" FOREIGN KEY (fileid) REFERENCES file(fileid) ON
DELETE CASCADE
"fk_filemeta_source" FOREIGN KEY (source) REFERENCES source_def(source)
Child tables: filemeta_anc,
  filemeta_app,
  filemeta_l0,
  filemeta_l0r,
  filemeta_mdkey,
  filemeta_ompslookup,
  filemeta_orbital,
  filemeta_timerange

 Explain analyse select * from filemeta where esdt = 'MET' and key =
'2011-10-08 07:09:47-04';

                                   QUERY PLAN
--------------------------------------------------------------------------------
 Result  (cost=53295.97..558304.84 rows=42 width=37) (actual
time=1063.016..3770.361 rows=5 loops=1)
   ->  Append  (cost=53295.97..558304.84 rows=42 width=37) (actual
time=1063.013..3770.348 rows=5 loops=1)
 ->  Bitmap Heap Scan on filemeta  (cost=53295.97..370366.99
rows=34 width=35) (actual time=1063.011..2020.002 rows=5 loops=1)
   Recheck Cond: ((esdt)::text = 'MET'::text)
   Filter: ((key)::text = '2011-10-08 07:09:47-04'::text)
   ->  Bitmap Index Scan on ak_filemeta_esdt
(cost=0.00..53295.96 rows=3245468 width=0) (actual time=910.287..910.287
rows=3216226 loops=1)
 Index Cond: ((esdt)::text = 'MET'::text)
 ->  Seq Scan on filemeta_anc filemeta  (cost=0.00..574.01
rows=1 width=59) (actual time=5.740..5.740 rows=0 loops=1)
   Filter: (((esdt)::text = 'MET'::text) AND ((key)::text =
'2011-10-08 07:09:47-04'::text))
 ->  Seq Scan on filemeta_app filemeta  (cost=0.00..16.30 rows=1
width=100) (actual time=0.001..0.001 rows=0 loops=1)
   Filter: (((esdt)::text = 'MET'::text) AND ((key)::text =
'2011-10-08 07:09:47-04'::text))
 ->  Seq Scan on filemeta_l0 filemeta  (cost=0.00..7483.35
rows=1 width=39) (actual time=52.905..52.905 rows=0 loops=1)
   Filter: (((esdt)::text = 'MET'::text) AND ((key)::text =
'2011-10-08 07:09:47-04'::text))
 ->  Seq Scan on filemeta_l0r filemeta  (cost=0.00..123.87
rows=1 width=40) (actual time=1.353..1.353 rows=0 loops=1)
   Filter: (((esdt)::text = 'MET'::text) AND ((key)::text =
'2011-10-08 07:09:47-04'::text))
 ->  Seq Scan on filemeta_mdkey filemeta  (cost=0.00..29707.58
rows=1 width=28) (actual time=273.616..273.616 rows=0 loops=1)
   Filter: (((esdt)::text = 'MET'::text) AND ((key)::text =
'2011-10-08 07:09:47-04'::text))
 ->  Index Scan using pk_filemeta_ompslookup on filemeta_ompslookup
filemeta  (cost=0.00..29.84 rows=1 width=45) (actual
time=0.084..0.084 rows=0 loops=1)
   Index Cond: ((esdt)::text = 'MET'::text)
   Filter: ((key)::text = '2011-10-08 07:09:47-04'::text)
 ->  Bitmap Heap Scan on filemeta_orbital filemeta
(cost=95.13..5674.40 rows=1 width=22) (actual time=0.021..0.021 rows=0
loops=1)
   Recheck Cond: ((esdt)::text = 'MET'::text)
   Filter: ((key)::text = '2011-10-08 07:09:47-04'::text)
   ->  Bitmap Index Scan on id_filemeta_orbital
(cost=0.00..95.13 rows=4173 width=0) (actual time=0.018..0.018 rows=0
loops=1)
 Index Cond: ((esdt)::text = 'MET'::text)
 ->  Seq Scan on filemeta_timerange filemeta
(cost=0.00..144328.49 rows=1 width=44) (actual time=1416.605..1416.605
rows=0 loops=1)
   Filter: (((esdt)::text = 'MET'::text) AND ((key)::text =
'2011-10-08 07:09:47-04'::text))
 Total runtime: 3770.529 ms
(28 rows)


I did
set enable_seqscan = off;
but it does not improve much.

Explain analyse select * from filemeta where esdt = 'MET' and key =
'2011-10-08 07:09:47-04';

                                   QUERY PLAN
--------------------------------------------------------------------------------
 Result  (cost

[GENERAL] How to improve my slow query for a table that has child tables?

2012-01-19 Thread Alex Lai

Dear All,

I have a large table that has 8 child tables.
The table has 30 million rows, with the necessary indexes in place.

   Table filemeta
Column |   Type| Modifiers
+---+---
fileid | integer   | not null
esdt   | character varying |
key| character varying |
source | character varying |
Indexes:
   "pk_filemeta" PRIMARY KEY, btree (fileid)
   "ak_filemeta_esdt" btree (esdt)
   "ak_filemeta_fileid" btree (fileid)
   "ak_filemeta_source" btree (source)
Foreign-key constraints:
   "fk_filemeta_esdt" FOREIGN KEY (esdt) REFERENCES esdt_def(esdt) 
DEFERRABLE
   "fk_filemeta_file" FOREIGN KEY (fileid) REFERENCES file(fileid) ON 
DELETE CASCADE

   "fk_filemeta_source" FOREIGN KEY (source) REFERENCES source_def(source)
Child tables: filemeta_anc,
 filemeta_app,
 filemeta_l0,
 filemeta_l0r,
 filemeta_mdkey,
 filemeta_ompslookup,
 filemeta_orbital,
 filemeta_timerange

Explain analyse select * from filemeta where esdt = 'MET' and key = 
'2011-10-08 07:09:47-04';
   
                                   QUERY PLAN
--------------------------------------------------------------------------------
Result  (cost=53295.97..558304.84 rows=42 width=37) (actual 
time=1063.016..3770.361 rows=5 loops=1)
  ->  Append  (cost=53295.97..558304.84 rows=42 width=37) (actual 
time=1063.013..3770.348 rows=5 loops=1)
->  Bitmap Heap Scan on filemeta  (cost=53295.97..370366.99 
rows=34 width=35) (actual time=1063.011..2020.002 rows=5 loops=1)

  Recheck Cond: ((esdt)::text = 'MET'::text)
  Filter: ((key)::text = '2011-10-08 07:09:47-04'::text)
  ->  Bitmap Index Scan on ak_filemeta_esdt  
(cost=0.00..53295.96 rows=3245468 width=0) (actual time=910.287..910.287 
rows=3216226 loops=1)

Index Cond: ((esdt)::text = 'MET'::text)
->  Seq Scan on filemeta_anc filemeta  (cost=0.00..574.01 
rows=1 width=59) (actual time=5.740..5.740 rows=0 loops=1)
  Filter: (((esdt)::text = 'MET'::text) AND ((key)::text = 
'2011-10-08 07:09:47-04'::text))
->  Seq Scan on filemeta_app filemeta  (cost=0.00..16.30 rows=1 
width=100) (actual time=0.001..0.001 rows=0 loops=1)
  Filter: (((esdt)::text = 'MET'::text) AND ((key)::text = 
'2011-10-08 07:09:47-04'::text))
->  Seq Scan on filemeta_l0 filemeta  (cost=0.00..7483.35 
rows=1 width=39) (actual time=52.905..52.905 rows=0 loops=1)
  Filter: (((esdt)::text = 'MET'::text) AND ((key)::text = 
'2011-10-08 07:09:47-04'::text))
->  Seq Scan on filemeta_l0r filemeta  (cost=0.00..123.87 
rows=1 width=40) (actual time=1.353..1.353 rows=0 loops=1)
  Filter: (((esdt)::text = 'MET'::text) AND ((key)::text = 
'2011-10-08 07:09:47-04'::text))
->  Seq Scan on filemeta_mdkey filemeta  (cost=0.00..29707.58 
rows=1 width=28) (actual time=273.616..273.616 rows=0 loops=1)
  Filter: (((esdt)::text = 'MET'::text) AND ((key)::text = 
'2011-10-08 07:09:47-04'::text))
->  Index Scan using pk_filemeta_ompslookup on 
filemeta_ompslookup filemeta  (cost=0.00..29.84 rows=1 width=45) (actual 
time=0.084..0.084 rows=0 loops=1)

  Index Cond: ((esdt)::text = 'MET'::text)
  Filter: ((key)::text = '2011-10-08 07:09:47-04'::text)
->  Bitmap Heap Scan on filemeta_orbital filemeta  
(cost=95.13..5674.40 rows=1 width=22) (actual time=0.021..0.021 rows=0 
loops=1)

  Recheck Cond: ((esdt)::text = 'MET'::text)
  Filter: ((key)::text = '2011-10-08 07:09:47-04'::text)
  ->  Bitmap Index Scan on id_filemeta_orbital  
(cost=0.00..95.13 rows=4173 width=0) (actual time=0.018..0.018 rows=0 
loops=1)

Index Cond: ((esdt)::text = 'MET'::text)
->  Seq Scan on filemeta_timerange filemeta  
(cost=0.00..144328.49 rows=1 width=44) (actual time=1416.605..1416.605 
rows=0 loops=1)
  Filter: (((esdt)::text = 'MET'::text) AND ((key)::text = 
'2011-10-08 07:09:47-04'::text))

Total runtime: 3770.529 ms
(28 rows)


I did
set enable_seqscan = off;
but it does not improve much.

Explain analyse select * from filemeta where esdt = 'MET' and key = 
'2011-10-08 07:09:47-04';
   
                                   QUERY PLAN
--------------------------------------------------------------------------------
Result  (cost=53295.97..6558304.84 rows=42 width=37) (actual 
time=1003.565..3706.919 rows=5 loops=1)
  ->  Append  (cost=53295.97..6558304.84 rows=42 width

Re: [GENERAL] Cannot connect remotely to postgresql

2012-01-19 Thread Raymond O'Donnell
On 19/01/2012 17:27, Willem Buitendyk wrote:
> I have 8.2 installed on 64bit windows 7.  I have no problem making a
> local connection.  However, when I make changes to pg_hba.conf such
> as add:
> 
> local all all trust

What is the exact error message you're getting?

Did you restart the server after changing pg_hba.conf?

Also, I don't think "local" rules do anything on windows - you need to
add a "host" rule as the connections are over TCP/IP (though I could be
wrong).

> I still cannot connect through a VPN.  On a hunch that my pg server
> was not using the config files in "C:\Program Files
> (x86)\PostgreSQL\8.3\data"  I changed the port in postgresql.conf to
> 5433 and restarted the server.  After doing this I am still able to
> connect the server using "psql -h localhost -U postgres -d xxx"  I am
> assuming (perhaps incorrectly) that I shouldn't be able to do this.

That does seem odd - you should need the -p option for anything other
than the standard port.

Is there any chance that you have more than one installation running on
the machine, and the other one is listening on port 5432?

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Cannot connect remotely to postgresql

2012-01-19 Thread Willem Buitendyk
I have 8.2 installed on 64bit windows 7.  I have no problem making a local 
connection.  However, when I make changes to pg_hba.conf such as add:

local all all trust

I still cannot connect through a VPN.  On a hunch that my pg server was not 
using the config files in "C:\Program Files (x86)\PostgreSQL\8.3\data"  I 
changed the port in postgresql.conf to 5433 and restarted the server.  After 
doing this I am still able to connect the server using "psql -h localhost -U 
postgres -d xxx"  I am assuming (perhaps incorrectly) that I shouldn't be able 
to do this.  So now I'm completely stumped.  I've searched my computer and 
can't find any other conf files.  I recently set $PGDATA to "C:\Program Files 
(x86)\PostgreSQL\8.3\data\" and the same in my $PATH for bin.  I do notice that 
lib is not installed in my $PATH but assume that would not affect my 
connection.  Any ideas?





Re: [GENERAL] On duplicate ignore

2012-01-19 Thread Florian Weimer
* Scott Marlowe:

> On Thu, Jan 19, 2012 at 7:54 AM, Florian Weimer  wrote:
>> * Gnanakumar:
>>
 Just create a unique index on EMAIL column and handle error if it comes
>>>
>>> Thanks for your suggestion.  Of course, I do understand that this could be
>>> enforced/imposed at the database-level at any time.  But I'm trying to find
>>> out whether this could be solved at the application layer itself.  Any
>>> thoughts/ideas?
>>
>> If you use serializable transactions in PostgreSQL 9.1, you can
>> implement such constraints in the application without additional
>> locking.  However, with concurrent writes and without an index, the rate
>> of detected serialization violations and resulting transactions aborts
>> will be high.
>
> No, you sadly can't.  PostgreSQL doesn't yet support proper predicate
> locking to allow the application to be sure that the OP's original
> statement, and ones like it, don't have a race condition.  A unique
> index is the only way to be sure.

Huh?  This was one of the major new features in PostgreSQL 9.1.

-- 
Florian Weimer
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99



Re: [GENERAL] On duplicate ignore

2012-01-19 Thread Scott Marlowe
On Thu, Jan 19, 2012 at 9:49 AM, Scott Marlowe  wrote:
> On Thu, Jan 19, 2012 at 7:54 AM, Florian Weimer  wrote:
>> * Gnanakumar:
>>
 Just create a unique index on EMAIL column and handle error if it comes
>>>
>>> Thanks for your suggestion.  Of course, I do understand that this could be
>>> enforced/imposed at the database-level at any time.  But I'm trying to find
>>> out whether this could be solved at the application layer itself.  Any
>>> thoughts/ideas?
>>
>> If you use serializable transactions in PostgreSQL 9.1, you can
>> implement such constraints in the application without additional
>> locking.  However, with concurrent writes and without an index, the rate
>> of detected serialization violations and resulting transactions aborts
>> will be high.
>
> No, you sadly can't.  PostgreSQL doesn't yet support proper predicate
> locking to allow the application to be sure that the OP's original
> statement, and ones like it, don't have a race condition.  A unique
> index is the only way to be sure.

Wait, did 9.1 implement proper predicate locking to allow this?  If so
I apologize for being out of the loop on the new versions.



Re: [GENERAL] On duplicate ignore

2012-01-19 Thread Scott Marlowe
On Thu, Jan 19, 2012 at 7:54 AM, Florian Weimer  wrote:
> * Gnanakumar:
>
>>> Just create a unique index on EMAIL column and handle error if it comes
>>
>> Thanks for your suggestion.  Of course, I do understand that this could be
>> enforced/imposed at the database-level at any time.  But I'm trying to find
>> out whether this could be solved at the application layer itself.  Any
>> thoughts/ideas?
>
> If you use serializable transactions in PostgreSQL 9.1, you can
> implement such constraints in the application without additional
> locking.  However, with concurrent writes and without an index, the rate
> of detected serialization violations and resulting transactions aborts
> will be high.

No, you sadly can't.  PostgreSQL doesn't yet support proper predicate
locking to allow the application to be sure that the OP's original
statement, and ones like it, don't have a race condition.  A unique
index is the only way to be sure.
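
As a sketch of that approach (table and column names are hypothetical), the unique index plus an insert that skips existing rows:

```sql
-- Enforce uniqueness at the database level.
CREATE UNIQUE INDEX users_email_key ON users (email);

-- Insert only if the address is not already present.  A concurrent
-- insert can still race between the check and the insert, so the
-- application must still be prepared to catch a unique_violation.
INSERT INTO users (email)
SELECT 'someone@example.com'
WHERE NOT EXISTS (SELECT 1 FROM users WHERE email = 'someone@example.com');
```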



Re: [GENERAL] variadic array arguments, can it work?

2012-01-19 Thread Tom Lane
Ingmar Brouns  writes:
> I was trying to write a variadic function where the arguments themselves
> are arrays, but calling it does not seem to work. I couldn't find
> documentation mentioning this restriction

> postgres=# create or replace function foo(variadic args integer[][])

The reason that doesn't work the way you're expecting is that
1-dimensional integer arrays are not a distinct datatype from
2-dimensional integer arrays.  The system just sees "variadic int[]"
and expects simple integers in a variadic expansion.  Sorry.

regards, tom lane



[GENERAL] variadic array arguments, can it work?

2012-01-19 Thread Ingmar Brouns
Hi,

I was trying to write a variadic function where the arguments themselves
are arrays, but calling it does not seem to work. I couldn't find any
documentation mentioning this restriction.

postgres=# create or replace function foo(variadic args integer[][])
returns integer
as $$
begin return args[2][2]; end;
$$ language plpgsql;

Now I can call the function using variadic:

postgres=# select foo(variadic array[array[1,2],array[3,4]]);
 foo
-----
   4
(1 row)

but I cannot call it in the normal way...

postgres=# select foo( array[1,2] , array[3,4] );
ERROR:  function foo(integer[], integer[]) does not exist at character 8
HINT:  No function matches the given name and argument types. You might
need to add explicit type casts.
STATEMENT:  select foo( array[1,2] , array[3,4] );
ERROR:  function foo(integer[], integer[]) does not exist
LINE 1: select foo( array[1,2] , array[3,4] );
   ^
HINT:  No function matches the given name and argument types. You might
need to add explicit type casts.

I suspect this has something to do with multi-dimensional arrays not
truly being arrays of arrays...
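
A quick way to see that (just a sketch): pg_typeof reports the same type for both, because a PostgreSQL array value carries its dimensionality in the value itself, not in the type:

```sql
SELECT pg_typeof(ARRAY[1,2])                      AS one_dim,
       pg_typeof(ARRAY[ARRAY[1,2], ARRAY[3,4]])   AS two_dim;
-- both columns report integer[]
```

So from the function-resolution point of view, "variadic integer[]" can only mean "any number of integer arguments".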


Kind regards,

Ingmar Brouns



                                 version
---------------------------------------------------------------------------
 PostgreSQL 9.1.1 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.6.1 20110908 (Red Hat 4.6.1-9), 64-bit
(1 row)


Re: [GENERAL] On duplicate ignore

2012-01-19 Thread Florian Weimer
* Gnanakumar:

>> Just create a unique index on EMAIL column and handle error if it comes
>
> Thanks for your suggestion.  Of course, I do understand that this could be
> enforced/imposed at the database-level at any time.  But I'm trying to find
> out whether this could be solved at the application layer itself.  Any
> thoughts/ideas?

If you use serializable transactions in PostgreSQL 9.1, you can
implement such constraints in the application without additional
locking.  However, with concurrent writes and without an index, the rate
of detected serialization violations and resulting transactions aborts
will be high.
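
A minimal sketch of that pattern (hypothetical table name; under SERIALIZABLE the application must be prepared to retry the whole transaction when it aborts):

```sql
BEGIN ISOLATION LEVEL SERIALIZABLE;
-- The read-then-write dependency is tracked by the serializable machinery,
-- so no explicit lock is needed here.
INSERT INTO users (email)
SELECT 'someone@example.com'
WHERE NOT EXISTS (SELECT 1 FROM users WHERE email = 'someone@example.com');
COMMIT;  -- may fail with SQLSTATE 40001 (serialization_failure); retry on that error
```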

-- 
Florian Weimer
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99



Re: [GENERAL] scenario with a slow query

2012-01-19 Thread Volodymyr Kostyrko

Tom Lane wrote:

Volodymyr Kostyrko  writes:

Maybe I'm missing something but I have found a case when planner is
unoptimal.


The planner knows next to nothing about optimizing FULL JOIN, and
I would not recommend holding your breath waiting for it to get better
about that, because there's basically no demand for the work that'd
be involved.  I'd suggest refactoring this query instead.  A nest of
full joins seems like a rather unintuitive way to get the result
anyway ...


That's not about FULL JOIN; it seems to be about all JOINs:

select * from (
  select 1 as id
)x natural left join (
  select id, sum(count) as today
  from test_stat
  where date = now()::date group by id
)a natural left join (
  select id, sum(count) as lastday
  from test_stat
  where date = (now() - interval '1 day')::date group by id
)b natural left join (
  select id, sum(count) as week
  from test_stat
  where date between (now() - interval '1 day') and (now() - interval '7 day')
  group by id
)c natural left join (
  select id, sum(count) as whole
  from test_stat
  where date <> now()::date
  group by id
)d;

This query exhibits the same seq scan.

By refactoring did you mean something like this:

select
  (select sum(count) from test_stat
where date = now()::date and id = 1
group by id) as today,
  ( select sum (count) from test_stat
where date = (now() - interval '1 day')::date and id = 1
group by id) as lastday,
  ( select sum(count) from test_stat
where date between (now() - interval '1 day')
  and (now() - interval '7 day') and id = 1
group by id) as week,
  (select sum(count) from test_stat
where date <> now()::date and id = 1
group by id) as whole;

This one works much better, requiring almost no planner involvement...
yielding the same result, though.
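
Another refactoring that needs only a single scan of test_stat is conditional aggregation (a sketch only; note the BETWEEN bounds are written low-to-high here, which is what BETWEEN requires to match anything):

```sql
SELECT sum(CASE WHEN date = now()::date
                THEN count END) AS today,
       sum(CASE WHEN date = (now() - interval '1 day')::date
                THEN count END) AS lastday,
       sum(CASE WHEN date BETWEEN (now() - interval '7 day')
                              AND (now() - interval '1 day')
                THEN count END) AS week,
       sum(CASE WHEN date <> now()::date
                THEN count END) AS whole
FROM test_stat
WHERE id = 1;
```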


--
Sphinx of black quartz judge my vow.
