[GENERAL] Why doesn't `RAISE EXCEPTION` provide error context?

2015-04-01 Thread Taytay
We make heavy use of `GET STACKED DIAGNOSTICS` to determine where errors
happened.
However, I am trying to use RAISE EXCEPTION to report errors, and have
discovered that RAISE is specifically prevented from adding to the error
context: the PL/pgSQL error-context callback in src/pl/plpgsql/src/pl_exec.c
returns early while a RAISE statement is executing, so the frame that
executed the RAISE never makes it into the context stack.

So that means that this doesn't work:
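
A minimal sketch of the problem (the function name is illustrative):

CREATE OR REPLACE FUNCTION raise_test() RETURNS void AS $$
DECLARE
    v_context text;
BEGIN
    BEGIN
        RAISE EXCEPTION 'custom error';
    EXCEPTION WHEN OTHERS THEN
        GET STACKED DIAGNOSTICS v_context = PG_EXCEPTION_CONTEXT;
        -- v_context comes back empty: the frame that executed the RAISE
        -- is deliberately omitted from the error context.
        RAISE NOTICE 'context: [%]', v_context;
    END;
END;
$$ LANGUAGE plpgsql;

SELECT raise_test();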



I've posted more details here:
http://dba.stackexchange.com/questions/96743/postgres-how-to-get-stack-trace-for-a-manually-raised-exception

That context would be awfully useful for us, even for manually generated
exceptions.
Can anyone shed some light on A) why this is, B) whether it's still
desirable, and C) if so, whether there are any workarounds? Is there an
exception I can trick Postgres into throwing that will include my
user-generated string?

Many thanks for any help you can offer.








Re: [GENERAL] Would like to know how analyze works technically

2015-04-01 Thread Tom Lane
TonyS  writes:
> The analyze function has crashed again while the overcommit entries were
> as above. The last bit of the PostgreSQL log shows:
>   MdSmgr: 41934848 total in 14 blocks; 639936 free (0 chunks); 41294912 used
>   ident parser context: 0 total in 0 blocks; 0 free (0 chunks); 0 used
>   hba parser context: 7168 total in 3 blocks; 2288 free (1 chunks); 4880 used
>   LOCALLOCK hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
>   Timezones: 83472 total in 2 blocks; 3744 free (0 chunks); 79728 used
>   ErrorContext: 8192 total in 1 blocks; 8160 free (6 chunks); 32 used
> 2015-04-01 14:23:27 EDT ERROR:  out of memory
> 2015-04-01 14:23:27 EDT DETAIL:  Failed on request of size 80.
> 2015-04-01 14:23:27 EDT STATEMENT:  analyze verbose;

We need to see all of that memory map, not just the last six lines of it.

regards, tom lane




Re: [GENERAL] now() vs 'epoch'::timestamp

2015-04-01 Thread Steve Crawford

On 04/01/2015 11:50 AM, James Cloos wrote:

I've for some time used:

(now()::timestamp without time zone - 'epoch'::timestamp without time 
zone)::reltime::integer

to get the current seconds since the epoch.  The results are consistent
with date +%s.

(Incidentally, is there a better way in 9.4?)

But I found the 'epoch'::timestamp + $THAT_VALUE::reltime was off.

I consistently get 1970-01-01 06:00 plus a fraction of a second from:

select now() -  ((now()::timestamp without time zone - 'epoch'::timestamp 
without time zone)::reltime::integer)::reltime;

The machines on which I've tried it all have localtime == UTC.

Am I missing something obvious?


A very convoluted calculation, as others have noted. As to why it is "off":
you are casting one part of the statement to an integer, thus truncating
the microseconds, but are not doing the same on the other side of the
calculation.




Also, is there any way to get the equiv of date +%s%N as a numeric or a
double precision?


Not exactly. PostgreSQL has resolution to the microsecond, not the
nanosecond. But to get the correct number of digits, just cast the
following as needed for your application:


extract(epoch from now())*10^9
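
For instance (a sketch; note that extract(epoch ...) returns double
precision in 9.4, so the low-order digits are approximate, and the last
three will always be zero given microsecond resolution):

select (extract(epoch from now())::numeric * 1000000000)::numeric(19,0);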


Cheers,
Steve





Re: [GENERAL] now() vs 'epoch'::timestamp

2015-04-01 Thread James Cloos
> "SF" == Stephen Frost  writes:

>> (now()::timestamp without time zone - 'epoch'::timestamp without time 
>> zone)::reltime::integer
>> (Incidently, is there a better way in 9.4?)

SF> Uh, select extract('epoch' from now()); ?

Thanks.  Back when I first needed it (years ago), that long line was the
only recommendation I could find.

SF> The only thing I'd say about this is that you *really* want to use
SF> timestamptz in PG for storing timestamps.

I normally do, but this table hasn't changed in years.  Maybe 7.4 or so
was current when it started.

Thanks!

-JimC
-- 
James Cloos  OpenPGP: 0x997A9F17ED7DAEA6




Re: [GENERAL] Would like to know how analyze works technically

2015-04-01 Thread TonyS
On Wed, April 1, 2015 9:35 am, Kevin Grittner-5 [via PostgreSQL] wrote:
>

>
> TonyS  wrote:
>
>
>> The postgresql log has these entries at the crash point:
>> 2015-04-01 06:24:37 EDT LOG:  server process (PID 1384) was terminated
>> by signal 9: Killed 2015-04-01 06:24:38 EDT DETAIL:  Failed process was
>> running: analyze verbose;
>>
>
> That was almost certainly the action of the OS's Out Of Memory
> Killer process.
>
>
>> Is there anything else that would be helpful?
>>
>
> Unfortunately, leaving the OOM killer enabled causes the best
> evidence to be destroyed.  If you disable the OOM killer and run this
> again, when memory is exhausted the database process attempting to
> allocate memory will dump a map of where its memory was allocated.  That
> should give us something to work with regarding the cause.  Try:
>
> vm.overcommit_memory = 2 vm.overcommit_ratio = 80
>

The analyze function has crashed again while the overcommit entries were
as above. The last bit of the PostgreSQL log shows:
  MdSmgr: 41934848 total in 14 blocks; 639936 free (0 chunks); 41294912 used
  ident parser context: 0 total in 0 blocks; 0 free (0 chunks); 0 used
  hba parser context: 7168 total in 3 blocks; 2288 free (1 chunks); 4880 used
  LOCALLOCK hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  Timezones: 83472 total in 2 blocks; 3744 free (0 chunks); 79728 used
  ErrorContext: 8192 total in 1 blocks; 8160 free (6 chunks); 32 used
2015-04-01 14:23:27 EDT ERROR:  out of memory
2015-04-01 14:23:27 EDT DETAIL:  Failed on request of size 80.
2015-04-01 14:23:27 EDT STATEMENT:  analyze verbose;

From syslog, leading up to the crash:
Apr  1 06:24:37 postgrestest kernel: [26692.691936] [ 1434]  1000  1434  26408  70  53  197  0 sshd
Apr  1 06:24:37 postgrestest kernel: [26692.691939] [ 1435]  1000  1435  55930  16  452  0 bash
Apr  1 06:24:37 postgrestest kernel: [26692.691943] [ 1449]  1000  1449  6204  81  17  31  0 top
Apr  1 06:24:37 postgrestest kernel: [26692.691947] Out of memory: Kill process 1384 (postgres) score 790 or sacrifice child
Apr  1 06:24:37 postgrestest kernel: [26692.693667] Killed process 1384 (postgres) total-vm:10044764kB, anon-rss:6963780kB, file-rss:202916kB
Apr  1 06:25:02 postgrestest CRON[1755]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))










Re: [GENERAL] Would like to know how analyze works technically

2015-04-01 Thread Bill Moran
On Wed, 1 Apr 2015 10:47:32 -0700 (MST)
TonyS  wrote:

> On Wed, April 1, 2015 12:30 pm, Igor Neyman [via PostgreSQL] wrote:
> >>
> >> TonyS <[hidden email]>
> >> wrote:
> >>
> >>
> >
> > name,current_setting,source
> > autovacuum,off,configuration file
> > synchronous_commit,off,configuration file
> > TimeZone,localtime,configuration file
> > unix_socket_directories,/var/run/postgresql,configuration file
> > wal_buffers,8MB,configuration file
> > work_mem,1536MB,configuration file
> >
> >
> > ---
> >
> >
> > work_mem,1536MB,configuration file
> >
> >
> >
> > IIRC, your RAM is 8GB.  Your work_mem is too high.  Actual memory used
> > for sorting, etc... could be multiples of work_mem setting.
> >
> > That could be the reason for your memory problems.  I'd suggest to set it
> > to 16MB, and see if you can avoid "on disk" sorting.  If not - gradually
> > increase work_mem.
> >
> > Regards,
> >
> >
> > Igor Neyman
> >
> 
> 
> Thanks Igor,
> 
> I will try changing that. I pretty much just let pgtune set all of those
> values for me.

If pgtune set 1.5G of work_mem, then someone should file a bug report.

-- 
Bill Moran




Re: [GENERAL] now() vs 'epoch'::timestamp

2015-04-01 Thread Tom Lane
James Cloos  writes:
> I've for some time used:
>(now()::timestamp without time zone - 'epoch'::timestamp without time 
> zone)::reltime::integer

> to get the current seconds since the epoch.  The results are consistent
> with date +%s.

> (Incidentally, is there a better way in 9.4?)

> But I found the 'epoch'::timestamp + $THAT_VALUE::reltime was off.

> I consistently get 1970-01-01 06:00 plus a fraction of a second from:

> select now() -  ((now()::timestamp without time zone - 'epoch'::timestamp 
> without time zone)::reltime::integer)::reltime;

"reltime" doesn't have fractional-second precision, so you lose whatever
part of the original timestamp difference was fractional.

"reltime" is deprecated too, and will go away someday (probably long
before this calculation starts to overflow an int, in 2038), so you
really don't want to be using it.
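
A reltime-free way to do the same round trip, for instance (a sketch,
truncating to whole seconds as the original does):

select 'epoch'::timestamptz
       + floor(extract(epoch from now()))::int * interval '1 second';
-- gives now() truncated to whole seconds, with no deprecated types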

regards, tom lane




Re: [GENERAL] now() vs 'epoch'::timestamp

2015-04-01 Thread Stephen Frost
James,

* James Cloos (cl...@jhcloos.com) wrote:
> I've for some time used:
> 
>(now()::timestamp without time zone - 'epoch'::timestamp without time 
> zone)::reltime::integer
> 
> to get the current seconds since the epoch.  The results are consistent
> with date +%s.
> 
> (Incidentally, is there a better way in 9.4?)

Uh, select extract('epoch' from now()); ?

> But I found the 'epoch'::timestamp + $THAT_VALUE::reltime was off.
> 
> I consistently get 1970-01-01 06:00 plus a fraction of a second from:
> 
> select now() -  ((now()::timestamp without time zone - 'epoch'::timestamp 
> without time zone)::reltime::integer)::reltime;
> 
> The machines on which I've tried it all have localtime == UTC.
> 
> Am I missing something obvious?

The only thing I'd say about this is that you *really* want to use
timestamptz in PG for storing timestamps.

> Also, is there any way to get the equiv of date +%s%N as a numeric or a
> double precision?

See above.

Thanks!

Stephen




[GENERAL] now() vs 'epoch'::timestamp

2015-04-01 Thread James Cloos
I've for some time used:

   (now()::timestamp without time zone - 'epoch'::timestamp without time 
zone)::reltime::integer

to get the current seconds since the epoch.  The results are consistent
with date +%s.

(Incidentally, is there a better way in 9.4?)

But I found the 'epoch'::timestamp + $THAT_VALUE::reltime was off.

I consistently get 1970-01-01 06:00 plus a fraction of a second from:

select now() -  ((now()::timestamp without time zone - 'epoch'::timestamp 
without time zone)::reltime::integer)::reltime;

The machines on which I've tried it all have localtime == UTC.

Am I missing something obvious?

Also, is there any way to get the equiv of date +%s%N as a numeric or a
double precision?

-JimC
-- 
James Cloos  OpenPGP: 0x997A9F17ED7DAEA6




Re: [GENERAL] Would like to know how analyze works technically

2015-04-01 Thread TonyS
On Wed, April 1, 2015 12:30 pm, Igor Neyman [via PostgreSQL] wrote:
>>
>> TonyS <[hidden email]>
>> wrote:
>>
>>
>
> name,current_setting,source
> autovacuum,off,configuration file
> synchronous_commit,off,configuration file
> TimeZone,localtime,configuration file
> unix_socket_directories,/var/run/postgresql,configuration file
> wal_buffers,8MB,configuration file
> work_mem,1536MB,configuration file
>
>
> ---
>
>
> work_mem,1536MB,configuration file
>
>
>
> IIRC, your RAM is 8GB.  Your work_mem is too high.  Actual memory used
> for sorting, etc... could be multiples of work_mem setting.
>
> That could be the reason for your memory problems.  I'd suggest to set it
> to 16MB, and see if you can avoid "on disk" sorting.  If not - gradually
> increase work_mem.
>
> Regards,
>
>
> Igor Neyman
>


Thanks Igor,

I will try changing that. I pretty much just let pgtune set all of those
values for me.







Re: [GENERAL] Would like to know how analyze works technically

2015-04-01 Thread FarjadFarid(ChkNet)
Tony,

Before assuming that this is a bug/problem with PostgreSQL, and therefore out
of your control, it is best to ensure there is no problem with your setup. It
will save you a bundle of time.

I have several development systems, but I regularly use PostgreSQL DBs with
an SSD on a portable (slow CPU) machine, with several DBs *each* holding over
6 million records in just one table and several hundred tables, and I still
don't get any problems.

It would be good to read your post on the final solution.


From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of FarjadFarid(ChkNet)
Sent: 01 April 2015 17:14
To: 'TonyS'; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Would like to know how analyze works technically

 

 

Because your system has crashed, I would check everything from the bottom up.

The duration of the operation and the memory usage suggest it may be hitting
a recurring problem.

First, do a simple check on your hard disk, making sure it is healthy and
there are no dangling indexes.

Then rebuild the whole database, this time ensuring *your indexing is much
tighter than before*, also as a test to be less dependent on memory. I
personally use SSDs for my DBs; it saves a lot of time. The other advantage
of an SSD is *the relative* lack of performance degradation due to
fragmentation of data.

Once you are happy with this setup, then try to optimise the performance.
You might find that a few good SSDs will more than pay for themselves in
terms of the cost of your team's time.

Otherwise, indexing shouldn't take so long.

Hope this helps.

Good luck.

Best Regards

Farjad

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of TonyS
Sent: 01 April 2015 14:46
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Would like to know how analyze works technically

 

On Wed, April 1, 2015 9:25 am, FarjadFarid(ChkNet) [via PostgreSQL] wrote: 
> 


> 
> It sounds like your system had crashed several times. 
> 
> 
> My suggestion would be first to ensure that your tables and indexes are not
> corrupted.
>
> The second suggestion is to ensure your index tightly represents the data
> you are accessing.  The tighter it is, the faster the response time, and the
> less memory and CPU usage.
>
> Of course these are basics for any good DB, but they are essential before
> moving to more complex issues.
> 
> 
> 
> -Original Message- 
> From: [hidden email] 
> [mailto:[hidden email]] On Behalf Of Bill Moran 
> Sent: 01 April 2015 13:48 
> To: TonyS 
> Cc: [hidden email] 
> Subject: Re: [GENERAL] Would like to know how analyze works technically 
> 
> 
> On Wed, 1 Apr 2015 04:33:07 -0700 (MST) 
> TonyS <[hidden email]> wrote: 
> 
> 
>> On Wed, April 1, 2015 12:18 am, Tom Lane-2 [via PostgreSQL] wrote: 
>> 
>>> 
>>> TonyS <[hidden email]> writes: 
>>> 
>>> 
 Running "analyze verbose;" and watching top, the system starts out 
 using no swap data and about 4GB of cached memory and about 1GB of 
 used memory. As it runs, the amount of used RAM climbs, and 
 eventually the used swap memory increases to 100% and after being at 
 that level for a couple of minutes, the analyze function crashes and 
 indicates "server closed the connection unexpectedly." 
>>> 


Thanks for the suggestion. What command/tool do you use to check a 
PostgreSQL database for corruption? 




Re: [GENERAL] Would like to know how analyze works technically

2015-04-01 Thread Igor Neyman


From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of TonyS
Sent: Wednesday, April 01, 2015 12:15 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Would like to know how analyze works technically

On Wed, April 1, 2015 9:35 am, Kevin Grittner-5 [via PostgreSQL] wrote:
>

>
> TonyS <[hidden email]> wrote:
>
>
>> The postgresql log has these entries at the crash point:
>> 2015-04-01 06:24:37 EDT LOG:  server process (PID 1384) was terminated
>> by signal 9: Killed 2015-04-01 06:24:38 EDT DETAIL:  Failed process was
>> running: analyze verbose;
>>
>
> That was almost certainly the action of the OS's Out Of Memory
> Killer process.
>
>
>> Is there anything else that would be helpful?
>>
>
> Unfortunately, leaving the OOM killer enabled causes the best
> evidence to be destroyed.  If you disable the OOM killer and run this
> again, when memory is exhausted the database process attempting to
> allocate memory will dump a map of where its memory was allocated.  That
> should give us something to work with regarding the cause.  Try:
>
> vm.overcommit_memory = 2 vm.overcommit_ratio = 80
>
> Also, it would be useful to see the output of this:
>
>
> SELECT name, current_setting(name), source
> FROM pg_settings
> WHERE source NOT IN ('default', 'override');
>
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>

Thanks for the follow up Kevin.

I made the changes to overcommit and started running the analyze operation
again about 2 hours ago, so if it stays true to form, it should be
crashing in about 5-1/2 hours.

The output from the query you suggested is:

name,current_setting,source
autovacuum,off,configuration file
checkpoint_completion_target,0.9,configuration file
checkpoint_segments,16,configuration file
checkpoint_timeout,1h,configuration file
client_encoding,WIN1252,session
default_statistics_target,10,configuration file
default_text_search_config,pg_catalog.english,configuration file
effective_cache_size,5132MB,configuration file
external_pid_file,/var/run/postgresql/9.3-main.pid,configuration file
fsync,off,configuration file
lc_messages,en_US.UTF-8,configuration file
lc_monetary,en_US.UTF-8,configuration file
lc_numeric,en_US.UTF-8,configuration file
lc_time,en_US.UTF-8,configuration file
listen_addresses,*,configuration file
log_line_prefix,%t ,configuration file
log_timezone,localtime,configuration file
maintenance_work_mem,480MB,configuration file
max_connections,5,configuration file
max_locks_per_transaction,512,configuration file
max_stack_depth,2MB,environment variable
port,5432,configuration file
shared_buffers,1920MB,configuration file
ssl,on,configuration file
ssl_cert_file,/etc/ssl/certs/ssl-cert-snakeoil.pem,configuration file
ssl_key_file,/etc/ssl/private/ssl-cert-snakeoil.key,configuration file
synchronous_commit,off,configuration file
TimeZone,localtime,configuration file
unix_socket_directories,/var/run/postgresql,configuration file
wal_buffers,8MB,configuration file
work_mem,1536MB,configuration file


---

work_mem,1536MB,configuration file


IIRC, your RAM is 8GB.  Your work_mem is far too high: the actual memory used
for sorting, etc. can be a multiple of the work_mem setting (each sort or
hash node in each backend can use that much).

That could be the reason for your memory problems.  I'd suggest setting it to
16MB and seeing if you can avoid "on disk" sorting.  If not, gradually
increase work_mem.
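
One way to check for on-disk sorts (a sketch; the table and column names are
hypothetical):

SET work_mem = '16MB';
EXPLAIN ANALYZE SELECT * FROM some_big_table ORDER BY some_column;
-- look for "Sort Method: external merge  Disk: ..." (spilled to disk)
-- versus "Sort Method: quicksort  Memory: ..." (fit within work_mem)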

Regards,

Igor Neyman


Re: [GENERAL] Would like to know how analyze works technically

2015-04-01 Thread TonyS
On Wed, April 1, 2015 9:35 am, Kevin Grittner-5 [via PostgreSQL] wrote:
>

>
> TonyS  wrote:
>
>
>> The postgresql log has these entries at the crash point:
>> 2015-04-01 06:24:37 EDT LOG:  server process (PID 1384) was terminated
>> by signal 9: Killed 2015-04-01 06:24:38 EDT DETAIL:  Failed process was
>> running: analyze verbose;
>>
>
> That was almost certainly the action of the OS's Out Of Memory
> Killer process.
>
>
>> Is there anything else that would be helpful?
>>
>
> Unfortunately, leaving the OOM killer enabled causes the best
> evidence to be destroyed.  If you disable the OOM killer and run this
> again, when memory is exhausted the database process attempting to
> allocate memory will dump a map of where its memory was allocated.  That
> should give us something to work with regarding the cause.  Try:
>
> vm.overcommit_memory = 2 vm.overcommit_ratio = 80
>
> Also, it would be useful to see the output of this:
>
>
> SELECT name, current_setting(name), source
> FROM pg_settings
> WHERE source NOT IN ('default', 'override');
>
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>

Thanks for the follow up Kevin.

I made the changes to overcommit and started running the analyze operation
again about 2 hours ago, so if it stays true to form, it should be
crashing in about 5-1/2 hours.

The output from the query you suggested is:

name,current_setting,source
autovacuum,off,configuration file
checkpoint_completion_target,0.9,configuration file
checkpoint_segments,16,configuration file
checkpoint_timeout,1h,configuration file
client_encoding,WIN1252,session
default_statistics_target,10,configuration file
default_text_search_config,pg_catalog.english,configuration file
effective_cache_size,5132MB,configuration file
external_pid_file,/var/run/postgresql/9.3-main.pid,configuration file
fsync,off,configuration file
lc_messages,en_US.UTF-8,configuration file
lc_monetary,en_US.UTF-8,configuration file
lc_numeric,en_US.UTF-8,configuration file
lc_time,en_US.UTF-8,configuration file
listen_addresses,*,configuration file
log_line_prefix,%t ,configuration file
log_timezone,localtime,configuration file
maintenance_work_mem,480MB,configuration file
max_connections,5,configuration file
max_locks_per_transaction,512,configuration file
max_stack_depth,2MB,environment variable
port,5432,configuration file
shared_buffers,1920MB,configuration file
ssl,on,configuration file
ssl_cert_file,/etc/ssl/certs/ssl-cert-snakeoil.pem,configuration file
ssl_key_file,/etc/ssl/private/ssl-cert-snakeoil.key,configuration file
synchronous_commit,off,configuration file
TimeZone,localtime,configuration file
unix_socket_directories,/var/run/postgresql,configuration file
wal_buffers,8MB,configuration file
work_mem,1536MB,configuration file








Re: [GENERAL] Would like to know how analyze works technically

2015-04-01 Thread FarjadFarid(ChkNet)
 

Because your system has crashed, I would check everything from the bottom up.

The duration of the operation and the memory usage suggest it may be hitting
a recurring problem.

First, do a simple check on your hard disk, making sure it is healthy and
there are no dangling indexes.

Then rebuild the whole database, this time ensuring *your indexing is much
tighter than before*, also as a test to be less dependent on memory. I
personally use SSDs for my DBs; it saves a lot of time. The other advantage
of an SSD is *the relative* lack of performance degradation due to
fragmentation of data.

Once you are happy with this setup, then try to optimise the performance.
You might find that a few good SSDs will more than pay for themselves in
terms of the cost of your team's time.

Otherwise, indexing shouldn't take so long.

Hope this helps.

Good luck.

Best Regards

Farjad

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of TonyS
Sent: 01 April 2015 14:46
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Would like to know how analyze works technically

 

On Wed, April 1, 2015 9:25 am, FarjadFarid(ChkNet) [via PostgreSQL] wrote: 
> 


> 
> It sounds like your system had crashed several times. 
> 
> 
> My suggestion would be first to ensure that your tables and indexes are not
> corrupted.
>
> The second suggestion is to ensure your index tightly represents the data
> you are accessing.  The tighter it is, the faster the response time, and the
> less memory and CPU usage.
>
> Of course these are basics for any good DB, but they are essential before
> moving to more complex issues.
> 
> 
> 
> -Original Message- 
> From: [hidden email] 
> [mailto:[hidden email]] On Behalf Of Bill Moran 
> Sent: 01 April 2015 13:48 
> To: TonyS 
> Cc: [hidden email] 
> Subject: Re: [GENERAL] Would like to know how analyze works technically 
> 
> 
> On Wed, 1 Apr 2015 04:33:07 -0700 (MST) 
> TonyS <[hidden email]> wrote: 
> 
> 
>> On Wed, April 1, 2015 12:18 am, Tom Lane-2 [via PostgreSQL] wrote: 
>> 
>>> 
>>> TonyS <[hidden email]> writes: 
>>> 
>>> 
 Running "analyze verbose;" and watching top, the system starts out 
 using no swap data and about 4GB of cached memory and about 1GB of 
 used memory. As it runs, the amount of used RAM climbs, and 
 eventually the used swap memory increases to 100% and after being at 
 that level for a couple of minutes, the analyze function crashes and 
 indicates "server closed the connection unexpectedly." 
>>> 


Thanks for the suggestion. What command/tool do you use to check a 
PostgreSQL database for corruption? 






[GENERAL] Partitioning using geometry

2015-04-01 Thread Rémi Cura
Hey dear list,

I'd like to partition geographical (geometry) data with the Postgres
partitioning mechanism.
(My usage is in fact related to pointcloud, but I use geometry as a
workaround.)
From the examples I have read on constraints, nothing should prevent it from
working. Here is a self-contained example; the planner doesn't seem to use
the constraint_exclusion mechanism, whatever the constraint.

Thanks,
Cheers,
Rémi-C

--

CREATE SCHEMA IF NOT EXISTS test_partitionning;
SET search_path TO test_partitionning, public ;

DROP TABLE IF  EXISTS test_father CASCADE;
CREATE TABLE test_father  (
gid SERIAL PRIMARY KEY
, geom geometry
);

create table test_child_1 (
check (geometry_overlaps(geom,ST_Expand(ST_MakePoint(10,10),10  ) ) )
,check ( geom&&ST_Expand(ST_MakePoint(10,10),10  ) )
, CHECK (ST_X(geom) BETWEEN 0 AND 20)
, CHECK (ST_Y(geom) BETWEEN 0 AND 20)
, CHECK (  ST_Intersects(geom, ST_Expand(ST_MakePoint(10,10),10  ))  )
) inherits (test_father);
--CREATE INDEX ON test_child_1 USING GIST(geom);

create table test_child_2 (
check (geometry_overlaps(geom,ST_Expand(ST_MakePoint(30,10),10  ) ) )
,check ( geom&&ST_Expand(ST_MakePoint(30,10),10  ) )
, CHECK (ST_X(geom) BETWEEN 20 AND 40)
, CHECK (ST_Y(geom) BETWEEN 0 AND 20)
, CHECK (  ST_Intersects(geom, ST_Expand(ST_MakePoint(30,10),10  ))  )
) inherits (test_father);
--CREATE INDEX ON test_child_2 USING GIST(geom);


INSERT INTO test_child_1 (geom)
SELECT ST_MakePoint(s1/10.0+random(),s2/10.0+random())
FROM generate_series(1,90) AS s1, generate_series(1,90) AS s2;

INSERT INTO test_child_2 (geom)
SELECT ST_MakePoint(s1/10.0+random(),s2/10.0+random())
FROM generate_series(200,300) AS s1, generate_series(1,90) AS s2;


SHOW constraint_exclusion;
SET constraint_exclusion TO partition;


WITH area_of_interest AS (
SELECT ST_Buffer(ST_MakePoint(5,5),1) as buf
)
SELECT *
FROM area_of_interest, test_father
WHERE  -- geom && buf
ST_X(geom) BETWEEN ST_XMin(buf) AND ST_Xmax(buf)
AND ST_Y(geom) BETWEEN ST_YMin(buf) AND ST_Ymax(buf) ;


SELECT *
FROM  test_father , ST_Buffer(ST_MakePoint(5,5),1) as buf
WHERE
ST_X(geom) BETWEEN ST_XMin(buf) AND ST_Xmax(buf)
AND ST_Y(geom) BETWEEN ST_YMin(buf) AND ST_Ymax(buf);
--
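
If I understand constraint exclusion correctly, the likely reason the
children are not excluded here is that the planner can only refute a child's
CHECK constraint when the query compares the same expressions against
constants, not against values produced by a join (such as buf above). A form
it should be able to exclude, as a sketch against the tables above:

SET constraint_exclusion TO partition;
EXPLAIN
SELECT *
FROM test_father
WHERE ST_X(geom) BETWEEN 4 AND 6
  AND ST_Y(geom) BETWEEN 4 AND 6;
-- expected: only test_child_1 (plus the empty parent) is scanned, since the
-- constants refute test_child_2's CHECK (ST_X(geom) BETWEEN 20 AND 40)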


Re: [GENERAL] stack builder

2015-04-01 Thread Adrian Klaver

On 04/01/2015 08:21 AM, Ramesh T wrote:

I had installed on Linux. Should I re-install Postgres to get Stack Builder?
cron is there, but I am new to cron; is it the method for job scheduling in
Postgres?


Again, how did you install Postgres?



On Wed, Apr 1, 2015 at 8:39 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

On 04/01/2015 07:59 AM, Ramesh T wrote:

Hi,
   I had installed pgadmin3 but not selected StackBuilder; let me know how to
add StackBuilder to pgadmin3 for additional add-ons.


You cannot, StackBuilder is not part of pgAdmin3, it is another
application entirely.

How did you install Postgres and what OS?


On Wed, Apr 1, 2015 at 7:15 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

 On 04/01/2015 06:22 AM, Ramesh T wrote:

 Hi all,
   I didn't select Stack Builder at the time of installation of Postgres.
 Any help on how to add *application stack builder* to an existing Postgres?



 Assuming you are talking about an install done using the
EDB installer:


http://www.enterprisedb.com/docs/en/9.3/pginstguide/Table%20of%20Contents.htm


 4 Using Stack Builder


 --
 Adrian Klaver
adrian.kla...@aklaver.com




--
Adrian Klaver
adrian.kla...@aklaver.com 





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] stack builder

2015-04-01 Thread Adrian Klaver

On 04/01/2015 07:59 AM, Ramesh T wrote:

   Hi,
  I had installed pgadmin3 but not selected StackBuilder; let me know how to
  add StackBuilder to pgadmin3 for additional add-ons.


You cannot, StackBuilder is not part of pgAdmin3, it is another 
application entirely.


How did you install Postgres and what OS?



On Wed, Apr 1, 2015 at 7:15 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

On 04/01/2015 06:22 AM, Ramesh T wrote:

Hi all,
  I didn't select Stack Builder at the time of installation of Postgres.
Any help on how to add *application stack builder* to an existing Postgres?



Assuming you are talking about an install done using the EDB installer:


http://www.enterprisedb.com/docs/en/9.3/pginstguide/Table%20of%20Contents.htm

4 Using Stack Builder


--
Adrian Klaver
adrian.kla...@aklaver.com 





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] partitioning expert : Partitioning with specialization of one column type

2015-04-01 Thread Rémi Cura
Okay,
thanks for the answer.
I take it that it is "do it differently then".

I might have an idea but it is not working yet, so I'll post another
message.

Thanks dear co-list user ^^
Cheers,
Rémi-C


2015-04-01 16:56 GMT+02:00 David G. Johnston :

> On Wed, Apr 1, 2015 at 7:26 AM, Rémi Cura  wrote:
>
>> pcpatch(n) are specialization of pcpatch type.
>>
>
> While this may be true, PostgreSQL doesn't understand "specialization" of
> data types.  If you cannot do as Steven suggested you do not get to
> leverage inheritance directly and will need to devise your own work-arounds
> for the capabilities that you require.
>
> I'm somewhat surprised that what you describe works at all - mainly given
> that functions do not allow typemod specifications to pass through the
> function invocation...but alas I'm not all that familiar with PostGIS and
> whatever is being done does indeed seem to be working...
>
> David J.
>


Re: [GENERAL] partitioning expert : Partitioning with specialization of one column type

2015-04-01 Thread David G. Johnston
On Wed, Apr 1, 2015 at 7:26 AM, Rémi Cura  wrote:

> pcpatch(n) are specialization of pcpatch type.
>

While this may be true, PostgreSQL doesn't understand "specialization" of
data types.  If you cannot do as Steven suggested you do not get to
leverage inheritance directly and will need to devise your own work-arounds
for the capabilities that you require.

I'm somewhat surprised that what you describe works at all - mainly given
that functions do not allow typemod specifications to pass through the
function invocation...but alas I'm not all that familiar with PostGIS and
whatever is being done does indeed seem to be working...

David J.


Re: [GENERAL] partitioning expert : Partitioning with specialization of one column type

2015-04-01 Thread Rémi Cura
Hm, sorry, I wasn't very clear.

Child tables already have the types pcpatch(3), pcpatch(4), etc.
This is the normal behaviour and can't be changed.

pcpatch(n) is a specialization of the pcpatch type.

pcpatch(3) is widely different from pcpatch(4).

For those who know PostGIS: pcpatch is like "geometry", pcpatch(3) is like
"geometry(point)", and pcpatch(5) is like "geometry(polygon)",
for instance.

Cheers,
Rémi-C


2015-04-01 16:21 GMT+02:00 Steven Erickson :

>  Your child could be:
>
>
>
> CREATE TABLE child_1 (
>
> patch pcpatch(n)
>
> ) INHERITS (father);
>
>
>
>
>
>
>
> *From:* pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org] *On Behalf Of *Rémi Cura
> *Sent:* Wednesday, April 01, 2015 8:41 AM
> *To:* PostgreSQL General
> *Subject:* [GENERAL] partitioning expert : Partitioning with
> specialization of one column type
>
>
>
> Hey, postgres 9.3 here.
>
> for the partitioning experts.
>
> I have several table child with columns (gid serial, patch pcpatch(n))
>
> where n may change depending on the tables, and pcpatch is a type from
> pgpointcloud extension
>
> (here is the definition)
>
> -
> CREATE TYPE pcpatch
>(INPUT=pcpatch_in,
>OUTPUT=pcpatch_out,
>RECEIVE=-,
>SEND=-,
>TYPMOD_IN=pc_typmod_in,
>TYPMOD_OUT=pc_typmod_out,
>ANALYZE=-,
>CATEGORY='U', DEFAULT='',
>INTERNALLENGTH=-1, ALIGNMENT=int4, STORAGE=MAIN);
> -
>
> The question is, how do I create an inheritance scheme?
>
> If the father table is
>
> CREATE TABLE father(
>
> gid serial,
>
> patch pcpatch
>
> ) ;
>
> trying to put child_1, child_2 ... to inherit father raise an error
>
> 
> child table "test_child_1" has different type for column "patch"
> 
>
> So my question is, how would it be possible (if at all) to inherit of
> father table, while specializing the type of father table in child table?
>
> Thanks,
>
> Cheers,
>
> Rémi-C
>
> --
> NOTICE: This email message is for the sole use of the intended
> recipient(s) and may contain confidential and privileged information. Any
> unauthorized use, disclosure or distribution is prohibited. If you are not
> the intended recipient, please contact the sender by reply email and
> destroy all copies of the original message.
>
>


Re: [GENERAL] partitioning expert : Partitioning with specialization of one column type

2015-04-01 Thread Steven Erickson
Your child could be:

CREATE TABLE child_1 (
patch pcpatch(n)
) INHERITS (father);



From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Rémi Cura
Sent: Wednesday, April 01, 2015 8:41 AM
To: PostgreSQL General
Subject: [GENERAL] partitioning expert : Partitioning with specialization of one
column type

Hey, postgres 9.3 here.
For the partitioning experts.
I have several child tables with columns (gid serial, patch pcpatch(n)),
where n may change depending on the table, and pcpatch is a type from the
pgpointcloud extension
(here is the definition):
-
CREATE TYPE pcpatch
   (INPUT=pcpatch_in,
   OUTPUT=pcpatch_out,
   RECEIVE=-,
   SEND=-,
   TYPMOD_IN=pc_typmod_in,
   TYPMOD_OUT=pc_typmod_out,
   ANALYZE=-,
   CATEGORY='U', DEFAULT='',
   INTERNALLENGTH=-1, ALIGNMENT=int4, STORAGE=MAIN);
-
The question is, how do I create an inheritance scheme?

If the father table is

CREATE TABLE father(
gid serial,
patch pcpatch
) ;
Trying to make child_1, child_2 ... inherit from father raises an error:


child table "test_child_1" has different type for column "patch"

So my question is: how would it be possible (if at all) to inherit from the
father table while specializing the column type in the child tables?
Thanks,
Cheers,
Rémi-C


NOTICE: This email message is for the sole use of the intended recipient(s) and 
may contain confidential and privileged information. Any unauthorized use, 
disclosure or distribution is prohibited. If you are not the intended 
recipient, please contact the sender by reply email and destroy all copies of 
the original message.


Re: [GENERAL] How to diagnose max_locks_per_transaction is about to be exhausted?

2015-04-01 Thread Alexey Bashtanov

On 01.04.2015 17:09, Kevin Grittner wrote:


I think the "distinct" reference is because if one connection takes
out the same lock on the same object (e.g., a table) you don't need
to count it more than once.  The pg_locks view only shows it once
anyway.
Hmm, pg_locks surely shows two lines for locks on the same object
when the pids differ, or when one is granted and the other is not.


Alexey Bashtanov




Re: [GENERAL] How to diagnose max_locks_per_transaction is about to be exhausted?

2015-04-01 Thread Kevin Grittner
Alexey Bashtanov  wrote:

> max_locks_per_transaction enforces some restriction:
> the_locks_count <= the_threshold
>
> the_threshold is max_locks_per_transaction * (max_connections +
> max_prepared_transactions), the documentation is quite clear.
>
> What is the_locks_count?
> In docs we see "no more than this many distinct objects can be
> locked at any one time".
> What kinds of locks should we include? does the limit really
> restricts *distinct* objects count and what is the distinction
> key for every lock kind?
>
> What should one change in the following query to make it show the
> value limited by the_threshold?
>
> select
> count(distinct relation) + count(distinct (classid, objid))
>   from pg_locks

I think the "distinct" reference is because if one connection takes
out the same lock on the same object (e.g., a table) you don't need
to count it more than once.  The pg_locks view only shows it once
anyway.  The view does contain both locks limited by
max_locks_per_transaction and those limited by
max_pred_locks_per_transaction.  I think you will get the number
you are looking for simply by excluding the latter from a count of
all rows in the view:

select count(*) from pg_locks where mode <> 'SIReadLock';
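
For monitoring, something along these lines should show the count next to
the threshold derived above (a sketch, using the formula from the original
question):

select count(*) as locks_held,
       current_setting('max_locks_per_transaction')::int
         * (current_setting('max_connections')::int
            + current_setting('max_prepared_transactions')::int) as threshold
from pg_locks
where mode <> 'SIReadLock';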

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] Creating a non-strict custom aggregate that initializes to the first value

2015-04-01 Thread Tom Lane
"David G. Johnston"  writes:
> On Thu, Mar 26, 2015 at 1:49 PM, Timothy Garnett 
> wrote:
>> but if that is declared strict then it would take the first non-null value
>> and return A in my second example, if declared non-strict then the initial
>> state would be fed as null rather then the first value. Is there a way to
>> declare the function non-strict (so that null values are passed) but still
>> have it initialize to the first value like it would if it was strict?

> You want NULL to both mean "not initialized" and "unknown value" which is
> impossible and SQL does not provide any other universal literal that means
> one or the other.

Yeah.  You need distinct representations for "nothing seen yet" and "saw a
NULL"; the built-in behavior doesn't suffice for this.

One idea is for the state value to be of anyarray type: initially null,
and a one-element array containing the first input value once you've seen
that.
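
A sketch of that array-state idea (hypothetical names; the aggregate returns
the first value seen, NULLs included):

CREATE FUNCTION first_acc(state anyarray, val anyelement)
RETURNS anyarray LANGUAGE sql IMMUTABLE AS
$$ SELECT CASE WHEN state IS NULL THEN ARRAY[val] ELSE state END $$;

CREATE FUNCTION first_final(state anyarray)
RETURNS anyelement LANGUAGE sql IMMUTABLE AS
$$ SELECT state[1] $$;

CREATE AGGREGATE first_val(anyelement) (
    SFUNC = first_acc,
    STYPE = anyarray,
    FINALFUNC = first_final
);

-- first_val(x ORDER BY ts) now yields the first x, even when it is NULL;
-- but, per the caveat below, zero input rows also yield NULL.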

It strikes me though that this aggregate is ill-defined by nature.  In
particular, if you're going to treat NULL as being a real data value,
then what're you gonna return when there were no input rows?  You won't be
able to distinguish "no input rows" from "first input row had a NULL".
Maybe you should rethink whatever activity you were wanting it for.

regards, tom lane




Re: [GENERAL] Would like to know how analyze works technically

2015-04-01 Thread TonyS
On Wed, April 1, 2015 9:25 am, FarjadFarid(ChkNet) [via PostgreSQL] wrote:
>

>
> It sounds like your system had crashed several times.
>
>
> My suggestion would be first to ensure that your tables and indexes are not
> corrupted.
>
> The second suggestion is to ensure your index tightly represents the data
> you are accessing.  The tighter it is, the faster the response time, and the
> less memory and CPU usage.
>
> Of course these are basics for any good DB, but they are essential before
> moving to more complex issues.
>
>
>
> -Original Message-
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Bill Moran
> Sent: 01 April 2015 13:48
> To: TonyS
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Would like to know how analyze works technically
>
>
> On Wed, 1 Apr 2015 04:33:07 -0700 (MST)
> TonyS  wrote:
>
>
>> On Wed, April 1, 2015 12:18 am, Tom Lane-2 [via PostgreSQL] wrote:
>>
>>>
>>> TonyS  writes:
>>>
>>>
 Running "analyze verbose;" and watching top, the system starts out
 using no swap data and about 4GB of cached memory and about 1GB of
 used memory. As it runs, the amount of used RAM climbs, and
 eventually the used swap memory increases to 100% and after being at
 that level for a couple of minutes, the analyze function crashes and
 indicates "server closed the connection unexpectedly."
>>>

Thanks for the suggestion. What command/tool do you use to check a
PostgreSQL database for corruption?






Re: [GENERAL] stack builder

2015-04-01 Thread Adrian Klaver

On 04/01/2015 06:22 AM, Ramesh T wrote:

Hi all,
 I didn't select Stack Builder at the time of installation of Postgres. Any
help on how to add *application stack builder* to an existing Postgres?




Assuming you are talking about an install done using the EDB installer:

http://www.enterprisedb.com/docs/en/9.3/pginstguide/Table%20of%20Contents.htm

4 Using Stack Builder


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] how to convert float to timestamp in single select query

2015-04-01 Thread Adrian Klaver

On 03/29/2015 08:52 PM, Maulik Shah wrote:

Dear Sir/Madam,

How to convert a float to a timestamp in a single select query?

For example, I have a float such as 1.251152515236,


test=> select to_timestamp(1.251152515236);
 to_timestamp
---
 1969-12-31 16:00:01.251153-08
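
to_timestamp() returns timestamptz, hence the -08 offset above. To see the
same instant in UTC (a quick sketch):

test=> select to_timestamp(1.251152515236) at time zone 'UTC';
 1970-01-01 00:00:01.251153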


The above assumes your value is seconds since epoch, which I am pretty 
sure is not the case here.




I want to convert this to a datetime, and from the datetime to a timestamp,
i.e. 02:22:044456


Well, the above is a time, not a datetime or timestamp, so I am not really
sure what you are after. In other words, more context is needed about where
the value is coming from and what other information is available to anchor
it to a datetime.




I need an SQL query.


It is actually a .crd file made in XML, and I have a query inside the XML like



Re: [GENERAL] Would like to know how analyze works technically

2015-04-01 Thread Bill Moran
On Wed, 1 Apr 2015 06:26:36 -0700 (MST)
TonyS  wrote:

> On Wed, April 1, 2015 8:48 am, Bill Moran [via PostgreSQL] wrote:
> >
> 
>  Running "analyze verbose;" and watching top, the system starts out
>  using no swap data and about 4GB of cached memory and about 1GB of
>  used memory. As it runs, the amount of used RAM climbs, and
>  eventually the used swap memory increases to 100% and after being at
>  that level for a couple of minutes, the analyze function crashes and
>  indicates "server closed the connection unexpectedly."
> >>>
> >>> ANALYZE is supposed to work in a constrained amount of memory, though
> >>>  that amount depends quite a bit on what the data is and what you've
> >>> got the statistics target set to.
> >>>
> >>> We've heard reports that there are memory leaks in (some versions
> >>> of?) PostGIS's analyze support.  Whether that would apply to you would
> >>> depend on whether you're using PostGIS.
> >>>
> >>> Hard to say more without a lot more concrete info about what your
> >>> data is, what PG version you're using, etc.
> >
> > Don't know if I'm on the right track with this, but what is
> > maintenance_work_mem set to on this system?
> >
> 
> Hello Bill,
> 
> maintenance_work_mem is set to 480MB. I haven't changed that from what
> pgtune suggested.

Doesn't seem unreasonable, so my guess isn't right.

-- 
Bill Moran




[GENERAL] partitioning expert : Partitioning with specialization of one column type

2015-04-01 Thread Rémi Cura
Hey, postgres 9.3 here.
For the partitioning experts.

I have several child tables with columns (gid serial, patch pcpatch(n)),
where n may change depending on the table, and pcpatch is a type from the
pgpointcloud extension
(here is the definition):
-
CREATE TYPE pcpatch
   (INPUT=pcpatch_in,
   OUTPUT=pcpatch_out,
   RECEIVE=-,
   SEND=-,
   TYPMOD_IN=pc_typmod_in,
   TYPMOD_OUT=pc_typmod_out,
   ANALYZE=-,
   CATEGORY='U', DEFAULT='',
   INTERNALLENGTH=-1, ALIGNMENT=int4, STORAGE=MAIN);
-

The question is, how do I create an inheritance scheme?

If the father table is

CREATE TABLE father(
gid serial,
patch pcpatch
) ;

Trying to make child_1, child_2 ... inherit from father raises an error:


child table "test_child_1" has different type for column "patch"


So my question is: how would it be possible (if at all) to inherit from the
father table while specializing the column type in the child tables?

Thanks,
Cheers,
Rémi-C


Re: [GENERAL] Would like to know how analyze works technically

2015-04-01 Thread Kevin Grittner
TonyS  wrote:

> The postgresql log has these entries at the crash point:
> 2015-04-01 06:24:37 EDT LOG:  server process (PID 1384) was terminated by 
> signal 9: Killed
> 2015-04-01 06:24:38 EDT DETAIL:  Failed process was running: analyze verbose;

That was almost certainly the action of the OS's Out Of Memory
Killer process.

> Is there anything else that would be helpful?

Unfortunately, leaving the OOM killer enabled causes the best
evidence to be destroyed.  If you disable the OOM killer and run
this again, when memory is exhausted the database process
attempting to allocate memory will dump a map of where its memory
was allocated.  That should give us something to work with
regarding the cause.  Try:

vm.overcommit_memory = 2
vm.overcommit_ratio = 80

Also, it would be useful to see the output of this:

SELECT name, current_setting(name), source
FROM pg_settings
WHERE source NOT IN ('default', 'override');

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] how would you speed up this long query?

2015-04-01 Thread Igor Neyman


-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of zach cruise
Sent: Tuesday, March 31, 2015 2:56 PM
To: Gavin Flower
Cc: PostgreSQL
Subject: Re: [GENERAL] how would you speed up this long query?

> Version of PostgreSQL?
9.3

> Operating system?
win

> Hardware configuration?
8 gb ram. takes about 7000 ms to retrieve about 7000 rows.
max_connections = 200
shared_buffers = 512mb
effective_cache_size = 6gb
work_mem = 13107kb
maintenance_work_mem = 512mb
checkpoint_segments = 32
checkpoint_completion_target = 0.7
wal_buffers = 16mb
default_statistics_target = 100

> Indexes?
no

> Anything else that might be relevant?
no

> What have you already done to investigate?
moved subquery from "where" to "from" to evaluate once instead of once per row

> EXPLAIN ANALYZE output?
would help if you can help us understand what's going on here:
"Group  (cost=5520.89..6335.03 rows=18092 width=199) (actual
time=3864.186..4402.447 rows=5512 loops=1)"
 "  ->  Sort  (cost=5520.89..5566.12 rows=18092 width=199) (actual
time=3864.171..4146.725 rows=97141 loops=1)"
 "Sort Key: z0.zcg, proj.pid, proj.tit, proj.num, detail.cid,
detail.id, det (...)"
 "Sort Method: external merge  Disk: 21648kB"
 "->  Hash Join  (cost=3541.48..4241.51 rows=18092 width=199)
(actual time=254.216..432.629 rows=97141 loops=1)"
 "  Hash Cond: (org.id = detail.id)"
 "  ->  Hash Join  (cost=752.72..1036.45 rows=4955
width=109) (actual time=64.492..86.822 rows=4977 loops=1)"
 "Hash Cond: (org.pid = proj.pid)"
 "->  Seq Scan on org  (cost=0.00..196.82
rows=4982 width=26) (actual time=0.024..6.199 rows=4982 loops=1)"
 "->  Hash  (cost=702.97..702.97 rows=3980
width=91) (actual time=64.439..64.439 rows=3973 loops=1)"
 "  Buckets: 1024  Batches: 1  Memory Usage: 465kB"
 "  ->  Hash Join  (cost=424.04..702.97
rows=3980 width=91) (actual time=20.994..52.773 rows=3973 loops=1)"
 "Hash Cond: (org_1.pid = proj.pid)"
 "->  Seq Scan on org org_1
(cost=0.00..209.28 rows=3980 width=8) (actual time=0.016..10.815
rows=3980 loops=1)"
 "  Filter: ((open)::text = 'Y'::text)"
 "  Rows Removed by Filter: 1002"
 "->  Hash  (cost=374.02..374.02
rows=4002 width=83) (actual time=20.950..20.950 rows=4002 loops=1)"
 "  Buckets: 1024  Batches: 1
Memory Usage: 424kB"
 "  ->  Seq Scan on proj
(cost=0.00..374.02 rows=4002 width=83) (actual time=0.010..9.810
rows=4002 loops=1)"
 "  ->  Hash  (cost=2716.44..2716.44 rows=5786 width=98)
(actual time=189.677..189.677 rows=4959 loops=1)"
 "Buckets: 1024  Batches: 1  Memory Usage: 629kB"
 "->  Hash Join  (cost=2369.71..2716.44 rows=5786
width=98) (actual time=169.635..182.956 rows=4959 loops=1)"
 "  Hash Cond: (org_2.id = detail.id)"
 "  ->  Seq Scan on org org_2
(cost=0.00..209.28 rows=3980 width=8) (actual time=0.015..4.194
rows=3980 loops=1)"
 "Filter: ((open)::text = 'Y'::text)"
 "Rows Removed by Filter: 1002"
 "  ->  Hash  (cost=2340.92..2340.92 rows=2303
width=90) (actual time=169.596..169.596 rows=1964 loops=1)"
 "Buckets: 1024  Batches: 1  Memory
Usage: 224kB"
 "->  Hash Join
(cost=2069.93..2340.92 rows=2303 width=90) (actual
time=159.126..166.937 rows=1964 loops=1)"
 "  Hash Cond: ((detail.z)::text =
(z0.zcg)::text)"
 "  ->  Seq Scan on detail
(cost=0.00..199.03 rows=2303 width=52) (actual time=0.009..2.152
rows=2303 loops=1)"
 "  ->  Hash
(cost=1538.30..1538.30 rows=42530 width=38) (actual
time=159.070..159.070 rows=42530 loops=1)"
 "Buckets: 8192  Batches:
1  Memory Usage: 2451kB"
 "->  Seq Scan on z0
(cost=0.00..1538.30 rows=42530 width=38) (actual time=0.010..82.125
rows=42530 loops=1)"
 "Total runtime: 4414.655 ms"


--

Didn't see replies to this message, so...

Your query spends most of its time on sorting:

"Sort Method: external merge  Disk: 21648kB"

and the sort doesn't fit in memory.
Try increasing work_mem, say to 50MB; you can do it for this particular
connection only if you don't want to change it for the whole server.
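
For instance (a sketch; the value is illustrative):

SET work_mem = '50MB';          -- affects only the current session
-- or, scoped to a single transaction:
BEGIN;
SET LOCAL work_mem = '50MB';
-- ... run the query ...
COMMIT;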

Regards,
Igor Neyman





Re: [GENERAL] Would like to know how analyze works technically

2015-04-01 Thread TonyS
On Wed, April 1, 2015 8:48 am, Bill Moran [via PostgreSQL] wrote:
>

 Running "analyze verbose;" and watching top, the system starts out
 using no swap data and about 4GB of cached memory and about 1GB of
 used memory. As it runs, the amount of used RAM climbs, and
 eventually the used swap memory increases to 100% and after being at
 that level for a couple of minutes, the analyze function crashes and
 indicates "server closed the connection unexpectedly."
>>>
>>> ANALYZE is supposed to work in a constrained amount of memory, though
>>>  that amount depends quite a bit on what the data is and what you've
>>> got the statistics target set to.
>>>
>>> We've heard reports that there are memory leaks in (some versions
>>> of?) PostGIS's analyze support.  Whether that would apply to you would
>>> depend on whether you're using PostGIS.
>>>
>>> Hard to say more without a lot more concrete info about what your
>>> data is, what PG version you're using, etc.
>>>
>>> regards, tom lane
>>>
>>
>> Thanks for the response Tom.
>>
>>
>>
>
> Don't know if I'm on the right track with this, but what is
> maintenance_work_mem set to on this system?
>
> --
> Bill Moran
>

Hello Bill,

maintenance_work_mem is set to 480MB. I haven't changed that from what
pgtune suggested.







Re: [GENERAL] Would like to know how analyze works technically

2015-04-01 Thread FarjadFarid(ChkNet)
It sounds like your system had crashed several times.

My suggestion would be first to ensure that your tables and indexes are not
corrupted.

The second suggestion is to ensure your index tightly represents the data you
are accessing.  The tighter it is, the faster the response time, and the less
memory and CPU usage.

Of course these are basics for any good DB, but they are essential before
moving to more complex issues.



-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Bill Moran
Sent: 01 April 2015 13:48
To: TonyS
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Would like to know how analyze works technically

On Wed, 1 Apr 2015 04:33:07 -0700 (MST)
TonyS  wrote:

> On Wed, April 1, 2015 12:18 am, Tom Lane-2 [via PostgreSQL] wrote:
> >
> > TonyS  writes:
> >
> >> Running "analyze verbose;" and watching top, the system starts out 
> >> using no swap data and about 4GB of cached memory and about 1GB of 
> >> used memory. As it runs, the amount of used RAM climbs, and 
> >> eventually the used swap memory increases to 100% and after being 
> >> at that level for a couple of minutes, the analyze function crashes 
> >> and indicates "server closed the connection unexpectedly."
> >
> > ANALYZE is supposed to work in a constrained amount of memory, 
> > though that amount depends quite a bit on what the data is and what 
> > you've got the statistics target set to.
> >
> > We've heard reports that there are memory leaks in (some versions 
> > of?) PostGIS's analyze support.  Whether that would apply to you 
> > would depend on whether you're using PostGIS.
> >
> > Hard to say more without a lot more concrete info about what your 
> > data is, what PG version you're using, etc.
> >
> > regards, tom lane
> >
> 
> Thanks for the response Tom.
> 
> I am not using PostGIS. The data in my system is mostly along the 
> lines of what you would see in an accounts payable, accounts 
> receivable, and billing type situation. Names and addresses of 
> individuals, information about billing, payments received, payments sent
etc.
> 
> All of my indexes are b-tree indexes.
> 
> Currently, the largest individual table is 1.8GB.
> 
> select version() returns:
> PostgreSQL 9.3.6 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu
> 4.8.2-19ubuntu1) 4.8.2, 64-bit
> 
> OS: Ubuntu 14.04.1 LTS
> 
> Physical memory: 8GB
> 
> The postgresql log has these entries at the crash point:
> 2015-04-01 06:24:37 EDT LOG:  server process (PID 1384) was terminated 
> by signal 9: Killed
> 2015-04-01 06:24:38 EDT DETAIL:  Failed process was running: analyze 
> verbose;
> 2015-04-01 06:24:38 EDT LOG:  terminating any other active server 
> processes
> 
> I started this process at 11PM, so it ran for about 7.5 hours before 
> crashing.
> 
> Is there anything else that would be helpful?

Don't know if I'm on the right track with this, but what is
maintenance_work_mem set to on this system?

--
Bill Moran




Re: [GENERAL] Would like to know how analyze works technically

2015-04-01 Thread Bill Moran
On Wed, 1 Apr 2015 04:33:07 -0700 (MST)
TonyS  wrote:

> On Wed, April 1, 2015 12:18 am, Tom Lane-2 [via PostgreSQL] wrote:
> >
> > TonyS  writes:
> >
> >> Running "analyze verbose;" and watching top, the system starts out
> >> using no swap data and about 4GB of cached memory and about 1GB of used
> >> memory. As it runs, the amount of used RAM climbs, and eventually the
> >> used swap memory increases to 100% and after being at that level for a
> >> couple of minutes, the analyze function crashes and indicates "server
> >> closed the connection unexpectedly."
> >
> > ANALYZE is supposed to work in a constrained amount of memory, though
> > that amount depends quite a bit on what the data is and what you've got the
> > statistics target set to.
> >
> > We've heard reports that there are memory leaks in (some versions of?)
> > PostGIS's analyze support.  Whether that would apply to you would depend
> > on whether you're using PostGIS.
> >
> > Hard to say more without a lot more concrete info about what your
> > data is, what PG version you're using, etc.
> >
> > regards, tom lane
> >
> 
> Thanks for the response Tom.
> 
> I am not using PostGIS. The data in my system is mostly along the lines of
> what you would see in an accounts payable, accounts receivable, and
> billing type situation. Names and addresses of individuals, information
> about billing, payments received, payments sent etc.
> 
> All of my indexes are b-tree indexes.
> 
> Currently, the largest individual table is 1.8GB.
> 
> select version() returns:
> PostgreSQL 9.3.6 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu
> 4.8.2-19ubuntu1) 4.8.2, 64-bit
> 
> OS: Ubuntu 14.04.1 LTS
> 
> Physical memory: 8GB
> 
> The postgresql log has these entries at the crash point:
> 2015-04-01 06:24:37 EDT LOG:  server process (PID 1384) was terminated by
> signal 9: Killed
> 2015-04-01 06:24:38 EDT DETAIL:  Failed process was running: analyze verbose;
> 2015-04-01 06:24:38 EDT LOG:  terminating any other active server processes
> 
> I started this process at 11PM, so it ran for about 7.5 hours before
> crashing.
> 
> Is there anything else that would be helpful?

Don't know if I'm on the right track with this, but what is
maintenance_work_mem set to on this system?

-- 
Bill Moran




Re: [GENERAL] GiST indices on range types

2015-04-01 Thread Rémi Cura
As far as I understand it (not much), a GiST index over spatial data is in
fact a GiST index over range(x), range(y).

This is why GiST works in n dimensions: conceptually, it always works on
ranges.

In fact, a rectangle is (literally) the intersection of a range on x and a
range on y; likewise, a 3D box is the intersection of ranges on x, y and z.
You could go further by adding time, etc.
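
As an illustration (a sketch only - the table and column are invented),
this is the kind of GiST index and overlaps query that range types
support:

  CREATE TABLE reservation (room int, during tsrange);
  CREATE INDEX reservation_during_gist
      ON reservation USING gist (during);

  -- the && (overlaps) operator is among those the index accelerates
  SELECT *
  FROM reservation
  WHERE during && tsrange('2015-04-01 10:00', '2015-04-01 12:00');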

Cheers,
Rémi-C

2015-04-01 9:00 GMT+02:00 Magnus Hagander :

> On Sat, Mar 28, 2015 at 7:52 AM, Rebecca Zahra 
> wrote:
>
>> Good morning,
>>
>> I am Rebecca Zahra and I am currently in my final year of Masters studies
>> at the University of Malta. My thesis is about the usage of indexes for
>> multi-dimensional data.
>>
>> I was going through the posts regarding GIST indexes and I came across
>> the following
>> http://dba.stackexchange.com/questions/39589/optimizing-queries-on-a-range-of-timestamps-two-columns
>>
>> I was wondering if maybe you can help me with a question.  I know that an
>> R-Tree index implementation is used on top of GIST to index spatial data.
>> Can you please tell me what type of index is used on top of GIST to index 
>> *range
>> types*?
>>
>>
> PostgreSQL has had indexable range types for quite some time now:
> http://www.postgresql.org/docs/current/static/rangetypes.html#RANGETYPES-GIST
>
> Indexable with gist or spgist. I don't think the docs cover the actual
> implementation internals though - you'll probably have to go to the source
> if you need that.
>
> --
>  Magnus Hagander
>  Me: http://www.hagander.net/
>  Work: http://www.redpill-linpro.com/
>


Re: [GENERAL] Creating a non-strict custom aggregate that initializes to the first value

2015-04-01 Thread Francisco Olarte
Hi Timothy:

On Thu, Mar 26, 2015 at 9:49 PM, Timothy Garnett  wrote:
> I'm trying to create a custom aggregate function that returns the value from
> the first row (possibly null).
...
> The straightforward way would seem to be something like
>
> CREATE OR REPLACE FUNCTION public.first_agg ( anyelement, anyelement )
> RETURNS anyelement LANGUAGE sql IMMUTABLE STRICT AS $$
> SELECT $1;
> $$;
> but if that is declared strict then it would take the first non-null value
> and return A in my second example, if declared non-strict then the initial
> state would be fed as null rather then the first value. Is there a way to
> declare the function non-strict (so that null values are passed) but still
> have it initialize to the first value like it would if it was strict?

First, your pasted code is incomplete; you should post all the relevant
code (I suspect a highly relevant CREATE AGGREGATE statement is
missing).

Next, as has already been pointed out, you are trying to use null for two
meanings, which is not going to work.

I think your problem is that you are using the one-function form of
aggregates. Aggregates, in Postgres and lots of other places, are done
with two functions: a state transition function called for each row
(more or less, strictness aside) and a final function. Think of average:
you need a state consisting of a sum and a count, a state transition
function that adds to the sum and increments the count, and a final
function to do the division.

Generally, you start with STATE_TYPE state=initial_state; foreach
value state=transition(state, current_val); AGG_TYPE agg_value =
final(state). STATE_TYPE and AGG_TYPE need not be the same.

When STATE_TYPE and AGG_TYPE are the same, many systems, Postgres
included, have a shortcut that uses just a state transition function,
with identity as the final function. This is adequate for things like
count, min/max and sum, and it seems to be the variant you are using.

In your case you should probably use the two-function form of aggregate:
a composite type for the state (first_time, val), a state transition
function (if first_time is true, store val; else skip) and a final
function (return val from the state). There are possibly other methods,
like using an array for the state: initialize it to empty, store val if
the array is empty, otherwise skip. Just play around.
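
A minimal sketch of that array variant (untested, and my_first plus the
helper names are invented): an empty array means "no row seen yet", while
a one-element array can hold the first value even when that value is NULL,
so the two meanings of NULL stay separate:

  CREATE OR REPLACE FUNCTION first_trans(anyarray, anyelement)
  RETURNS anyarray LANGUAGE sql IMMUTABLE AS $$
    -- empty state: capture the first value (NULL included); else keep state
    SELECT CASE WHEN array_length($1, 1) IS NULL THEN ARRAY[$2] ELSE $1 END;
  $$;

  CREATE OR REPLACE FUNCTION first_final(anyarray)
  RETURNS anyelement LANGUAGE sql IMMUTABLE AS $$
    SELECT $1[1];
  $$;

  CREATE AGGREGATE my_first(anyelement) (
    SFUNC     = first_trans,
    STYPE     = anyarray,
    FINALFUNC = first_final,
    INITCOND  = '{}'
  );

Note that first_trans is deliberately not STRICT, so NULLs are passed in,
and that "first" only means something with an explicit ordering, e.g.
SELECT my_first(val ORDER BY id) FROM tbl;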

Also, you may have an http://xyproblem.info/ on your hands: do you want an
aggregate to return the first value, or a function? (I suspect it is the
first case, as you are pasting a CREATE FUNCTION for first_agg and a
SELECT for my_first, but not having posted a complete example makes it
very hard to tell.)

Regards.

   Francisco Olarte.




Re: [GENERAL] Would like to know how analyze works technically

2015-04-01 Thread TonyS
On Wed, April 1, 2015 12:18 am, Tom Lane-2 [via PostgreSQL] wrote:
>

>
> TonyS  writes:
>
>> Running "analyze verbose;" and watching top, the system starts out
>> using no swap data and about 4GB of cached memory and about 1GB of used
>> memory. As it runs, the amount of used RAM climbs, and eventually the
>> used swap memory increases to 100% and after being at that level for a
>> couple of minutes, the analyze function crashes and indicates "server
>> closed the connection unexpectedly."
>
> ANALYZE is supposed to work in a constrained amount of memory, though
> that amount depends quite a bit on what the data is and what you've got the
> statistics target set to.
>
> We've heard reports that there are memory leaks in (some versions of?)
> PostGIS's analyze support.  Whether that would apply to you would depend
> on whether you're using PostGIS.
>
> Hard to say more without a lot more concrete info about what your
> data is, what PG version you're using, etc.
>
> regards, tom lane
>

Thanks for the response Tom.

I am not using PostGIS. The data in my system is mostly along the lines of
what you would see in an accounts payable, accounts receivable, and
billing type situation. Names and addresses of individuals, information
about billing, payments received, payments sent etc.

All of my indexes are b-tree indexes.

Currently, the largest individual table is 1.8GB.

select version() returns:
PostgreSQL 9.3.6 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu
4.8.2-19ubuntu1) 4.8.2, 64-bit

OS: Ubuntu 14.04.1 LTS

Physical memory: 8GB

The postgresql log has these entries at the crash point:
2015-04-01 06:24:37 EDT LOG:  server process (PID 1384) was terminated by
signal 9: Killed
2015-04-01 06:24:38 EDT DETAIL:  Failed process was running: analyze verbose;
2015-04-01 06:24:38 EDT LOG:  terminating any other active server processes

I started this process at 11PM, so it ran for about 7.5 hours before
crashing.

Is there anything else that would be helpful?






Re: [GENERAL] Creating a non-strict custom aggregate that initializes to the first value

2015-04-01 Thread David G. Johnston
On Thu, Mar 26, 2015 at 1:49 PM, Timothy Garnett 
wrote:

>
> but if that is declared strict then it would take the first non-null value
> and return A in my second example, if declared non-strict then the initial
> state would be fed as null rather then the first value. Is there a way to
> declare the function non-strict (so that null values are passed) but still
> have it initialize to the first value like it would if it was strict?
>

Late night pondering here but...

Because of the way SQL null works, and your desire to handle "anyelement",
you are stuck determining whether you are currently evaluating the first
row of your input - or not.  For the first row you always take the "new"
value, while for all subsequent rows you take the "state" value.  So your
state needs to encompass both "prior row number" and "active value", which
suggests you need to create a custom type for your state variable.

You want NULL to mean both "not initialized" and "unknown value", which is
impossible, and SQL does not provide any other universal literal that means
one or the other.

I'm not sure how you deal with "anyelement" in a custom type that could be
used as a state variable...
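
For what it's worth, if one is willing to fix a concrete type instead of
anyelement, the custom state type described above is straightforward (a
sketch for text; all names are invented):

  CREATE TYPE first_text_state AS (initialized boolean, val text);

  -- not STRICT, so NULL inputs are still passed in
  CREATE FUNCTION first_text_trans(first_text_state, text)
  RETURNS first_text_state LANGUAGE sql IMMUTABLE AS $$
    SELECT CASE WHEN $1.initialized THEN $1
                ELSE ROW(true, $2)::first_text_state END;
  $$;

  CREATE FUNCTION first_text_final(first_text_state)
  RETURNS text LANGUAGE sql IMMUTABLE AS $$
    SELECT $1.val;
  $$;

  CREATE AGGREGATE first_text(text) (
    SFUNC     = first_text_trans,
    STYPE     = first_text_state,
    FINALFUNC = first_text_final,
    INITCOND  = '(f,)'
  );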

David J.


Re: [GENERAL] Strange behavior of insert CTE with trigger

2015-04-01 Thread David G. Johnston
On Fri, Mar 27, 2015 at 4:18 PM, Anil Menon  wrote:

> Hi,
>
> I am trying to wrap my head around a strange problem I am having. I have
> double checked the documentation but I could not find anything on this.
>
> [...]


>
> However I get no rows returned from the select statement - it looks like
> the insert to abc_Excp_log is executed *after* the select statement, or
> some sort of race condition is at work.
>
> Is this documented anywhere, and is it the expected behavior? The CTE
> part of the PG doc does not say anything on this.
>
>
Yes, it does:
http://www.postgresql.org/docs/9.3/static/queries-with.html#QUERIES-WITH-MODIFYING

Specifically (nearly the entire last 1/6 of the page - the "Data-Modifying
Statements in WITH" section):

"The sub-statements in WITH are executed concurrently with each other and
with the main query."

Since you cannot even see the updated price on products in the following
query, the fact that you cannot see the results of triggers on that table
is a logical conclusion, even though triggers are not explicitly mentioned.

WITH t AS (
UPDATE products SET price = price * 1.05
RETURNING *
)
SELECT * FROM products;
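
To actually observe the modified rows in the same statement, select from
the CTE itself - its RETURNING output is what the main query can see:

WITH t AS (
UPDATE products SET price = price * 1.05
RETURNING *
)
SELECT * FROM t;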

David J.


Re: [GENERAL] GiST indices on range types

2015-04-01 Thread Magnus Hagander
On Sat, Mar 28, 2015 at 7:52 AM, Rebecca Zahra 
wrote:

> Good morning,
>
> I am Rebecca Zahra and I am currently in my final year of Masters studies
> at the University of Malta. My thesis is about the usage of indexes for
> multi-dimensional data.
>
> I was going through the posts regarding GIST indexes and I came across the
> following
> http://dba.stackexchange.com/questions/39589/optimizing-queries-on-a-range-of-timestamps-two-columns
>
> I was wondering if maybe you can help me with a question.  I know that an
> R-Tree index implementation is used on top of GIST to index spatial data.
> Can you please tell me what type of index is used on top of GIST to index 
> *range
> types*?
>
>
PostgreSQL has had indexable range types for quite some time now:
http://www.postgresql.org/docs/current/static/rangetypes.html#RANGETYPES-GIST

Indexable with gist or spgist. I don't think the docs cover the actual
implementation internals though - you'll probably have to go to the source
if you need that.
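
If it helps, a tiny sketch of both (the table and column names are
invented for the example):

  CREATE TABLE price_history (item int, valid daterange);
  CREATE INDEX price_history_valid_gist
      ON price_history USING gist (valid);
  CREATE INDEX price_history_valid_spgist
      ON price_history USING spgist (valid);

  -- operators such as && (overlaps) and @> (contains) can use these
  SELECT * FROM price_history WHERE valid @> DATE '2015-04-01';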

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/