[GENERAL] Best way to find last data sample before a given time

2016-10-21 Thread George Woodring
   ->  Index Scan Backward using d_current_20161014_pkey_index on d_current_20161014  (cost=0.56..511168.98 rows=299 width=16) (actual time=233.877..233.877 rows=1 loops=1)
         Index Cond: ((timeslot <= '2016-10-20 21:40:00+00'::timestamp with time zone) AND (pollgrpid = 497582) AND (dsnum = 0))
   ->  Index Scan Backward using d_current_20161015_pkey_index on d_current_20161015  (cost=0.56..511254.41 rows=302 width=16) (actual time=207.323..207.323 rows=1 loops=1)
         Index Cond: ((timeslot <= '2016-10-20 21:40:00+00'::timestamp with time zone) AND (pollgrpid = 497582) AND (dsnum = 0))
   ->  Index Scan Backward using d_current_20161016_pkey_index on d_current_20161016  (cost=0.56..511334.88 rows=299 width=16) (actual time=231.319..231.319 rows=1 loops=1)
         Index Cond: ((timeslot <= '2016-10-20 21:40:00+00'::timestamp with time zone) AND (pollgrpid = 497582) AND (dsnum = 0))
   ->  Index Scan Backward using d_current_20161017_pkey_index on d_current_20161017  (cost=0.56..511533.88 rows=304 width=16) (actual time=524.123..524.123 rows=1 loops=1)
         Index Cond: ((timeslot <= '2016-10-20 21:40:00+00'::timestamp with time zone) AND (pollgrpid = 497582) AND (dsnum = 0))
   ->  Index Scan Backward using d_current_20161018_pkey_index on d_current_20161018  (cost=0.56..511763.48 rows=299 width=16) (actual time=232.597..232.597 rows=1 loops=1)
         Index Cond: ((timeslot <= '2016-10-20 21:40:00+00'::timestamp with time zone) AND (pollgrpid = 497582) AND (dsnum = 0))
   ->  Index Scan Backward using d_current_20161019_pkey_index on d_current_20161019  (cost=0.56..511706.24 rows=303 width=16) (actual time=247.332..247.332 rows=1 loops=1)
         Index Cond: ((timeslot <= '2016-10-20 21:40:00+00'::timestamp with time zone) AND (pollgrpid = 497582) AND (dsnum = 0))
   ->  Index Scan Backward using d_current_20161020_pkey_index on d_current_20161020  (cost=0.56..465464.76 rows=267 width=16) (actual time=1.095..1.095 rows=1 loops=1)
         Index Cond: ((timeslot <= '2016-10-20 21:40:00+00'::timestamp with time zone) AND (pollgrpid = 497582) AND (dsnum = 0))
 Total runtime: 2604.818 ms
(35 rows)


My problem is that if I run this against a newly created pollgrpid/dsnum
pair, there is no data in the earlier tables, and my guess is that the query
switches to a sequential scan, because I cannot get the query to finish (my
last attempt did not complete after waiting 10 minutes).
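For reference, the query is shaped roughly like the following sketch. (The
parent table name d_current is my shorthand here for whatever the
d_current_YYYYMMDD children inherit from; the WHERE clause is reconstructed
from the Index Cond lines above.)

SELECT *
  FROM d_current
 WHERE pollgrpid = 497582
   AND dsnum = 0
   AND timeslot <= '2016-10-20 21:40:00+00'
 ORDER BY timeslot DESC
 LIMIT 1;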

Any suggestions would be appreciated.

George Woodring
iGLASS Networks
www.iglass.net


Re: [GENERAL] SSL connection issue via perl

2016-01-05 Thread George Woodring
Just to add a final resolution to this thread: my solution was to add "use
Net::SSL" to the top of my script.  Even though my script does not use SSL
directly, I think this must have helped by loading the libraries in an
order that let everything work.

Thanks for the help

iGLASS Networks
www.iglass.net

On Thu, Dec 31, 2015 at 2:29 PM, George Woodring  wrote:

> OS: CentOS 6.6
> Postgres Version: 9.3.10
>
> I have a script that has worked for years that does the following
>
> - Connect to postgres and get a list of URLs to poll for status
> - close connection
> - Start threads to poll the URLs
> - cleanup threads and collect the results.
> - Connect to postgres and write the url status.
> - close connection
>
> We updated perl SSL libraries to the latest version, one of which was
> Net::SSLeay 1.35 -> 1.72
>
> Now the script dies without any feedback when attempting the 2nd
> connection.  The only hint at the problem is
>
> /var/log/messages
> Dec 31 14:04:03 iprobe002 kernel: iPoller2.pl[16044] general protection
> ip:7f677fde112c sp:7fff5db9e328 error:0 in SSLeay.so[7f677fd6a000+94000]
>
> /var/log/postgresql
> Dec 31 14:04:03 iprobe002 postgres[16255]: [4-1] LOG:  could not accept
> SSL connection: EOF detected
>
> I have worked around the immediate issue by keeping the 1st connection
> open for the entire script instead of making 2 connections, but I would
> like to try to find out what is going wrong.
>
> Any suggestions would be appreciated.
> Thanks,
> George
>
>
> iGLASS Networks
> www.iglass.net
>


Re: [GENERAL] SSL connection issue via perl

2015-12-31 Thread George Woodring
I went and looked, and we have ssl_renegotiation_limit set to the default,
which the documentation says is 0.

Thanks,
George

iGLASS Networks
www.iglass.net

On Thu, Dec 31, 2015 at 3:16 PM, Adrian Klaver 
wrote:

> On 12/31/2015 11:29 AM, George Woodring wrote:
>
>> OS: CentOS 6.6
>> Postgres Version: 9.3.10
>>
>> I have a script that has worked for years that does the following
>>
>> - Connect to postgres and get a list of URLs to poll for status
>> - close connection
>> - Start threads to poll the URLs
>> - cleanup threads and collect the results.
>> - Connect to postgres and write the url status.
>> - close connection
>>
>> We updated perl SSL libraries to the latest version, one of which was
>> Net::SSLeay 1.35 -> 1.72
>>
>> Now the script dies without any feedback when attempting the 2nd
>> connection.  The only hint at the problem is
>>
>> /var/log/messages
>> Dec 31 14:04:03 iprobe002 kernel: iPoller2.pl[16044] general protection
>> ip:7f677fde112c sp:7fff5db9e328 error:0 in SSLeay.so[7f677fd6a000+94000]
>>
>> /var/log/postgresql
>> Dec 31 14:04:03 iprobe002 postgres[16255]: [4-1] LOG:  could not accept
>> SSL connection: EOF detected
>>
>> I have worked around the immediate issue by keeping the 1st connection
>> open for the entire script instead of making 2 connections, but I would
>> like to try to find out what is going wrong.
>>
>> Any suggestions would be appreciated.
>>
>
> Might want to take a look at the ssl_renegotiation_limit setting in
> postgresql.conf and if it is set to > 0, reset to 0 per:
>
>
> http://www.postgresql.org/docs/9.4/interactive/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SECURITY
>
> "
> ssl_renegotiation_limit (integer)
>
> Specifies how much data can flow over an SSL-encrypted connection
> before renegotiation of the session keys will take place. Renegotiation
> decreases an attacker's chances of doing cryptanalysis when large amounts
> of traffic can be examined, but it also carries a large performance
> penalty. The sum of sent and received traffic is used to check the limit.
> If this parameter is set to 0, renegotiation is disabled. The default is 0.
>
> Note: SSL libraries from before November 2009 are insecure when
> using SSL renegotiation, due to a vulnerability in the SSL protocol. As a
> stop-gap fix for this vulnerability, some vendors shipped SSL libraries
> incapable of doing renegotiation. If any such libraries are in use on the
> client or server, SSL renegotiation should be disabled.
>
> Warning
>
> Due to bugs in OpenSSL, enabling SSL renegotiation by configuring a
> non-zero ssl_renegotiation_limit is likely to lead to problems like
> long-lived connections breaking.
>
> "
>
> and this from the 9.5 release notes:
>
>
> "
> Decommission server configuration parameter ssl_renegotiation_limit, which
> was deprecated in earlier releases (Andres Freund)
>
> While SSL renegotiation is a good idea in theory, it has caused enough
> bugs to be considered a net negative in practice, and it is due to be
> removed from future versions of the relevant standards. We have therefore
> removed support for it from PostgreSQL. The ssl_renegotiation_limit
> parameter still exists, but cannot be set to anything but zero (disabled).
> It's not documented anymore, either.
>
> "
>
>> Thanks,
>> George
>>
>>
>> iGLASS Networks
>> www.iglass.net <http://www.iglass.net>
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


[GENERAL] SSL connection issue via perl

2015-12-31 Thread George Woodring
OS: CentOS 6.6
Postgres Version: 9.3.10

I have a script that has worked for years that does the following

- Connect to postgres and get a list of URLs to poll for status
- close connection
- Start threads to poll the URLs
- cleanup threads and collect the results.
- Connect to postgres and write the url status.
- close connection

We updated perl SSL libraries to the latest version, one of which was
Net::SSLeay 1.35 -> 1.72

Now the script dies without any feedback when attempting the 2nd
connection.  The only hint at the problem is

/var/log/messages
Dec 31 14:04:03 iprobe002 kernel: iPoller2.pl[16044] general protection
ip:7f677fde112c sp:7fff5db9e328 error:0 in SSLeay.so[7f677fd6a000+94000]

/var/log/postgresql
Dec 31 14:04:03 iprobe002 postgres[16255]: [4-1] LOG:  could not accept SSL
connection: EOF detected

I have worked around the immediate issue by keeping the 1st connection open
for the entire script instead of making 2 connections, but I would like to
try to find out what is going wrong.

Any suggestions would be appreciated.
Thanks,
George


iGLASS Networks
www.iglass.net


Re: [GENERAL] SQL solution for my JDBC timezone issue

2015-02-24 Thread George Woodring
>
>
>
> So:
>
> JDBC Web servers (US/East)  <---> 90 databases (5 different timezones)
>
> Therefore everything to the end user is passed through the Web servers?
>
> Is there a reason why the databases have different timezones?
>
> Seems to me less complicated to have all the databases share the UTC
> timezone. Then you only have one offset, US/East <--> UTC.
>
>
We originally did it for simplicity's sake; the database is set to our
customer's timezone.  So when we grabbed a date from the db, it would be in
the correct timezone for the viewer, without the web code having to know
the timezone.

George Woodring
iGLASS Networks
www.iglass.net


Re: [GENERAL] SQL solution for my JDBC timezone issue

2015-02-24 Thread George Woodring
 -- In your original post you mentioned that access to the databases is
through a Web server.

 -- Is there just one Web server with one time zone?

We have 2 web servers that are clustered together.  They are both set to
Eastern since that is the timezone they are located in.

iGLASS Networks
www.iglass.net


Re: [GENERAL] SQL solution for my JDBC timezone issue

2015-02-23 Thread George Woodring
This is what I was looking for; however, the JDBC driver does something to
make its timezone the default.

My cluster is set to GMT, and I have a DB that is set to US/Pacific, but
when I get the connection from JDBC it is US/Eastern.  The RESET command
does not affect it.  I can set timezone in the code to 'US/Pacific' and I
see it change; when I do another RESET timezone it goes back to US/Eastern.
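That behavior would be consistent with the driver sending TimeZone as a
connection startup parameter: values set that way become the session
default, so RESET restores the driver's zone rather than the database's.
Roughly:

SHOW timezone;                -- US/Eastern (what the driver set at connect)
SET timezone = 'US/Pacific';
SHOW timezone;                -- US/Pacific
RESET timezone;
SHOW timezone;                -- US/Eastern again, not the database default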

Thanks,
George Woodring

iGLASS Networks
www.iglass.net

On Mon, Feb 23, 2015 at 10:49 AM, Tom Lane  wrote:

> George Woodring  writes:
> > Yes, that is where we think we are heading; the issue is that the code does
> > not know what it needs to be set back to.  We have 90 databases with 5
> > different time zones.  I was just hoping for a more elegant solution than
> > writing a lookup table that says if you are connecting to db x then set to
> > timezone y.
>
> "RESET timezone" ?
>
> regards, tom lane
>


Re: [GENERAL] SQL solution for my JDBC timezone issue

2015-02-23 Thread George Woodring
Yes, that is where we think we are heading; the issue is that the code does
not know what it needs to be set back to.  We have 90 databases with 5
different time zones.  I was just hoping for a more elegant solution than
writing a lookup table that says if you are connecting to db x then set to
timezone y.
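(If the per-database zones were set with ALTER DATABASE ... SET timezone,
the lookup table arguably already exists in the catalogs.  An untested
sketch of reading it at connect time, on 9.0 or later where
pg_db_role_setting exists:

SELECT cfg
  FROM (SELECT unnest(setconfig) AS cfg
          FROM pg_db_role_setting
         WHERE setrole = 0
           AND setdatabase = (SELECT oid FROM pg_database
                               WHERE datname = current_database())) s
 WHERE cfg ILIKE 'timezone=%';
)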

George Woodring

iGLASS Networks
www.iglass.net

On Mon, Feb 23, 2015 at 9:16 AM, Dave Cramer  wrote:

> Well you could always just put it back to whatever you want when you open
> the connection  ie "set timezone "
>
>
>
> Dave Cramer
>
> dave.cramer(at)credativ(dot)ca
> http://www.credativ.ca
>
> On 23 February 2015 at 08:40, George Woodring 
> wrote:
>
>> Anyone have a suggestion for setting the timezone back to the Postgres db
>> default on a connection?  JDBC now sets the timezone to be the client, which
>> is my web server, and ignores the default timezone that I have set in the
>> DB.  There are large parts of my code where I have never worried about
>> timezones because the DB would handle it.  Before I head down that path, I
>> thought I would check and see if there was an easier way for me to put it
>> back into the database.
>>
>> My latest issue is that I create a date object and set it to 3am PST.  If I
>> save it into a timestamp with timezone, the db saves it at 3am PST when I
>> pull it out.  If I save it to timestamp without timezone, I get 6am now
>> whereas before I would get 3am.
>>
>> Any suggestions would be appreciated
>> George Woodring
>> iGLASS Networks
>> www.iglass.net
>>
>
>


[GENERAL] SQL solution for my JDBC timezone issue

2015-02-23 Thread George Woodring
Anyone have a suggestion for setting the timezone back to the Postgres db
default on a connection?  JDBC now sets the timezone to be the client, which
is my web server, and ignores the default timezone that I have set in the
DB.  There are large parts of my code where I have never worried about
timezones because the DB would handle it.  Before I head down that path, I
thought I would check and see if there was an easier way for me to put it
back into the database.

My latest issue is that I create a date object and set it to 3am PST.  If I
save it into a timestamp with timezone, the db saves it at 3am PST when I
pull it out.  If I save it to timestamp without timezone, I get 6am now
whereas before I would get 3am.
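A minimal illustration of what I am seeing, assuming the session is running
as US/Eastern (which is what the JDBC driver now sets):

SET timezone = 'US/Eastern';
SELECT '2015-02-23 03:00:00-08'::timestamptz;
-- 2015-02-23 06:00:00-05   (same instant, rendered in the session zone)
SELECT ('2015-02-23 03:00:00-08'::timestamptz)::timestamp;
-- 2015-02-23 06:00:00      (the zone is stripped after conversion)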

Any suggestions would be appreciated.
George Woodring
iGLASS Networks
www.iglass.net


Re: [GENERAL] Server hanging with high CPU wait -- Checkpoint issue?

2014-02-04 Thread George Woodring
Thank you for the response.

Our new server went down in memory from 20 GB to 16 GB.  Our old server has
100 databases in the cluster, and we will be splitting them across multiple
servers, so we thought the decrease was acceptable.

dirty_background_ratio is 10 on the new box and 1 on the old.  That might
be something we can look at.

If the bottleneck is IO, then it would have to be memory access.  When our
system gets into this state, the writer and wal writer both go into wait,
iostat goes to 0, and our network traffic goes to 0.  After the freeze,
iostat increases and it catches up very easily.
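For anyone following along, a rough way to watch this from the database
side is pg_stat_bgwriter, which breaks out who is writing buffers
(checkpointer vs. bgwriter vs. backends):

SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean, buffers_backend
  FROM pg_stat_bgwriter;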

Thanks again for the help; I will update if we find something useful.

George

iGLASS Networks
www.iglass.net


On Mon, Feb 3, 2014 at 3:35 PM, Jeff Janes  wrote:

> On Mon, Feb 3, 2014 at 11:16 AM, George Woodring <
> george.woodr...@iglass.net> wrote:
>
>> I am running 9.3.2 on CentOS 6.5
>>
>> We have a new server we are migrating to that seems to lock up for up to
>> 30sec at a time with all of the processes in wait.  I am seeing a weird
>> output with the checkpoint logs and am wondering if I need to adjust
>> something with the checkpointer config.
>>
>> My checkpoint settings are:
>>
>> checkpoint_completion_target = 0.9
>> checkpoint_timeout = 12min
>> checkpoint_segments = 50 # in logfile segments, min 1, 16MB each
>> checkpoint_warning = 300s # 0 disables
>>
>> When the box is running well, the checkpoint logs look like the
>> following: checkpoints restarting every 12 minutes and completing in
>> about 10 minutes.
>>
>> Feb  3 11:00:09 sponge07b postgres[2754]: [2651-1] LOG:  checkpoint
>> starting: time
>> Feb  3 11:10:57 sponge07b postgres[2754]: [2652-1] LOG:  checkpoint
>> complete: wrote 32352 buffers (6.2%); 0 transaction log file(s) added, 0
>> removed, 14 recycled; write=647.749 s, sync=0.110 s, total=647.908 s; sync
>> files=424, longest=0.017 s, average=0.000 s
>>
>> ...
>
>
>> When we are having the issue, like just now, the checkpoint logs look like
>> the following: checkpoints still run every 12 minutes, but the checkpoint
>> is written after 2-3 minutes
>>
>>  Feb  3 12:24:09 sponge07b postgres[2754]: [2665-1] LOG:  checkpoint
>> starting: time
>> Feb  3 12:27:43 sponge07b postgres[2754]: [2666-1] LOG:  checkpoint
>> complete: wrote 10127 buffers (1.9%); 0 transaction log file(s) added, 4
>> removed, 28 recycled; write=197.510 s, sync=15.975 s, total=213.746 s; sync
>> files=440, longest=9.089 s, average=0.022 s
>>
>>
> Probably what is happening here is that the backends or the bgwriter are
> writing out dirty buffers (because they need those buffers for other
> purposes) that have been marked for the checkpointer to write.  The
> checkpointer then thinks it is falling behind, because it does not realize
> that other processes are clearing its flags, so it finishes early.
>
> This estimation problem perhaps can be fixed in a future version of
> PostgreSQL, for example by including in the fsync queue information about
> how many buffers flagged for the checkpointer have been written by others,
> or by adding a bit to the buffer headers for buffers that were flagged for
> the checkpointer but were written by someone else first.
>
> However, this would probably not solve the true problem, as I think the
> foreshortened checkpoint is probably just a symptom and not the cause.  The
> most likely cause is that you have a spike in buffer dirtying, which both
> causes congestion in your IO system (which actually causes the freezes) and
> also causes the bgwriter to write out buffers flagged for the checkpointer
> which in turn causes the checkpointer to finish early.  It is possible that
> fixing the estimation problem would give the IO more time to un-congest
> before the fsyncs start landing, but I am not optimistic about that being
> very effective.
>
> But all of that is for PostgreSQL's hacker to consider.  From the
> perspective of a PostgreSQL user, I'd ask about the difference in RAM
> between the old and new servers, and what the settings are
> for /proc/sys/vm/dirty_background_ratio and
> /proc/sys/vm/dirty_background_bytes?  Probably your new server is allowing
> a lot more dirty data to build up in the kernel's buffers, which then
> causes serious congestion when those buffers need to get written for real.
>
> Cheers,
>
> Jeff
>


[GENERAL] Server hanging with high CPU wait -- Checkpoint issue?

2014-02-03 Thread George Woodring
I am running 9.3.2 on CentOS 6.5

We have a new server we are migrating to that seems to lock up for up to
30sec at a time with all of the processes in wait.  I am seeing a weird
output with the checkpoint logs and am wondering if I need to adjust
something with the checkpointer config.

My checkpoint settings are:

checkpoint_completion_target = 0.9
checkpoint_timeout = 12min
checkpoint_segments = 50 # in logfile segments, min 1, 16MB each
checkpoint_warning = 300s # 0 disables

When the box is running well, the checkpoint logs look like the following:
checkpoints restarting every 12 minutes and completing in about 10
minutes.

Feb  3 11:00:09 sponge07b postgres[2754]: [2651-1] LOG:  checkpoint
starting: time
Feb  3 11:10:57 sponge07b postgres[2754]: [2652-1] LOG:  checkpoint
complete: wrote 32352 buffers (6.2%); 0 transaction log file(s) added, 0
removed, 14 recycled; write=647.749 s, sync=0.110 s, total=647.908 s; sync
files=424, longest=0.017 s, average=0.000 s
Feb  3 11:12:09 sponge07b postgres[2754]: [2653-1] LOG:  checkpoint
starting: time
Feb  3 11:22:57 sponge07b postgres[2754]: [2654-1] LOG:  checkpoint
complete: wrote 33459 buffers (6.4%); 0 transaction log file(s) added, 0
removed, 14 recycled; write=647.737 s, sync=0.146 s, total=647.967 s; sync
files=441, longest=0.017 s, average=0.000 s
Feb  3 11:24:09 sponge07b postgres[2754]: [2655-1] LOG:  checkpoint
starting: time
Feb  3 11:34:57 sponge07b postgres[2754]: [2656-1] LOG:  checkpoint
complete: wrote 33397 buffers (6.4%); 0 transaction log file(s) added, 0
removed, 15 recycled; write=647.630 s, sync=0.085 s, total=647.788 s; sync
files=448, longest=0.009 s, average=0.000 s
Feb  3 11:36:09 sponge07b postgres[2754]: [2657-1] LOG:  checkpoint
starting: time
Feb  3 11:46:57 sponge07b postgres[2754]: [2658-1] LOG:  checkpoint
complete: wrote 32569 buffers (6.2%); 0 transaction log file(s) added, 0
removed, 15 recycled; write=647.633 s, sync=0.118 s, total=647.806 s; sync
files=458, longest=0.014 s, average=0.000 s
Feb  3 11:48:09 sponge07b postgres[2754]: [2659-1] LOG:  checkpoint
starting: time
Feb  3 11:58:57 sponge07b postgres[2754]: [2660-1] LOG:  checkpoint
complete: wrote 34068 buffers (6.5%); 0 transaction log file(s) added, 0
removed, 15 recycled; write=647.739 s, sync=0.129 s, total=647.911 s; sync
files=428, longest=0.019 s, average=0.000 s

When we are having the issue, like just now, the checkpoint logs look like
the following: checkpoints still run every 12 minutes, but the checkpoint
finishes writing after only 2-3 minutes.

Feb  3 12:24:09 sponge07b postgres[2754]: [2665-1] LOG:  checkpoint
starting: time
Feb  3 12:27:43 sponge07b postgres[2754]: [2666-1] LOG:  checkpoint
complete: wrote 10127 buffers (1.9%); 0 transaction log file(s) added, 4
removed, 28 recycled; write=197.510 s, sync=15.975 s, total=213.746 s; sync
files=440, longest=9.089 s, average=0.022 s
Feb  3 12:36:09 sponge07b postgres[2754]: [2667-1] LOG:  checkpoint
starting: time
Feb  3 12:39:01 sponge07b postgres[2754]: [2668-1] LOG:  checkpoint
complete: wrote 7540 buffers (1.4%); 0 transaction log file(s) added, 0
removed, 15 recycled; write=168.468 s, sync=3.899 s, total=172.425 s; sync
files=450, longest=2.261 s, average=0.008 s
Feb  3 12:48:09 sponge07b postgres[2754]: [2669-1] LOG:  checkpoint
starting: time
Feb  3 12:53:57 sponge07b postgres[2754]: [2670-1] LOG:  checkpoint
complete: wrote 9990 buffers (1.9%); 0 transaction log file(s) added, 0
removed, 16 recycled; write=340.182 s, sync=7.443 s, total=347.684 s; sync
files=472, longest=2.590 s, average=0.015 s
Feb  3 13:00:09 sponge07b postgres[2754]: [2671-1] LOG:  checkpoint
starting: time
Feb  3 13:04:07 sponge07b postgres[2754]: [2672-1] LOG:  checkpoint
complete: wrote 5769 buffers (1.1%); 0 transaction log file(s) added, 0
removed, 16 recycled; write=234.841 s, sync=3.164 s, total=238.060 s; sync
files=444, longest=2.323 s, average=0.007 s
Feb  3 13:12:09 sponge07b postgres[2754]: [2673-1] LOG:  checkpoint
starting: time
Feb  3 13:14:01 sponge07b postgres[2754]: [2674-1] LOG:  checkpoint
complete: wrote 5097 buffers (1.0%); 0 transaction log file(s) added, 0
removed, 15 recycled; write=108.781 s, sync=3.100 s, total=112.037 s; sync
files=430, longest=2.365 s, average=0.007 s
Feb  3 13:24:10 sponge07b postgres[2754]: [2675-1] LOG:  checkpoint
starting: time
Feb  3 13:29:18 sponge07b postgres[2754]: [2676-1] LOG:  checkpoint
complete: wrote 3276 buffers (0.6%); 0 transaction log file(s) added, 0
removed, 16 recycled; write=303.894 s, sync=0.383 s, total=308.004 s; sync
files=464, longest=0.274 s, average=0.000 s
Feb  3 13:36:10 sponge07b postgres[2754]: [2677-1] LOG:  checkpoint
starting: time
Feb  3 13:38:56 sponge07b postgres[2754]: [2678-1] LOG:  checkpoint
complete: wrote 5443 buffers (1.0%); 0 transaction log file(s) added, 0
removed, 16 recycled; write=166.440 s, sync=0.108 s, total=166.605 s; sync
files=426, longest=0.008 s, average=0.000 s

Could our issue be having to do a lot of cleanup before the next checkpoint?

[GENERAL] client_min_messages documentation typo

2013-12-20 Thread George Woodring
Currently the documentation looks like:

client_min_messages (enum)

Controls which message levels are sent to the client. Valid values are
DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, LOG, NOTICE, WARNING, ERROR, FATAL,
and PANIC. Each level includes all the levels that follow it. The later the
level, the fewer messages are sent. The default is NOTICE. Note that LOG has
a different rank here than in log_min_messages.
log_min_messages (enum)

Controls which message levels are written to the server log. Valid values
are DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, ERROR,
LOG, FATAL, and PANIC. Each level includes all the levels that follow it.
The later the level, the fewer messages are sent to the log. The default is
WARNING. Note that LOG has a different rank here than in client_min_messages.
Only superusers can change this setting.


Shouldn't client_min_messages include INFO?
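(For what it's worth, the server itself does accept INFO for this setting:

SET client_min_messages TO INFO;
SHOW client_min_messages;   -- reports "info"
)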

George
iGLASS Networks
www.iglass.net


Re: [GENERAL] autovacuum issue after upgrade to 9.0.1

2011-04-11 Thread George Woodring
We are still seeing the spike in vacuums every 8 days, even after upgrading
to 9.0.3.  Any suggestions on how to spread them out?

Thanks,
George Woodring



>
> On Wed, Mar 16, 2011 at 7:12 PM, Aleksey Tsalolikhin <
> atsaloli.t...@gmail.com> wrote:
>
>> Do you see this issue on 9.0.3, the current and the recommended 9.x
>> version?
>>
>
> --
> iGLASS Networks
> www.iglass.net
>



-- 
iGLASS Networks
www.iglass.net


Re: [GENERAL] autovacuum issue after upgrade to 9.0.1

2011-04-01 Thread George Woodring
I was able to upgrade the machine on Wednesday to 9.0.3 and we saw the spike
on Thursday, right on the 8 day schedule.  I will keep my eye out next
Friday to see if it happens again.  This will have the whole period on the
new version.

Thanks
George

On Wed, Mar 16, 2011 at 7:12 PM, Aleksey Tsalolikhin <
atsaloli.t...@gmail.com> wrote:

> Do you see this issue on 9.0.3, the current and the recommended 9.x
> version?
>

-- 
iGLASS Networks
www.iglass.net


[GENERAL] autovacuum issue after upgrade to 9.0.1

2011-03-15 Thread George Woodring
We recently upgraded from 8.3.something to 9.0.1.  With 9.0.1, we have a
huge spike in vacuums every 8 days, but only on one of our DB servers.  We
go from approx 20 vacuums every 5 minutes to 350 per 5 minutes.  This lasts
for several hours, then stops.  I have attached a graph that shows the
occurrence.  I am assuming that it needs to vacuum all of my tables to avoid
some sort of transaction ID wraparound.  I am wondering what is the best way
to make autovacuum spread this out so it will not be quite as big of a hit.
We did not see this with 8.3, and I kept the settings the same after the
upgrade.
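One way to check that assumption (an untested sketch) would be to watch the
transaction ID ages around the time of a spike:

SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;

SELECT relname, age(relfrozenxid)
  FROM pg_class
 WHERE relkind = 'r'
 ORDER BY age(relfrozenxid) DESC
 LIMIT 10;

If those ages are approaching autovacuum_freeze_max_age when the spike
starts, it is the anti-wraparound freeze work.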

Here are my autovacuum settings:

autovacuum = on
log_autovacuum_min_duration = 0
autovacuum_max_workers = 5
autovacuum_naptime = 1min
autovacuum_vacuum_threshold = 100
autovacuum_analyze_threshold = 100
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.05
autovacuum_freeze_max_age = 2
autovacuum_vacuum_cost_delay = 20ms
autovacuum_vacuum_cost_limit = -1


Any suggestions would be appreciated.

Thanks,
George Woodring

-- 
iGLASS Networks
www.iglass.net


[GENERAL] Database corruption

2010-08-30 Thread George Woodring
I am running 8.3.3 currently on this box.
Last week we had a database corruption issue that started as:

Aug 24 07:15:19 iprobe028 postgres[20034]: [3-1] ERROR:  could not read
block 0 of relation 1663/16554/7463400: read only 0 of 8192 bytes
Aug 24 07:15:49 iprobe028 postgres[27663]: [3-1] ERROR:  could not read
block 0 of relation 1663/16554/7463400: read only 0 of 8192 bytes
Aug 24 07:20:10 iprobe028 postgres[20231]: [4-1] ERROR:  could not open
relation 1663/16554/7463400: No such file or directory
Aug 24 07:20:10 iprobe028 postgres[20231]: [5-1] ERROR:  current transaction
is aborted, commands ignored until end of transaction block
Aug 24 07:20:18 iprobe028 postgres[20231]: [6-1] ERROR:  could not open
relation 1663/16554/7463400: No such file or directory
Aug 24 07:20:51 iprobe028 postgres[25809]: [3-1] ERROR:  could not open
relation 1663/16554/7463400: No such file or directory
Aug 24 07:20:52 iprobe028 postgres[20280]: [3-1] ERROR:  could not open
relation 1663/16554/7463400: No such file or directory
Aug 24 07:21:52 iprobe028 postgres[20320]: [4-1] ERROR:  could not open
relation 1663/16554/7463400: No such file or directory
Aug 24 07:22:52 iprobe028 postgres[20338]: [3-1] ERROR:  could not open
relation 1663/16554/7463400: No such file or directory
Aug 24 07:23:52 iprobe028 postgres[20376]: [3-1] ERROR:  could not open
relation 1663/16554/7463400: No such file or directory


We renamed and recreated the database that it was complaining about, and the
error started up again with the new database within the hour.  We moved the
database to a different disk, with the same results.  The only way to fix it
was to do a complete initdb on the box.

Tom:
I did not see anything in the 8.3 release notes that might have fixed this
issue.  I know this is an old version, but I have 2 copies of the data
directory that should have the error if you (or whoever you delegate) would
like to see them before I remove them.  I don't have an issue with sending
the cluster data off-list if it might be useful.

Thanks,
Woody

-- 
iGLASS Networks
www.iglass.net


Re: [GENERAL] Database corruption with duplicate tables.

2010-04-19 Thread George Woodring
The version is 8.3.3, and I use autovacuum for the routine maintenance.

The ctids are distinct:

grande=# select oid, ctid, relname from pg_class where oid IN
(26770910, 26770918, 26770919, 26770920);
   oid    |  ctid   |                relname
----------+---------+---------------------------------------
 26770910 | (36,52) | availcpedata_20100410
 26770918 | (36,42) | availcpedata_20100410_date_index
 26770919 | (36,45) | availcpedata_20100410_pollgrpid_index
 26770910 | (37,19) | availcpedata_20100410
(4 rows)


I will try deleting the one with (37,19) manually in the morning.
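That is, as superuser and with a copy of the data directory set aside
first, something like:

DELETE FROM pg_class WHERE ctid = '(37,19)';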

Thanks for the suggestion.

Woody

On Mon, Apr 19, 2010 at 1:32 PM, Tom Lane  wrote:
> George Woodring  writes:
>> Upon investigation I found that I have a table that is in the database twice
>
>> db=> select oid, relname from pg_class where oid IN (26770910,
>> 26770918, 26770919);
>>    oid    |                relname
>> ----------+---------------------------------------
>>  26770910 | availcpedata_20100410
>>  26770918 | availcpedata_20100410_date_index
>>  26770919 | availcpedata_20100410_pollgrpid_index
>>  26770910 | availcpedata_20100410
>> (4 rows)
>
> It's not immediately clear whether that's really two instances of the
> row for availcpedata_20100410, or a false hit due to index corruption.
> If you include ctid in the query, do the rows have distinct ctids?
> If not, reindexing pg_class should fix it.
>
>> Can anyone suggest a strategy for removing the table?  I don't want to
>> start randomly deleting stuff from the catalogs.
>
> If there are two, manually deleting one is the only way to fix it.  Use
> the ctid to make sure you remove only one ...
>
>                        regards, tom lane
>



-- 
iGLASS Networks
www.iglass.net



[GENERAL] Database corruption with duplicate tables.

2010-04-19 Thread George Woodring
I have found that I have a database problem after receiving the
following error from pg_dump:

pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  more than one row returned
by a subquery used as an expression
pg_dump: The command was: SELECT tableoid, oid, typname, typnamespace,
(SELECT rolname FROM pg_catalog.pg_roles WHERE oid = typowner) as
rolname, typinput::oid as typinput, typoutput::oid as typoutput,
typelem, typrelid, CASE WHEN typrelid = 0 THEN ' '::"char" ELSE
(SELECT relkind FROM pg_class WHERE oid = typrelid) END as typrelkind,
typtype, typisdefined, typname[0] = '_' AND typelem != 0 AND (SELECT
typarray FROM pg_type te WHERE oid = pg_type.typelem) = oid AS isarray
FROM pg_type

Upon investigation I found that I have a table that is in the database twice:

db=> select oid, relname from pg_class where oid IN (26770910,
26770918, 26770919);
   oid    |                relname
----------+---------------------------------------
 26770910 | availcpedata_20100410
 26770918 | availcpedata_20100410_date_index
 26770919 | availcpedata_20100410_pollgrpid_index
 26770910 | availcpedata_20100410
(4 rows)

I was going to recreate the database by running "pg_dump -n public" to
get around the duplicate table, but I get the same pg_dump error
message.

Can anyone suggest a strategy for removing the table?  I don't want to
start randomly deleting stuff from the catalogs.

Thanks,
Woody


--
iGLASS Networks
www.iglass.net



Re: [GENERAL] Access Problem After Version Upgrade -- Update

2006-01-30 Thread George Woodring
We had the same problem after re-compiling on 8.1.2.  We had to edit the
/etc/ld.so.conf file and add "/usr/local/pgsql/lib" manually.

There is a mention of this in the DBD::Pg README file.

Woody


iGLASS Networks
211-A S. Salem St
Apex NC 27502
(919) 387-3550 x813
www.iglass.net



[GENERAL] Suggestions on storing and retrieving geocode data

2006-01-26 Thread George Woodring
I am looking for suggestions on storing and retrieving geocode
information.

My application currently stores 2 columns (lat, long) as numeric and I
have a btree index on them.  This works fine for the current set of
data, but as it expands I know it will become an issue.

I am looking at changing the index to an rtree, but I do not know whether it
is better to create a point column or whether I can use the existing
lat/long columns.

The query will always be to select points inside a box.
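To make the question concrete, here is an untested sketch of the sort of
thing I am considering; it assumes a table geodata(lat numeric, long
numeric) and a PostgreSQL version whose GiST point opclass supports
containment (x = longitude, y = latitude):

CREATE INDEX geodata_point_idx
    ON geodata USING gist (point(long::float8, lat::float8));

SELECT *
  FROM geodata
 WHERE point(long::float8, lat::float8)
       <@ box(point(-79.0, 36.0), point(-78.0, 35.0));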

Thanks in advance,
Woody


iGLASS Networks
211-A S. Salem St
Apex NC 27502
(919) 387-3550 x813
www.iglass.net



[GENERAL] Issue with fixseq.sql in 8.1 release notes

2006-01-12 Thread George Woodring

I have installed 8.1.2 on my test box and loaded data from a 7.4.8
database.

I was running fixseq.sql, copied from the release notes, to update the
sequences, and the output has an extra "\" character:

ALTER TABLE public.modems_old ALTER COLUMN modemsid SET DEFAULT
nextval\('modems_id');

To fix it I had to change the 3rd argument of the regexp_replace():

$$val\(\1$$ --> $$val(\1$$

I thought I would let someone know, just in case something has changed
since 8.1 and the release notes need updating.

Woody


iGLASS Networks
211-A S. Salem St
Apex NC 27502
(919) 387-3550 x813
www.iglass.net



[GENERAL] Foreign Key written as a trigger

2005-07-11 Thread George Woodring
I have a database that has been upgraded numerous times (it started around
7.1 and is now 7.4.8).  The table has 2 foreign keys on it, but when I look
at the table I see that the FK applied before 7.4.x is written as a
trigger.  Everything seems to work; I was just wondering if there are any
performance reasons to change the trigger to a FK.


issp=> \d notpoll
  Table "public.notpoll"
   Column    |  Type   | Modifiers
-------------+---------+-----------
 notpollid   | integer | not null
 pollgrpid   | integer |
 notgroupsid | integer |
Indexes:
"notpoll_pkey" primary key, btree (notpollid)
"notpoll_pollgrpid_key" unique, btree (pollgrpid)
Foreign-key constraints:
"pollgrp_exists" FOREIGN KEY (pollgrpid) REFERENCES
pollgrpinfo(pollgrpid) ON DELETE CASCADE
Triggers:
"RI_ConstraintTrigger_2137396" AFTER INSERT OR UPDATE ON notpoll
FROM notgroups NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE
PROCEDURE "RI_FKey_check_ins"('notgroups_exists', 'notpoll',
'notgroups', 'UNSPECIFIED', 'notgroupsid', 'notgroupsid')
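If it is ever worth converting, an untested sketch (note the old-style FK
is really a set of RI_ConstraintTrigger_* triggers, including some on the
referenced notgroups table, which would need dropping too):

BEGIN;
DROP TRIGGER "RI_ConstraintTrigger_2137396" ON notpoll;
ALTER TABLE notpoll
  ADD CONSTRAINT notgroups_exists
  FOREIGN KEY (notgroupsid) REFERENCES notgroups (notgroupsid);
COMMIT;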


Thanks,
Woody


iGLASS Networks
211-A S. Salem St
Apex NC 27502
(919) 387-3550 x813
www.iglass.net



[GENERAL] Variable column name in plpgsql function

2004-12-01 Thread George Woodring
I am trying to create a trigger that needs to capture the primary key
value out of the NEW variable.

However, the trigger can be called from 2 different tables, whose PKs are

Table1id
Table2id

Is it possible to declare a variable to build the column name

Tableid varchar(20) := TG_RELNAME || ''id'';

and then use this variable to get the PK value?

Tableidvalue int4 := NEW.tableid;
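On newer PostgreSQL (8.4 added EXECUTE ... USING, 9.1 added format()), one
hedged way to do this is through dynamic SQL; capture_pk here is just a
made-up example name:

CREATE OR REPLACE FUNCTION capture_pk() RETURNS trigger AS $$
DECLARE
    tableid_col text := TG_RELNAME || 'id';
    tableid_val int4;
BEGIN
    -- Fetch NEW.<tableid_col> without knowing the column name at write time.
    EXECUTE format('SELECT ($1).%I', tableid_col) INTO tableid_val USING NEW;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;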


Thanks,
Woody


iGLASS Networks
211-A S. Salem St
Apex NC 27502
(919) 387-3550 x813
www.iglass.net



[GENERAL] Issue adding foreign key

2004-10-28 Thread George Woodring
I have 2 existing tables in my db:

iss=> \d pollgrpinfo
 Table "public.pollgrpinfo"
    Column     |          Type          | Modifiers
---------------+------------------------+-----------
 pollgrpinfoid | integer                | not null
 pollgrpid     | integer                | not null
 name          | character varying(100) |
 descript      | character varying(200) |
Indexes:
"pollgrpinfo_pkey" primary key, btree (pollgrpinfoid)
"pollgrpinfo_pollgrpid_key" unique, btree (pollgrpid)
 
iss=> \d notpoll
   Table "public.notpoll"
   Column    |          Type          |      Modifiers
-------------+------------------------+---------------------
 notpollid   | integer                | not null
 pollgrpid   | integer                |
 notgroupsid | integer                |
 alerting    | character(1)           | default 'y'::bpchar
 disuser     | character varying(50)  |
 distime     | integer                |
 alertingcom | character varying(200) |
Indexes:
"notpoll_pkey" primary key, btree (notpollid)
"notpoll_pollgrpid_key" unique, btree (pollgrpid)
"notpoll_alerting_index" btree (alerting)
Triggers:
"RI_ConstraintTrigger_2110326" AFTER INSERT OR UPDATE ON notpoll
FROM notgroups NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE
PROCEDURE "RI_FKey_check_ins"('notgroups_exists', 'notpoll',
'notgroups', 'UNSPECIFIED', 'notgroupsid', 'notgroupsid')

I am trying to add a foreign key to the notpoll table

iss=> alter table notpoll add constraint pollgrp_exists foreign
key(pollgrpid) references pollgrpinfo on delete cascade;
ERROR:  insert or update on table "notpoll" violates foreign key
constraint "pollgrp_exists"
DETAIL:  Key (pollgrpid)=(7685) is not present in table "pollgrpinfo".

I have verified that the information is in the pollgrpinfo table:

iss=> select * from pollgrpinfo where pollgrpid=7685;
 pollgrpinfoid | pollgrpid | name | descript
---------------+-----------+------+----------
           767 |      7685 | HTTP |
(1 row)


I could use a suggestion on how to proceed in troubleshooting the error
message.  I am running 7.4.5.
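For reference, here is an untested anti-join that should list whatever rows
the constraint thinks are violations:

SELECT n.notpollid, n.pollgrpid
  FROM notpoll n
  LEFT JOIN pollgrpinfo p ON p.pollgrpid = n.pollgrpid
 WHERE n.pollgrpid IS NOT NULL
   AND p.pollgrpid IS NULL;

If that comes back empty while the ALTER still fails, I would start to
suspect the unique index on pollgrpinfo(pollgrpid) and try REINDEX TABLE
pollgrpinfo;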

Thanks,
Woody


iGLASS Networks
211-A S. Salem St
Apex NC 27502
(919) 387-3550 x813
www.iglass.net
