Re: [GENERAL] pg on Debian servers

2017-11-13 Thread Mark Morgan Lloyd

On 12/11/17 19:15, Karsten Hilbert wrote:

On Sat, Nov 11, 2017 at 01:03:18PM +, Mark Morgan Lloyd wrote:


Several legacy programs written in Delphi ground to a halt this morning,
which turned out to be because a Debian system had updated its copy of
PostgreSQL and restarted the server, which broke any live connections.

At least some versions of Delphi, not to mention other IDE/RAD tools with
database-aware components, don't automatically try to reestablish a database
session that's been interrupted. In any event, an unexpected server restart
(irrespective of all investment in UPSes etc.) has the potential to play
havoc with a clustered system.

Is there any way that either the package maintainer or a site
administrator/programmer such as myself can mark the Postgres server
packages as "manual upgrade only" or similar? Or since I'm almost certainly
not the first person to be bitten by this, is there a preferred hack in
mitigation?


Apart from that (putting packages on hold), PostgreSQL
updates on Debian don't upgrade existing clusters
automatically. They do create a new cluster but the old one
is kept around and stays running, IIRC even on the very same
port.

(Having gone all the way from PG 7.1 to PG 10 on Debian :)
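For the archive, putting the server packages on hold is a one-liner with apt-mark (the package names below assume a stock 9.6 install; adjust to what is actually installed):

```shell
# Held packages are skipped by both attended and unattended upgrades
apt-mark hold postgresql-9.6 postgresql-client-9.6 postgresql-common

# When ready to upgrade manually, lift the hold
apt-mark unhold postgresql-9.6 postgresql-client-9.6 postgresql-common
```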


With the caveat that Debian has only comparatively recently introduced 
unattended updates as the default... I think only with Stretch. If 
you're still on Jessie you can yet be saved :-)



What did

pg_lsclusters

say?


I don't have it from the time of the problem, but currently it gives me

Ver Cluster Port Status Owner    Data directory               Log file
9.6 main    5432 online postgres /var/lib/postgresql/9.6/main /var/log/postgresql/postgresql-9.6-main.log


i.e. a single-server system, although I've since done a manual restart 
so that I could change some DIMMs.


However syslog and postgresql-9.6-main.log show me this:

Nov 11 06:27:38 postgres1 systemd[1]: Starting Daily apt upgrade and clean activities...

Nov 11 06:28:05 postgres1 systemd[1]: Reloading.
Nov 11 06:28:07 postgres1 systemd[1]: Reloading.
Nov 11 06:28:07 postgres1 systemd[1]: Stopped PostgreSQL RDBMS.
Nov 11 06:28:07 postgres1 systemd[1]: Stopping PostgreSQL Cluster 9.6-main...

Nov 11 06:28:08 postgres1 systemd[1]: Stopped PostgreSQL Cluster 9.6-main.
Nov 11 06:28:10 postgres1 systemd[1]: Reloading.

2017-11-11 06:28:07.587 UTC [675] LOG:  received fast shutdown request
2017-11-11 06:28:07.587 UTC [675] LOG:  aborting any active transactions
[Session names here]
2017-11-11 06:28:07.607 UTC [730] LOG:  autovacuum launcher shutting down
[More session names here]
2017-11-11 06:28:07.680 UTC [727] LOG:  shutting down
2017-11-11 06:28:07.984 UTC [675] LOG:  database system is shut down
2017-11-11 06:28:13.039 UTC [11122] LOG:  database system was shut down at 2017-11-11 06:28:07 UTC
2017-11-11 06:28:13.081 UTC [11122] LOG:  MultiXact member wraparound protections are now enabled

2017-11-11 06:28:13.085 UTC [11126] LOG:  autovacuum launcher started
2017-11-11 06:28:13.085 UTC [11121] LOG:  database system is ready to accept connections
2017-11-11 06:28:13.371 UTC [11128] [unknown]@[unknown] LOG:  incomplete startup packet


All live applications saw that as a loss of database connectivity, yet 
when I was alerted by their squeals of anguish (MIDI on app servers has 
its uses :-) I found the database server running and accepting connections.



There must have been something additional at play.


The apps are written in Delphi, I admit not a very recent version, and 
they're due to be converted to Lazarus, which is an open-source and 
portable clone. I'll defend my choice of language since it is, 
basically, the best "4GL" you'll find.


However, one flaw of Delphi etc. is that they assume that they can safely 
hold a database session open for an extended period. I can't speak for 
Delphi any more since it has, basically, priced itself out of our league, 
particularly taking into account its lack of portability, but 
FPC/Lazarus appears to have something which is intended to reconnect a 
lost session, although it's so far unimplemented.


So I've got two options for fixing this at the application level: 
either fill in the unimplemented bit of the database control in the 
Lazarus Class Library, or prevent apps from holding database connections 
open. But the real problem, I feel, is that Debian is enabling 
unattended upgrades without checking with the user, and while an 
attended upgrade normally asks for confirmation before restarting a 
daemon, an unattended one doesn't.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg on Debian servers

2017-11-11 Thread Mark Morgan Lloyd

On 11/11/17 16:45, Jan Claeys wrote:

On Sat, 2017-11-11 at 14:23 +, Mark Morgan Lloyd wrote:

I think that the "preventing upgrades" route is the one to follow,
since inhibiting the restart would obviously present a risk that
something loaded dynamically could get out of step. As an at least
temporary hack I've disabled unattended updates using

# systemctl disable unattended-upgrades.service


Unattended-upgrades is configurable and allows whitelisting package
origins, as well as blacklisting packages so that they never get
upgraded automatically (you can still upgrade them manually, of
course).

See /etc/apt/apt.conf.d/50unattended-upgrades (the default version of
that file includes documentation as comments).

Also see the unattended-upgrade(8) manpage, and the on/off switch in
/etc/apt/apt.conf.d/20auto-upgrades


Thanks Jan, noted. I was, of course, working to a fairly traditional 
priority: get things running again, whine for a few hours, and only 
later implement a proper fix :-)
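(For the archive, the blacklist Jan mentions looks roughly like this in that file; the pattern below is an example, not a recommendation:)

```
// Excerpt from /etc/apt/apt.conf.d/50unattended-upgrades:
// packages matching these patterns are never upgraded automatically,
// though they can still be upgraded manually.
Unattended-Upgrade::Package-Blacklist {
    "postgresql-";
};
```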


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]




Re: [GENERAL] pg on Debian servers

2017-11-11 Thread Mark Morgan Lloyd

On 11/11/17 13:45, Christoph Berg wrote:

Re: Magnus Hagander 2017-11-11 

[GENERAL] pg on Debian servers

2017-11-11 Thread Mark Morgan Lloyd
Apologies for something which is distro-related, but I was bitten by a 
"silly mistake" (one of my own, I hasten to say) earlier.


Several legacy programs written in Delphi ground to a halt this morning, 
which turned out to be because a Debian system had updated its copy of 
PostgreSQL and restarted the server, which broke any live connections.


At least some versions of Delphi, not to mention other IDE/RAD tools 
with database-aware components, don't automatically try to reestablish a 
database session that's been interrupted. In any event, an unexpected 
server restart (irrespective of all investment in UPSes etc.) has the 
potential to play havoc with a clustered system.


Is there any way that either the package maintainer or a site 
administrator/programmer such as myself can mark the Postgres server 
packages as "manual upgrade only" or similar? Or since I'm almost 
certainly not the first person to be bitten by this, is there a 
preferred hack in mitigation?


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]




Re: [GENERAL] pg_logical/snapshots directory

2017-11-05 Thread Mark Fletcher
On Sat, Nov 4, 2017 at 10:45 AM, Mark Fletcher <ma...@corp.groups.io> wrote:

>
> While trying to track down my logical decoding problem, I noticed that
> my pg_logical/snapshots directory has ~5000 .snap files and is growing at a
> rate of about 4 files a minute. The earliest file is from yesterday
> afternoon, dating to the time I took our site down and added a column to a
> large table.
>
> This is with Postgres 9.6.5. After the directory got to 8000 files, I
decided it wasn't going to stop on its own. Here is my setup, and what I
ended up doing:

We run 6 logical streaming processes. First, I stopped and restarted those
processes. That did not clean up the pg_logical/snapshots directory. Then I
restarted Postgres itself. That also had no effect. Finally, I destroyed
all 6 logical slots. That was the thing that caused Postgres to clean up
the snapshots directory. I then recreated the slots and everything is back
to normal (normal meaning around 20 .snap files in that directory that get
rotated regularly).
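For anyone searching the archives, the slot dance described above boils down to the following (the slot name is hypothetical; test_decoding is the output plugin from our setup):

```sql
-- Dropping a logical slot discards its accumulated decoding state,
-- including the pg_logical/snapshots files it was pinning
SELECT pg_drop_replication_slot('slot_1');

-- Recreate the slot so the streaming process can reconnect
SELECT * FROM pg_create_logical_replication_slot('slot_1', 'test_decoding');
```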

Going back through my backups, it's clear that this problem began right
after I added a column to a table. So maybe that caused something to be
held that shouldn't be held?

Thanks,
Mark


[GENERAL] pg_logical/snapshots directory

2017-11-04 Thread Mark Fletcher
Hi All,

While trying to track down my logical decoding problem, I noticed that
my pg_logical/snapshots directory has ~5000 .snap files and is growing at a
rate of about 4 files a minute. The earliest file is from yesterday
afternoon, dating to the time I took our site down and added a column to a
large table.

I have not been able to find any docs on this directory. Can someone point
me to an explanation of this directory and how to manage the number of
.snap files in it (and/or whether I should be worried about the number of
files in it)?

Thanks,
Mark


Re: [GENERAL] Logical decoding error

2017-11-02 Thread Mark Fletcher
On Thu, Nov 2, 2017 at 9:59 AM, Steve Atkins <st...@blighty.com> wrote:

>
> Where are the errors coming from - your code or pgx? If it's from pgx,
> what's the exact error? ('w' is regular replication payload data, so it'd
> be expected as a copydata payload message type, but would be an error for a
> replication message).
>

The errors are coming from PGX. Best I can tell (and I'm not very familiar
with the PGX code, nor with the internals of Postgres replication), it's
waiting for a replication message from the backend, so it's expecting
things like CopyData/CopyInResponse/CopyOutResponse/CopyBothResponse. It
gets a 'w' msg type there, which it doesn't handle, so it errors back to me
(the copydata message type it expects there is 'd').

I'm only using PGX to open a replication connection, start replication,
read replication messages and send heartbeats.

I did open an issue on GitHub about this a couple of days ago, but haven't
heard back from the PGX developer, so I thought I should try to help out
and debug it myself. So I've been trying to educate myself on replication
internals and such. Hence my email here.


> Do you capture the raw data from the replication connection when the error
> happens?
>

I was not, but I have added that and some other logging, and will report
back when I get more info.


Thanks,
Mark


[GENERAL] Logical decoding error

2017-11-02 Thread Mark Fletcher
Hello,

Running Postgres 9.6.5, we're using logical decoding to take changes to the
database and propagate them elsewhere in our system. We are using the PGX
Go Postgres library, at https://github.com/jackc/pgx, and we are using the
test_decoding plugin to format the changes. We are using 6 slots/have 6
processes streaming the changes from our database.

This setup works great, except that every 20 hours or so, some or all of
the processes encounter a problem, all at the same time. They receive an
unexpected message type 'w'.  At this point the processes restart, and when
they do, they encounter another error: "ERROR: got sequence entry 0 for
toast chunk 20559160 instead of seq 6935 (SQLSTATE XX000)" (the chunk
number/seq number varies). This causes them to restart again. They will
encounter the sequence entry error up to 3 more times, before things
magically start to work again.

We are also doing standard streaming replication to a slave off this
database, and that has never seen a problem.

Does this ring a bell for anyone? Do you have any suggestions for how I
should go about figuring out what's happening?

Thanks,
Mark


[GENERAL] multiple sql results to shell

2017-10-23 Thread Mark Lybarger
I have this bash/sql script which outputs some curl commands. The
backticks cause it to be interpreted by the shell. This works fine if
there is one result, but when there are many rows returned, they run
together as one shell command.

any help on getting multiple rows returned to be executed by the shell
would be appreciated!

thanks!

`psql -P "tuples_only=on" -h ${DB_HOST} -d ${DB_NAME} -U ${DB_USER} -c
"select 'curl -X POST http://${REGISTER_HOST}:8080/' || source_id ||
'/${MT}/' || model || '/' || site || '/backoffice/register' from
myschema.events where source_id = $SOURCE_ID and ineffective_date is null"`
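One way around this (a sketch, untested against the original database) is to read the rows one at a time and execute each individually, rather than letting the backtick expansion mash them together. The printf below stands in for the psql invocation; in the real script the pipeline would start with psql -At -h "${DB_HOST}" -d "${DB_NAME}" -U "${DB_USER}" -c "select ..." so that -A (unaligned) and -t (tuples only) deliver each row as one clean line:

```shell
# Each line of query output is executed as its own shell command.
# printf here simulates two rows of psql -At output.
printf '%s\n' 'echo row1' 'echo row2' |
while IFS= read -r cmd; do
  eval "$cmd"
done
```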


Re: [GENERAL] archive_command fails but works outside of Postgres

2017-08-18 Thread Mark Watson
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] on behalf of twoflower
Sent: Friday, August 18, 2017 2:40 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] archive_command fails but works outside of Postgres

I changed my archive_command to the following:



archive_command = 'gsutil cp /storage/postgresql/9.6/main/%p 
gs://my_bucket/pg_xlog/'
and it fails, leaving the following in the log:



2017-08-18 18:34:25.057 GMT [1436][0]: [104319] LOG:  archive command failed with exit code 1

2017-08-18 18:34:25.057 GMT [1436][0]: [104320] DETAIL:  The failed archive command was: gsutil cp  /storage/postgresql/9.6/main/0001038B00D8 gs://my_bucket/pg_xlog/

2017-08-18 18:34:25.057 GMT [1436][0]: [104321] WARNING:  archiving transaction log file "0001038B00D8" failed too many times, will try again later

But the command works when executed manually:



root$ su postgres -c "gsutil cp /storage/postgresql/9.6/main/0001038B00D8 gs://my_bucket/pg_xlog/"

root$ echo $?

0
The last command verifies that gsutil indeed exited with 0.

How to best debug this issue?

I think the parameter %p contains the complete path of the file and that you 
may need to specify the file name for the destination. Try this for your 
archive command :
archive_command = 'gsutil cp "%p" "gs://my_bucket/pg_xlog/%f"'
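If the corrected command still fails under the server, wrapping it in a script can reveal why; a common culprit is that the archiver runs with a different environment (PATH, HOME, gsutil credentials) than an interactive su shell. This wrapper is a debugging sketch only; the script path and bucket are assumptions:

```shell
#!/bin/sh
# Hypothetical /usr/local/bin/archive_debug.sh, invoked from postgresql.conf as:
#   archive_command = '/usr/local/bin/archive_debug.sh %p %f'
# Appends the environment and gsutil's output to a log the postgres user can write,
# so a nonzero exit status can be diagnosed after the fact.
exec >>/var/lib/postgresql/archive_debug.log 2>&1
echo "$(date): archiving $1 as $2"
env | sort
gsutil cp "$1" "gs://my_bucket/pg_xlog/$2"
```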


-  Mark Watson


[GENERAL] Setting Variables within the PostgreSQL Service Process on AWS Linux

2017-08-03 Thread Mark Street
Hi All,

 

I was hoping someone could point me in the right direction. I want to use 
Python 3.4 with AWS Linux and PostgreSQL 9.4. By default the AWS Linux OS uses 
Python 2.7. I am trying to set the following variable in the PostgreSQL startup 
service:

 

PYTHONPATH=/usr/lib64/python3.4

 

Below is a snippet from the service:

 



# Set defaults for configuration variables

PGENGINE=/usr/pgsql-9.1/bin

PGPORT=5432

PGDATA=/var/lib/pgsql/9.1/data

PGLOG=/var/lib/pgsql/9.1/pgstartup.log

lockfile="/var/lock/subsys/${NAME}"

pidfile="/var/run/${NAME}.pid"

PYTHONPATH=/usr/lib64/python3.4

 

# Override defaults from /etc/sysconfig/pgsql if file is present

[ -f /etc/sysconfig/pgsql/${NAME} ] && . /etc/sysconfig/pgsql/${NAME}

 

export PGDATA

export PGPORT

export PYTHONPATH

---

 

The variable is never visible to the PostgreSQL process. When I do the same 
thing on CentOS 7 it works fine. The only way I can get the PostgreSQL process 
to see the variable is to set a global one in the /etc/profile.d directory. 
This unfortunately breaks AWS Linux utilities that use Python 2.7.
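One possible explanation, offered as a guess based on the stock Red Hat-style init scripts rather than this exact one: those scripts typically launch the postmaster via su -l postgres -c "...", and su -l builds a fresh login environment, discarding variables exported earlier in the script (which would also explain why /etc/profile.d works). If that is what's happening, the variable can be set inside the su invocation, where it survives the reset:

```shell
# Sketch of the start() line (the exact invocation varies by packaging):
# setting PYTHONPATH inside the -c string survives su -l's environment reset
su -l postgres -c "PYTHONPATH=/usr/lib64/python3.4 $PGENGINE/postmaster -p '$PGPORT' -D '$PGDATA' &" >> "$PGLOG" 2>&1
```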

 

Any help would be appreciated.

 

Regards,

 

Mark Street

 



Systemd support (was:Re: [GENERAL] Please say it isn't so)

2017-07-12 Thread Mark Morgan Lloyd

On 12/07/17 05:00, Steve Litt wrote:

Hi all,

Please tell me this is a mistake:

https://wiki.postgresql.org/wiki/Systemd

Why a database system should care about how processes get started is
beyond me. Systemd is an entangled mess that every year subsumes more
and more of the operating system, in a very non-cooperative way.

There are almost ten init systems. In every one of those init systems,
one can run a process supervisor, such as runit or s6 or
daemontools-encore, completely capable of starting the postgres server.

Every year, systemd further hinders interoperability, further erodes
interchangeability of parts, and continues to address problems with
WONTFIX. In the long run, you do your users no favor by including
init-system specific code in Postgres or its makefiles. If systemd
can't correctly start Postgres, I guarantee you that s6 or runit,
running on top of systemd, can.

Postgres doesn't care which language makes a query to it. Why
should Postgres care which init system started it? I hope you can free
Postgres of init-specific code, and if for some reason you can't do
that, at least don't recommend init-specific code.


OTOH, since systemd is what's being supported by a significant number of 
distributions, it makes sense to at least try to work robustly with it.


While my preference would have been to have made such a change at a 
major version transition, the reality is that database systems are 
competitive, and not keeping up with the underlying platform would have 
been very much to PostgreSQL's disadvantage.


OP: Please note that you do yourself no favours at all by posting a 
subject line which could very easily be misinterpreted as spam.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]




Re: [GENERAL] Postgres 9.6.2 and pg_log - solved

2017-04-28 Thread Mark Watson

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] on behalf of Mark Watson
Sent: Monday, April 24, 2017 3:39 PM
To: David G. Johnston
Cc: (pgsql-general@postgresql.org)
Subject: Re: [GENERAL] Postgres 9.6.2 and pg_log


From: David G. Johnston [mailto:david.g.johns...@gmail.com]
Sent: Monday, April 24, 2017 3:15 PM
To: Mark Watson
Cc: (pgsql-general@postgresql.org<mailto:pgsql-general@postgresql.org>)
Subject: Re: [GENERAL] Postgres 9.6.2 and pg_log

On Mon, Apr 24, 2017 at 11:27 AM, Mark Watson 
<mark.wat...@jurisconcept.ca<mailto:mark.wat...@jurisconcept.ca>> wrote:

log_destination = 'stderr'      # Valid values are combinations of
                                # stderr, csvlog, syslog, and eventlog,
                                # depending on platform.  csvlog
                                # requires logging_collector to be on.

# This is used when logging to stderr:
logging_collector = on          # Enable capturing of stderr and csvlog
                                # into log files. Required to be on for
                                # csvlogs.
                                # (change requires restart)


I'm out of ideas...

David J.
Not to worry. This week I’m uninstalling and reinstalling postgres 9.6. I’ll do 
some tests and let this list know.

Mark Watson
Solved (sort of)
After a complete uninstall and fresh install of 9.6.2, everything is behaving 
as normal. I’m inclined to chalk it up to an artifact of updating 
9.6 beta => 9.6 RC => 9.6.0 => 9.6.1 => 9.6.2. Anyway, I’m not keen on 
repeating the updates. By the way, thanks all for the excellent work with 9.6!

Mark Watson


Re: [GENERAL] Postgres 9.6.2 and pg_log

2017-04-24 Thread Mark Watson

From: David G. Johnston [mailto:david.g.johns...@gmail.com]
Sent: Monday, April 24, 2017 3:15 PM
To: Mark Watson
Cc: (pgsql-general@postgresql.org)
Subject: Re: [GENERAL] Postgres 9.6.2 and pg_log

On Mon, Apr 24, 2017 at 11:27 AM, Mark Watson 
<mark.wat...@jurisconcept.ca<mailto:mark.wat...@jurisconcept.ca>> wrote:

log_destination = 'stderr'      # Valid values are combinations of
                                # stderr, csvlog, syslog, and eventlog,
                                # depending on platform.  csvlog
                                # requires logging_collector to be on.

# This is used when logging to stderr:
logging_collector = on          # Enable capturing of stderr and csvlog
                                # into log files. Required to be on for
                                # csvlogs.
                                # (change requires restart)


I'm out of ideas...

David J.
Not to worry. This week I’m uninstalling and reinstalling postgres 9.6. I’ll do 
some tests and let this list know.

Mark Watson


Re: [GENERAL] Postgres 9.6.2 and pg_log

2017-04-24 Thread Mark Watson
From: David G. Johnston [mailto:david.g.johns...@gmail.com]
Sent: Monday, April 24, 2017 2:18 PM
To: Mark Watson
Cc: (pgsql-general@postgresql.org)
Subject: Re: [GENERAL] Postgres 9.6.2 and pg_log

On Mon, Apr 24, 2017 at 10:58 AM, Mark Watson 
<mark.wat...@jurisconcept.ca<mailto:mark.wat...@jurisconcept.ca>> wrote:

The lines log_rotation_age and log_rotation_size are commented, and currently 
are:
#log_rotation_age = 1d          # Automatic rotation of logfiles will
                                # happen after that time.  0 disables.
#log_rotation_size = 10MB       # Automatic rotation of logfiles will
                                # happen after that much log output.
                                # 0 disables.


I see from your reference article that the log_rotation_age is now in minutes, 
and I will adjust that to 1440 (1 day). I don’t know where the “1d” came from. 
I know it used to be like this in earlier versions.


Those are fine: See "Numeric with Unit" @
https://www.postgresql.org/docs/9.6/static/config-setting.html

I think the actual missing, and (just checked) defaulted off, parameter is 
"logging_collector"

David J.
I have (copy/paste) :
log_destination = 'stderr'      # Valid values are combinations of
                                # stderr, csvlog, syslog, and eventlog,
                                # depending on platform.  csvlog
                                # requires logging_collector to be on.

# This is used when logging to stderr:
logging_collector = on          # Enable capturing of stderr and csvlog
                                # into log files. Required to be on for
                                # csvlogs.
                                # (change requires restart)

Mark Watson


Re: [GENERAL] Postgres 9.6.2 and pg_log

2017-04-24 Thread Mark Watson

From: David G. Johnston [mailto:david.g.johns...@gmail.com]
Sent: Monday, April 24, 2017 1:34 PM
To: Mark Watson
Cc: (pgsql-general@postgresql.org)
Subject: Re: [GENERAL] Postgres 9.6.2 and pg_log

On Mon, Apr 24, 2017 at 8:43 AM, Mark Watson 
<mark.wat...@jurisconcept.ca<mailto:mark.wat...@jurisconcept.ca>> wrote:
Good day all,

I just noticed an anomaly regarding the logging. I have my logging set up as 
follows:
log_filename = 'postgresql-%d.log'
log_truncate_on_rotation = on

I don't see "log_rotation_age" and/or "log_rotation_size" here [1] and at 
least one needs to be set in order to enable actual rotation; the "truncate" 
option simply tells PostgreSQL what to do when encountering a file with the 
same name during the rotation process.

log_rotation_age apparently has under-documented intelligence, since I would 
expect a server that starts up mid-hour and uses a 60-minute rotation to rotate 
mid-hour as well, so the log would contain 1 hour's worth of data but the 
leading hours would be different.  The examples in log_truncate_on_rotation 
indicate that this isn't the case.  I have not tested reality or read the source.

This is on Windows 10, 64-bit
PostgreSQL 9.2.2, compiled by Visual C++ build 1800, 64-bit
(EnterpriseDB installer)

Note that this is not a major concern on my end; postgres 9.6.2 has otherwise 
been running flawlessly.


Um...you're reporting a very outdated 9.2 release in the supposed copy-paste 
job above but claiming 9.6.2 ...

[1] 
https://www.postgresql.org/docs/9.6/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHERE

David J.
Sorry about the version; I queried SELECT version() in pgAdmin4 and did not 
copy/paste the results. It is version 9.6.2.


Re: [GENERAL] Postgres 9.6.2 and pg_log

2017-04-24 Thread Mark Watson


From: David G. Johnston [mailto:david.g.johns...@gmail.com]
Sent: Monday, April 24, 2017 1:34 PM
To: Mark Watson
Cc: (pgsql-general@postgresql.org)
Subject: Re: [GENERAL] Postgres 9.6.2 and pg_log

On Mon, Apr 24, 2017 at 8:43 AM, Mark Watson 
<mark.wat...@jurisconcept.ca<mailto:mark.wat...@jurisconcept.ca>> wrote:
Good day all,

I just noticed an anomaly regarding the logging. I have my logging set up as 
follows:
log_filename = 'postgresql-%d.log'
log_truncate_on_rotation = on

I don't see "log_rotation_age" and/or "log_rotation_size" here [1] and at 
least one needs to be set in order to enable actual rotation; the "truncate" 
option simply tells PostgreSQL what to do when encountering a file with the 
same name during the rotation process.

log_rotation_age apparently has under-documented intelligence, since I would 
expect a server that starts up mid-hour and uses a 60-minute rotation to rotate 
mid-hour as well, so the log would contain 1 hour's worth of data but the 
leading hours would be different.  The examples in log_truncate_on_rotation 
indicate that this isn't the case.  I have not tested reality or read the source.

This is on Windows 10, 64-bit
PostgreSQL 9.2.2, compiled by Visual C++ build 1800, 64-bit
(EnterpriseDB installer)

Note that this is not a major concern on my end; postgres 9.6.2 has otherwise 
been running flawlessly.


Um...you're reporting a very outdated 9.2 release in the supposed copy-paste 
job above but claiming 9.6.2 ...

[1] 
https://www.postgresql.org/docs/9.6/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHERE

David J.

Thanks, David,
The lines log_rotation_age and log_rotation_size are commented, and currently 
are:
#log_rotation_age = 1d          # Automatic rotation of logfiles will
                                # happen after that time.  0 disables.
#log_rotation_size = 10MB       # Automatic rotation of logfiles will
                                # happen after that much log output.
                                # 0 disables.


I see from your reference article that the log_rotation_age is now in minutes, 
and I will adjust that to 1440 (1 day). I don’t know where the “1d” came from. 
I know it used to be like this in earlier versions.

Mark Watson


[GENERAL] Postgres 9.6.2 and pg_log

2017-04-24 Thread Mark Watson
Good day all,

I just noticed an anomaly regarding the logging. I have my logging set up as 
follows:
log_filename = 'postgresql-%d.log'
log_truncate_on_rotation = on

My log file postgresql-21.log contains only entries for today (April 24). When 
I restart the service, entries correctly start accumulating in 
postgresql-24.log.  For example, my postgresql-21.log contains:
2017-04-24 11:19:34 EDT LOG:  received fast shutdown request
2017-04-24 11:19:34 EDT LOG:  aborting any active transactions
2017-04-24 11:19:34 EDT LOG:  autovacuum launcher shutting down
2017-04-24 11:19:35 EDT LOG:  shutting down
2017-04-24 11:19:35 EDT LOG:  database system is shut down

My postgresql-24.log contains:
2017-04-24 11:19:40 EDT LOG:  database system was shut down at 2017-04-24 
11:19:35 EDT
2017-04-24 11:19:40 EDT LOG:  MultiXact member wraparound protections are now 
enabled
2017-04-24 11:19:40 EDT LOG:  database system is ready to accept connections
2017-04-24 11:19:40 EDT LOG:  autovacuum launcher started

This is on Windows 10, 64-bit
PostgreSQL 9.2.2, compiled by Visual C++ build 1800, 64-bit
(EnterpriseDB installer)

Note that this is not a major concern on my end; postgres 9.6.2 has otherwise 
been running flawlessly.

Mark Watson








Re: [GENERAL] Request to add feature to the Position function

2017-03-27 Thread Mark Watson


From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] on behalf of Ron Ben
Sent: Monday, March 27, 2017 11:05 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Request to add feature to the Position function


> position(substring in string)

> as listed here:

> https://www.postgresql.org/docs/9.1/static/functions-string.html

> locates sub string in a string.

>

> It doesn't support locating the substring from the back.


If what you mean by ‘from the back’ is ‘the last occurrence in a string read 
from left to right’, here is a quickie plpgsql function:

CREATE OR REPLACE FUNCTION rposition(substr text, str text)
  RETURNS integer AS
$BODY$
declare
  pos integer;
  lastpos integer;
begin
  pos := position(substr in str);
  lastpos := 0;

  while pos <> 0 loop
    lastpos := pos;
    pos := position(substr in substring(str from pos + 1));
    if pos > 0 then pos := pos + lastpos; end if;
  end loop;
  return lastpos;
end;
$BODY$
  LANGUAGE plpgsql IMMUTABLE;
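A quick sanity check of the function above, worked by hand against 'hello world', where the last 'l' is the 10th character:

```sql
SELECT rposition('l', 'hello world');  -- returns 10
```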


Re: [GENERAL] Running TAP regression tests under windows/msvc

2017-03-07 Thread Mark Dilger

> On Mar 7, 2017, at 12:24 PM, Mark Dilger <hornschnor...@gmail.com> wrote:
> 
> Hello,
> 
> I am attempting to get the tap tests working under windows so as to
> help review patches for the 10.0 development cycle.  I can compile
> the sources on windows 2008 using the MS Visual C and run the
> 
>   vcregress.bat check
> 
> without any problems or failures.  In an attempt to run the tap tests,
> I have edited config_default.pl as follows:
> 
> diff --git a/src/tools/msvc/config_default.pl 
> b/src/tools/msvc/config_default.pl
> index 97f1af8..1e7b19d 100644
> --- a/src/tools/msvc/config_default.pl
> +++ b/src/tools/msvc/config_default.pl
> @@ -16,7 +16,7 @@ our $config = {
>extraver  => undef,# --with-extra-version=
>gss   => undef,# --with-gssapi=
>nls   => undef,# --enable-nls=
> -   tap_tests => undef,# --enable-tap-tests
> +   tap_tests => 1,# --enable-tap-tests
>tcl   => undef,# --with-tls=
>perl  => undef,# --with-perl
>python=> undef,# --with-python=
> 
> and when I run
> 
>   vcregress.bat bincheck
> 
> I get a few failures.  Am I doing something wrong, or are these failures
> the same for other folks?  A portion of the log of the regressions follows:

I added a bit of debugging logic to PostgresNode.pm, to print out the
name of the log file being grep'd and the size of that file, and it seems
in these cases the log file is of size zero (empty).

not ok 10 - SQL CLUSTER run: SQL found in server log c:/jenkins/workspace/unicorns/postgresql/src/bin/scripts/tmp_check/log/010_clusterdb_main.log, length 0
ok 11 - fails with nonexistent table
ok 12 - clusterdb -t test1 exit code 0
not ok 13 - cluster specific table: SQL found in server log c:/jenkins/workspace/unicorns/postgresql/src/bin/scripts/tmp_check/log/010_clusterdb_main.log, length 0
ok 14 - clusterdb with connection string
Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/14 subtests 
ok 14 - clusterdb with connection string
Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/14 subtests 

Perhaps there is a race condition between when the test is run and when the
log file is flushed?  I'm just guessing here.



[GENERAL] Running TAP regression tests under windows/msvc

2017-03-07 Thread Mark Dilger
Hello,

I am attempting to get the tap tests working under windows so as to
help review patches for the 10.0 development cycle.  I can compile
the sources on windows 2008 using the MS Visual C and run the

vcregress.bat check

without any problems or failures.  In an attempt to run the tap tests,
I have edited config_default.pl as follows:

diff --git a/src/tools/msvc/config_default.pl b/src/tools/msvc/config_default.pl
index 97f1af8..1e7b19d 100644
--- a/src/tools/msvc/config_default.pl
+++ b/src/tools/msvc/config_default.pl
@@ -16,7 +16,7 @@ our $config = {
extraver  => undef,# --with-extra-version=
gss   => undef,# --with-gssapi=
nls   => undef,# --enable-nls=
-   tap_tests => undef,# --enable-tap-tests
+   tap_tests => 1,# --enable-tap-tests
tcl   => undef,# --with-tls=
perl  => undef,# --with-perl
python=> undef,# --with-python=

and when I run

vcregress.bat bincheck

I get a few failures.  Am I doing something wrong, or are these failures
the same for other folks?  A portion of the log of the regressions follows:


All tests successful.
Files=5, Tests=17, 323 wallclock secs ( 0.08 usr +  0.03 sys =  0.11 CPU)
Result: PASS
t/001_pgbench.pl .. 
1..3
ok 1 - concurrent OID generation: exit code 0
ok 2 - concurrent OID generation: no stderr
ok 3 - concurrent OID generation: matches
ok
All tests successful.
Files=1, Tests=3, 12 wallclock secs ( 0.08 usr +  0.03 sys =  0.11 CPU)
Result: PASS

#   Failed test 'SQL CLUSTER run: SQL found in server log'
#   at c:/jenkins/workspace/unicorns/postgresql/src/test/perl/PostgresNode.pm 
line 1321.
#   ''
# doesn't match '(?^:statement: CLUSTER;)'

#   Failed test 'cluster specific table: SQL found in server log'
#   at c:/jenkins/workspace/unicorns/postgresql/src/test/perl/PostgresNode.pm 
line 1321.
#   ''
# doesn't match '(?^:statement: CLUSTER test1;)'
# Looks like you failed 2 tests of 14.
t/010_clusterdb.pl  
1..14
ok 1 - clusterdb --help exit code 0
ok 2 - clusterdb --help goes to stdout
ok 3 - clusterdb --help nothing to stderr
ok 4 - clusterdb --version exit code 0
ok 5 - clusterdb --version goes to stdout
ok 6 - clusterdb --version nothing to stderr
ok 7 - clusterdb with invalid option nonzero exit code
ok 8 - clusterdb with invalid option prints error message
ok 9 - clusterdb exit code 0
not ok 10 - SQL CLUSTER run: SQL found in server log
ok 11 - fails with nonexistent table
ok 12 - clusterdb -t test1 exit code 0
not ok 13 - cluster specific table: SQL found in server log
ok 14 - clusterdb with connection string
Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/14 subtests 

#   Failed test 'cluster all databases: SQL found in server log'
#   at c:/jenkins/workspace/unicorns/postgresql/src/test/perl/PostgresNode.pm 
line 1321.
#   ''
# doesn't match '(?^s:statement: CLUSTER.*statement: CLUSTER)'
# Looks like you failed 1 test of 2.
t/011_clusterdb_all.pl  
1..2
ok 1 - clusterdb -a exit code 0
not ok 2 - cluster all databases: SQL found in server log
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/2 subtests 

#   Failed test 'SQL CREATE DATABASE run: SQL found in server log'
#   at c:/jenkins/workspace/unicorns/postgresql/src/test/perl/PostgresNode.pm 
line 1321.
#   ''
# doesn't match '(?^:statement: CREATE DATABASE foobar1)'

#   Failed test 'create database with encoding: SQL found in server log'
#   at c:/jenkins/workspace/unicorns/postgresql/src/test/perl/PostgresNode.pm 
line 1321.
#   ''
# doesn't match '(?^:statement: CREATE DATABASE foobar2 ENCODING 'LATIN1')'
# Looks like you failed 2 tests of 13.
t/020_createdb.pl . 
1..13
ok 1 - createdb --help exit code 0
ok 2 - createdb --help goes to stdout
ok 3 - createdb --help nothing to stderr
ok 4 - createdb --version exit code 0
ok 5 - createdb --version goes to stdout
ok 6 - createdb --version nothing to stderr
ok 7 - createdb with invalid option nonzero exit code
ok 8 - createdb with invalid option prints error message
ok 9 - createdb foobar1 exit code 0
not ok 10 - SQL CREATE DATABASE run: SQL found in server log
ok 11 - createdb -l C -E LATIN1 -T template0 foobar2 exit code 0
not ok 12 - create database with encoding: SQL found in server log
ok 13 - fails if database already exists
Dubious, test returned 2 (wstat 512, 0x200)
Failed 2/13 subtests 

#   Failed test 'SQL CREATE EXTENSION run: SQL found in server log'
#   at c:/jenkins/workspace/unicorns/postgresql/src/test/perl/PostgresNode.pm 
line 1321.
#   ''
# doesn't match '(?^:statement: CREATE EXTENSION "plpgsql")'
# Looks like you failed 1 test of 14.




Thanks in advance for any clarification regarding what I might be doing wrong.

Mark Dilger



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make 

Re: [GENERAL] PostgreSQL on eMMC - Corrupt file system

2017-02-10 Thread Mark Morgan Lloyd

On 09/02/17 23:00, Christoph Moench-Tegeder wrote:

## Thomas Güttler (guettl...@thomas-guettler.de):


 Is running linux with postgres on eMMC a bad idea in general?


I'd say that running anything with a read-write load on eMMC will
end in pieces. It's ok to occasionally write something, but a mixed
load is not really what these things were designed for. The wear
leveling can be quite basic, you never know when it's gonna happen
(i.e. sudden power down can kill your filesystem - that's why disabling
journaling is not a very great idea), and if your device is "mostly
full" anyways, the wear leveling has not much space to redirect the
writes to. Remember that some of those chips are sold mostly by
price - that is, the hobbyist "embedded" devices get the cheapest
chips. A safer bet would be adding an external storage; some
64GB SATA SSDs are available for less than 50€ (perhaps it's better
not to go for the cheapest ones here, too).


I agree, but have three additional comments. First, we've got a fair number 
of RPis running their root filesystems on the internal SD-Card without 
problems, but the one Odroid which runs an eMMC card failed a few weeks 
ago. Second, a useful precaution is to put stuff which will be updated 
on an external device, although the same longevity concerns apply if 
it's Flash-based. Third, experience here suggests that reliability 
/might/ be improved if you fully zero a device before partitioning it to 
make absolutely sure that the internal controller has touched every block.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Regrding:- Arduino-Postgresql Direct Connection

2016-12-13 Thread Mark Morgan Lloyd

On 09/12/16 21:30, John R Pierce wrote:

On 12/8/2016 8:56 PM, Roshan Jha wrote:

In my project, I have to send data from the Arduino directly to
PostgreSQL for storage. So I want to know whether it is possible to make
a direct connection between the Arduino and PostgreSQL. If yes, please
let me know how I should do it.


Arduinos don't even have native networking, do they?  So you'll need an
Ethernet adapter, and someone's TCP/IP library, just to get data off the board.

It might be easier to do this on a Raspberry Pi: they run a small
Linux system natively and have Ethernet built in; the Pi 3 has
integrated WiFi, too.

To talk to PostgreSQL directly, you would need a libpq compiled for
the Arduino; that compilation, while theoretically possible, would quite
likely be rather tricky.

Instead, I'd suggest implementing a front end, perhaps RESTful, that
runs on an application server (either on the Postgres database server or
another nearby server) and having the Arduino send its data to it. This
front end would format the data and insert it into the Postgres
database.   Keep the app-server API at a simple, application-oriented
level ('GET record', 'PUT record' kind of thing...) so the Arduino code
can be kept quite simple.


I really don't think an Arduino is the tool for this job, but I've known 
plenty of people who'd try to make do... and wouldn't know when to give up.


I'd add my vote for Raspberry Pi here; if the OP has an unavoidable need 
to use Arduino "shields", there's an interface called the Raspduino which 
would help.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to open PGStrom (an extension of PostgreSQL) in Netbeans?

2016-11-22 Thread Mark Anns
Yes, making the file is the problem. If you read my topic again, you may
see what the exact question is.



--
View this message in context: 
http://postgresql.nabble.com/How-to-open-PGStrom-an-extension-of-PostgreSQL-in-Netbeans-tp5931425p5931594.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to open PGStrom (an extension of PostgreSQL) in Netbeans?

2016-11-22 Thread Mark Anns
Nope, I am not asking about installation instructions. I have installed it,
and I know how to run it from the command line.

I just wanted to compile it in netbeans.



--
View this message in context: 
http://postgresql.nabble.com/How-to-open-PGStrom-an-extension-of-PostgreSQL-in-Netbeans-tp5931425p5931431.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to open PGStrom (an extension of PostgreSQL) in Netbeans?

2016-11-21 Thread Mark Anns
PostgreSQL has been successfully compiled in NetBeans 8.1, but how do I add
its extension PG_Strom to it?



--
View this message in context: 
http://postgresql.nabble.com/How-to-open-PGStrom-an-extension-of-PostgreSQL-in-Netbeans-tp5931425p5931427.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to open PGStrom (an extension of PostgreSQL) in Netbeans?

2016-11-21 Thread Mark Anns
Considering PGStrom, an extension of PostgreSQL 9.5.4, I tried opening its
sources in NetBeans 8.1.

I opened PGStrom in NetBeans via File -> New Project -> C/C++ -> C/C++
Project with Existing Sources, then selected the folder that contains the
existing sources (PG_Strom), and then Finish.

It is showing a "configuration failed" error:

Creating project was successful
Running make clean failed
Running make was successful
Configuring project for code assistance failed

Details:

0 out of 27 source files have limited code assistance
17 out of 17 header files have limited code assistance

What am I missing? How can I fix this? I think it also needs CUDA.

Thanks



--
View this message in context: 
http://postgresql.nabble.com/How-to-open-PGStrom-an-extension-of-PostgreSQL-in-Netbeans-tp5931425.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How the Planner in PGStrom differs from PostgreSQL?

2016-11-21 Thread Mark Anns
Which functions (for example) are or are not available to be transformed
into GPU source code?

What factor do you multiply the actual CPU cost by? For example, the
default cpu_tuple_cost is 0.01.

For example, if the cost is 0.00..458.00 for a seq scan, how is it scaled
to get the cost for the GPU, considering any one GPU card?

Is there any documentation covering these GPU details?



--
View this message in context: 
http://postgresql.nabble.com/How-the-Planner-in-PGStrom-differs-from-PostgreSQL-tp5929724p5931271.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How the Planner in PGStrom differs from PostgreSQL?

2016-11-17 Thread Mark Anns
"fraction of the cost of executing the same portion of the plan using
the traditional CPU processing"

Can you explain this fraction in detail?

I also need clarification on the query plan tree.

Executing a query on the CPU is different from executing the same query on
the GPU, so the plan also differs. How does it differ?



--
View this message in context: 
http://postgresql.nabble.com/How-the-Planner-in-PGStrom-differs-from-PostgreSQL-tp5929724p5930903.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How the Planner in PGStrom differs from PostgreSQL?

2016-11-17 Thread Mark Anns
Can you explain this statement: "check whether the scan qualifier can
be executable on GPU device"?

What are the scan qualifiers?

How do you determine whether they are device-executable or not?

The cost estimates are entirely based on the number of rows and the type of
scan, so they would be the same for both CPU and GPU. How can the cheaper
one be chosen by comparing CPU and GPU estimates?




--
View this message in context: 
http://postgresql.nabble.com/How-the-Planner-in-PGStrom-differs-from-PostgreSQL-tp5929724p5930783.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How the Planner in PGStrom differs from PostgreSQL?

2016-11-17 Thread Mark Anns
Can you explain this statement: "check whether the scan qualifier can
be executable on GPU device"?

What are the scan qualifiers?

How do you determine whether they are device-executable or not?



--
View this message in context: 
http://postgresql.nabble.com/How-the-Planner-in-PGStrom-differs-from-PostgreSQL-tp5929724p5930781.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How the Planner in PGStrom differs from PostgreSQL?

2016-11-15 Thread Mark Anns
Thanks for your response.

But planning a query execution on the GPU is different from planning a
query execution on the CPU, right?

Likewise for cost calculation: the cost of executing a query on the GPU is
different from the cost of executing it on the CPU. How does this cost
calculation for the GPU work?



--
View this message in context: 
http://postgresql.nabble.com/How-the-Planner-in-PGStrom-differs-from-PostgreSQL-tp5929724p5930376.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How the Planner in PGStrom differs from PostgreSQL?

2016-11-15 Thread Mark Anns
Yeah, I think Kouhei Kaigai is one of the contributors, so I'm expecting
his reply.

And thanks for your kind responses



--
View this message in context: 
http://postgresql.nabble.com/How-the-Planner-in-PGStrom-differs-from-PostgreSQL-tp5929724p5930373.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How the Planner in PGStrom differs from PostgreSQL?

2016-11-14 Thread Mark Anns
Thank you so much for your references.

How do the planning factors of PGStrom differ from the planning factors of
PostgreSQL?



--
View this message in context: 
http://postgresql.nabble.com/How-the-Planner-in-PGStrom-differs-from-PostgreSQL-tp5929724p5930356.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How the Planner in PGStrom differs from PostgreSQL?

2016-11-14 Thread Mark Anns

Thank you so much for your kind reply.

I am just curious about these planning factors on the GPU.

There can be more than one appropriate path in the query plan tree. How is
the decision for a particular path made, considering those planning
factors?





--
View this message in context: 
http://postgresql.nabble.com/How-the-Planner-in-PGStrom-differs-from-PostgreSQL-tp5929724p5930354.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How the Planner in PGStrom differs from PostgreSQL?

2016-11-09 Thread Mark Anns
I am reading through Postgres and PGStrom. Regarding the planning factors,
I need some clarifications. Can you help me with that?

The planner in Postgres checks different scan and join methods, finds the
cheapest one, and creates a query plan tree. When doing the same thing on
the GPU, a check must also be made for whether each node is
device-executable, and the query plan tree from Postgres is updated.

How does this planning for the GPU actually work? How are device
executables determined? What factors does the GPU planner consider?

For example, in gpujoin.c, the function responsible for this plan is
pgstrom_post_planner_gpujoin. What does this function do? It updates some
target lists. What are they? It checks pgstrom_device_expression(tle->expr),
i.e., for device executables. What tasks does this cover?

Best regards



--
View this message in context: 
http://postgresql.nabble.com/How-the-Planner-in-PGStrom-differs-from-PostgreSQL-tp5929724.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PostgreSQL and ArcGIS used in UK military exercise

2016-10-30 Thread Mark Morgan Lloyd

Potentially useful publicity.

http://www.theregister.co.uk/2016/10/28/unmanned_warrior_esri_argcis_cloud_based_mapping/

--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] update records to have occurance number

2016-10-25 Thread Mark Lybarger
I want to update a table to have the value of the occurrence number.  For
instance, I have the below table.  I want to update the number column to
increment the count of last name occurrences, so that it looks like this:

first last 1
second last 2
third last 3
first other 1
next other 2

Here's my simple table:

create table person ( fname text, lname text, number integer);

insert into person (fname, lname) values ('first', 'last');
insert into person (fname, lname) values ('second', 'last');
insert into person (fname, lname) values ('third', 'last');

insert into person (fname, lname) values ('first', 'other');
insert into person (fname, lname) values ('next', 'other');

How would I issue an update statement to update the number column?

thanks!
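For the record, one way to do this in PostgreSQL (a sketch, not taken from
the thread's replies) is the row_number() window function in an
UPDATE ... FROM. Since the table has no primary key, the physical row
identifier ctid can serve as the join key; note that some explicit ORDER BY
is needed to make the numbering deterministic, and fname only happens to
give the desired order for this sample data:

```sql
-- Number each row within its lname group, then join the numbered
-- subquery back to the table on ctid (there is no primary key).
UPDATE person p
SET number = s.rn
FROM (
    SELECT ctid,
           row_number() OVER (PARTITION BY lname
                              ORDER BY fname) AS rn
    FROM person
) s
WHERE p.ctid = s.ctid;
```

After this, `select * from person order by lname, number;` should show the
occurrence numbers described above. Adding a serial or timestamp column
would give a more reliable ordering than fname.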


Re: [GENERAL] postgres driver for mysql

2016-09-11 Thread Mark Morgan Lloyd

On 05/09/16 18:46, Mimiko wrote:

Hello to all.

I want to move applications to Postgres, but there are applications that
can use only MySQL, local MDB, or MSSQL. For now I run a MySQL server to
which these applications are connected. A long time ago I encountered a
federated module for MySQL that allowed storing data in Postgres, but that
site no longer exists.


Have you checked archive.org? At the very least that would give you 
things like developers names that you can follow up, since there might 
be a continuation project.


We certainly can't do that for you, since you haven't given us a URL or 
told us the name of the module.



Can anyone suggest a DB federation plugin for MySQL/MariaDB to store data
in PG? Changing the applications is impossible; they are proprietary and
work only with specific databases.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] foreign key with where clause

2016-08-18 Thread Mark Lybarger
I have two tables that I want to link with a FK where the referenced
record in table b is "active".

Some googling shows that I could use a function and a check constraint on
the function, but that only works for inserts, not for updates on table b.

create table a (id integer, name text);
create table b (id integer, active boolean);

alter table a add column b_id integer;
-- how to do this? (not valid SQL)
alter table a add foreign key (b_id) references b(id) where b.active = true

help :).
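One commonly suggested workaround (a sketch, with column names invented
here) is to carry the active flag into a composite foreign key: give b a
unique constraint on (id, active), and pin the referencing side's copy of
the flag to true, so the FK can only ever match active rows:

```sql
create table b (
    id     integer primary key,
    active boolean not null,
    unique (id, active)              -- target for the composite FK
);

create table a (
    id       integer primary key,
    name     text,
    b_id     integer,
    -- pinned to true by the CHECK, so the FK below can only
    -- match rows in b where active = true
    b_active boolean not null default true check (b_active),
    foreign key (b_id, b_active) references b (id, active)
);
```

One consequence to be aware of: flipping a referenced row in b to
active = false is then rejected by the FK (or cascades, depending on the
ON UPDATE action), which may or may not be the behaviour wanted here.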


[GENERAL] unique constraint with several null values

2016-07-20 Thread Mark Lybarger
I have a relation such as
create table order_item ( id uuid not null primary key, order_id bigint not
null, item_code text, make text, model text, reason text, size text,
expiration_date timestamp );

where the combination of the columns order_id, item_code, make, model,
reason, size must be unique (unless there's an expiration date).

I'm inclined to use a unique index:

create unique index unique_index_order_item_1 on order_item (order_id,
item_code, make, model, reason, size)
where expiration_date is null;

This works as expected and the duplicate row is rejected:

insert into order_item values ('00000000-0000-0000-0000-000000000001', 88,
'CODE', 'MAKE', 'MODEL', 'REASON', 'SIZE', null); -- first insert succeeds
insert into order_item values ('00000000-0000-0000-0000-000000000002', 88,
'CODE', 'MAKE', 'MODEL', 'REASON', 'SIZE', null); -- rejected by the index

However, NULLs are allowed for all the columns except order_id, so when I
add a null value it fails to meet my expectations:

insert into order_item values ('00000000-0000-0000-0000-000000000003', 88,
'CODE', 'MAKE', null, 'REASON', 'SIZE', null); -- first insert succeeds
insert into order_item values ('00000000-0000-0000-0000-000000000004', 88,
'CODE', 'MAKE', null, 'REASON', 'SIZE', null); -- also succeeds, but
                                               -- should have been rejected

This leads me to think I need to create 2^5 or 32 unique constraints to
handle the various combinations of data that I can store.  Until now, this
integrity is handled in the application code.  That breaks when the
application is multi-threaded and the rules are not applied at the database
level.

Another solution I can think of is to just use a trigger to prevent the
duplicate rows.

Any thoughts are certainly appreciated.  I can't do much about the data
model itself right now, I need to protect the integrity of the data.

Thanks!
-mark-
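For the archives, one alternative to 32 partial indexes or a trigger is to
fold the NULLs into a sentinel value inside the index expression itself, so
that two rows differing only in "NULL vs NULL" compare equal. A sketch,
assuming the empty string can never occur as legitimate data in these
columns (pick a different sentinel if it can):

```sql
-- Expression-based partial unique index: COALESCE maps NULL to a
-- sentinel, so NULLs no longer make otherwise-identical rows distinct.
create unique index unique_index_order_item_nulls on order_item (
    order_id,
    coalesce(item_code, ''),
    coalesce(make,      ''),
    coalesce(model,     ''),
    coalesce(reason,    ''),
    coalesce(size,      '')
) where expiration_date is null;
```

With this index in place, the second insert in the NULL example above is
rejected just like the fully populated duplicate, and the check is enforced
at the database level regardless of how many application threads write.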


Re: [GENERAL] Stored procedure version control

2016-07-03 Thread Mark Morgan Lloyd

Mark Morgan Lloyd wrote:
Elsewhere, somebody was asking how people implemented version control 
for stored procedures on (MS) SQL Server.


The consensus was that this is probably best managed by using scripts or 
command files to generate stored procedures etc., but does anybody have 
any comment on that from the POV of PostgreSQL?


For completeness although this appears to be MS-specific, somebody has 
drawn my attention to 
http://www.red-gate.com/products/sql-development/sql-source-control/


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Stored procedure version control

2016-06-30 Thread Mark Morgan Lloyd

Mike Sofen wrote:

-Original Message-

From: Mark Morgan Lloyd Sent: Thursday, June 30, 2016 2:41 AM
Neil Anderson wrote:

On 2016-06-29 12:37 PM, Mark Morgan Lloyd wrote:
Elsewhere, somebody was asking how people implemented version control 
for stored procedures on (MS) SQL Server.


The consensus was that this is probably best managed by using scripts 
or command files to generate stored procedures etc., but does anybody 
have any comment on that from the POV of PostgreSQL?





[etc.] Thanks everybody, summary passed on.
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk


A bit late to the thread, but here's some specific details on how I've
implemented version control in PG 9.5, in a small team environment deploying
to single database servers in each tier (dev, qa, stage, prod).  It's
working well so far, and allows my stored proc versions to be aligned with
the middle and upper tier code releases.  I'm the lead database
architect-engineer for a brand new genomics application (lots of data).


Thanks Mike, I'll pass that on if the thread on CIX still looks live.

--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Stored procedure version control

2016-06-30 Thread Mark Morgan Lloyd

Neil Anderson wrote:

On 2016-06-29 12:37 PM, Mark Morgan Lloyd wrote:

Elsewhere, somebody was asking how people implemented version control
for stored procedures on (MS) SQL Server.

The consensus was that this is probably best managed by using scripts or
command files to generate stored procedures etc., but does anybody have
any comment on that from the POV of PostgreSQL?



I can't comment from the POV of those who represent Postgres, but I used 
to work for a company who specialised in change management for database 
products, SQL Server and Oracle in particular. There are at least two 
approaches. The migrations approach and the state based approach.


[etc.] Thanks everybody, summary passed on.

--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Stored procedure version control

2016-06-29 Thread Mark Morgan Lloyd
Elsewhere, somebody was asking how people implemented version control 
for stored procedures on (MS) SQL Server.


The consensus was that this is probably best managed by using scripts or 
command files to generate stored procedures etc., but does anybody have 
any comment on that from the POV of PostgreSQL?


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] System aggregation

2016-04-05 Thread Mark Morgan Lloyd
ScaleMP apparently has a facility where a single system image can be 
spread over multiple host computers, possibly with guest 
paravirtualisation, although the non-priced variant only supports memory 
aggregation. http://www.scalemp.com/products/product-comparison/


There also used to be something called Virtual Iron (possibly based on 
Xen), but it was swallowed by Oracle where it was under the wing of Wim 
Coekaerts who I believe has just joined Microsoft.


Has anybody ever experimented with PostgreSQL on this sort of thing, and 
is anybody aware of an open equivalent?


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL advocacy

2016-03-25 Thread Mark Morgan Lloyd

Jernigan, Kevin wrote:

On 3/25/16, 4:37 AM, "pgsql-general-ow...@postgresql.org on behalf of Mark Morgan 
Lloyd" <pgsql-general-ow...@postgresql.org on behalf of 
markmll.pgsql-gene...@telemetry.co.uk> wrote:


Just because a corporate has a hundred sites cooperating for inventory 
management doesn't mean that the canteen menus have to be stored on 
Oracle RAC :-)



Right, but often the customer has paid for a site license, in which case the IT 
department will just keep spinning up more Oracle (or SQL Server or DB2) 
databases when requests come in - even if it’s overkill for the proposed use 
case / workload, it’s less work if IT only has one database technology to 
support.


OTOH, if the license takes the number of CPUs/cores into account then 
adding even unsophisticated unrelated databases will, eventually, cost.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL advocacy

2016-03-25 Thread Mark Morgan Lloyd

Jernigan, Kevin wrote:

On 3/22/16, 8:07 AM, "Bruce Momjian" <br...@momjian.us> wrote:




                    HA    Scaling  Upgrade  Add/Remove
   Oracle RAC       50%    50%     easy     easy
   Streaming Rep.  100%    25%*    hard     easy
   Sharding          0%   100%     hard     hard

   * Allows read scaling


--
 Bruce Momjian  <br...@momjian.us>http://momjian.us
 EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +


Implementing RAC-equivalent functionality is extremely hard, as evidenced by the lack of any directly comparable capability from any other relational db engine, until the release of IBM DB2 Shareplex a few years ago. And given the improvement of PostgreSQL and other open source solutions over the past 20 years, it’s not clear that it makes sense to go through the initial design and implementation work and then the ongoing maintenance overhead - most of what RAC provides can be achieved through other existing capabilities. 


Hearing what IBM's strong points are is always useful, since the various 
flavours of DB2 obviously have facilities to which other databases 
should aspire. As with Oracle, DB2's strong points aren't really 
well-publicised, and things are further complicated by the variant 
terminology which IBM has evolved over the half century they've been 
building mainframes.



While I’m not sure that the percentage breakdowns in your chart are totally 
accurate, I agree with the general assessment, except for the highest-end 
applications which have zero-downtime requirements which can’t be met with 
streaming replication: the overhead of synchronous replication limits 
scalability, and the failover time for moving from primary to a failover target 
is significantly slower than RAC - which can be literally zero if configured 
correctly.

The higher-level point that I think is important is that while I may be able to 
win technical arguments that RAC is better for certain high-end extreme 
workloads - and maybe I can’t even win those arguments ;-) - the real issue is 
that there aren’t very many of those workloads, and the PostgreSQL community 
shouldn’t care: the vast majority of Oracle (and SQL Server etc) workloads 
don’t need all the fancy high-end RAC capabilities, or many of the other 
high-end commercial database capabilities. And those workloads can relatively 
easily be migrated to PostgreSQL, with minor disruption / change to schemas, 
data, triggers, constraints, procedural SQL…


What I've seen so far suggests that if MS is positioning SQL Server to 
challenge Oracle, it's basically looking for low-hanging fruit: in 
particular supplementary databases which corporates have put onto Oracle 
out of habit but which quite simply don't need some of the higher-end 
facilities for which Oracle is harvesting revenue.


Just because a corporate has a hundred sites cooperating for inventory 
management doesn't mean that the canteen menus have to be stored on 
Oracle RAC :-)


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PostgreSQL advocacy

2016-03-21 Thread Mark Morgan Lloyd
If anybody puts together a "just the facts" document after Oracle's 
attack on PostgreSQL in Russia, please make sure it's drawn to the 
attention of this mailing list for the benefit of those who aren't in 
-advocacy.


I was discussing this sort of thing elsewhere in the context of MS's 
apparent challenge to Oracle and IBM, and the dominant feeling appeared 
to be that actual use of things like Oracle RAC was vanishingly 
uncommon. Which surprised me, and which I'm treating with caution, since 
the fact that facilities aren't used (in a certain population of 
developers etc.) can in no way be interpreted as meaning that the 
technology is unavailable or unreliable.






[GENERAL] $user namespace with pg_dump?

2016-03-02 Thread Mark E. Haase
I can `SET search_path TO "$user",foo,bar,public` and the first path
element will expand to the current user.

Can I do the same for `pg_dump -n`? I've tried many variations but none of
them appear to work:

pg_dump -U myuser -n '($user|foo|bar|public)' ...
pg_dump -U myuser -n '("$user"|foo|bar|public)' ...
pg_dump -U myuser -n '(\$user|foo|bar|public)' ...

I can't tell if I'm doing something wrong or if $user expansion is just
some magic in SET that doesn't exist in pg_dump or `\dn`.

(The workaround is obvious, of course: replace $user with the value of the
-U argument. This is a question of curiosity, not practicality.)

Also, is there any difference between `pg_dump -n '(foo|bar)'` and `pg_dump
-n foo -n bar`? In my narrow testing, they produce identical results.

Thanks,

-- 
Mark E. Haase


Re: [GENERAL] "Web of trust" connections

2015-11-10 Thread Mark Morgan Lloyd

Jim Nasby wrote:

On 11/6/15 8:01 AM, Mark Morgan Lloyd wrote:

Purely out of curiosity, is there any way of using some sort of "web of
trust" (comparable with GPG or whatever) when verifying server and
client certificates, rather than going back to a centralised CA?

My apologies if this is a silly question, or if there are fundamental
reasons why such a thing would be inappropriate. My scenario is that I'm
looking at multiple PostgreSQL servers (with supporting custom software)
arranged (approximately) as a tree, with nodes sending notifications to
their peers as they see changes. I want to make it as easy as possible
to set up a new server and get it cooperating with the rest, and some
sort of WoT might be plausible rather than having to wait for the root
administrator to send keys over a secure channel.


Postgres does support PAM, so you might be able to craft such a solution 
using that along with something that supports WoT (like GPG).


Thanks for that Jim, very interesting suggestion.





[GENERAL] "Web of trust" connections

2015-11-06 Thread Mark Morgan Lloyd
Purely out of curiosity, is there any way of using some sort of "web of 
trust" (comparable with GPG or whatever) when verifying server and 
client certificates, rather than going back to a centralised CA?


My apologies if this is a silly question, or if there are fundamental 
reasons why such a thing would be inappropriate. My scenario is that I'm 
looking at multiple PostgreSQL servers (with supporting custom software) 
arranged (approximately) as a tree, with nodes sending notifications to 
their peers as they see changes. I want to make it as easy as possible 
to set up a new server and get it cooperating with the rest, and some 
sort of WoT might be plausible rather than having to wait for the root 
administrator to send keys over a secure channel.






Re: [GENERAL] Red Hat Policies Regarding PostgreSQL

2015-10-28 Thread Mark Morgan Lloyd

Tom Lane wrote:

Michael Convey <smcon...@gmail.com> writes:

Due to policies for Red Hat family distributions, the PostgreSQL
installation will not be enabled for automatic start or have the database
initialized automatically.



To which policies are they referring? Licensing, security, or other?​


Packaging policy: daemons shall not run merely by virtue of having been
installed.  Otherwise, if you install a boatload of software without
checking each package, you'd have a boatload of probably-unwanted and
possibly-incorrectly-configured daemons running.  Which is a performance
problem and likely a security hazard too.

It's a good policy IMO (though I used to work there so no doubt I've just
drunk too much Red Hat koolaid).


Seems reasonable. In fact somewhat better than current KDE as in e.g. 
Debian "Jessie", which embeds a copy of MySQL whether the user wants 
to use it or not.






Re: [GENERAL] Listen/notify, equivalents on other DBs

2015-09-28 Thread Mark Morgan Lloyd

Thomas Kellerer wrote:

Mark Morgan Lloyd schrieb am 25.09.2015 um 23:41:

I'm trying to get support for PostgreSQL's listen/notify into a
development environment, but since it supports multiple database
backends: can anybody comment on how many other servers have a
comparable facility?

Minimal research has allowed me to code something for
Firebird/Interbase, but I'm not in a position to investigate Oracle,
IBM, MS and the rest. I'd appreciate any general comments from
somebody who has broad SQL experience, I'm not asking for example
code.


For Oracle this would be the "change notification" feature which
goes beyond what Postgres can do if I'm not mistaken as you can
register a query and the client is notified if the result of that
query would change:

http://docs.oracle.com/database/121/JJDBC/dbchgnf.htm#JJDBC28815


Thanks for that, very useful. There's minimal info on the 
Firebird/Interbase equivalent at 
http://www.firebirdsql.org/file/documentation/papers_presentations/Power_Firebird_events.pdf

 but my recollection is that it's very lightweight - no useful payload.





[GENERAL] Listen/notify, equivalents on other DBs

2015-09-25 Thread Mark Morgan Lloyd
I'm trying to get support for PostgreSQL's listen/notify into a 
development environment, but since it supports multiple database 
backends: can anybody comment on how many other servers have a 
comparable facility?


Minimal research has allowed me to code something for 
Firebird/Interbase, but I'm not in a position to investigate Oracle, 
IBM, MS and the rest. I'd appreciate any general comments from somebody 
who has broad SQL experience, I'm not asking for example code.






Re: [GENERAL] [pg_hba.conf] publish own Python application using PostgreSQL

2015-07-06 Thread Mark Morgan Lloyd

Jan de Visser wrote:

On July 6, 2015 06:43:53 AM c.bu...@posteo.jp wrote:

On 2015-07-05 15:13 Jan de Visser j...@de-visser.net wrote:

You could set up a whole new server with a different $PGDATA on a
different port.

I (and the user) don't want to setup anything - that is the point.


Well, you don't have to set up anything. You do an initdb in a different 
directory; that will write a .conf file there, which you then massage to 
include a different port. You'll use the same binaries as the standard pgsql 
install, but in a different environment.


I'm not sure that helps, since I think part of the question is what the 
true Debian way is to massage the configuration files to include 
appropriate entries.






Re: [HACKERS] [GENERAL] Insert result does not match record count

2015-05-04 Thread mark
Did this ever go any further?


I wrote some data transformation scripts at work, and after seeing "with
count -2017657667" (and similar) in my script's log I got a bit worried.
I've seen elsewhere that folks just run a full count(*) later to check
counts, but that takes even MORE time. I was thinking it was a psycopg2
problem, but it seems there are issues with the internal counters in pg as
well when tracking large changes.
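
For what it's worth, a value like -2017657667 is exactly what a large
unsigned total looks like once it passes through a signed 32-bit field
somewhere in the stack (a hedged illustration - the precise wrap point in
psycopg2 or the protocol path isn't pinned down here):

```python
import struct

def as_int32(n):
    # Reinterpret an unsigned 32-bit counter as a signed int32, the way a
    # 4-byte field read back as signed would appear.
    return struct.unpack('<i', struct.pack('<I', n & 0xFFFFFFFF))[0]

# A true total of 2,277,309,629 affected rows would surface as the
# negative count seen in the log:
print(as_int32(2277309629))  # -2017657667
```

Assuming the counter wrapped only once, adding 2**32 to a negative
reported count recovers the real total without re-running count(*).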

thanks,

Mark

On Sun, Feb 2, 2014 at 9:12 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Vik Fearing vik.fear...@dalibo.com writes:
  Without re-doing the work, my IRC logs show that I was bothered by this
  in src/backend/tcop/postgres.c:

  max_rows = pq_getmsgint(input_message, 4);

  I needed to change max_rows to int64 which meant I had to change
  pq_getmsgint to pq_getmsgint64 which made me a little worried.

 As well you should be, because we are *not* doing that.  That would be
 a guaranteed-incompatible protocol change.  Fortunately, I don't see
 any functional need for widening the row-limit field in execute messages;
 how likely is it that someone wants to fetch exactly 3 billion rows?
 The practical use-cases for nonzero row limits generally involve fetching
 a bufferload worth of data at a time, so that the restriction to getting
 no more than INT_MAX rows at once is several orders of magnitude away
 from being a problem.

 The same goes for internal uses of row limits, which makes it
 questionable whether it's worth changing the width of ExecutorRun's
 count parameter, which is what I assume you were on about here.  But
 in any case, if we did that we'd not try to reflect it as far as here,
 because the message format specs can't change.

 regards, tom lane


 --
 Sent via pgsql-hackers mailing list (pgsql-hack...@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers



[GENERAL] [No subject]

2015-03-31 Thread Mark Morgan Lloyd




[GENERAL] Postgres connection failures

2014-04-04 Thread Mark van Cuijk
Hi,

I’m developing an application using Postgres and when doing a transaction with 
a bunch of INSERT statements, the connection to the server (on localhost) often 
suddenly dies. Trying to diagnose the problem, I’ve increased log levels and 
I’ve run a tcpdump session to find out what’s going on, can someone spot 
something strange or point me in a direction to continue debugging?

The symptom I notice is that the Postgres server (9.1 from Ubuntu repo) closes 
the connection, sometimes by sending a TCP FIN, sometimes by sending a TCP RST, 
but I’m not sure why this happens. The (node.js / node-postgres-pure) client 
doesn’t report any error message that could have been received from the server; 
it only complains that the connection unexpectedly closed during query 
execution.

Log message around the time the connections stops. I’ve looked up some lines in 
the code on GitHub, but I can’t figure out what’s going on yet. Queries are 
large, but that doesn’t seem to be a problem, as a lot of those have been added 
before.

2014-04-04 15:16:33 CEST DEBUG:  0: parse unnamed: INSERT INTO xx (xxxx, xx, xx, xx, xx) VALUES ($1, $2, $3, $4, $5), ($1, $2, $6, $7, $8), ($1, $2, $9, $10, $11), ($1, $2, $12, $13, $14), ($1, $2, $15, $16, $17), ($1, $2, $18, $19, $20), ($1, $2, $21, $22, $23), ($1, $2, $24, $25, $26), ($1, (… cut …)
2014-04-04 15:16:33 CEST LOCATION:  exec_parse_message, postgres.c:1146
2014-04-04 15:16:33 CEST DEBUG:  0: StartTransactionCommand
2014-04-04 15:16:33 CEST LOCATION:  start_xact_command, postgres.c:2426
2014-04-04 15:16:33 CEST DEBUG:  0: bind unnamed to unnamed
2014-04-04 15:16:33 CEST LOCATION:  exec_bind_message, postgres.c:1429
2014-04-04 15:16:33 CEST DEBUG:  0: ProcessQuery
2014-04-04 15:16:33 CEST LOCATION:  ProcessQuery, pquery.c:170
2014-04-04 15:16:34 CEST DEBUG:  0: shmem_exit(0): 7 callbacks to make
2014-04-04 15:16:34 CEST LOCATION:  shmem_exit, ipc.c:211
2014-04-04 15:16:34 CEST DEBUG:  0: proc_exit(0): 4 callbacks to make
2014-04-04 15:16:34 CEST LOCATION:  proc_exit_prepare, ipc.c:183
2014-04-04 15:16:34 CEST LOG:  0: disconnection: session time: 0:01:04.287 user=postgres database=dev host=127.0.0.1 port=57848
2014-04-04 15:16:34 CEST LOCATION:  log_disconnections, postgres.c:4375
2014-04-04 15:16:34 CEST DEBUG:  0: exit(0)
2014-04-04 15:16:34 CEST LOCATION:  proc_exit, ipc.c:135
2014-04-04 15:16:34 CEST DEBUG:  0: shmem_exit(-1): 0 callbacks to make
2014-04-04 15:16:34 CEST LOCATION:  shmem_exit, ipc.c:211
2014-04-04 15:16:34 CEST DEBUG:  0: proc_exit(-1): 0 callbacks to make
2014-04-04 15:16:34 CEST LOCATION:  proc_exit_prepare, ipc.c:183
2014-04-04 15:16:34 CEST DEBUG:  0: reaping dead processes
2014-04-04 15:16:34 CEST LOCATION:  reaper, postmaster.c:2360
2014-04-04 15:16:34 CEST DEBUG:  0: server process (PID 20677) exited with exit code 0
2014-04-04 15:16:34 CEST LOCATION:  LogChildExit, postmaster.c:2873
2014-04-04 15:16:37 CEST DEBUG:  0: attempting to remove WAL segments older than log file 0001003C0071
2014-04-04 15:16:37 CEST LOCATION:  RemoveOldXlogFiles, xlog.c:3364
2014-04-04 15:16:37 CEST DEBUG:  0: recycled transaction log file 0001003C0071
2014-04-04 15:16:37 CEST LOCATION:  RemoveOldXlogFiles, xlog.c:3414
2014-04-04 15:16:37 CEST DEBUG:  0: recycled transaction log file 0001003C006F
2014-04-04 15:16:37 CEST LOCATION:  RemoveOldXlogFiles, xlog.c:3414
2014-04-04 15:16:37 CEST DEBUG:  0: recycled transaction log file 0001003C0070
2014-04-04 15:16:37 CEST LOCATION:  RemoveOldXlogFiles, xlog.c:3414

/Mark



Re: [GENERAL] Postgres connection failures

2014-04-04 Thread Mark van Cuijk
On 04 Apr 2014, at 16:09 , Tom Lane t...@sss.pgh.pa.us wrote:

 Your log extract looks like the server side thought it got a connection
 closure command.  In particular, the server process seems to have exited
 normally, and it did not write LOG:  unexpected EOF on client connection,
 which it would have done if the TCP connection dropped without receipt of
 such a command.
 
 Based on these facts, I'm going to speculate that your client application
 is multi-threaded and some thread is closing a connection out from under
 another one.  It's usually best to have only one thread touching a
 particular connection; or if you want to maintain a connection pool
 yourself, be very sure you have clear acquire and release rules.

Thanks! I’ve patched the library to print all outgoing messages, and indeed 
it seems one with type byte 0x58 is sent just before the connection is closed. 
Printing a stack trace there reveals that the pool code thinks the connection 
is idle and closes it. I’ll submit a bug report in the node-postgres library.
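
The 0x58 byte is ASCII 'X', the frontend Terminate message in the
PostgreSQL wire protocol - a one-byte type code followed by an int32
length that counts itself. A quick sketch of what the pool was emitting:

```python
import struct

def terminate_message():
    # Frontend Terminate: type byte 'X' plus an int32 length of 4
    # (the length field includes itself), 5 bytes in total.
    return b'X' + struct.pack('!i', 4)

print(chr(0x58))            # 'X'
print(terminate_message())  # b'X\x00\x00\x00\x04'
```

Seeing this on the wire also explains the clean server-side exit above:
the backend treats it as a normal disconnect rather than an error.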

Thanks for the help!

/Mark



Re: [GENERAL] designing time dimension for star schema

2014-02-15 Thread Mark Wong
On Mon, Feb 10, 2014 at 8:45 AM, Mark Wong mark...@gmail.com wrote:
 Hello everybody,

 I was wondering if anyone had any experiences they can share when
 designing the time dimension for a star schema and the like.  I'm
 curious about how well it would work to use a timestamp for the
 attribute key, as opposed to a surrogate key, and populating the time
 dimension with triggers on insert to the fact tables.  This is
something that would have data streaming in (as opposed to bulk
 loading) and I think we want time granularity to the minute.

Hello everybody,

I did a simple experiment and just wanted to share.  Hopefully this
wasn't too simple.  On a 72GB 15K RPM 2.5" drive, I tried to see how
long it would take to insert (committing after each insert) 100,000
bigints, timestamps with time zone, and timestamps with time zone with
insert trigger.  The timestamp and bigints by themselves took ~10
minutes to insert 100,000 rows, and implementing the trigger increased
the time up to about ~11 minutes.

Regards,
Mark




[GENERAL] designing time dimension for star schema

2014-02-10 Thread Mark Wong
Hello everybody,

I was wondering if anyone had any experiences they can share when
designing the time dimension for a star schema and the like.  I'm
curious about how well it would work to use a timestamp for the
attribute key, as opposed to a surrogate key, and populating the time
dimension with triggers on insert to the fact tables.  This is
something that would have data streaming in (as opposed to bulk
loading) and I think we want time granularity to the minute.

A simplified example:

-- Time dimension
CREATE TABLE time (
datetime TIMESTAMP WITH TIME ZONE NOT NULL,
day_of_week SMALLINT NOT NULL
);
CREATE UNIQUE INDEX ON time (datetime);

-- Fact
CREATE TABLE fact(
datetime TIMESTAMP WITH TIME ZONE NOT NULL,
FOREIGN KEY (datetime) REFERENCES time(datetime)
);

-- Function to populate the time dimension
CREATE OR REPLACE FUNCTION decompose_timestamp() RETURNS TRIGGER AS $$
BEGIN
NEW.datetime = date_trunc('minutes', NEW.datetime);
INSERT INTO time (datetime, day_of_week)
VALUES (NEW.datetime, date_part('dow', NEW.datetime));
RETURN NEW;
EXCEPTION
WHEN unique_violation THEN
-- Do nothing if the timestamp already exists in the dimension table.
RETURN new;
END; $$
LANGUAGE 'plpgsql';

CREATE TRIGGER populate_time BEFORE INSERT
ON fact FOR EACH ROW
EXECUTE PROCEDURE decompose_timestamp();
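
The trigger's truncate-then-upsert logic can be mirrored outside the
database - a minimal Python sketch of the same semantics (illustrative
only; note that date_part('dow', ...) numbers Sunday as 0):

```python
from datetime import datetime

def truncate_to_minute(ts):
    # Equivalent of date_trunc('minutes', ts)
    return ts.replace(second=0, microsecond=0)

def pg_dow(ts):
    # PostgreSQL date_part('dow', ts): Sunday = 0 .. Saturday = 6
    return ts.isoweekday() % 7

seen = set()  # stands in for the unique index on time(datetime)

def upsert_time_row(ts):
    # Mirrors the trigger: truncate, insert the dimension row once,
    # and silently skip duplicates (the unique_violation handler).
    key = truncate_to_minute(ts)
    if key in seen:
        return None
    seen.add(key)
    return (key, pg_dow(key))

print(upsert_time_row(datetime(2014, 2, 10, 15, 30, 45)))
```

Any timestamp landing in an already-seen minute simply returns nothing,
just as the trigger swallows the duplicate-key error.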

Regards,
Mark




Re: [GENERAL] designing time dimension for star schema

2014-02-10 Thread Mark Wong
On Mon, Feb 10, 2014 at 9:20 AM, CS DBA cs_...@consistentstate.com wrote:
 I've done a lot of DSS architecture. A couple of thoughts:

 - in most cases the ETL process figures out the time id's as part of the
 preparation and then does bulk loads into the fact tables
   I would be very concerned about performance of a trigger that
 fired for every row on the fact table

   you mention you want to do data streaming instead of bulk loads,
 can you elaborate?

We have processes inserting data from log files as they are written.

 - When querying a star schema one of the performance features is the
 fact that all joins to the dimension tables are performed via a numeric
 key, such as:
 select * from fact, time_dim, geo_dim
where fact.time_id = time_dim.time_id...

 In the case of this being a timestamp I suspect the performance would
 take a hit, depending on the size of your fact table and the
 scope/volume of your DSS queries this could easily be a show stopper
 based on the assumption that the database can do a numeric binary search
 much faster than a timestamp search

I guess I was hoping the extra 4 bytes from a timestamp, compared to a
bigint, wouldn't be too significant yet I didn't consider postgres
might do a binary search faster on an integer type than a timestamp.
Even with 1 billion rows, but maybe that's wishful thinking.  Maybe a
regular integer at 4 bytes would be good enough.  I would estimate a
query would touch up to an order of 1 million rows at a time.

Regards,
Mark




[GENERAL] PgAgent Error 1053 on Windows 2008 R2 Server

2013-12-02 Thread Mark Street
Dear Sir/Madam,

PgAgent is failing to start, giving an Error 1053: "The service did not
respond to the start or control request in a timely fashion" on a Windows
2008 R2 server.

I can run pgagent with no problems in DEBUG mode. I have un-installed and
re-installed PgAgent multiple times. I have checked the service entry in the
Registry and compared it with other Windows servers on which I have it running
successfully.

If some someone could point me down the right path as to why the service is
not starting it would be appreciated.

Regards,

Mark Street
Support
Instinct Systems




[GENERAL] Largest PG database known to man!

2013-10-01 Thread Mark Jones
Hi all,

We are currently working with a customer who is looking at a database of
between 200-400 TB! They are after any confirmation of PG working at this
size or anywhere near it.
Anyone out there worked on anything like this size in PG please? If so, can
you let me know more details etc..

   Mark Jones
   Principal Sales Engineer Emea


   http://www.enterprisedb.com/
   
   Email: mark.jo...@enterprisedb.com
   Tel: 44 7711217186
   Skype: Mxjones121







Re: [GENERAL] Largest PG database known to man!

2013-10-01 Thread Mark Jones
Thanks for your quick response John.

From the limited information, it is mostly relational.
As for usage patterns, I do not have that yet.
I was just after a general feel of what is out there size wise.

Regards

 


On 01/10/2013 22:56, John R Pierce pie...@hogranch.com wrote:

On 10/1/2013 2:49 PM, Mark Jones wrote:
 We are currently working with a customer who is looking at a database
 of between 200-400 TB! They are after any confirmation of PG working
 at this size or anywhere near it.


is that really 200-400TB of relational data, or is it 199-399TB of bulk
data (blobs or whatever) interspersed with some relational metadata?

what all is the usage pattern of this data?   that determines the
feasibility of something far more than just the raw size.




-- 
john r pierce  37N 122W
somewhere on the middle of the left coast









[GENERAL] PL/R etc.

2013-05-10 Thread Mark Morgan Lloyd
I don't know whether anybody active on the list has R (and in particular 
PL/R) experience, but just in case... :-)


i)   Something like APL can operate on an array with minimal regard for 
index order, i.e. operations across the array are as easily-expressed 
and as efficient as operations down the array. Does this apply to PL/R?


ii)  Things like OpenOffice can be very inefficient if operating over a 
table comprising a non-trivial number of rows. Does PL/R offer a 
significant improvement, e.g. by using a cursor rather than trying to 
read an entire resultset into memory?






Re: [GENERAL] PL/R etc.

2013-05-10 Thread Mark Morgan Lloyd

Merlin Moncure wrote:

On Fri, May 10, 2013 at 4:41 AM, Mark Morgan Lloyd
markmll.pgsql-gene...@telemetry.co.uk wrote:

I don't know whether anybody active on the list has R (and in particular
PL/R) experience, but just in case... :-)

i)   Something like APL can operate on an array with minimal regard for
index order, i.e. operations across the array are as easily-expressed and as
efficient as operations down the array. Does this apply to PL/R?

ii)  Things like OpenOffice can be very inefficient if operating over a
table comprising a non-trivial number of rows. Does PL/R offer a significant
improvement, e.g. by using a cursor rather than trying to read an entire
resultset into memory?


pl/r (via R) is very terse and expressive.  it will probably meet or beat
any performance expectations you have coming from openoffice.   that
said, it's definitely a memory bound language; typically problem
solving involves stuffing data into huge data frames which then pass
to the high level problem solving functions like glm.

you have full access to sql within the pl/r function, so nothing is
keeping you from paging data into the frame via a cursor, but that
only helps so much.

a lot depends on the specific problem you solve of course.


Thanks Merlin and Joe. As an occasional APL user terse and oppressive 
doesn't really bother me :-)


As a particular example of the sort of thing I'm thinking, using pure 
SQL the operation of summing the columns in each row and summing the 
rows in each column are very different.


In contrast, in APL if I have an array

B
1  2  3  4
5  6  7  8
9 10 11 12

I can perform a reduction operation using + over whichever axis I specify:

+/[1]B
15 18 21 24
+/[2]B
10 26 42

or even by default

+/B
10 26 42

Does PL/R provide that sort of abstraction in a uniform fashion?
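
To pin down the semantics being asked about (in Python rather than R,
purely as a reference point): summing over either axis is equally direct
once the language treats the array as a first-class value.

```python
# The 3x4 array from the APL example above
B = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12]]

col_sums = [sum(col) for col in zip(*B)]  # like +/[1]B
row_sums = [sum(row) for row in B]        # like +/[2]B (and the default +/B)

print(col_sums)  # [15, 18, 21, 24]
print(row_sums)  # [10, 26, 42]
```

R itself offers the same axis-selectable reduction via rowSums/colSums, or
more generally apply() with a MARGIN argument.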





Re: [GENERAL] PL/R etc.

2013-05-10 Thread Mark Morgan Lloyd

Merlin Moncure wrote:

On Fri, May 10, 2013 at 2:32 PM, Mark Morgan Lloyd
markmll.pgsql-gene...@telemetry.co.uk wrote:

Merlin Moncure wrote:

On Fri, May 10, 2013 at 4:41 AM, Mark Morgan Lloyd
markmll.pgsql-gene...@telemetry.co.uk wrote:

I don't know whether anybody active on the list has R (and in particular
PL/R) experience, but just in case... :-)

i)   Something like APL can operate on an array with minimal regard for
index order, i.e. operations across the array are as easily-expressed and
as
efficient as operations down the array. Does this apply to PL/R?

ii)  Things like OpenOffice can be very inefficient if operating over a
table comprising a non-trivial number of rows. Does PL/R offer a
significant
improvement, e.g. by using a cursor rather than trying to read an entire
resultset into memory?


pl/r (via R) is very terse and expressive.  it will probably meet or beat
any performance expectations you have coming from openoffice.   that
said, it's definitely a memory bound language; typically problem
solving involves stuffing data into huge data frames which then pass
to the high level problem solving functions like glm.

you have full access to sql within the pl/r function, so nothing is
keeping you from paging data into the frame via a cursor, but that
only helps so much.

a lot depends on the specific problem you solve of course.


Thanks Merlin and Joe. As an occasional APL user terse and oppressive
doesn't really bother me :-)

As a particular example of the sort of thing I'm thinking, using pure SQL
the operation of summing the columns in each row and summing the rows in
each column are very different.

In contrast, in APL if I have an array

B
1  2  3  4
5  6  7  8
9 10 11 12

I can perform a reduction operation using + over whichever axis I specify:

+/[1]B
15 18 21 24
+/[2]B
10 26 42

or even by default

+/B
10 26 42

Does PL/R provide that sort of abstraction in a uniform fashion?


certainly (for example see here:
http://stackoverflow.com/questions/13352180/sum-different-columns-in-a-data-frame)
-- getting good at R can take some time but it's worth it.   R is
hot right now with all the buzz around big data lately.  The main
challenge actually is the language is so rich it can be difficult to
zero in on the precise behaviors you need.   Also, the documentation
is all over the place.

pl/r plays in nicely because with some thought you can marry the R
analysis functions directly to the query in terms of both inputs and
outputs -- basically very, very sweet syntax sugar.   It's a little
capricious though (and be advised: Joe has put up some very important
and necessary fixes quite recently) so usually I work out the R code
in the R console first before putting in the database.


[Peruse] Thanks, I think I get the general idea. I'm aware of the 
significance of R, and in particular that it's attracting attention due 
to the undesirability of hiding functionality in spreadsheets, which 
usurped APL for certain types of operation.






Re: [GENERAL] How large can a PostgreSQL database get?

2013-04-17 Thread Mark Felder
On Wed, 17 Apr 2013 08:23:41 -0500, Aleksey Tsalolikhin  
atsaloli.t...@gmail.com wrote:



Hi.  I was promoting PostgreSQL to an AIX/Oracle shop yesterday, they are
looking to switch to open source to cut their licensing costs, and was
asked how large a database does PostgreSQL support?  Is there an upper
bound on database size and if so, what it is?


According to yahoo...:

http://glinden.blogspot.com/2008/05/yahoo-builds-two-petabyte-postgresql.html

...pretty big. But yahoo threw some programmers at it, I believe.

Straight out of the box? Not sure, but I'd expect many on this list have  
databases larger than enterprise oracle shops.





[GENERAL] libpq thread safety

2012-12-27 Thread Mark Morgan Lloyd
Do any special precautions need to be taken when PQNotifies is being 
called, to make sure that nothing else is referencing the handle?


The sort of nightmare scenario I'm thinking about is when a background 
thread is periodically pulling data from a table into a buffer, but a 
foreground (GUI) timer is asynchronously polling for notifications.






Re: [GENERAL] libpq thread safety

2012-12-27 Thread Mark Morgan Lloyd

Tom Lane wrote:

Mark Morgan Lloyd markmll.pgsql-gene...@telemetry.co.uk writes:
Do any special precautions need to be taken when PQNotifies is being 
called, to make sure that nothing else is referencing the handle?


It's pretty much the same as any other operation on a PGconn: if there
could be more than one thread touching the connection object
concurrently, you'd be well advised to add some application-level
locking.

http://www.postgresql.org/docs/9.2/static/libpq-threading.html

The lack of any such locking inside libpq is partly historical, and
partly because in many practical situations you'll need application-side
locks anyway to protect application data structures associated with the
connection.


Thanks, Tom. I'm fairly happy with the ways I've used it so far, but I'm 
just trying to think ahead for the future.


In the case of Delphi/Lazarus, where you can have multiple queries on 
top of the same connection object, my experience so far is that using 
the connection object's handle is safe. But I think that to be 
absolutely confident of that I need to do some tracing, and find out 
under what circumstance calls are being issued directly against that 
handle rather than it just being a placeholder for authentication etc.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]




Re: [GENERAL] Trouble with PQnotifies()

2012-12-14 Thread Mark Morgan Lloyd

seil...@so-net.net.tw wrote:

The following listening worker thread behaves as expected if I insert/delete rows 
into/from table t1 in psql prompt.

My trouble is when the SQL execution worker thread inserts/ deletes rows into/from table 
t1, the listening worker thread then goes crazy: PQnotifies() always returns 
NULL which pushes the listening thread to grab all CPU power because select() returns 
immediately in every iteration. The weird part is that select() says that there is 
something available but PQnotifies() returns NULL.

..

Please ignore this question!

My connection pool implementation seems to have a flaw. Somehow and somewhere the 
connection acquired by a thread is robbed by other threads. The PGconn sending 
LISTEN NotifyMe becomes different from the PGconn passed to PQsocket(), 
PQconsumeInput(), and/or PQnotifies().


I was looking at it carefully, and was about to ask about the 
connection - in particular whether it was shared across threads etc. Glad 
you've found the issue; I've been caught by something very similar using 
listen/notify on Lazarus/FPC, where you can end up with several handles 
only one of which is reliable.



Please also pardon me for asking inappropriate questions like this one. As far 
as I can recall, every issue I encountered before always finally proved that 
PostgreSQL is flawless.


But at least it demonstrates that somebody's using that facility.

--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]




Re: [GENERAL] libpq - prevent automatic reconnect

2012-12-07 Thread Mark Morgan Lloyd

Chris Angelico wrote:

On Thu, Dec 6, 2012 at 5:56 AM, Tom Lane t...@sss.pgh.pa.us wrote:

I suspect this action isn't dropping the TCP connection.  It's only
equivalent to a momentary glitch in your network connectivity --- and
you'd be very unhappy if that caused TCP connections to go down, because
networks have glitches all the time.  Generally, the operating system
tries hard to prevent applications from even knowing that a glitch
happened.  (Connections will time out eventually if connectivity doesn't
come back, but typically such timeouts are many minutes.  Possibly
whatever your real complaint is could be addressed by twiddling the TCP
timeout parameters for the socket.)


Yep. For a better test, try taking the interface down for a good while
(several minutes), or actually shut down the Postgres server at the
other end.


I find PostgreSQL connections, particularly with listen/notify set up, 
to be fairly sensitive to disconnection. This is particularly the case 
with apps written using either Delphi or Lazarus, where a session is 
kept live for an extended period rather than simply being used to 
transfer a query and resultset.


This isn't a recent thing, and I'm definitely not saying that it's a 
Postgres issue. I've tried forcing random connection drops at the 
application level in the past and have never been able to characterise 
the problem.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]




Re: [GENERAL] Installing minimal client libraries on Windows

2012-12-07 Thread Mark Morgan Lloyd

Albe Laurenz wrote:

Mark Morgan Lloyd wrote:

Apologies for this old chestnut, but I think it's a question more often
asked than answered.

If I want to install a minimal binary libpq.dll on a non-developer
machine to support Lazarus/FPC programs, where do I get it?

I definitely don't want to tell the users to install a full server, or
to build from source. I'd rather not give them pgadmin or psql. I'd
prefer not to install ODBC since I need to get at listen/notify that it
doesn't support.


You can extract it from the precompiled ZIP archive that
EnterpriseDB provides.


Thanks, but in the end I installed ODBC as the path of least resistance 
and relied on the OS knowing where the DLLs were.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]




[GENERAL] Installing minimal client libraries on Windows

2012-12-06 Thread Mark Morgan Lloyd
Apologies for this old chestnut, but I think it's a question more often 
asked than answered.


If I want to install a minimal binary libpq.dll on a non-developer 
machine to support Lazarus/FPC programs, where do I get it?


I definitely don't want to tell the users to install a full server, or 
to build from source. I'd rather not give them pgadmin or psql. I'd 
prefer not to install ODBC since I need to get at listen/notify that it 
doesn't support.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]




[GENERAL] Adding Default Privileges to a schema for a role

2012-11-26 Thread Mark Volz
Hello,

I created a role called Workers. I am able to edit the privileges for the Workers 
role in the public schema of a database.  I also would like to edit the default 
privileges of the schema so that the Workers group will automatically 
have read permissions on any tables created in the future.  In pgAdmin III I 
opened up the Default Privileges for the public schema and I only see the 
Public role in the Role/Group dropdown.  Is there any reason why I can see 
other groups in the Privileges tab, but not the Default Privileges tab? 

P.S.  I did notice on an older (9.0?) 32 bit version of PostGIS I am able to 
see more than just the public role in the Default Privileges tab.  I don't 
remember doing anything special to see other groups on the older machine.  This 
isn't a bug is it?
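
For reference, the pgAdmin setting in question corresponds to the ALTER
DEFAULT PRIVILEGES command (available from 9.0); a sketch, assuming the
role really is named workers:

```sql
-- Make SELECT automatic on tables created in public from now on.
-- Note this only affects objects later created by the role issuing the
-- command (or by roles named with FOR ROLE ...); existing tables still
-- need an ordinary GRANT.
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO workers;
```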

Thank You

Mark Volz
GIS Specialist





Re: [GENERAL] Looking for ODBC drivers for NT4.

2012-08-28 Thread Mark Morgan Lloyd

Mouse Dresden wrote:

OK. I won't go into details about why I need them, since no one wants
a long story.

Simply put, I'm looking for odbc drivers for postgresql that will work with NT4.


I've been using 8.02.01.04 on NT4 app servers for an extended period, 
look for psqlodbc.msi of the appropriate version. You might also need a 
separate MDAC installer.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]




Re: [GENERAL] How to analyze load average ?

2012-08-06 Thread Mark Felder

On Mon, 06 Aug 2012 09:38:33 -0500, Tomas Vondra t...@fuzzy.cz wrote:


Load average is defined as a number of processes in the run queue


That depends on if he's running Linux or BSD.

http://www.undeadly.org/cgi?action=article&sid=20090715034920



Re: [GENERAL] How to analyze load average ?

2012-08-06 Thread Mark Felder

On Mon, 06 Aug 2012 10:27:18 -0500, Tomas Vondra t...@fuzzy.cz wrote:



Although the OP mentioned he's using ext4, so I suppose he's running  
Linux

(although I know there was some ext4 support e.g. in FreeBSD).
Still, the load average 0.88 means the system is almost idle, especially
when there's no I/O activity etc.


Ahh, I didn't see the mention of ext4 initially. I tend to just use iostat  
for getting a better baseline of what's truly happening on the system. At  
least on FreeBSD (not sure of Linux at the moment) the iostat output also  
lists CPU usage in the last columns and if id (idle) is not close to  
zero it's probably OK. :-)




Re: [GENERAL] BI tools and postgresql

2012-07-28 Thread Mark Phillips
Thanks to all who have posted. Your comments are very helpful.

On Jul 26, 2012, at 11:04 AM, Vincent Veyron wrote:

 I had assumed the OP _was_
 using Postgres already, and that this was being challenged by the new
 manager.

Yes, that is accurate. The new manager is not familiar with OSS and PostgreSQL, 
having come from a large corporate environment into a smaller more lean 
organization. 

Thus my request here. I would like to offer the new manager an opportunity to 
extend the range of options.

 - Mark

[GENERAL] Terminating a rogue connection

2012-07-27 Thread Mark Morgan Lloyd
Assuming a *nix server: if a monitoring program determines that an 
established connection appears to be trying to do something 
inappropriate, what's the best way of terminating that session rapidly?


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]



Re: [GENERAL] Terminating a rogue connection

2012-07-27 Thread Mark Morgan Lloyd

Chris Angelico wrote:

On Fri, Jul 27, 2012 at 6:27 PM, Mark Morgan Lloyd
markmll.pgsql-gene...@telemetry.co.uk wrote:

Assuming a *nix server: if a monitoring program determines that an
established connection appears to be trying to do something inappropriate,
what's the best way of terminating that session rapidly?


select pg_terminate_backend(procpid) from pg_stat_activity where .

The main difficulty is recognizing which PID to terminate, though.


Exactly :-)

I'd add that this is a hypothetical situation at present, I'm just 
trying to plan ahead.



There's a good lot of information available in pg_stat_activity;
logins, application names, and connection IP addresses are handy here.
But ultimately, it's just pg_terminate_backend.
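
A sketch of the kind of WHERE clause involved (pg_stat_activity column
names as of 9.1, where the backend PID column is still called procpid;
the address and role name are illustrative):

```sql
SELECT pg_terminate_backend(procpid)
  FROM pg_stat_activity
 WHERE client_addr = inet '192.0.2.10'
   AND usename = 'app_worker';
```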

ChrisA


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]



Re: [GENERAL] Switching from OSX to Linux, multi-line queries in \copy don't work anymore

2012-07-27 Thread Mark Morgan Lloyd

Craig Ringer wrote:

On 07/27/2012 09:28 PM, Ryan Kelly wrote:

I recently switched from OSX to Linux and \copy in psql no longer
accepts multi-line queries. For instance:

\copy (
 select
 *
 from
 pg_settings
) to '/tmp/settings.csv' with csv header

This works fine on OSX. On Linux I get:
\copy: parse error at end of line

Am I missing something here?
A wild guess: I'd say this is a consequence of the fact that psql on OS 
X uses libedit, on Linux it uses readline.


Personally I had no idea that multiline \copy was possible at all. I've 
always thought the way backslash commands are EOL-terminated while 
everything else is semicolon-terminated is a bit of a wart, though.


I don't have an answer for you. Using the --no-readline argument makes 
no difference on my 9.1.4 here. This may just be an area where libedit 
is smarter than readline - or it might be that I'm totally wrong and the 
real issue is something else entirely.


Thanks for making the effort to produce a good post with all the 
detailed version info, exact error text, etc.


Also appear to get it here on single-line queries:

markMLl= \copy (select * from pg_settings) to '/tmp/settings.csv' with 
csv header;

\copy: parse error at select
markMLl=

However my psql and server are rather old which could be an issue 
(8.1.19 to server 8.4 if I recall correctly).


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]



Re: [GENERAL] Terminating a rogue connection

2012-07-27 Thread Mark Morgan Lloyd

Chris Angelico wrote:

On Fri, Jul 27, 2012 at 7:09 PM, Mark Morgan Lloyd
markmll.pgsql-gene...@telemetry.co.uk wrote:

Chris Angelico wrote:

On Fri, Jul 27, 2012 at 6:27 PM, Mark Morgan Lloyd
markmll.pgsql-gene...@telemetry.co.uk wrote:

Assuming a *nix server: if a monitoring program determines that an
established connection appears to be trying to do something
inappropriate,
what's the best way of terminating that session rapidly?


select pg_terminate_backend(procpid) from pg_stat_activity where .

The main difficulty is recognizing which PID to terminate, though.


Exactly :-)

I'd add that this is a hypothetical situation at present, I'm just trying to
plan ahead.


Something I've been developing at work lately combines this with
editing pg_hba.conf to ensure that a kicked connection cannot
reconnect. Services register themselves with a particular user name,
then SET USER to switch to the one actual user who owns tables and
stuff, so my overlording monitor can kick off any service based on IP
and usename (note the spelling - it's not username in the table).
Rewrite pg_hba.conf, SIGHUP, then pg_terminate_backend in a searched
SELECT as seen above.

This may be overkill for what you're doing, though. It's part of our
prevent split-brain problems technique.


One problem there is that if somebody is doing something that causes a 
significant CPU or memory overcommit, it might be some while before 
SIGHUP etc. works. I'm currently eyeballing the Linux capabilities 
stuff; it looks as though a monitor with CAP_NET_ADMIN will be 
able to temporarily add a firewall rule that blocks the rogue client's 
traffic.


I'm hoping to be able to avoid on-the-fly editing of configuration 
files; there's too much that could go wrong. Which I suppose leads into 
another question...


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]



[GENERAL] Adding users connection via SSL

2012-07-27 Thread Mark Morgan Lloyd
I'm hoping to be able to avoid on-the-fly editing of configuration 
files; there's too much that could go wrong.


Is it possible to create or modify a user connecting via an 
authenticated/encrypted protocol as an SQL activity, in the same way 
that CREATE ROLE ... PASSWORD creates one authenticated by password?
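
For what it's worth, the role half of this is plain SQL, but tying a role
to a client certificate is a configuration matter rather than a catalog
one; a sketch (names illustrative):

```sql
-- A role with no password: authentication must come from elsewhere.
CREATE ROLE field_unit LOGIN;
-- Mapping an SSL client certificate's CN to this role is then done in
-- pg_hba.conf (a "hostssl ... cert" line), i.e. a config-file edit
-- rather than an SQL statement.
```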


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]



[GENERAL] BI tools and postgresql

2012-07-25 Thread Mark Phillips
I am seeking suggestions for business intelligence and data mining tools 
compatible with postgresql. A new manager at a client's shop is leaning toward 
the MS offerings. I would like to be able to speak to the issue.

TIA,

 - Mark


Re: [GENERAL] BI tools and postgresql

2012-07-25 Thread Mark Phillips
Thank you. I will look into those projects. 

The initiative is at the earliest stage so there is not a lot of detail 
available at the moment.

On Jul 25, 2012, at 11:07 AM, Henry Drexler wrote:

 The combination of pandas ipython and psycopg2 work wonders for pulling data 
 from db and manipulating/plotting,
 although I don't know in more detail of what the client's use cases are.
 
 
 
 On Wed, Jul 25, 2012 at 1:41 PM, Mark Phillips mark.phill...@mophilly.com 
 wrote:
 I am seeking suggestions for business intelligence and data mining tools 
 compatible with postgresql. A new manager at a client's shop is leaning 
 toward the MS offerings. I would like to be able to speak to the issue.
 
 TIA,
 
  - Mark
 



[GENERAL] Problem using a pl/pgsql function to populate a geometry column with x,y data

2012-07-24 Thread Mark Wynter
Within a pl/pgsql function block, I'm wanting to populate a geometry column 
with x,y data.  However when I invoke the function, I get the error message 
that column y doesn't exist, even though it does.

The error message is:

SELECT load_xyz_data('/var/tmp/rscp_coverage_test.txt',32754);
ERROR:  column y does not exist
LINE 1: ...temp SET wkb_geometry = PointFromText(''POINT('|| y ||' '|| ...  
^
QUERY:  SELECT 'UPDATE xyz_points_temp SET wkb_geometry = 
PointFromText(''POINT('|| y ||' '|| x ||')'','|| sourcesrid ||')'
CONTEXT:  PL/pgSQL function load_xyz_data line 24 at EXECUTE statement

My function is:

CREATE OR REPLACE FUNCTION load_xyz_data(sourcefile text, sourcesrid integer)
RETURNS text AS $$
DECLARE
BEGIN

EXECUTE 'CREATE TEMPORARY TABLE xyz_points_temp
(
x numeric,
y numeric,
z numeric
)
WITH (
OIDS=FALSE
)
ON COMMIT DROP';

-- Load xyz data
EXECUTE 'COPY xyz_points_temp FROM '|| quote_literal(sourcefile) ||' DELIMITER 
'',''';
-- Add geometry column
EXECUTE 'ALTER TABLE xyz_points_temp ADD COLUMN wkb_geometry geometry(POINT,'|| 
sourcesrid ||')';

-- Populate geometry column with x,y data
EXECUTE 'UPDATE xyz_points_temp SET wkb_geometry = ST_GeomFromText(''POINT('|| 
y ||' '|| x ||')'','|| sourcesrid ||')';

-- Now do something else

RETURN 'DATA LOADED';
END;
$$ LANGUAGE plpgsql STRICT;


I suspect it's a syntax issue, but I can't get it to work.  Any suggestions?
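
For what it's worth, the usual cause of this error is that '|| y ||'
closes the string literal and concatenates a PL/pgSQL-level y, which
doesn't exist; keeping the column references inside the dynamic string
avoids it. A sketch using format(), available from 9.1:

```sql
EXECUTE format(
    'UPDATE xyz_points_temp
        SET wkb_geometry = ST_SetSRID(ST_MakePoint(x, y), %s)',
    sourcesrid);
```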


Thanks







Re: [GENERAL] Problem using a pl/pgsql function to populate a geometry column with x,y data - SOLVED

2012-07-24 Thread Mark Wynter
Notwithstanding the fact I had my x and y's around the wrong way, I've got it 
working using ST_MakePoint. See below.

On 25/07/2012, at 12:39 AM, Mark Wynter wrote:

 CREATE OR REPLACE FUNCTION load_xyz_data(sourcefile text, sourcesrid integer)
 RETURNS text AS $$
 DECLARE
 BEGIN
 
 EXECUTE 'CREATE TEMPORARY TABLE xyz_points_temp
 (
 x numeric,
 y numeric,
 z numeric
 )
 WITH (
 OIDS=FALSE
 )
 ON COMMIT DROP';
 
 -- Load xyz data
 EXECUTE 'COPY xyz_points_temp FROM '|| quote_literal(sourcefile) ||' 
 DELIMITER '',''';
 -- Add geometry column
 EXECUTE 'ALTER TABLE xyz_points_temp ADD COLUMN wkb_geometry 
 geometry(POINT,'|| sourcesrid ||')';
 
 -- Populate geometry column with x,y data
-- This works
EXECUTE 'UPDATE xyz_points_temp SET wkb_geometry = ST_SetSRID(ST_MakePoint(x, 
y), '|| sourcesrid ||')';

--This does not work
 -- EXECUTE 'UPDATE xyz_points_temp SET wkb_geometry = 
 ST_GeomFromText(''POINT('|| y ||' '|| x ||')'','|| sourcesrid ||')';
 
 -- Now do something else
 
 RETURN 'DATA LOADED';
 END;
 $$ LANGUAGE plpgsql STRICT;
 







[GENERAL] Problem setting environmental variables for postgresql or plpythonu

2012-07-19 Thread Mark Wynter
I'm hoping someone can help me out.   I'm wanting to run GRASS GIS from within 
a plpythonu function block.  But to run GRASS GIS externally, the following 
environment variables need to be available to the PostgreSQL server...

GISBASE='/usr/local/grass-6.4.3svn'
PATH='$PATH:$GISBASE/bin:$GISBASE/scripts:$GISBASE/lib'
PYTHONPATH='${PYTHONPATH}:$GISBASE/etc/python/'
PYTHONPATH='${PYTHONPATH}:$GISBASE/etc/python/grass'
PYTHONPATH='${PYTHONPATH}:$GISBASE/etc/python/grass/script'
LD_LIBRARY_PATH='$LD_LIBRARY_PATH:$GISBASE/lib'
GIS_LOCK='$$'
GISRC='$HOME/.grassrc6'

Reference:  http://grass.osgeo.org/wiki/GRASS_and_Python

So far I've added these variables to /etc/bash.bashrc  
which makes them available to the postgres user (but not the server or 
pl/python).

And I've had no success adding them to...
/etc/postgresql/9.1/main/environment
#Shell commands are not evaluated.
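
If memory serves, that environment file accepts only literal
name = 'value' assignments, so the $GISBASE-style expansions above would
need writing out by hand, and the server has to be restarted (not just
reloaded) to pick them up; a sketch (paths illustrative):

```
# /etc/postgresql/9.1/main/environment -- shell syntax is NOT evaluated
GISBASE = '/usr/local/grass-6.4.3svn'
LD_LIBRARY_PATH = '/usr/local/grass-6.4.3svn/lib'
PYTHONPATH = '/usr/local/grass-6.4.3svn/etc/python:/usr/local/grass-6.4.3svn/etc/python/grass'
```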

Any suggestions would be appreciated...

Thanks
Mark


Re: [GENERAL] db server processes hanging around

2012-06-25 Thread Mark Rostron
thank you
yes we are continuing to observe connection count.
so far we have made changes to the TCP stack configuration and will
continue to observe.
mr


On Wed, Jun 20, 2012 at 12:53 AM, Sumit Raja 
sumit.r...@raja-consulting.co.uk wrote:

 Or ask your Java devs to investigate why the shut down does not close
 the physical connection properly. Does IDEA claim to shut down Tomcat
 but actually it is still running because of a threads not being
 cleaned up?

 Are you sure this isn't happening during normal operation of the
 application? If its bad connection/thread management, something like
 this might show up in production.

 - Sumit


 On 19 June 2012 18:28, Steve Crawford scrawf...@pinpointresearch.com
 wrote:
  On 06/19/2012 09:29 AM, Mark Rostron wrote:
 
  hi
 
  we are running out of database connections.
 
  we are using pg 9.0.6 on linux centos 5.7 64bit.
  we are not using any go-between connection pools such as pgbouncer or
  pgpool - connections occur directly from client to database.
  the connection setup on the client (java) is default, only providing
  (user,password,dbhost,dbname).
 
  we have about 10 developers developing java thru IDEA who start/stop the
  local tomcat server frequently.
  i have observed that tomcat doesn't disconnect from pg cleanly when they
  cycle, and the server processes persist for a long time.
  I have had them reduce their local connection factory pool size to 1
 (this
  helped) and increased our max_connection value to 1000.
  yet the problem persists.
 
  I have noticed that the server processes do die after some time - due
 to
  inactivity?
  we are looking for a way to control server processes better than we are
  doing now.
 
  thnx for your time.
  mr
 
 
  I am unaware of any system setting like max_connection_idle_time (though
 it
  might be a useful addition). I have not had to mess with tcp_keepalive
  settings but you might be able to alter those (perhaps at the OS instead
 of
  PostgreSQL) to reduce the delay before the backend terminates. But this
  won't work for socket connections.
 
  You could hack together a tailored solution by having cron run a script
 that
  would query pg_stat_activity for queries equal to '<IDLE>' and with a
  backend_start age greater than whatever you find reasonable and then
 execute
  pg_terminate_backend() on those PIDs. You could even have a table of
  developer IP addresses and only terminate those processes. Alternately,
 if
  Tomcat connected to a different port you could only kill those.
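
(Such a cron-driven query might look like the following, using 9.0-era
pg_stat_activity columns; the one-hour threshold is illustrative.)

```sql
SELECT pg_terminate_backend(procpid)
  FROM pg_stat_activity
 WHERE current_query = '<IDLE>'
   AND now() - backend_start > interval '1 hour';
```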
 
  Cheers,
  Steve
 



 --
 Raja Consulting Ltd.
 Incorporated in England and Wales No. 06454814,  Registered Office: 4
 Calder Court, Shorebury Point, Amy Johnson Way, Blackpool FY4 2RH




[GENERAL] db server processes hanging around

2012-06-19 Thread Mark Rostron
hi

we are running out of database connections.

we are using pg 9.0.6 on linux centos 5.7 64bit.
we are not using any go-between connection pools such as pgbouncer or
pgpool - connections occur directly from client to database.
the connection setup on the client (java) is default, only providing
(user,password,dbhost,dbname).

we have about 10 developers developing java thru IDEA who start/stop the
local tomcat server frequently.
i have observed that tomcat doesn't disconnect from pg cleanly when they
cycle, and the server processes persist for a long time.
I have had them reduce their local connection factory pool size to 1 (this
helped) and increased our max_connections value to 1000.
yet the problem persists.

I have noticed that the server processes do die after some time - due to
inactivity?
we are looking for a way to control server processes better than we are
doing now.

thnx for your time.
mr


Re: [GENERAL] full text index / search

2012-06-15 Thread Mark Phillips
I am not an expert on FTS, but I have been reading and experimenting. Further, 
I don't know what you are really attempting. With those warnings behind us, I 
think a GIN or GiST index is helpful for full text searches.

You may find this useful:
Understanding Full Text Search
 http://linuxgazette.net/164/sephton.html

I suggest that you review the Postgres Documentation for FTS:
 http://www.postgresql.org/docs/9.1/interactive/textsearch.html

One option you may find interesting is the pg_trgm module:
 http://www.postgresql.org/docs/9.1/static/pgtrgm.html

hth,

- Mark Phillips

On Jun 15, 2012, at 8:18 AM, Philipp Kraus wrote:

 Hello,
 
 I have created a table with a text field under PG 9.1 that should store 
 source code. I would like to search in this text field with regular 
 expressions. I think I need a full-text index, do I?
 How can I create this index, do I need some additional extensions? The PG 
 server runs under OSX (installed on the DMG package).
 
 Thanks
 
 Phil




Re: [GENERAL] timestamps, formatting, and internals

2012-06-03 Thread Mark Morgan Lloyd

Jasen Betts wrote:

On 2012-05-29, David Salisbury salisb...@globe.gov wrote:


On 5/27/12 12:25 AM, Jasen Betts wrote:

The query: show integer_datetimes; should return 'on' which means
timestamps are microsecond precision if it returns 'off' your database
was built with floating point timstamps and equality tests will be
unreliable,

I find that rather interesting.  I was told that I was losing microseconds
when I extracted an epoch from the difference between two timestamps and cast
that value to an integer.  So if I have integer timestamps (your case above)
I get microseconds, but integer epochs are without microseconds?


yeah, the microseconds appear as fractions of seconds, so in the
conversion to integer epoch they get rounded off.
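
That can be seen directly, e.g.:

```sql
SELECT extract(epoch FROM interval '1.234567 seconds');           -- 1.234567
SELECT extract(epoch FROM interval '1.234567 seconds')::integer;  -- 1, fraction gone
```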


I think you need to consider what you're actually computing and 
measuring. My understanding is that Meeus's Equation of Time calculation 
is good to something like 250 ms so that's the limit of your accuracy, 
but as soon as you start taking refraction and atmospheric turbulence 
into account- even with the Sun high above the horizon- you're going to 
degrade that.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]



Re: [GENERAL] Libpq question

2012-06-01 Thread Mark Morgan Lloyd

zeljko wrote:

John Townsend wrote:


It appears that some developers (Davart) are by-passing the standard
client library, “libpq.dll”, and directly accessing the server using
Delphi or FPC. I am not sure of the advantage here. All libpq.dll


I'm FPC user and I use libpq.so(.dll,.dylib) via zeoslib. 
Those who bypass libpq probably uses odbc connections or similar.


Apologies for my rather late comment.

ODBC, JDBC, Zeos etc. are all appropriate if you want an app to be able 
to contact different types of backend. However, the further you move 
from native connections, the more difficult it becomes to use 
PostgreSQL-specific functionality such as the listen/notify mechanism: 
I've used this to good effect in an FPC/Lazarus program and while I 
believe it can be hacked in via ODBC the result is hardly pretty.
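
(For anyone unfamiliar, the mechanism itself is just two SQL statements 
plus a client-side poll; the channel name here is illustrative.)

```sql
LISTEN job_done;   -- issued by the interested session
NOTIFY job_done;   -- issued by any other session on the same database
-- 9.0 and later also allow a payload string:
-- NOTIFY job_done, 'row 42';
```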


In addition, while it is in principle possible to roll your own 
interface library, the libpq+pg combination has been in use for 10+ 
years, is actively maintained, and (hopefully) is bug- and 
backdoor-free. Reinventing this particular wheel is definitely not 
something that should be approached casually.




[GENERAL] full text searching

2012-05-29 Thread Mark Phillips
I have read the pg 9.1 docs about full text search, rambled about the web a 
bit looking for related articles, and had good experiences implementing 
lightweight prototypes. Now I can sense the depth of the water as I try to 
determine a solid approach for implementation. 

I would be most grateful for links to articles that discuss the design issues 
one should consider before implementing a long term strategy. Case studies, 
architectural articles, and similar. 

Thanks,

 - Mark


Re: [GENERAL] FATAL: lock file postmaster.pid already exists

2012-05-24 Thread Mark Dilger
We have lots of partition tables that inherit
from a smaller number of parents.  Some,
but not all, of these tables also have indexes.
The number actually varies depending on
the data loaded.  For some other database
instances, fortunately on Linux, the number
is in the millions.

I have been testing with passing FindFirstFile
a pattern to match the temporary file names,
rather than letting FindFirstFile/FindNextFile
return all names and then having postgres
do the pattern match itself.  So far, this looks
very promising, with a stand-alone program
that uses this technique cutting the runtime
from 4 minutes down to less than a second.
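The idea behind that speedup can be sketched in Python (illustrative only; the file names are hypothetical, and the real win on Windows comes from FindFirstFile doing the wildcard match inside the API call instead of returning every entry for the caller to filter):

```python
import glob
import os
import tempfile

def temp_files_filter_in_caller(data_dir):
    # Approach 1: enumerate every directory entry, then pattern-match in
    # the application -- analogous to walking FindNextFile over all
    # 56,000 names and letting postgres do the matching.
    return sorted(n for n in os.listdir(data_dir) if n.startswith("pgsql_tmp"))

def temp_files_pattern_in_api(data_dir):
    # Approach 2: hand the wildcard to the directory-scan API itself --
    # analogous to passing "pgsql_tmp*" as the FindFirstFile pattern.
    pattern = os.path.join(data_dir, "pgsql_tmp*")
    return sorted(os.path.basename(p) for p in glob.glob(pattern))

# Demo on a throwaway directory: both approaches agree on the result;
# only the place where the matching happens differs.
with tempfile.TemporaryDirectory() as d:
    for name in ("pgsql_tmp123.0", "pgsql_tmp456.1", "16384", "16385"):
        open(os.path.join(d, name), "w").close()
    assert temp_files_filter_in_caller(d) == temp_files_pattern_in_api(d)
    print(temp_files_filter_in_caller(d))
```

In Python both variants still scan the directory in user space, so this only illustrates the interface difference; on Windows, pushing the pattern into FindFirstFile avoids materializing (and stat-ing) every entry, which is where the 4-minutes-to-under-a-second improvement comes from.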

I have a fairly clean patch in the works that
I will submit after I have verified it on
Windows 2003, Windows 2008 and Linux.






 From: Magnus Hagander mag...@hagander.net
To: Mark Dilger markdil...@yahoo.com 
Cc: Tom Lane t...@sss.pgh.pa.us; deepak deepak...@gmail.com; Alban Hertroys 
haram...@gmail.com; pgsql-general@postgresql.org 
pgsql-general@postgresql.org 
Sent: Thursday, May 24, 2012 3:58 AM
Subject: Re: [GENERAL] FATAL: lock file postmaster.pid already exists
 
On Thu, May 24, 2012 at 12:47 AM, Mark Dilger markdil...@yahoo.com wrote:
 I am running this code on Windows 2003.  It
 appears that postgres has in src/port/dirent.c
 a port of readdir() that internally uses the
 WIN32_FIND_DATA structure, and the function
 FindNextFile() to iterate through the directory.
 Looking at the documentation, it seems that
 this function does collect file creation time,
 last access time, last write time, file size, etc.,
 much like performing a stat.

 In my case, the code is iterating through roughly
 56,000 files.  Apparently, this is doing the
 equivalent of a stat on each of them.

How did you end up with 56,000 files? Lots and lots and lots of tables?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

Re: [GENERAL] FATAL: lock file postmaster.pid already exists

2012-05-23 Thread Mark Dilger
I tried moving the call to RemovePgTempFiles to
after the point where the PID file is fully written,
but it did not help.
pg_ctl attempts to connect to the database, and does
not report the database as running until that connection
succeeds.  I am not comfortable moving the call to
RemovePgTempFiles after the point in the postmaster
where child processes are spawned and connections
made available to clients because by that point the
temporary files encountered may be valid ones from
the current incarnation of Postgres and not from the
incarnation before the reboot.

I do not know precisely why the filesystem is so slow,
except to say that we have many relations:

xyzzy=# select count(*) from pg_catalog.pg_class;
 count
-------
 27340
(1 row)

xyzzy=# select count(*) from pg_catalog.pg_attribute;
 count
--------
 236252
(1 row)


Running `find . | wc -l` on the data directory gives
55219




 From: deepak deepak...@gmail.com
To: Tom Lane t...@sss.pgh.pa.us 
Cc: Alban Hertroys haram...@gmail.com; pgsql-general@postgresql.org; 
markdil...@yahoo.com 
Sent: Wednesday, May 23, 2012 9:03 AM
Subject: Re: [GENERAL] FATAL: lock file postmaster.pid already exists
 

Thanks, I have asked one of the other developers working on this issue to 
comment.

--
Deepak


On Mon, May 21, 2012 at 10:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:

deepak deepak...@gmail.com writes:
 We could reproduce the start-up problem on Windows 2003. After a reboot,
 postmaster, in its start-up sequence cleans up old temporary files, and
 this step used to take several minutes (a little over 4 minutes), delaying
 the writing of line 6 onwards into the PID file. This delay caused pg_ctl
 to timeout, leaving behind an orphaned postgres.exe process (which
 eventually forks off many other postgres.exe processes).

Hmm.  It's easy enough to postpone temp file cleanup till after the
postmaster's PID file is completely written, so I've committed a patch
for that.  However, I find it mildly astonishing that such cleanup could
take multiple minutes.  What are you using for storage, a man with an
abacus?

                       regards, tom lane

