Re: [GENERAL] Postgresql and github

2017-11-09 Thread Steve Atkins

> On Nov 9, 2017, at 9:37 AM, Poul Kristensen  wrote:
> 
> No it isn't.
> 
> What I want to do is:
> 
> ansible-playbook   somepostgresql.yml 
> 
> and postgresql is then changed on some server
> the way things are done by e.g.
> github.com/oravirt/ansible-oracle 

You're looking for help with an Ansible recipe, not with anything to do with 
PostgreSQL itself.

Mentioning it here, in case someone already has one, is worth a try but you're 
likely going to need to go talk to the Ansible people. Or write your own.

Cheers,
  Steve





Re: [GENERAL] Logical decoding error

2017-11-02 Thread Steve Atkins

> On Nov 2, 2017, at 9:34 AM, Mark Fletcher  wrote:
> 
> Hello,
> 
> Running Postgres 9.6.5, we're using logical decoding to take changes to the 
> database and propagate them elsewhere in our system. We are using the PGX Go 
> Postgres library, at https://github.com/jackc/pgx, and we are using the 
> test_decoding plugin to format the changes. We are using 6 slots/have 6 
> processes streaming the changes from our database.
> 
> This setup works great, except that every 20 hours or so, some or all of the 
> processes encounter a problem, all at the same time. They receive an 
> unexpected message type 'w'.  At this point the processes restart, and when 
> they do, they encounter another error: "ERROR: got sequence entry 0 for toast 
> chunk 20559160 instead of seq 6935 (SQLSTATE XX000)" (the chunk number/seq 
> number varies). This causes them to restart again. They will encounter the 
> sequence entry error up to 3 more times, before things magically start to 
> work again.
> 
> We are also doing standard streaming replication to a slave off this 
> database, and that has never seen a problem.
> 
> Does this ring a bell for anyone? Do you have any suggestions for how I 
> should go about figuring out what's happening?

Where are the errors coming from - your code or pgx? If it's from pgx, what's 
the exact error? ('w' is regular replication payload data, so it'd be expected 
as a copydata payload message type, but would be an error for a replication 
message).

Do you capture the raw data from the replication connection when the error 
happens?

(If you're using pgx you might be interested in 
https://github.com/wttw/pgoutput - it's support for the pgoutput logical 
decoder in PG10, which might be a bit more robust to deal with than the 
test_decoding one).

Cheers,
  Steve







Re: [GENERAL] Postgresql CDC tool recommendations ?

2017-10-05 Thread Steve Atkins

> On Oct 5, 2017, at 10:28 AM, avi Singh  wrote:
> 
> Guys
>  Any recommendation on a good CDC tool that can be used to push 
> postgresql changes to Kafka in json format ?

Not sure whether json is a constraint, but I'd look at http://debezium.io  and 
(maybe) the no longer supported https://github.com/confluentinc/bottledwater-pg

Cheers,
  Steve



Re: [GENERAL] New interface to PG from Chapel?

2017-09-15 Thread Steve Atkins

> On Sep 15, 2017, at 12:56 PM, Thelonius Buddha  wrote:
> 
> I’m interested to know the level of effort to build a psycopg2-like library 
> for Chapel: http://chapel.cray.com/ Not being much of a programmer myself, 
> does someone have an educated opinion on this?

It looks like you can call C libraries from Chapel, so you can use libpq 
directly. Psycopg2 complies with the Python database API, but there doesn't 
seem to be anything like that for Chapel, so there's not really an equivalent.

So it depends on how complex a wrapper you want. At the low end, very little 
effort - libpq exists; you can call it from Chapel if you just declare the api, 
I think.

Cheers,
  Steve





Re: [GENERAL] looking for a globally unique row ID

2017-09-14 Thread Steve Atkins

> On Sep 14, 2017, at 8:38 AM, Karl Czajkowski  wrote:
> 
> On Sep 14, vinny modulated:
> 
>> If it is only one database, on one server, then couldn't you just
>> use one sequence?
>> If oyu prefix the value with some identifier of the current table
>> then you cannot get duplicates
>> across tables even if you reset the sequence.
>> 
> 
> I didn't follow the whole thread, so I apologize if I'm repeating
> earlier suggestions.
> 
> We use a shared sequence to issue the new identifiers, and in fact
> limited the sequence to 43 bits so they can serialize as JSON numbers
> for the benefit of our clients.  We disabled wrapping, so it will fail
> if we exhaust the range.
> 
> If you rapidly churn through identifiers and could envision exhausting
> 64 bits in your database's lifetime, you should probably just use

2^63 nanoseconds is about three centuries.

Unless you need to generate identifiers in multiple places a simple
bigserial is good enough. (If you do need to generate unique identifiers
at facebook / instagram / twitter scale then there are other options, but
you're almost certainly not that big, and you probably don't need to.)
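
For instance, a minimal sketch of the shared-sequence approach (table names 
are invented):

  CREATE SEQUENCE global_id_seq;

  -- both tables draw ids from the same sequence, so ids never collide
  CREATE TABLE accounts (id bigint PRIMARY KEY DEFAULT nextval('global_id_seq'));
  CREATE TABLE orders   (id bigint PRIMARY KEY DEFAULT nextval('global_id_seq'));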

For distributed ids on a system you control there are a bunch of 64 bit
id generation algorithms that work well. Twitter snowflake was one of the
earlier ones.

Where UUIDs or GUIDs shine is when you want to be able to generate
ids with a reasonable guarantee that nobody else, anywhere on the planet
or off, ever has or ever will generate the same ID. If you're not in that
situation you don't really need the behaviour they try to guarantee.

> UUIDs instead of a sequence.  A timestamp-based UUID still has 
> reasonable sorting and indexing properties.
> 
> To "guarantee" uniqueness with a shared sequence or UUID generator,
> you can use a trigger to prevent override of identifiers from SQL. As
> long as you always use the correct value generator during INSERT and
> disallow mutation of identifiers during UPDATE, the rows will not
> share identifiers.

Cheers,
  Steve




Re: [GENERAL] Schema/table replication

2017-09-06 Thread Steve Atkins

> On Sep 6, 2017, at 8:48 AM, Marcin Giedz  wrote:
> 
> Does pglogical support views replication as I can't find it in any 
> restrictions ?

There's no need to replicate the contents of a view, as it doesn't contain any 
data.

pglogical can replicate the initial schema, including any views, but won't 
replicate DDL changes automatically after that. It does provide a clean way to 
replicate DDL from the master to slaves with pglogical.replicate_ddl_command().
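
For example (the table name is hypothetical), something like this applies the 
DDL locally and queues it for the subscribers:

  SELECT pglogical.replicate_ddl_command(
      'ALTER TABLE public.my_table ADD COLUMN note text'
  );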

Cheers,
  Steve

> > On Sep 6, 2017, at 6:00 AM, Marcin Giedz  wrote:
> > 
> > Hi, is there any way (3rd party software) to replicate particular 
> > schema/table not the whole database with streaming replication built-in 
> > mechanism ?
> 
> I don't believe so. You can do that with logical replication in v10 - 
> https://www.postgresql.org/docs/10/static/logical-replication.html.
> 
> pglogical will give you much the same functionality on current releases. 
> https://www.2ndquadrant.com/en/resources/pglogical/ - installation isn't too 
> painful (though the docs are a little sparse when it comes to which node you 
> should run which command on. Make the postgresql.conf changes on master and 
> slave nodes, as slave nodes need replication slots too(?)).
> 
> There are a bunch of trigger-based replication frameworks that'll work too, 
> though less efficiently - Slony is widely used, and I used Bucardo 
> successfully for years before moving to pglogical.
> 
> Cheers,
>   Steve
> 
> 





Re: [GENERAL] Schema/table replication

2017-09-06 Thread Steve Atkins

> On Sep 6, 2017, at 6:00 AM, Marcin Giedz  wrote:
> 
> Hi, is there any way (3rd party software) to replicate particular 
> schema/table not the whole database with streaming replication built-in 
> mechanism ?

I don't believe so. You can do that with logical replication in v10 - 
https://www.postgresql.org/docs/10/static/logical-replication.html.

pglogical will give you much the same functionality on current releases. 
https://www.2ndquadrant.com/en/resources/pglogical/ - installation isn't too 
painful (though the docs are a little sparse when it comes to which node you 
should run which command on. Make the postgresql.conf changes on master and slave 
nodes, as slave nodes need replication slots too(?)).

There are a bunch of trigger-based replication frameworks that'll work too, 
though less efficiently - Slony is widely used, and I used Bucardo successfully 
for years before moving to pglogical.

Cheers,
  Steve





Re: [GENERAL] Create Action for psql when NOTIFY Received

2017-09-05 Thread Steve Atkins

> On Sep 4, 2017, at 10:25 PM, Nico Williams  wrote:
> 
> On Mon, Sep 4, 2017 at 4:21 PM Steve Atkins  wrote:
> > 
> > Me too.
> > 
> > https://github.com/wttw/pgsidekick
> > 
> > Select-based, sends periodic keep-alives to keep the connection open, outputs 
> > payloads in a way that's friendly to pipe into xargs. (Also the bare bones of 
> > a notify-based scheduler).
> 
> Without any kind of access controls on NOTIFY channels, nor any kind of 
> payload validation, i just don't feel comfortable using the payload at all.  
> Besides, the payload is hardly necessary given that there's a database on 
> which you can scribble the payload :)  It suffices that you receive a 
> notification, and you can then check if there's anything to do.
> 
> My version of this doesn't have connection keepalives, but that's ok because 
> that can be added in the form of notifications, and the consumer of 
> pqasyncnotifier can implement timeouts.  But i agree that timeouts and 
> keepalives would be nice, and even invoking a given SQL function would be 
> nice.
> 
> But the question i have is: how to get such functionality integrated into 
> PostgreSQL?  Is a standalone program (plus manpage plus Makefile changes) 
> enough, or would a psql \wait command be better?

There's not really any need to integrate it into postgresql at all. It doesn't 
rely on any details of the core implementation - it's just a normal SQL client, 
a pretty trivial one.

(Whether psql could usefully be reworked to listen for activity on the 
connection when it's not actively executing a query is another question).

Cheers,
  Steve



Re: [GENERAL] Create Action for psql when NOTIFY Received

2017-09-04 Thread Steve Atkins

> On Sep 3, 2017, at 3:32 PM, Nico Williams  wrote:
> 
> 
> My principal problem with psql(1) relative to NOTIFY/LISTEN is that
> psql(1) won't check for them until it has had some input on stdin.  So
> it will appear to do nothing when it's idle, even if there are millions of
> notifies for it to respond to!
> 
> So I wrote a program to just LISTEN: 
> https://github.com/twosigma/postgresql-contrib/blob/master/pqasyncnotifier.c

Me too.

https://github.com/wttw/pgsidekick

Select-based, sends periodic keep-alives to keep the connection open, outputs 
payloads in a way that's friendly to pipe into xargs. (Also the bare bones of a 
notify-based scheduler).

Cheers,
  Steve





Re: [GENERAL] Would you add a --dry-run to pg_restore?

2017-08-02 Thread Steve Atkins

> On Aug 2, 2017, at 9:02 AM, Edmundo Robles  wrote:
> 
> I mean, to verify the integrity of a backup I do:
> 
> gunzip -c backup_yesterday.gz | pg_restore  -d my_database  && echo 
> "backup_yesterday is OK"
> 
> but my_database's size, uncompressed, is too big (more than 15G), and 
> sometimes I have no space to restore it, so I always have to declutter my 
> disk first. 
> 
> I have programmed backups on many databases, so I must check their integrity 
> one by one, deleting each database afterwards to avoid disk space issues. The 
> restores also take a long time: an average of 1 hour per backup.
> 
> It would be great to have a dry-run option: verification would take much 
> less time and would save disk space, since it would execute without writing 
> anything to disk.

If the gunzip completes successfully then the backups weren't corrupted and the 
disk is readable. They're very likely to be "good" unless you have a systematic 
problem with your backup script.

You could then run that data through pg_restore, redirecting the output to 
/dev/null, to check that the compressed file actually came from pg_dump. 
(gunzip -c backup_yesterday.gz | pg_restore >/dev/null)

The only level of checking you could do beyond that would be to ensure that the 
database was internally self-consistent and so truly restorable - and to do 
that, you'll need to restore it into a real database.

You could do an intermediate check by restoring into a real database with 
--schema-only, I guess.

As an aside, pg_dump with custom format already compresses the dump with gzip, 
so the additional gzip step may be redundant. You can set pg_dump's compression 
level with -Z.

Cheers,
  Steve


> 
> if pg_restore have a dry  option i  will do:
> 
> (gunzip -c  mydata.gz | pg_restore -d mydata --dry  &&  echo "mydata0 is ok")&
> (gunzip -c  my_other_data.gz | pg_restore -d my_other_data --dry  &&  echo 
> "my_other_data is ok")&
> (gunzip -c  my_another_data.gz | pg_restore -d my_another_data --dry  &&  
> echo "my_another_data is ok")&
> wait
> 
> 
> and  the time to  verify only will take 1 hour  instead of  3 hours.
> 





Re: [GENERAL] storing postgres data on dropbox

2017-06-18 Thread Steve Atkins

> On Jun 18, 2017, at 10:58 AM, Karsten Hilbert  wrote:
> 
> On Sun, Jun 18, 2017 at 05:30:44PM +, Martin Mueller wrote:
> 
>> Thank for this very helpful answer, which can be
>> implemented for less than $100. For somebody who started
>> working a 128k Mac in the eighties, it is mindboggling that
>> for that amount you can buy a terabyte of storage in a device
>> that you put in a coat pocket. I'll read up on rsync
> 
> I seem to remember that for this to work the two machines
> must be *very* close in architecture, and the PostgreSQL
> versions best be exactly the same.

If the two machines have the same architecture you can also just
have the data directory (or the whole postgresql installation) installed
on an external drive and run it from there.

Plug it in, start postgresql, use it. Shut down postgresql, unplug it.

I've been running from an external drive for years with no problems,
but backing it up regularly to the machines you plug it into (with
pg_dump) is probably a good idea.

I have the entire postgresql installation on the external drive, and
have /Volumes/whatever/pgsql/bin early in my path, so if the drive
is plugged in pg_ctl, psql etc go to the installation on the external
drive.

With one of the little samsung usb3 SSDs it'll fit in your pocket.

Cheers,
  Steve





Re: [GENERAL] Rounding Double Precision or Numeric

2017-06-01 Thread Steve Atkins

> On Jun 1, 2017, at 9:26 AM, Louis Battuello  
> wrote:
> 
> Is the round() function implemented differently for double precision than for 
> numeric? Forgive me if this exists somewhere in the documentation, but I 
> can't seem to find it.

https://www.postgresql.org/docs/current/static/datatype-numeric.html#DATATYPE-NUMERIC-DECIMAL

"When rounding values, the numeric type rounds ties away from zero, while (on 
most machines) the real and double precision types round ties to the nearest 
even number.".

> Why does the algorithm vary by data type?

Just guessing, but I'd assume because the NUMERIC type behaves as required by 
the SQL spec, while float and double are vanilla IEEE754 arithmetic and will do 
whatever the underlying hardware is configured to do, usually round to nearest 
even.
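
A quick illustration of the difference:

  SELECT round(2.5::numeric);           -- 3: numeric rounds ties away from zero
  SELECT round(2.5::double precision);  -- 2 on most machines: ties round to even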

Cheers,
  Steve





Re: [GENERAL] column names and dollar sign

2017-05-17 Thread Steve Atkins

> On May 17, 2017, at 2:02 PM, Armand Pirvu (home)  
> wrote:
> 
> Hi 
> 
> Ran into the following statement
> 
> CREATE TABLE test(
>   Date$ date,
>   Month_Number$ int,
>   Month$ varchar(10),
>   Year$ int
> );
> 
> 
> While it does execute, I wonder if the $ has any special meaning ?
> 
> Can anyone shed some light please ?

No special meaning to postgresql - in postgresql a dollar sign is a valid 
character in an identifier.

It might have some special meaning to the app that was using it, though.

Cheers,
  Steve





Re: [GENERAL] Caching and Blobs in PG? Was: Can PG replace redis, amqp, s3 in the future?

2017-05-07 Thread Steve Atkins

> On May 7, 2017, at 9:16 AM, Adam Brusselback  
> wrote:
> 
> there's also pg_agent which is a cron-like extension, usually bundled with 
> pg_admin but also available standalone
> 
> https://www.pgadmin.org/docs4/dev/pgagent.html
> 
> 
> -- 
> john r pierce, recycling bits in santa cruz
> 
> In addition to that, there is also  jpgAgent: 
> https://github.com/GoSimpleLLC/jpgAgent
> 
> It uses the same schema as pgagent in the database, and just replaces the 
> actual agent portion of it with a compatible re-write.  Has been way more 
> stable for us since we switched to it, as well as providing features we 
> needed like email notifications and parallel running of steps.
> 
> Disclosure: I wrote it for my company... started on it well before all the 
> alternatives like pg_cron, pg_bucket, etc came out.

There's also pglater, which is a minimal external process that'll let you 
implement any sort of cron-ish functionality entirely inside the database 
without needing to be woken up every minute by an external cron.

https://github.com/wttw/pgsidekick

More proof-of-concept than anything remotely production-ready.

Cheers,
  Steve





Re: [GENERAL] Can PG replace redis, amqp, s3 in the future?

2017-04-30 Thread Steve Atkins

> On Apr 30, 2017, at 4:37 AM, Thomas Güttler  
> wrote:
> 
> Is is possible that PostgreSQL will replace these building blocks in the 
> future?
> 
> - redis (Caching)
> - rabbitmq (amqp)
> - s3 (Blob storage)

No.

You can use postgresql for caching, but caches don't require the data
durability that a database offers, and can be implemented much more
efficiently.

You can use postgresql to provide message queue services and it
does so reasonably well, particularly when the messages are generated within
the database. But it's not going to do so as efficiently, or be as easy to
monitor, to make highly redundant or to scale across a whole datacenter
as a dedicated message queue service.

You could use postgresql to store binary blobs, but it'd be a horrifically
inefficient way to do it. (Using postgresql to store the metadata, while
the content is stored elsewhere, sure).

Use the right tool for the job.

Cheers,
  Steve

> 
> One question is "is it possible?", then next "is it feasible?"
> 
> I think it would be great if I could use PG only and if I could
> avoid the other types of servers.
> 
> The benefit is not very obvious at first sight. I think it will save you
> time, money and energy only in the long run.
> 
> What do you think?
> 
> Regards,
>  Thomas Güttler
> 
> 
> -- 
> I am looking for feedback for my personal programming guidelines:
> https://github.com/guettli/programming-guidelines
> 
> 





Re: [GENERAL] mysql_config_editor feature suggestion

2017-03-21 Thread Steve Atkins

> On Mar 21, 2017, at 3:03 PM, Tom Ekberg  wrote:
> 
> I have been working with MySQL a bit (yes, I know, heresy) and encountered a 
> program called mysql_config_editor. In my opinion it does a better job of 
> local password management than using a ~/.pgpass file. Instead of assuming 
> that a mode of 600 will keep people from peeking at your password, it 
> encrypts the password, but keeps the other parameters like host, port and 
> user available for viewing as plaintext. You can read more about it here:
> 
>  https://dev.mysql.com/doc/refman/5.7/en/mysql-config-editor.html
> 
> The host, user, password values are grouped into what are called login paths 
> which are of the form:
> 
>  [some_login_path]
>  host = localhost
>  user = localuser

Looks rather like a postgresql service file. :)
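
(For comparison, a ~/.pg_service.conf entry looks something like this - the 
dbname line is just illustrative:

  [some_login_path]
  host=localhost
  user=localuser
  dbname=mydb

and can then be used with psql "service=some_login_path".)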

> 
> Just like the config files you have no doubt seen before. The only way to set 
> a password is to use the command:
> 
>  mysql_config_editor set --login-path=some_login_path --password
> 
> which will prompt the user to enter the password for the specified login 
> path. The password is never seen as plain text. There are other commands to 
> set, remove, print and reset values for a login path. The print command that 
> shows a password will display this instead:
> 
>  password = *

This seems like it'd give people a false sense of security. If someone can read 
that file, they can log in to that account. Obfuscating the password just makes 
naive users think they're secure when they're anything but, and means they're 
less likely to be careful about making that file unreadable and avoiding 
checking it into revision control and so on. It'd protect against 
shoulder-surfing, but it's not like you're going to have .pgpass open in an 
editor too often.

A commandline tool for managing pgpass might be interesting, I guess. Though 
for local databases using peer authentication is likely better than saving 
passwords in a file.

> Adding a similar feature for PostgreSQL will also require a change to the 
> psql program to specify and handle --login-path used for authentication. This 
> may also be the case for some of the other pg_* utilities.
> 
> I think adding a feature like mysql_config_editor to PostgreSQL is an easy 
> way to set up multiple "personalities" for connecting to different PostgreSQL 
> servers. The password protection will deter the curious user from gaining 
> access to your data. It will not stop a determined hacker, but the idea is to 
> make it more difficult.
> 
> Other than this mailing list, is there a way to make a feature request for 
> PostgreSQL?

Cheers,
  Steve



Re: [GENERAL] appropriate column for storing ipv4 address

2017-03-01 Thread Steve Atkins

> On Mar 1, 2017, at 8:39 AM, jonathan vanasco  wrote:
> 
> 
> I have to store/search some IP data in Postgres 9.6 and am second-guessing my 
> storage options.  
> 
> 
> The types of searching I'm doing:

[...]

> 
>   2. on tracked_ip_block, i search/join against the tracked_ip_address to 
> show known ips in a block, or a known block for an ip.
> 
> i used cidr instead of inet for the ip_address because it saved me a cast on 
> joins and appears to work the same.  was that the right move?  is there a 
> better option?

If you're looking to do fast searches for "is this IP address in any of these 
CIDR blocks" you might want to look at https://github.com/RhodiumToad/ip4r as a 
possible alternative.
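
A minimal sketch of that sort of lookup with ip4r (table and address are 
invented), using its indexable contains-or-equals operator:

  CREATE EXTENSION ip4r;
  CREATE TABLE blocks (block ip4r);
  CREATE INDEX blocks_block_idx ON blocks USING gist (block);

  -- all stored blocks containing a given address
  SELECT * FROM blocks WHERE block >>= '192.0.2.1'::ip4;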

Cheers,
  Steve



Re: [GENERAL] could not translate host name

2017-02-24 Thread Steve Atkins

> On Feb 24, 2017, at 1:37 PM, Tom Ekberg  wrote:
> 
> I'm running postgres 9.6.2 (also happened on 9.3.14) and have a cron job that 
> runs hourly that runs a program that does mostly postgres SELECTs on a 
> different host. Occasionally I get email (not hourly) from the cron daemon 
> that contains a stack trace that ends with this:
> 
> sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not 
> translate host name "db3.labmed.uw.edu" to address: Name or service not known
> 
> It has happened about 9 times so far this month. I have one of our network 
> people look into this but there is no real answer. I could use the IP address 
> but I'd rather not. This problem only happens on one host. I moved the data 
> from db2 to db3. I was getting similar emails regarding db2 which runs the 
> older postgres.
> 
> Any ideas on how to proceed?

It looks like a DNS issue. That hostname authoritatively doesn't exist, 
according to any of UW's nameservers.

If it works sometimes then you have some sort of internal name resolution hack, 
and it's not reliable.

Cheers,
  Steve

> 
> Tom Ekberg
> Senior Computer Specialist, Lab Medicine
> University of Washington Medical Center
> 1959 NE Pacific St, MS 357110
> Seattle WA 98195
> work: (206) 598-8544
> email: tekb...@uw.edu
> 
> 
> 
> 





Re: [GENERAL] How to evaluate "explain analyze" correctly after "explain" for the same statement ?

2017-02-15 Thread Steve Atkins

> On Feb 15, 2017, at 3:58 PM, Patrick B  wrote:
> 
> Hi all,
> 
> I just got a quick question about warm-cache. I'm using PG 9.2.
> 
> When I execute this statement soon after I start/restart the database:
> 
> explain select id from test where id = 124;
> 
> The runtime is 40ms.
> 
> Then, If I execute this statement just after the above one;
> 
> explain analyze select id from test where id = 124;
> 
> The runtime is 0.8ms.

This doesn't seem to make sense.

"explain select ..." doesn't run the query. All it shows is the plan the 
planner chose and some estimates of the "cost" of different steps, with no 
time. Where are you getting 40ms from in this case?

"explain analyze select ..." does run the query, along with some - potentially 
non-trivial - instrumentation to measure each step of the plan, so you can see 
whether the planner estimates are reasonable or wildly off.
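
Side by side, using the statements from your message:

  EXPLAIN SELECT id FROM test WHERE id = 124;          -- plan only, never runs the query
  EXPLAIN ANALYZE SELECT id FROM test WHERE id = 124;  -- runs it, reports actual times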

Cheers,
  Steve





Re: [GENERAL] Can't restart Postgres

2017-02-15 Thread Steve Atkins

> On Feb 14, 2017, at 8:47 PM, Shawn Thomas  wrote:
> 
> No it doesn’t matter if run with sudo, postgres or even root.  Debian 
> actually wraps the command and executes some initial scripts with 
> different privileges, but ends up making sure that Postgres runs 
> under the postgres user.  I get the same output if run with sudo:
> 
> sudo systemctl status postgresql@9.4-main.service -l
>Error: could not exec   start -D /var/lib/postgresql/9.4/main -l 
> /var/log/postgresql/postgresql-9.4-main.log -s -o  -c 
> config_file="/etc/postgresql/9.4/main/postgresql.conf”

There's a suspicious hole between "exec" and "start" where I'd expect to see 
the full path to the pg_ctl binary. As though a variable were unset in a script 
or config file.

Cheers,
  Steve





Re: [GENERAL] get inserted id from transaction - PG 9.2

2017-02-14 Thread Steve Atkins

> On Feb 14, 2017, at 2:55 PM, Patrick B  wrote:
> 
> Hi all,
> 
> I'm simply doing an insert and I want to get the inserted id with a select. 
> I'm doing this all in the same transactions.
> 
> Example:
> 
> BEGIN;
> 
> INSERT INTO test (id,name,description) VALUES (default,'test 1','testing 
> insert');
> SELECT FROM test ORDER BY id DESC; -- I don't see the inserted row here

You want "select * from test ..." or "select id from test ..." here. Should 
work fine then.

> 
> COMMIT;
> 
> I only can see that inserted row if I do the select outside of this 
> transaction.
> 
> How could I get that ? 

This'd be the idiomatic way of doing it:

INSERT INTO test (name,description) VALUES ('test 1','testing insert') 
RETURNING id;
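
If you need to use the new id later in the same statement, a data-modifying 
CTE is one sketch of that (same table as above):

  WITH new_row AS (
      INSERT INTO test (name, description)
      VALUES ('test 1', 'testing insert')
      RETURNING id
  )
  SELECT id FROM new_row;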

Cheers,
  Steve





Re: [GENERAL] 64 and 32 bit libpq

2017-02-12 Thread Steve Atkins

> On Feb 12, 2017, at 5:03 PM, Jerry LeVan  wrote:
> 
> Hello, I am trying to upgrade my postgresql ‘stuff’ to 64 bits from 32 bits.
> 
> I am running macOS Sierra.
> 
> I have built the 64 bit version of the server and have loaded
> my database into the new server ( 9.6.2 ). Everything seems to be working.
> 
> I have some legacy apps that are 32 bit and talk to the server 
> via libpq.dylib. They no longer work with a complaint about
> not finding a libpq with the right architecture.
> 
> Can I go back and do a 32 bit rebuild of everything and then
> take the 32 bit libpq and the 64 bit libpq and use lipo to
> glue them together and create a ‘fat’ libpq and replace the
> installed libpq? 
> 
> Is this a safe thing to do?

I've done it in the past (http://labs.wordtothewise.com/postgresql-osx/) and it
seemed to work fine.
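
The gluing step is roughly this (filenames invented):

  lipo -create libpq.32.dylib libpq.64.dylib -output libpq.dylib
  lipo -info libpq.dylib    # should report both architectures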

Cheers,
  Steve



Re: [GENERAL] Logging broken messages

2017-02-07 Thread Steve Atkins

> On Feb 6, 2017, at 9:21 AM, Rui Pacheco  wrote:
> 
> Hello,
> 
> I’m trying to implement a version of the wire protocol but I’ve hit a 
> problem: whenever I send a Close Statement message to the remote, it just 
> hangs indefinitely. I suspect the problem could be on my side but I can’t 
> find anything on my code that doesn’t match the manual.
> 
> Is there a way to configure Postgres to debug this? I’ve tried setting a 
> number of log parameters but nothing shows up on syslog:

Wireshark has a v3 postgresql protocol dissector. It might be worth comparing 
your code with libpq and see if anything looks different.

Cheers,
  Steve





Re: [GENERAL] Testing an extension exhaustively?

2017-02-01 Thread Steve Atkins

> On Feb 1, 2017, at 4:03 PM, John R Pierce  wrote:
> 
> On 2/1/2017 3:39 PM, postgres user wrote:
>> If I have the Postgresql server installed on my machine i.e I have all the 
>> bins, libs and share directories of the Postgresql and I have the libs and 
>> sql's installed for one of the contrib extensions lets say "chkpass", how 
>> does one go about testing this extension exhaustively on the server? I ask 
>> this because I would want to do this manually first and then go about 
>> automating the testing of this extension. So rather than just execute CREATE 
>> EXTENSION and DROP EXTENSION I want some solid evidence that the extension 
>> is working fine under all circumstances and is not crashing the server at 
>> any moment? Looking for some new strategies and ideas to come my way through 
>> this.
> 
> you would write test cases for all the functionality provided by this 
> extension, same as you'd test any other sort of API.

And you might find http://pgtap.org convenient for doing that.
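
A minimal pgTap sketch, assuming the extension under test is chkpass:

  BEGIN;
  SELECT plan(2);
  SELECT has_type('chkpass', 'type chkpass should exist');
  SELECT ok('test'::chkpass = 'test', 'chkpass should match its cleartext');
  SELECT * FROM finish();
  ROLLBACK;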

Cheers,
  Steve





Re: [GENERAL] Why does this hot standy archive_command work

2017-01-20 Thread Steve Atkins

> On Jan 20, 2017, at 7:03 PM, bto...@computer.org  
> wrote:
> 
> While learning a bit about basic hot standby configuration, I was reviewing 
> an article that used these parameters
> 
> wal_level = 'hot_standby'
> archive_mode = on
> archive_command = 'cd .'
> max_wal_senders = 1
> hot_standby = on
> 
> 
> How or why that particular archive_command actually works (... and it does 
> ... I tried it ...) is not clear to me based on reading of the Postgresql 
> documentation on this topic. I would have expected to see an actual copy or 
> rsync command, as described in the fine manual at section 25.3.1. "Setting Up 
> WAL Archiving"
> 
> The entire example appears at 
> 
> https://www.digitalocean.com/community/tutorials/how-to-set-up-master-slave-replication-on-postgresql-on-an-ubuntu-12-04-vps
> 
> Can anyone enlighten on this topic, or provide a link to an existing 
> explanation?

It's not archiving logs at all, instead relying on streaming them directly to 
the slave.

Changing archive_mode requires a server restart, while changing archive_command 
from a command that does nothing, successfully, to a command that actually 
archives logs just requires a reload. So this lets you enable archiving without 
halting the server by changing the command.
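
So once you do have somewhere to archive to, you'd switch to something like 
the stock example from the docs and reload:

  archive_command = 'cp %p /mnt/server/archivedir/%f'

followed by a pg_ctl reload (or SELECT pg_reload_conf()).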

Or that's how I vaguely recall it working some years ago. Things may have 
changed now - you're following a very old tutorial.

Cheers,
  Steve





Re: [GENERAL] COPY to question

2017-01-17 Thread Steve Atkins

> On Jan 17, 2017, at 10:23 AM, Rich Shepard  wrote:
> 
>  Running -9.6.1. I have a database created and owned by me, but cannot copy
> a table to my home directory. Postgres tells me it cannot write to that
> directory. The only way to copy tables to files is by doing so as the
> superuser (postgres).
> 
>  Why is this, and can I change something so I, as a user, can copy tables
> directly to ~/?

You can use "\copy" from psql to do the same thing as the SQL copy command,
but writing files as the user running psql, rather than the postgresql superuser
role. That's probably what you need.
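
Something like this, run from psql started in your home directory (the table 
name is invented):

  \copy some_table TO 'some_table.csv' CSV HEADER

The file is written relative to psql's working directory, as your OS user.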

Cheers,
  Steve





Re: [GENERAL] Means to emulate global temporary table

2017-01-11 Thread Steve Atkins

> On Jan 11, 2017, at 7:02 PM, David G. Johnston  
> wrote:
> 
> "throughout" mustn't mean "by other sessions" or this becomes unwieldy.
> 
> Here's a mock-up:
> 
> CREATE TABLE template_table ();
> CREATE VIEW view_over_my_template_instance AS SELECT * FROM 
> my_instance_of_template_table; --fails if done here without the desired 
> feature
> 
> In a given session:
> 
> CREATE TEMP TABLE my_instance_of_template_table LIKE template_table;
> SELECT * FROM view_over_my_template_table; -- returns only this session's 
> temp table data
> 
> Other sessions can simultaneously execute the same SELECT * FROM view_over_* 
> and get their own results.
> 
> The goal is to avoid having to CREATE TEMP TABLE within the session but 
> instead be able to do:
> 
> CREATE GLOBAL TEMP TABLE my_instance_of_template_table LIKE template_table;
> 
> And have the CREATE VIEW not fail and the session behavior as described.

Would this differ in any user-visible way from what you'd have if you executed 
at the start of each session:

CREATE TEMPORARY TABLE my_instance_of_template_table (LIKE template_table);
CREATE TEMPORARY VIEW view_over_my_template_instance AS SELECT * FROM 
my_instance_of_template_table;

There'd be a small amount of session startup overhead, but that could be 
handled at the pooler level and amortized down to zero.

Cheers,
  Steve





Re: [GENERAL] LYDB: What advice about stored procedures and other server side code?

2016-12-27 Thread Steve Atkins

> On Dec 27, 2016, at 2:03 PM, Guyren Howe  wrote:
> 
> I am putting together some advice for developers about getting the most out 
> of SQL servers in general and Postgres in particular. I have in mind the 
> likes of most web developers, who through ignorance or a strange cultural 
> preference that has emerged, tend to treat their database server as a dumb 
> data bucket.
> 
> I call the project Love Your Database (LYDB). It is starting as a series of 
> blog posts:
> 
> https://medium.com/@gisborne/love-your-database-lydb-23c69f480a1d#.4jngp2rcb
> https://medium.com/@gisborne/love-your-database-simple-validations-68d5d6d0bbf3#.az4o2s152
> 
> I would next like to cover server-side code such as stored procedures and 
> triggers.
> 
> I am inclined to advise folks to use PL/V8 on Postgres, because it is a 
> reasonable language, everyone knows it, it has good string functions, decent 
> performance and it tends to be installed everywhere (in particular, Amazon 
> RDS offers it).
> 

Think hard about the "impedance mismatch" between parts of the system.

pl/pgsql uses sql data types and operators, and so interfaces very cleanly with 
the rest of postgresql. pl/v8 uses javascript data types and *for database 
related things* is likely to be a less perfect match to the rest of the system 
- as it's translating (or, in some cases, failing to translate) between sql 
data types and javascript data types that may not be entirely compatible, or 
which may not exist at all.

So if your functions are mostly doing databasey things, pl/pgsql may well be a 
better choice. If they're mostly doing appy things, that just happen to be in 
the database, then pl/v8 may be a better choice (but so might just doing the 
work in the app, perhaps with some listen/notify assistance).

Most of the functions I write are short trigger functions, or data 
wrapper/modification functions for migration or making business logic available 
for SQL. For the majority of those I find pl/pgsql the best match (if I can't 
get away with sql functions).
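
For flavour, a sketch of the sort of short pl/pgsql trigger function that 
covers most of those cases (table and column are hypothetical):

  CREATE FUNCTION touch_updated_at() RETURNS trigger AS $$
  BEGIN
      NEW.updated_at := now();   -- stamp the row on every change
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER set_updated_at
      BEFORE INSERT OR UPDATE ON my_table
      FOR EACH ROW EXECUTE PROCEDURE touch_updated_at();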

If you're trying to convince people to get the most out of their database, 
pushing them towards pl/v8 as their first choice of embedded language might not 
be the best path. (That it might encourage them to write code to iterate 
through tables rather than taking advantage of SQL where they can might be a 
thing too).

Cheers,
  Steve


> Broadly, what advice should I offer that isn’t obvious? Not just about PL/V8 
> but server side code in general.
> 
> TIA





Re: [GENERAL] Importing SQLite database

2016-12-10 Thread Steve Atkins

> On Dec 10, 2016, at 11:32 AM, Igor Korot  wrote:
> 
> Hi, guys,
> I'm working thru my script and I hit a following issue:
> 
> In the script I have a following command:
> 
> CREATE TABLE playersinleague(id integer, playerid integer, ishitter
> char, age integer, value decimal, currvalue decimal, draft boolean,
> isnew char(1), current_rank integer, original_rank integer, deleted
> integer, teamid integer, notes varchar(125), PRIMARY KEY(id,playerid),
> foreign key(id) references leagues(id), foreign key(playerid)
> references players(playerid),foreign key(teamid) references
> teams(teamid));
> 
> Now this command finished successfully, however trying to insert a
> record with following command:
> 
> INSERT INTO  playersinleague  
> VALUES(1,1,'1',27,42.0,42.0,0,'0',1,1,0,23,NULL);
> 
> gives following error:
> 
> psql:/Users/igorkorot/draft.schema:10578: ERROR:  column "draft" is of
> type boolean but expression is of type integer
> 
> Looking at https://www.postgresql.org/docs/9.5/static/datatype-numeric.html,
> I don't see a 'boolean' as supported data type.

Booleans aren't numeric.

https://www.postgresql.org/docs/9.5/static/datatype-boolean.html

Boolean will take a range of formats, including '0' - an untyped literal
"0". But it won't take an integer, which is what an unquoted 0 is.

You'll need to modify your insert statement slightly to use a valid boolean
value for that field ("true" or "false" are idiomatic).
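
So, for the statement above:

  INSERT INTO playersinleague
  VALUES (1, 1, '1', 27, 42.0, 42.0, false, '0', 1, 1, 0, 23, NULL);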

Cheers,
  Steve





Re: [GENERAL] Index size

2016-12-03 Thread Steve Atkins

> On Dec 3, 2016, at 3:57 PM, Samuel Williams  
> wrote:
> 
> Thanks everyone for your feedback so far. I've done a bit more digging:
> 
> MySQL in MBytes (about 350 million rows):
> 
> index_user_event_on_what_category_id_created_at_latlng | 22806.00
> index_user_event_for_reporting | 18211.00
> index_user_event_on_created_at | 9519.00
> index_user_event_on_user_id | 6884.00
> index_user_event_on_poi_id | 4891.00
> index_user_event_on_deal_id | 3979.00
> 
> Postgres (about 250 million rows):
> 
> index_user_event_on_what_category_id_created_at_latlng | 25 GB
> index_user_event_for_reporting | 19 GB
> index_user_event_on_created_at | 7445 MB
> index_user_event_on_user_id | 7274 MB
> index_user_event_on_deal_id | 7132 MB
> index_user_event_on_poi_id | 7099 MB
> 
> So, the index is a bit bigger, plus there is also the PKEY index which
> increases disk usage by another whole index. Keep in mind in the
> above, MySQL has about 40% more data.
> 
> With some indexes, it looks like MySQL might not be adding all data to
> the index (e.g. ignoring NULL values). Does MySQL ignore null values
> in an index? Can we get the same behaviour in Postgres to minimise
> usage? What would be the recommendation here?

It's unlikely anyone will be able to usefully answer the questions you
should be asking without seeing the schema and index definitions,
and maybe some clues about how you're querying the data.

Cheers,
  Steve



[GENERAL] Hardware recommendations?

2016-11-02 Thread Steve Atkins
I'm looking for generic advice on hardware to use for "mid-sized" postgresql 
servers, $5k or a bit more.

There are several good documents from the 9.0 era, but hardware has moved on 
since then, particularly with changes in SSD pricing.

Has anyone seen a more recent discussion of what someone might want for 
PostgreSQL in 2017?

Cheers,
  Steve





Re: [GENERAL] Integer fields and auto-complete clients

2016-10-26 Thread Steve Atkins

> On Oct 26, 2016, at 6:59 AM, Tim Smith  wrote:
> 
> Hi,
> 
> I'm curious as to what the current advice would be in relation to
> auto-complete type applications (e.g. "AJAX" type JavaScript "guess
> as you type" applications).
> 
> In relation to text fields, I know the general suggestion is gin_trgm_ops.
> 
> Is there much point even thinking about using gin_trgm_ops on integers
> ?  I'm thinking perhaps the 'prefix' extension is better suited ?  Or
> is there something else altogether I should be considering to support
> such applications ?

It depends on whether you want to return results that have the typed value
as a prefix or results that include the typed value as a substring. i.e. where
foo like 'bar%' vs where foo like '%bar%'.

If the latter, pg_trgm is the way to go. If the former then a regular btree 
index on the (case-folded text form of the) value, possibly using 
text_pattern_ops, is the right thing.
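
A sketch of the prefix case (table and column invented):

  CREATE INDEX items_name_prefix_idx ON items (lower(name) text_pattern_ops);

  SELECT * FROM items WHERE lower(name) LIKE 'bar%';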

The prefix module isn't what you want - it's for matching, e.g., an entire 
phone number against a table of possible prefixes, not a prefix against a 
table of possible matches.

Cheers,
  Steve





Re: [GENERAL] PostgreSQL Database performance

2016-09-06 Thread Steve Atkins

> On Sep 6, 2016, at 12:08 PM, Scott Marlowe  wrote:
> 
> On Fri, Sep 2, 2016 at 9:38 PM, Pradeep  wrote:
>> 
>> max_connections = 100
>> shared_buffers = 512MB
>> effective_cache_size = 24GB
>> work_mem = 110100kB
> 
> This is WAY too high for work_mem. Work_mem is how much memory a
> single sort can grab at once. Each query may run > 1 sort, and you
> could have 100 queries running at once.
> 
> This setting is 110GB. That's about 109.9GB too high for safety. When
> things go wrong with this too big, they go very wrong, sending the
> machine into a swap storm from which it may not return.

It's an oddly spelled 110MB, which doesn't seem unreasonable.

> 
> It's far more likely that you've just got poorly written queries. I'd
> make a post with explain analyze output etc. Here's a good resource
> for reporting slow queries:
> 
> https://wiki.postgresql.org/wiki/Slow_Query_Questions

+1

Cheers,
  Steve




Re: [GENERAL] PostgresSQL and HIPAA compliance

2016-06-17 Thread Steve Atkins

> On Jun 17, 2016, at 3:03 AM, Alex John  wrote:
> 
> Hello, I have a few questions regarding the use of PostgreSQL and HIPAA
> compliance. I work for a company that plans on storing protected health
> information (PHI) on our servers. We have looked at various solutions for 
> doing
> so, and RDS is a prime candidate except for the fact that they have explicitly
> stated that the Postgres engine is *not* HIPAA compliant.

There's nothing fundamental to postgresql that would make HIPAA compliance
difficult, and *probably* nothing major with the way it's deployed on RDS. 
Actual
certification takes time and money, though.

> 
> Users on the IRC channel generally say that the guidelines are more catered
> towards building better firewalls and a sane access policy, but I would like 
> to
> know if there is anything within the implementation of Postgres itself that
> violates said compliance.
> 
> If anyone works at a similar company and utilizes postgresql to store PHI,
> please let me know.

EnterpriseDB are helping provide HIPAA compliant postgresql on AWS; it
might be worth having a chat with them.

http://www.enterprisedb.com/postgres-plus-edb-blog/fred-dalrymple/postgres-meets-hipaa-cloud
http://www.slideshare.net/EnterpriseDB/achieving-hipaa-compliance-with-postgres-plus-cloud-database

Cheers,
  Steve





Re: [GENERAL] Converting Postgres SQL constraint logic to PHP?

2016-06-10 Thread Steve Atkins

> On Jun 10, 2016, at 1:01 PM, Ken Tanzer  wrote:
> 
> Hi.  I was hoping this list might be able to offer some 
> help/advice/suggestions/opinions about feasibility for something I want to 
> implement, namely converting Postgres constraints into PHP logic.  Here's the 
> context and explanation:
> 
> I work on a PHP web app using Postgres.  When possible, we try to build as 
> much logic as possible directly into the DB.  The app already automatically 
> reads NOT NULL and foreign key constraints from the DB, and enforces them 
> through the UI thus preventing people from getting ugly database errors.  It 
> doesn't do that with check constraints and table constraints though, which 
> means we either end up duplicating the constraint logic in PHP, or else 
> sometimes get lazy/expedient and only put the constraint into PHP.  Obviously 
> neither of those is ideal.
> 
> What would be ideal is for the app to handle those constraints automatically. 
>  It looks like I can pull them out (as SQL) from 
> information_schema.check_constraints, with the remaining issue being how to 
> make them usable in PHP.
> 
> I'm wondering if anyone has done this already, or if there is some kind of 
> library available for this purpose?
> 
> If not, and absent any better suggestions,

You could name the check constraints, catch the errors and use a client-side 
mapping between constraint name and a friendly error message for display in the 
web interface.
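
For example (names invented), with a deliberately descriptive constraint name:

  ALTER TABLE clients
      ADD CONSTRAINT clients_age_must_be_positive CHECK (age > 0);

A violation reports the constraint name back to the client, so the PHP side 
only needs a lookup table from constraint name to friendly message.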

You could implement the checks in PHP inside the database, with PL/php. 
https://public.commandprompt.com/projects/plphp/wiki

You could look at one of the existing SQL parsers implemented in PHP, and use 
those to parse the constraint to a tree from which you could easily pull PHP.

I'd go for that first one, if possible. Robust, and zero overhead in the happy 
path.

> I'm looking at trying to parse/search/replace.  This might well be imperfect, 
> and error-prone.  But if I can get something that at least works in a lot of 
> cases, that would help a lot.  So as a simple example, converting from

Cheers,
  Steve



Re: [GENERAL] Thoughts on "Love Your Database"

2016-05-20 Thread Steve Atkins

> On May 20, 2016, at 1:43 PM, Guyren Howe  wrote:
> 
> On May 20, 2016, at 13:38 , Pierre Chevalier Géologue 
>  wrote:
>> 
>> Le 04/05/2016 18:29, Szymon Lipiński a écrit :
>>> On the other hand, when I was trying to store all my logic in a
>>> database, there was just one thing that made me hate it. Testing.
>>> Testing the procedures inside the database was not easy, not funny, and
>>> too much time consuming.
>> 
>> Yes, very good point.
> 
> Are there any best practices or tricks to make this easier?

In-database unit tests help. pgTap is a decent framework
for building that sort of test-suite in a way that'll play nice with
reporting and CI tools.

http://pgtap.org

Cheers,
  Steve





Re: [GENERAL] Beta testers for database development tool wanted

2016-05-11 Thread Steve Atkins

> On May 11, 2016, at 11:24 PM, Martijn Tonies (Upscene Productions) 
>  wrote:
> 
> Hello everyone,
>  
> I’ll just get at it right away --
>  
> We’re developing a database development tool called Database Workbench, it 
> currently supports MySQL, InterBase, Firebird, Oracle, SQL Server, NexusDB 
> and SQL Anywhere (see http://www.upscene.com/database_workbench/ )

Windows only, judging from the screenshots?

Cheers,
  Steve

>  
> We’re adding PostgreSQL support and the first beta is ready for testing
>  
> We would like to have people who:
> - would use this product on a daily basis, like they're using any other 
> PostgreSQL tool (eg PgAdmin) now
> - work with larger databases, both data volume and meta data object count
> - are able to report bugs in a (reasonable) detailed manner
> - are able to discuss new features or enhancements
> - are able to regularly download updates and use them
> - don’t mind being put on a private e-mail list to report issues
> 
> Limitations:
> - stored function overloading currently not supported
> - version 9.1 and up supported
>  
>  
> If anyone of you is interested is testing this tool, with a free license for 
> the PostgreSQL module and a second module of your choice, drop me an e-mail 
> at m.tonies @ upscene.com
>  
>  
> With regards,
> 
> Martijn Tonies
> Upscene Productions
> http://www.upscene.com





Re: [GENERAL] CREATE EXTENSION without superuser access

2016-04-28 Thread Steve Atkins

> On Apr 28, 2016, at 6:44 AM, Vik Fearing  wrote:
> 
> On 04/27/2016 10:22 PM, Steve Atkins wrote:
>> Is there any way to install an extension either from a SQL connection or 
>> from a user-defined directory instead of .../extensions?
>> 
>> (And if not, is there a TODO here?)
> 
> There is actually a TODON'T here. Search the pgsql-hackers archives for
> some very, very long reading about "extension templates".
> 
> This was (I think) the last thread about it:
> www.postgresql.org/message-id/flat/m2bo5hfiqb@2ndquadrant.fr

That's a very long thread. OK, I'll give up on using the extension
infrastructure.

Time to either fork pg_partman or write a little perl script that converts
extensions to not-extensions, I guess.

Cheers,
  Steve





Re: [GENERAL] CREATE EXTENSION without superuser access

2016-04-27 Thread Steve Atkins

> On Apr 27, 2016, at 3:47 PM, Adrian Klaver  wrote:
> 
> On 04/27/2016 03:30 PM, Steve Atkins wrote:
>> 
>>>> 
>>>> Is there any way to install an extension either from a SQL connection or 
>>>> from a user-defined directory instead of .../extensions?
>>> 
>>> Have not tried it, but you might want to take a look at:
>>> 
>>> http://www.postgresql.org/docs/9.5/interactive/extend-extensions.html
>>> 
>>> A control file can set the following parameters:
>>> 
>>> directory (string)
>>> 
>>>The directory containing the extension's SQL script file(s). Unless an 
>>> absolute path is given, the name is relative to the installation's SHAREDIR 
>>> directory. The default behavior is equivalent to specifying directory = 
>>> 'extension'.
>> 
>> That's a useful feature, for sure, but I think the control file itself still 
>> needs to be in the .../extensions directory.
> 
> Yeah, that would be a problem, I needed to read a couple of paragraphs up 
> from the above:(
> 
> Got to thinking, when you say root do you really mean root or the user the 
> database cluster is initdb'ed as?

Neither - I don't have write access to the postgresql extensions directory.

The use case for me is distributing an enterprise app that talks to a database 
which the people who use the app don't have administrative access to. The 
admins who install and run the database don't install extensions (they're not 
responsive, they just don't trust third party extensions, ...). For most of my 
users the database would be running on a machine they have filesystem access 
to, so being able to point to SQL scripts in another directory would be enough, 
but in a few cases it's running on a separate system and they only have access 
via port 5432.

Any solution that didn't require filesystem access at all would probably be 
really convenient for people using managed PostgreSQL services too.

Cheers,
  Steve





Re: [GENERAL] CREATE EXTENSION without superuser access

2016-04-27 Thread Steve Atkins

> On Apr 27, 2016, at 2:47 PM, Adrian Klaver  wrote:
> 
> On 04/27/2016 01:22 PM, Steve Atkins wrote:
>> I have an app that would benefit from being able to use pg_partman rather 
>> than doing its own ad-hoc partition management.
>> 
>> Unfortunately, some of the places where the app needs to run don't have root 
>> access to the database server filesystem, so I can't install the extension 
>> in the postgresql extensions directory.
>> 
>> I could get all the pg_partman functionality by modifying the extension SQL 
>> script to remove the extension functionality and installing the functions in 
>> it manually, but then I can't take advantage of the extension features for 
>> backups, config tables, upgrades and so on - and it's probably not going to 
>> interact well if someone does a "create extension pg_partman" in the 
>> database.
>> 
>> Is there any way to install an extension either from a SQL connection or 
>> from a user-defined directory instead of .../extensions?
> 
> Have not tried it, but you might want to take a look at:
> 
> http://www.postgresql.org/docs/9.5/interactive/extend-extensions.html
> 
> A control file can set the following parameters:
> 
> directory (string)
> 
>The directory containing the extension's SQL script file(s). Unless an 
> absolute path is given, the name is relative to the installation's SHAREDIR 
> directory. The default behavior is equivalent to specifying directory = 
> 'extension'.

That's a useful feature, for sure, but I think the control file itself still 
needs to be in the .../extensions directory.
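For illustration, even a control file that points elsewhere for its scripts
(version and paths hypothetical) still has to live in SHAREDIR/extension
itself:

    # SHAREDIR/extension/pg_partman.control
    default_version = '1.0.0'
    relocatable = false
    directory = '/opt/myapp/pg_partman/sql'

So the directory parameter only moves the SQL scripts, not the control file.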

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] CREATE EXTENSION without superuser access

2016-04-27 Thread Steve Atkins
I have an app that would benefit from being able to use pg_partman rather than 
doing its own ad-hoc partition management.

Unfortunately, some of the places where the app needs to run don't have root 
access to the database server filesystem, so I can't install the extension in 
the postgresql extensions directory.

I could get all the pg_partman functionality by modifying the extension SQL 
script to remove the extension functionality and installing the functions in it 
manually, but then I can't take advantage of the extension features for 
backups, config tables, upgrades and so on - and it's probably not going to 
interact well if someone does a "create extension pg_partman" in the database.

Is there any way to install an extension either from a SQL connection or from a 
user-defined directory instead of .../extensions?

(And if not, is there a TODO here?)

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Moving Specific Data Across Schemas Including FKs

2015-04-23 Thread Steve Atkins

On Apr 23, 2015, at 10:09 AM, Cory Tucker  wrote:

> I have the need to move a specific set of data from one schema to another.  
> These schemas are on the same database instance and have all of the same 
> relations defined.  The SQL to copy data from one table is relatively 
> straightforward:
> 
> INSERT INTO schema_b.my_table 
> SELECT * FROM schema_a.my_table WHERE ...

Would ALTER TABLE ... SET SCHEMA do what you need? A schema is mostly just a 
name space, so there's no need to create new tables or copy data around.
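For the example above that's just:

    alter table schema_a.my_table set schema schema_b;

Indexes, constraints, and sequences owned by the table's columns move with
it, which takes care of the BIGSERIAL id columns too.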

Cheers,
  Steve

> 
> What I am trying to figure out is that if I also have other relations that 
> have foreign keys into the data I am moving, how would I also move the data 
> from those relations and maintain the FK integrity?  
> 
> The tables are setup to use BIGSERIAL values for the id column which is the 
> primary key, and the foreign keys reference these id columns.  Ideally each 
> schema would use its own serial for the ID values, but I'm open to clever 
> alternatives.
> 
> I am using PG 9.3.5 (Amazon RDS), but with an option to move to 9.4 should 
> the situation call for it.
> 
> thanks
> --Cory



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql Development Options

2015-04-05 Thread Steve Atkins

On Apr 5, 2015, at 1:21 PM, Ray Madigan  wrote:

> I have been using postgresql in java off and on for many years.   I now have 
> an assignment where I have to build a very straightforward networked 
> application that needs to be able to insert Array data types from a windows 
> environment to a remote Linux Postgresql database.  
> 
> My first attempt was to use Qt.  There is a postgresql driver and I found out 
> the hard way that it doesn't support the Array data type.  I need a small ui 
> so the user can set options for the application.   My question is, are there 
> other UI options that I can use to develop this application.

The Qt database driver is not great for general use. Where it shines is when 
you want to do simple CRUD queries and to have them connected to widgets with 
minimal work. It should support arrays, though, with a little data 
transformation.

If you're looking to use C++ then Qt is an excellent framework for a GUI app - 
one that you won't beat for cross-platform work - but you might consider 
whether using libpqxx or libpq to connect to the database might suit your needs 
better.

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] The case of PostgreSQL on NFS Server (II)

2015-04-02 Thread Steve Atkins

On Apr 2, 2015, at 5:09 PM, Octavi Fors  wrote:

> 
> And second, because I need the database to be accessible from two computers 
> in the same LAN.

If you do this, you will destroy your database[1].

Why not have the database running on one machine, all the time, potentially 
with a real disk subsystem then just access it from wherever on the LAN you 
need to? Postgresql is a client-server database, and you can access it over the 
network as easily as you can from the machine it's running on.

Cheers,
  Steve

[1] Almost certainly, unless you make absolutely sure postgresql is only 
started on one machine at a time, amongst other things.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Longest prefix matching CTE

2015-02-24 Thread Steve Atkins

On Feb 24, 2015, at 3:50 PM, Tim Smith  wrote:

> 
> 
> The goal being to match the longest prefix given a full phone number, e.g.
> 
> 
> 61234567890  would match "australia proper 61"
> whilst
> 61134567890 would match "Australia premium 6113"
> and
> 61894321010 would match "Australia - Sydney 61893"
> 
> I know the answer involves Postgres CTE, but I haven't used CTEs much
> yet... let alone in complex queries such as this.
> 
> Thanking you all in advance for your kind help.

There's probably a CTE approach for it, but you might want to look
at https://github.com/dimitri/prefix too - it's an extension that's designed
specifically for longest prefix matching, and that uses gist indexes to
do it efficiently.
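If installing an extension isn't an option, a plain query works too for
modest table sizes (assuming a prefixes table with prefix and name columns
holding rows like the ones above):

    select name
      from prefixes
     where '61894321010' like prefix || '%'
     order by length(prefix) desc   -- longest matching prefix wins
     limit 1;

It can't use a btree index for that, though, which is why the extension and
its gist indexes exist.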

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 4B row limit for CLOB tables

2015-01-29 Thread Steve Atkins

On Jan 29, 2015, at 9:53 AM, Roger Pack  wrote:

> On 1/29/15, Roger Pack  wrote:
>> Hello.  I see on this page a mention of basically a 4B row limit for
>> tables that have BLOB's
> 
> Oops I meant for BYTEA or TEXT columns, but it's possible the
> reasoning is the same...

It only applies to large objects, not bytea or text.

>> https://wiki.postgresql.org/wiki/BinaryFilesInDB

Some of that looks incorrect or out of date. (e.g. large objects can be a lot
bigger than 2GB in 9.3+).


>> 
>> Is this fact mentioned in the documentation anywhere? Is there an
>> official source for this? (If not, maybe consider this a feature
>> request to mention it in the documentation on BLOB).
>> Cheers and thanks.
>> -roger

I'm not sure whether it's mentioned explicitly, but large objects are
referenced by an OID, which is a 32 bit value (and a global resource).

If you had 4B BLOBs, though, running out of OIDs would probably be
the least of your worries.

Cheers,
  Steve


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Blocking access by remote users for a specific time period

2014-12-14 Thread Steve Atkins

On Dec 13, 2014, at 10:38 PM, Michael Nolan  wrote:

> Yeah, a cron job to swap pg_hba.conf files is the best solution I've come up 
> with so far.  It's not one web app, it's closer to two dozen of them, on 
> multiple sites.   

If they use persistent connections you'll also have to kill existing 
connections after you've prevented new connections via pg_hba.conf

This ...

psql -q -c 'select application_name as "Client", procpid as "PID", 
pg_terminate_backend(procpid) as "Disconnected" from pg_stat_activity where 
procpid <> pg_backend_pid()' database_name

... will kill all connections to the server. You can use application_name, 
client_addr, datname or usename to be more selective about who lives and who 
dies.
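On 9.2 and later, where pg_stat_activity's procpid column was renamed to pid, 
the equivalent is:

psql -q -c 'select application_name as "Client", pid as "PID", 
pg_terminate_backend(pid) as "Disconnected" from pg_stat_activity where pid <> 
pg_backend_pid()' database_name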

Cheers,
  Steve

> --
> Mike Nolan
> 
> On Sat, Dec 13, 2014 at 11:10 PM, Adrian Klaver  
> wrote:
> On 12/13/2014 08:13 PM, Michael Nolan wrote:
> I have several web apps that access our Postgresql database that I'd
> like to lock out of the database for about an hour during a weekly
> maintenance interval. (There are some internal users that do not get
> locked out, because they're running the maintenance tasks.)
> 
> There are no time-of-day access limitation parameters in the pg_hba.conf
> file, are there any simple ways to do this?
> 
> Use a cron job that at beginning of period swaps out the pg_hba.conf with one 
> that denies access, reloads server and then at end of time period reverse 
> procedure ?
> 
> --
> Mike Nolan
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] to_char(timestamp, format) is changing the year!

2014-11-30 Thread Steve Atkins

On Nov 30, 2014, at 1:05 PM, Stephen Woodbridge  wrote:

> Hi,
> 
> I am have a problem when I format a timestamp in that it is changing the 
> year. This can't be right, so either I don't understand

You're using "IYYY" which is the "ISO year", which is based on Mondays or 
Thursdays or something equally useless. You probably want "" instead.
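You can see it with the value from your report:

    select to_char('2014-12-31 00:00:00'::timestamp,
                   'IYYY-MM-DD HH24:MI:SS');  -- 2015-12-31 00:00:00
    select to_char('2014-12-31 00:00:00'::timestamp,
                   'YYYY-MM-DD HH24:MI:SS');  -- 2014-12-31 00:00:00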

Cheers,
  Steve

> or I have found a nasty corner case bug.
> 
> This does not happen on all dates
> 
> select '2014-12-31 00:00:00'::timestamp without time zone,
>   to_char('2014-12-31 00:00:00'::timestamp without time zone, 'IYYY-MM-DD 
> HH24:MI:SS');
> 
> "2014-12-31 00:00:00";"2015-12-31 00:00:00"
> 
> It appears that this also happens for all timestamps after "2014-12-28 
> 23:59:59" to the end of the year and then "2015-01-01 00:00:00" is ok again.
> 
> I have found this on 9.2 and 9.3.
> 
> "PostgreSQL 9.2.9 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 
> 4.6.3-1ubuntu5) 4.6.3, 64-bit"
> 
> "PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 
> 4.6.3-1ubuntu5) 4.6.3, 64-bit"
> 
> Any thoughts on how to work around this?
> 
> Thanks,
>  -Steve
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Comparing results of regexp_matches

2014-11-16 Thread Steve Atkins

On Nov 16, 2014, at 3:52 PM, Seamus Abshere  wrote:

> hi,
> 
> I want to check if two similar-looking addresses have the same numbered
> street, like 20th versus 21st.
> 
>2033 21st Ave S
>2033 20th Ave S (they're different)
> 
> I get an error:
> 
># select regexp_matches('2033 21st Ave S', '\m(\d+(?:st|th))\M') =
>regexp_matches('2033 20th Ave S', '\m(\d+(?:st|th))\M');
>ERROR:  functions and operators can take at most one set argument
> 
> I've tried `()[1] == ()[1]`, etc. but the only thing that works is
> making it into 2 subqueries:
> 
># select (select * from regexp_matches('2033 21st Ave S',
>'\m(\d+(?:st|th))\M')) = (select * from regexp_matches('2033 20th
>Ave S', '\m(\d+(?:st|th))\M'));
> ?column?
>--
> f
>(1 row)
> 
> Is there a more elegant way to compare the results of
> `regexp_matches()`?

Probably not - that's the documented way to force regexp_matches() to return a 
single row, whether it matches or not.

But I think you want to use substring(), rather than regexp_matches(), eg:

select substring('2033 21st Ave S' from '\m(\d+(?:st|th))\M') = 
substring('2033 20th Ave S' from '\m(\d+(?:st|th))\M');

substring() will return the first capturing group, if there is one, or the 
whole match otherwise.

Given that the whole pattern you're using here, other than some zero-width 
assertions, is a capturing group the result is the same either way. You could 
rewrite it without capturing and get the same result:

select substring('2033 21st Ave S' from '\m\d+(?:st|th)\M') = 
substring('2033 20th Ave S' from '\m\d+(?:st|th)\M');

Cheers,
  Steve




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] installing on mac air development machine

2014-10-02 Thread Steve Atkins

On Oct 2, 2014, at 8:04 PM, John R Pierce  wrote:

> On 10/2/2014 4:37 PM, Adrian Klaver wrote:
>> On 10/02/2014 03:50 PM, john.tiger wrote:
>>> we've always installed on linux so need help with a new mac air running
>>> latest osx
>>> 
>>> in the instructions it shows several methods:
>>> 1) enterprisedb (but this does not look open source ?)
>> 
>> It is just the community version of Postgres behind a graphical installer, 
>> so yes it is open source. 
> 
> postgres is of course open source.  the enterprisedb installer I'm less sure 
> of, but its free to use.
> 
> if you just need postgres running while you're doing software development, 
> the postgresql.app version may be the simplest to use.   you run it on the 
> desktop and postgres is running.  close it and its not.  your user id owns 
> the pgdata and the process, so you don't have to jump through sudo hoops to 
> edit the config files.
> 
> http://postgresapp.com/
> 
> (Caveat:  I don't own a mac)

I do, and use postgres.app to develop against - and you're right. postgres.app 
is a trivial install, and it works beautifully for development using 
postgresql. It isn't really a desktop app, it's a tiny GUI controller that 
lives in your menu bar and controls a fairly standard postgresql installation 
under the covers. It can start up and shut down as you log in and log out, or 
you can start and stop it manually.

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL Portable

2014-09-10 Thread Steve Atkins

On Sep 10, 2014, at 2:00 PM, Daniel Begin  wrote:

> First, I am a Newbie regarding PostgreSQL …
>  
> I just started to look at PostgreSQL to implement a large GIS DB (1Tb).  The 
> data must reside in an external disk with eSATA connection and may be moved 
> to different locations (and Windows desktops/laptops). I was looking to 
> install PostgreSQL and PostGIS extensions on each PC (setting-up the proper 
> PGDATA directory to the external disk) until I read about PostgreSQL and 
> PgAdmin Portable …
>  
> http://sourceforge.net/projects/pgadminportable/
> http://sourceforge.net/projects/postgresqlportable/


>  Is that a viable alternative considering the expected size of the DB? Any 
> comments or proposal would be appreciated .

Adding postgis to that, if it's not already included, might take some work. Not 
impossible, but you'd be creating unneeded work for yourself.

The external disk isn't going to be blindingly fast, however you use it. For 
Windows in particular, it's probably going to be more conservative in caching 
the external drive than it would an internal one. Any large or unindexed 
queries are likely to be a bit painful.

I do use an external drive for some work, though, and it's usable. I have all 
of postgresql and the tools I use installed on the drive, with nothing for that 
instance installed on my laptop. I just have the external drive's bin directory 
early in my PATH, so I can plug in the drive and do pg_ctl start, and it all 
works. That's on a mac, I'm sure you could do the same with Windows.

Cheers,
  Steve

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Async IO HTTP server frontend for PostgreSQL

2014-09-10 Thread Steve Atkins

On Sep 10, 2014, at 12:16 AM, Dmitriy Igrishin  wrote:

> Hello, David
> 
> 2014-09-10 4:31 GMT+04:00 David Boreham :
> Hi Dmitriy, are you able to say a little about what's driving your quest for 
> async http-to-pg ?
> I'm curious as to the motivations, and whether they match up with some of my 
> own reasons for wanting to use low-thread-count solutions.
> For many web projects I consider Postgres as a development platform. Thus,
> I prefer to keep the business logic (data integrity trigger functions and
> API functions) in the database. Because of nature of the Web, many concurrent
> clients can request a site and I want to serve maximum possible of them with
> minimal overhead. Also I want to avoid a complex solutions. So, I believe that
> with asynchronous solution it's possible to *stream* the data from the 
> database
> to the maximum number of clients (which possible can request my site over a
> slow connection).

That's going to require you to have one database connection open for each
client. If the client is over a slow connection it'll keep the database 
connection
open far longer than is needed, (compared to the usual "pull data from the
database as fast as the disks will go, then spoonfeed it out to the slow client"
approach). Requiring a live database backend for every open client connection
doesn't seem like a good idea if you're supporting many slow concurrent clients.

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres 9.1 issues running data directory from VMware shared folder

2014-08-26 Thread Steve Atkins

On Aug 26, 2014, at 3:08 PM, Arze, Cesar  wrote:
> 
> I probably should be posting to the VMware mailing list with this question 
> but I wanted to see if anyone had any insight or suggestions here. I’ve seen 
> many similar issues but none of the solutions proposed there worked for me.

This might not be what you're seeing, but there was a hideous bug in the shared 
folder (hgfs) driver for linux guest OSes that'll silently corrupt your 
filesystem if it's accessed via more than one filehandle (e.g. multiple opens, 
multiple processes, ...).

If you're using vmware tools bundled with workstation 10.0.1 or fusion 6.0.2, 
you have that bug and cannot safely use hgfs mounts for any files, let alone 
postgresql. (There was a different bug, with similar results, for earlier 
versions too, including at least fusion 5.0.1). VMWare claim it's fixed in the 
tools bundled with 10.0.2 / 6.0.3 (I've not tested it). If you're not running 
the very latest vmware, upgrade to it and install the latest tools (or avoid 
using hgfs).

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] change the value of "unix_socket_directories" , must used "-h /xx/xx" to use the Unix domain socket

2014-08-16 Thread Steve Atkins

On Aug 16, 2014, at 9:01 AM, Nick Guenther  wrote:

> 
> 
> On August 16, 2014 11:41:02 AM EDT, lin  wrote:
>> Hi all,
>> I change the value of "unix_socket_directories" in postgresql.conf , 
>> then restart the database, but it cannot connect the database used like
>> this 
>> "psql -d postgres -p 5432" ,  it must given the parameter " -h /xx/xx"
>> to use the Unix domain socket。
>>   how to fix   this issue ?
> 
> I'll start by saying that your test case is very clear, and thank you for it. 
> I am unsure what your goal is, however. I assume you are trying to set up 
> parallel postgres processes, for debugging. I've done this recently, for that 
> reason.
> 
> First thing to point out is that you need only one of -h and -p.  They are 
> redundant options, because you only connect to postgres either over TCP (-p) 
> or with a unix domain socket (-h).

Not really. In the case of a TCP connection you need -h for the hostname and -p 
for the port. For a unix socket connection you use -h to specify the directory 
the unix socket is in, and -p is used to generate the name of the socket within 
that directory. If you omit one or both then the compiled-in defaults will be 
used, but it still uses both values to connect.
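For example, with unix_socket_directories = '/var/run/pg' (hypothetical) and 
the default port, the server listens on the socket file 
/var/run/pg/.s.PGSQL.5432, and you'd connect with:

    psql -h /var/run/pg -p 5432 -d postgres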

> 
> Second, what you're seeing is necessary. If you change the default, then psql 
> doesn't know where to look. However, you can recover the old behaviour with 
> shell tricks:
> $ alias psql='psql -h /xx/xx'
> $ psql -d postgres

Setting environment variables to point to your preferred instance will also 
work - and it'll work with any client that uses libpq (which is probably almost 
everything that's not java).

Cheers,
  Steve

> 
> (Personally, I wrote a short wrapper script called "client.sh" which 
> depth-first searches for a postgres db directory and the runs 'psql -h' with 
> the first one it finds; equally well you could have this script install an 
> alias)
> 
> Are you perhaps confused about what a unix domain socket is? Why are you 
> changing it? This is a decent description of it: 
> http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man4/unix.4
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] change the value of "unix_socket_directories" , must used "-h /xx/xx" to use the Unix domain socket

2014-08-16 Thread Steve Atkins

On Aug 16, 2014, at 8:49 AM, John R Pierce  wrote:

> On 8/16/2014 8:41 AM, lin wrote:
>> I change the value of "unix_socket_directories" in postgresql.conf, then 
>> restart the database, but it cannot connect to the database used like this:
>> "psql -d postgres -p 5432"; it must be given the parameter "-h /xx/xx" to use 
>> the Unix domain socket.
>> How to fix this issue?
> 
> the client has no access to postgresql.conf, it has no idea you changed it. 
> the default value is baked into libpq.so at compile time.

You might find the environment variable PGHOST useful.
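For example (socket directory hypothetical):

    $ export PGHOST=/var/run/pg
    $ psql -d postgres

and psql - or anything else built on libpq - will look for the socket in 
/var/run/pg without needing -h each time.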

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Copying a database.

2014-07-20 Thread Steve Atkins

On Jul 20, 2014, at 11:09 AM, maillis...@gmail.com wrote:

> I send a nightly dump of my production database to a development server. A 
> script drops the existing development database and replaces it with the 
> current production copy. 
> 
> Each dev uses her own copy of the database. Is there a way to copy the 
> current development database to a differently named db on the same machine, 
> including the data,  without using the sql dump? 

Look at CREATE DATABASE developer_db WITH TEMPLATE production_copy;

createdb with the --template option is a convenient way to do that from the 
commandline.
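e.g., using the names from your example:

    $ createdb --template=production_copy developer_db

One caveat: nothing else can be connected to the template database while it's 
being copied, or CREATE DATABASE will refuse to run.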

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Windows Installation User account - Correct database for us

2014-07-10 Thread Steve Atkins

On Jul 10, 2014, at 5:01 PM, Don Brown  wrote:

> Hello
> 
> We are writing a small application and we are trying to determine if 
> PostgreSQL is the right database for us. 
> 
> The application at this stage is only for a single user and commonly for 
> persons with little computer expertise. 
> 
> When the database is installed a postgreSQL user account is created which in 
> most cases will be the second user account on the PC.  The result of this is 
> the user now has to select the user account when ever the computer is 
> restarted.

I'd be surprised if that was required behaviour, but I'm not really familiar 
with current PostgreSQL packaging for Windows.

>  The programmer working on this application has suggested looking at an 
> embedded database, something like H2, as an alternative. 

Installation and management of PostgreSQL on Windows hits occasional minor 
roadbumps - not a problem for someone deploying and using PostgreSQL, but 
potentially a cause of support overhead if you're "invisibly" installing the 
database along with your app and not expecting your user to be aware of it.

If you need the power and flexibility of PostgreSQL, or want to allow your 
users direct database access and want to give them a good experience there, 
then the advantages probably outweigh the possible issues. If you don't need 
that then an embedded database might be a better match.

I'd look at SQLite as an embedded alternative, myself. It's a solid embedded 
SQL database. If you're entirely a Java shop then H2 might well be a good 
choice too.

> 
> I was hoping the members of this group may have some comments or suggestions 
> as to the direction we should look at. 
> 
> Thank you and appreciate any comments/suggestions 

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How is sorting work?

2014-05-30 Thread Steve Atkins

On May 30, 2014, at 5:13 PM, Quang Thoi  wrote:

> Any one knows how sorting works?
> 
> I am using postgresql 9.3 and runs on Linux machines.
> I see different sorting order for the same set of return data.
> 
> On linux machines, databases are configured the same.
> Database have encoding set to 'utf8' and locale='C'
>  
> query:
> Select host_id, host_name from host_view order by host_id
> 
> host_id   (character varying 128)
> host_name (character varying 255)
> 
> - On one linux machine (locate in U.S) , the query returned following:
> 
> host_id   host_name
>   --
> "00017486";"lnx2.xx.yy.com"
> "00017486";"lnx1.xx.yy.com"
> 
> - On a different linux machine (locate in India), the query returned 
> following:
> 
> host_id   host_name
>   --
> "00017486";"lnx1.xx.yy.com"
> "00017486";"lnx2.xx.yy.com"

Both results are correct. Your ORDER BY only constrains host_id; rows with
equal host_ids are returned in whatever order is convenient - which won't be
consistent from query to query or machine to machine.

You're only sorting by host_id. If you want to sort consistently by
host_id and host_name, so that the sort order is well defined for
identical host_ids, you'll want to do something like

select host_id, host_name from host_view order by host_id, host_name.

Cheers,
  Steve

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Linux vs FreeBSD

2014-04-09 Thread Steve Atkins

On Apr 9, 2014, at 1:33 PM, Scott Marlowe  wrote:

> On Wed, Apr 9, 2014 at 9:02 AM, Christofer C. Bell
>  wrote:
>> On Thu, Apr 3, 2014 at 11:03 PM, François Beausoleil
>>  wrote:
>>> Hi all!
>>> 
>>> Does PG perform that much better on FreeBSD? I have some performance issues 
>>> on a Ubuntu 12.04 which I'd like to resolve. iowait varies a lot, between 5 
>>> and 50%. Does FreeBSD better schedule I/O, which could alleviate some of 
>>> the issues, or not at all? I have no experience administering FreeBSD, but 
>>> I'm willing to learn if I'll get some performance enhancements out of the 
>>> switch.
>>> 
>>> $ uname -a
>>> Linux munn.ca.seevibes.com 3.2.0-58-generic #88-Ubuntu SMP Tue Dec 3 
>>> 17:37:58 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
>> 
>> From the research I've done online, this is likely your issue.  Kernel
>> 3.2.0 has some issues that directly and severely impact I/O wait times
>> for PostgreSQL.  The suggested fixes (that seem to have worked for
>> most people reporting in) are to revert the OS to Ubuntu Server 10.04
>> or to install one of the HWE (HardWare Enablement) kernels into the
>> 12.04 system (this would be one of the kernels from a later release of
>> Ubuntu provided in the 12.04 repositories).
> 
> 12.04 supports 3.8.0 directly. There's a site on putting 3.10.17 or so
> on it as well I found by googling for it. You need 3.10+ if you wanna
> play with bcache which is how I found it. But you don't need to jump
> through any hoops to get 3.8.0 on 12.04.4 LTS

Or wait 8 days for 14.04 LTS with kernel 3.13.

Cheers,
  Steve

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] COPY v. java performance comparison

2014-04-02 Thread Steve Atkins

On Apr 2, 2014, at 1:14 PM, Rob Sargent  wrote:

> On 04/02/2014 01:56 PM, Steve Atkins wrote:
>> On Apr 2, 2014, at 12:37 PM, Rob Sargent 
>>  wrote:
>> 
>>> 
>>> Impatience got the better of me and I killed the second COPY.  This time it 
>>> had done 54% of the file in 6.75 hours, extrapolating to roughly 12 hours 
>>> to do the whole thing.
>>> 
>> That seems rather painfully slow. How exactly are you doing the bulk load? 
>> Are you CPU limited or disk limited?
>> 
>> Have you read 
>> http://www.postgresql.org/docs/current/interactive/populate.html
>>  ?
>> 
>> Cheers,
>>   Steve
>> 
>> 
> The copy command was pretty vanilla:
> copy oldstyle from '/export/home/rob/share/testload/' with 
> delimiter ' ';
> I've been to that page, but (as I read them) none sticks out as a sure thing. 
>  I'm not so worried about the actual performance as I am with the relative 
> throughput (sixes so far).
> 
> I'm not cpu bound, but I confess I didn't look at io stats during the copy 
> runs. I just assume it was pegged :)

If each row is, say, 100 bytes including the per-row overhead (plausible for a 
uuid and a couple of strings), and you're inserting 800 rows a second, that's 
80k/second, which would be fairly pathetic.

On my laptop (which has an SSD, sure, but it's still a laptop) I can insert 40M 
rows of data that has a few integers and a few small strings in about 52 
seconds. And that's just using a simple, single-threaded load using psql to run 
copy from stdin, reading from the same disk as the DB is on, with no tuning of 
any parameters to speed up the load.

12 hours suggests there's something fairly badly wrong with what you're doing. 
I'd definitely look at the server logs, check system load and double check what 
you're actually running.

(Running the same thing on a tiny VM, one that shares a single RAID5 of 7200rpm 
drives with about 40 other VMs, takes a shade under two minutes, mostly CPU 
bound).

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] COPY v. java performance comparison

2014-04-02 Thread Steve Atkins

On Apr 2, 2014, at 12:37 PM, Rob Sargent  wrote:

> I'm playing with various data models to compare performance and 
> practicalities and not sure if I should be surprised by the numbers I'm 
> getting. I hope this report isn't too wishy-washy for reasoned comment.
> 
> One model says a genotype is defined as follows:
>  Table "public.oldstyle"
> +-+--+---+
> |   Column| Type | Modifiers |
> +-+--+---+
> | id  | uuid | not null  |
> | sample_name | text | not null  |
> | marker_name | text | not null  |
> | allele1 | character(1) |   |
> | allele2 | character(1) |   |
> +-+--+---+
> (0. id is a Primary Key)
> (1. Take what you will from the table name.)
> (2. I hadn't thought of "char" type at this point)
> (3. Ultimately the names would become ids, RI included)
> (4. We're loading 39 samples and ~950K markers)
> I loaded 37M+ records using jOOQ (batching every 1000 lines) in 12+ hours 
> (800+ records/sec).  Then I tried COPY and killed that after 11.25 hours when 
> I realised that I had added a non-unique index on the name fields after the 
> first load. By that point it was on line 28301887, so ~0.75 done, which 
> implies it would have taken ~15 hours to complete.
> 
> Would the overhead of the index likely explain this decrease in throughput?
> 
> Impatience got the better of me and I killed the second COPY.  This time it 
> had done 54% of the file in 6.75 hours, extrapolating to roughly 12 hours to 
> do the whole thing.

That seems rather painfully slow. How exactly are you doing the bulk load? Are 
you CPU limited or disk limited?

Have you read http://www.postgresql.org/docs/current/interactive/populate.html ?

Cheers,
  Steve





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] char array overhead

2014-03-31 Thread Steve Atkins

On Mar 31, 2014, at 8:08 AM, Rob Sargent  wrote:

> I'm angling toward using a very wide char(1) array.  Is the one-byte overhead 
> for char(n<126) applied to each element or to the array?

Each element - it's a variable-length type.

There's probably a better way of storing your data, but if you end up really 
needing a one-byte long character type, there is "char" (with the quotes).
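For example (table name hypothetical):

    select pg_column_size('a'::"char");   -- 1: no varlena header at all

    create table genotypes (alleles "char"[]);

The array value itself carries a fixed header, but each "char" element costs 
a single byte.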

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] puzzling perl DBI vs psql problem

2014-03-13 Thread Steve Atkins

On Mar 13, 2014, at 1:20 PM, Susan Cassidy 
 wrote:

> Yes, I am running with use strict.  The statement I pasted in is after perl 
> quoting, being written out by the same perl program.  I just take that 
> statement and paste it into the psql window.
> 
> DBI->trace showed nothing out of the ordinary.  It just shows the lines being 
> fetched that I am seeing in the web program, not the lines I get from psql.
> 
> Another odd thing is that it is apparently not logging statements from Perl, 
> only from psql.  I don't know why.  I thought I had it set up right to log to 
> syslog.  I've had good luck before with that on other installations.

That supports Tom’s theory that you’re not connecting to the database you 
think you are. There’s no difference from the database’s PoV between queries 
from psql and perl.

Conceivably connecting as different users could change the result too, though 
it seems unlikely here.

> Here are the log settings in postgresql.conf:
> 

[fairly normal settings deleted]

> 
> Could it have something to do with permissions on /var/log/postgresql?  It is 
> writeable by root only.  The perl program runs under apache.

No, that file is written by syslog, nothing to do with the client.

If the perl is running under apache, though, is it possible that it’s a 
long-running process that’s kept a transaction open and is seeing old data? 
Bounce apache and see if anything changes.

Cheers,
  Steve

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] puzzling perl DBI vs psql problem

2014-03-13 Thread Steve Atkins

On Mar 13, 2014, at 12:18 PM, Susan Cassidy 
 wrote:

> I have a weird issue that I can't figure out.
> 
> If I run the exact same query through psql as through perl DBI, I get 
> different results.  I get far fewer results with DBI than through the psql 
> command line.
> 
> Any ideas why that would be?
> 
> The query is:
> SELECT st.description, st.scene_thing_instance_id, 
>   st.scene_id, sc.description, st.scene_thing_id, s.description,
>   st.position_x, st.position_y, st.position_z,
>CASE
> when (st.description = 'absolute root'::text) then 1
> when (st.description ilike 'root%') then  2
> else 3
>END as s1, s.shape_name_id, sn.shape_name 
>  from scene_thing_instances st
> left join scene_things s on st.scene_thing_id = s.scene_thing_id
> left join scenes sc on st.scene_id = sc.scene_id
> left outer join shape_names sn on s.shape_name_id = sn.shape_name_id
> 
>   order by  s1, st.description
> 
> I get 14 rows back via psql, but I only get 5 rows back via DBI.  It's very 
> puzzling.
> 
> I copied and pasted the query from the program's log file, so I know I'm 
> doing the exact same query.  If it matters, I'm only seeing the rows with 
> 'root' in them via DBI, which the CASE statement refers to.  

How are you quoting the string in perl, and are you running with use strict?

My first thought would be that you’re not running the query you think you are - 
logging it at the postgresql side will let you check that (or if that’s not 
possible, DBI’s trace methods can help).

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] log_statement per table

2014-03-04 Thread Steve Atkins

On Mar 4, 2014, at 2:19 AM, David Janssens  wrote:

> Hello,
> I would like to log statements that modify a small subset of tables in a 
> databases.
> (not all tables, because the log files become too big in that case and I also 
> worry about performance)
> I currently use log_statement='mod' but I didn't find a way to limit this to 
> the set of tables I want.
> What is the best way to do this?

You might want to look at trigger-based audit logs.

Some example code, and a couple of useful packages:

http://www.postgresql.org/docs/current/static/plpgsql-trigger.html#PLPGSQL-TRIGGER-AUDIT-EXAMPLE
https://github.com/disqus/pg_audit
http://pgfoundry.org/projects/tablelog/
http://jimmyg.org/blog/2007/audit-tables-in-postgresql-with-debian.html
http://www.varlena.com/GeneralBits/104.php
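For a single table the shape is simple. A minimal sketch, logging row values 
as text (table and function names hypothetical; here auditing a table t):

    create table t_audit (
        changed_at timestamptz not null default now(),
        operation  text not null,
        old_row    text,
        new_row    text
    );

    create or replace function t_audit_fn() returns trigger as $$
    begin
        if tg_op = 'INSERT' then
            insert into t_audit (operation, new_row)
                values (tg_op, new::text);
        elsif tg_op = 'UPDATE' then
            insert into t_audit (operation, old_row, new_row)
                values (tg_op, old::text, new::text);
        else  -- DELETE
            insert into t_audit (operation, old_row)
                values (tg_op, old::text);
        end if;
        return null;  -- return value is ignored for AFTER triggers
    end;
    $$ language plpgsql;

    create trigger t_audit_trg
        after insert or update or delete on t
        for each row execute procedure t_audit_fn();

Only tables with the trigger attached get logged, so the volume stays under 
your control, and unlike log_statement it captures the row values rather than 
the statement text.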

Cheers,
  Steve

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replacing Ordinal Suffixes

2014-03-01 Thread Steve Atkins

On Mar 1, 2014, at 11:45 AM, George Weaver  wrote:

> 
>> Actually, I found that the double backslashes are required whether the E is
>> used or not:
> 
>> You must be using a relatively old PG version then.  Default behavior
>> since around 9.1 has been that backslashes aren't special except
>> in E'' strings.
> 
> Hmm.
> 
> development=# select version();
>  version
> ------------------------------------------------------------
> PostgreSQL 9.1.9, compiled by Visual C++ build 1500, 32-bit
> (1 row)

steve=# select version();

 version
--
 PostgreSQL 9.1.4 on x86_64-apple-darwin12.0.0, compiled by 
i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) 
(LLVM build 2336.11.00), 64-bit
(1 row)

steve=# select regexp_replace('300 North 126th Street', '(\d+)(?:st|nd|rd|th)', 
'\1', 'gi');
regexp_replace
--
 300 North 126 Street
(1 row)


I suspect you have standard_conforming_strings set to off (it defaults to on in 
9.1), possibly for backwards compatibility to support an app you’re using 
that’s not been updated, possibly accidentally.
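You can check what you have with:

    show standard_conforming_strings;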

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replacing Ordinal Suffixes

2014-02-28 Thread Steve Atkins

On Feb 28, 2014, at 2:43 PM, George Weaver  wrote:

> From: Steve Atkins
> 
> >Maybe this?
> 
> >select regexp_replace('300 North 126th Street', '(\d+)(?:st|nd|rd|th)', 
> >'\1', 'gi');
> 
> Hi Steve,
> 
> Thanks, but no luck:
> 
> select regexp_replace('300 North 126th Street', E'(\d+)(?:st|nd|rd|th)', 
> E'\1', 'gi');
>  regexp_replace
> 
>  300 North 126th Street
> 
> George

Those E’s you added completely change the meaning. If you want to
use E-style literals (and you probably don’t) you’ll need to double the
backslashes in all the strings.
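i.e. with E-style literals the working version would be:

select regexp_replace('300 North 126th Street', E'(\\d+)(?:st|nd|rd|th)', 
E'\\1', 'gi');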

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replacing Ordinal Suffixes

2014-02-28 Thread Steve Atkins

On Feb 28, 2014, at 2:04 PM, George Weaver  wrote:

> Hi list,
>  
> I'm stumped.
>  
> I am trying to use Regexp_Replace to replace ordinal suffixes in addresses 
> (eg have '126th' want '126') for comparison purposes.  So far no luck.
>  
> I have found that
>  
> SELECT REGEXP_REPLACE(LOWER('300 North 126th Street'), '(?!/D)(st|nd|rd|th)', 
> '', 'g');
>   regexp_replace
> --
>  300 nor 126 reet
>  
> but
>  
> SELECT REGEXP_REPLACE(LOWER('300 North 126th Street'), '(?=/D)(st|nd|rd|th)', 
> '', 'g');
>  regexp_replace
> 
>  300 north 126th street
>  
> I'm a novice with regular expressions and google hasn't helped much.
>  
> Any suggestions?

Maybe this?

select regexp_replace('300 North 126th Street', '(\d+)(?:st|nd|rd|th)', '\1', 
'gi');

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgsql and asciidoc output

2014-02-11 Thread Steve Atkins

On Feb 11, 2014, at 2:56 PM, Bruce Momjian  wrote:

> Someone suggested that 'asciidoc'
> (http://en.wikipedia.org/wiki/AsciiDoc) would be a good output format
> for psql, similar to the existing output formats of html, latex, and
> troff.
> 
> Would this be useful?

Less so than Markdown[1], IMO. (Or CSV or xlsx, come to that.)

There’s a long list of potentially useful output formats. How pluggable
is the output formatter?

Cheers,
  Steve

[1] Markdown tables aren’t quite as built-in as asciidoc’s, but I suspect
a lot more people use markdown.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tables ending with _copy appeared in the database

2014-02-05 Thread Steve Atkins

On Feb 5, 2014, at 12:50 PM, Eliott  wrote:

> Dear community,
> 
> I've just noticed that in one of our databases many duplicate tables had 
> appeared, all ending in _copy. These tables are owned by postgres, not the db 
> owner user, other than that they seem to be a replica from a specific time in 
> the past. I suspect that a failed backup might have cased these, but I am not 
> sure. I couldn't find any info on tables ending in _copy, and also I thought 
> backups run in a transaction so even if they create temporary tables, they 
> should disappear even when the backups fail.
> 
> Can you confirm that this was due to a failed backup?

It’s not.

It sounds like something that a client app would do, possibly as part of a 
(failed) data migration. What app is using the database?

> Is it safe to delete all _copy tables? Should I do any additional 
> housekeeping to clean this up?

Cheers,
  Steve

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PGSYSCONFDIR?

2014-01-14 Thread Steve Atkins

On Jan 14, 2014, at 10:58 AM, John Sutton  wrote:

> Hi there
> 
> Having spent about 2 hours trying to solve a simple problem, I think it might 
> be worthwhile to record my efforts.  Perhaps someone can point out how 
> extremely silly I have been… or is the documentation lacking?
> 
> My original question was: where is the system-wide psqlrc file located?

The default is a compile-time configuration option.

You can get that for your installation using "pg_config --sysconfdir"

The PGSYSCONFDIR environment variable can override it if it’s set. Like a lot 
of client configuration settings it’s not really handled by the client, but by 
libpq. That’s good; makes for a nice consistent UI. What’s less good is that it 
means that they’re documented in the libpq docs - 
http://www.postgresql.org/docs/current/static/libpq-envars.html
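
For example (path hypothetical):

    $ export PGSYSCONFDIR=$HOME/pgconf
    $ psql mydb    # reads $HOME/pgconf/psqlrc, if present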

> 
> Some material on the web suggests that this is ~postgres/.psqlrc but this not 
> true, this is just the postgres user’s user-specific config file.
> 
> I tried putting it alongside pg_hba.conf etc but that didn’t work.
> 
> The psqlrc.sample file contains the wording “Copy this to your sysconf 
> directory (typically /usr/local/pqsql/etc) …”  but that directory doesn’t 
> exist on either of my target systems!  (I’m using postgres 9.1 on Ubuntu and 
> Mac OS X.)
> 
> As a last resort (which surely shouldn’t be necessary) on the Ubuntu system I 
> did:
> 
> strings /usr/bin/psql | grep -i sysconf
> 
> $ENV{'PGSYSCONFDIR'} = '/etc/postgresql-common' if !$ENV{'PGSYSCONFDIR’};

On Ubuntu that's not really psql, it's a perl wrapper script that runs the 
real psql - and it looks like they’re overriding whatever the built-in default 
is in their wrapper.

> 
> So that’s where it needs to be: /etc/postgresql-common/psqlrc
> 
> I’ve still no clue for Mac OS X however, since the same trick only finds a 
> placeholder :( :
> 
> strings /Applications/Postgres.app/Contents/MacOS/bin/psql | grep -i sysconf

satsuke:shared (develop)$ pg_config --sysconfdir
/Applications/Postgres.app/Contents/MacOS/etc

:)

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Question about memory usage

2014-01-10 Thread Steve Atkins

On Jan 10, 2014, at 8:35 AM, Preston Hagar  wrote:

> tl;dr: Moved from 8.3 to 9.3 and are now getting out of memory errors despite 
> the server now having 32 GB instead of 4 GB of RAM and the workload and 
> number of clients remaining the same.
> 
> 
> Details:
> 
> We have been using Postgresql for some time internally with much success.  
> Recently, we completed a migration off of an older server running 8.3 to a 
> new server running 9.3.  The older server had 4GB of RAM, the new server has 
> 32 GB. 
> 
> For some reason, since migrating we are getting lots of "out of memory" and 
> "cannot allocate memory" errors on the new server when the server gets under 
> a decent load.  We have upped shmmax to 17179869184 and shmall to 4194304.  

What are the exact error messages you’re getting, and where are you seeing them?

> 
> We had originally copied our shared_buffers, work_mem, wal_buffers and other 
> similar settings from our old config, but after getting the memory errors 
> have tweaked them to the following:
> 
> shared_buffers= 7680MB  
> temp_buffers  = 12MB
> max_prepared_transactions = 0
> work_mem  = 80MB
> maintenance_work_mem  = 1GB 
> wal_buffers = 8MB 
> max_connections = 350 
> 
> The current settings seem to have helped, but we are still occasionally 
> getting the errors.
> 
> The weird thing is that our old server had 1/8th the RAM, was set to 
> max_connections = 600 and had the same clients connecting in the same way to 
> the same databases and we never saw any errors like this in the several years 
> we have been using it.
> 
> One issue I could see is that one of our main applications that connects to 
> the database, opens a connection on startup, holds it open the entire time it 
> is running, and doesn't close it until the app is closed.  In daily usage, 
> for much of our staff it is opened first thing in the morning and left open 
> all day (meaning the connection is held open for 8+ hours).  This was never 
> an issue with 8.3, but I know it isn't a "best practice" in general.

That’s probably not related to the problems you’re seeing - I have apps that 
hold a connection to the database open for years. As long as it doesn’t keep a 
transaction open for a long time, you’re fine.

> 
> We are working to update our application to be able to use pgbouncer with 
> transaction connections to try to alleviate the long held connections, but it 
> will take some time.

Using pgbouncer is probably a good idea - to reduce the number of concurrent 
connections, rather than the length of connections, though.

> 
> In the meantime, is there some other major difference or setting in 9.3 that 
> we should look out for that could be causing this?  Like I said, the same 
> database with the same load and number of clients has been running on a 8.3 
> install for years (pretty much since 2008 when 8.3 was released) with lesser 
> hardware with no issues.
> 
> Let me know if any other information would help out or if anyone has 
> suggestions of things to check.

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: postgresql.org inconsistent (Re: [GENERAL] PG replication across DataCenters)

2013-12-10 Thread Steve Atkins

On Dec 10, 2013, at 8:47 AM, Wolfgang Keller  wrote:

>> http://www.postgresql.org/docs/9.3/static/different-replication-solutions.html?
> 
>> Synchronous Multimaster Replication
> 
> *snip*
> 
>> PostgreSQL does not offer this type of replication (...)
> 
> Now I compare that statement with:
> 
> http://wiki.postgresql.org/wiki/Postgres-XC
> 
>> Project Overview
> 
> *snip*
> 
>> Features of PG-XC include:
> 
> *snip*
> 
>> 2. Synchronous multi-master configuration 
> 
> Seems to me that the editing process of the different parts of
> postgresql.org somewhat lacks transactional semantics.

Postgres-XC isn't PostgreSQL. Entirely different product.

Anyone can add pages to the wiki, and there's lots of information
there about things that aren't postgresql, Postgres-XC is just
one of those.

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] network protocol and compatibility question

2013-12-09 Thread Steve Atkins

On Dec 9, 2013, at 2:39 AM, Joek Hondius  wrote:

> Hi All,
> 
> We have been running our product on PostgreSQL for quite some time now. 
> Started out with 7.0
> 
> I have this situation where i want to migrate my (many) PostgreSQL 8.3 
> databases to a new server running 9.2
> In my case this is not a problem for schema and data.
> But i do not want to update the client's drivers if i don't need to right 
> now, but prefer to do that in future updates.
> 
> Reason is we are moving to new server infrastructure and want to go 9.2,
> but the  3rd party making the database driver for our legacy product has 
> issues.
> This will otherwise keep us from moving to better/supported version.
> (insert legacy headache here)
> 
> I found that the pgsql network protocol spec has not changed between 8.3 and 
> 9.2.

It's backwards compatible to at least 8.3.

> 
> Can i use my 8.3 clients on 9.2 servers since the protocol has not changed 
> and the database schema remains the same?
> I would think yes, i will test but it would be best if backed by some kind of 
> theorethical rationale.

The low level network protocol is compatible, but the system tables have 
changed. If your
database driver provides any sort of schema metadata to your app, or uses that 
itself,
then it may fail ungracefully. Without knowing more about your app and the 
database
access library it uses it's hard to say more than that.

> Any alternatives?
> Can i put pgBouncer 9.2 in front of a 8.3 server for example?

That won't have any effect on any issues you might see.

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [ADMIN] what's the efficient/safest way to convert database character set ?

2013-10-17 Thread Steve Atkins

On Oct 17, 2013, at 3:13 PM, "Huang, Suya"  wrote:

> Hi,
>  
> I’ve got a question of converting database from ascii to UTF-8, what’s the 
> best approach to do so if the database size is very large? Detailed procedure 
> or experience sharing are much appreciated!
>  

The answer to that depends on what you mean by "ascii".

If your current database uses SQL_ASCII encoding - that's not ascii. It could 
have anything in there, including any mix of encodings and there's been no 
enforcement of any encoding, so there's no way of knowing what they are. If 
you've had, for example, webapps that let people paste word documents into 
them, you potentially have different encodings used in different rows of the 
same table.

If your current data is like that then you're probably looking at doing some 
(manual) data cleanup to work out what encoding your data is really in, and 
converting it to something consistent rather than a simple migration from ascii 
to utf8.
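
A quick way to see what encodings your databases currently claim:

    select datname, pg_encoding_to_char(encoding) from pg_database;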

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Forms for entering data into postgresql

2013-10-11 Thread Steve Atkins

On Oct 11, 2013, at 8:57 AM, Bret Stern  
wrote:

> My interpretation of "Forms these days are written in HTML" means
> most interfaces are web front ends to the cloud. Not a GUI framework.


Yup.

But embedding an HTML renderer in your desktop app does allow you to
use HTML where it's appropriate - and it works really well for dynamically
generated forms and tabular output.

The IBM 3270 wasn't the crowning achievement of data entry technology.

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] oids on disk not in pg_class

2013-10-07 Thread Steve Atkins

On Oct 7, 2013, at 2:48 PM, Guy Rouillier  wrote:

> We have a fairly large (1 TB) database we put on all SSDs because of a very 
> high insert and update rate ().  As our business has grown, we've been 
> running into space constraints, so we went looking for files we might be able 
> to delete.
> 
> We found a large number (662 out of 1465 total) and size (219 GB) of files 
> in the data directory whose name does not correspond to an oid in the 
> pg_class system catalog table.  That amount of space would address our 
> current space constraint problems.  Some of these tables are recent (from 
> today), while others are quite old (large volume in August and May, with some 
> smaller ones as far back as February.

You need to be looking at pg_class.relfilenode, not pg_class.oid. They're often 
the same value, but often not.
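
To map between the two (filenode value hypothetical):

    -- which file backs a given relation?
    select pg_relation_filepath('my_table');

    -- which relation, if any, owns a given filenode?
    select relname from pg_class where pg_relation_filenode(oid) = 123456;

Bear in mind that relations over 1GB have extra segment files like 123456.1, 
and the _fsm and _vm fork files share the same base filenode, so not every 
filename will match a filenode exactly.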

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Download specific Postgres.App version

2013-09-18 Thread Steve Atkins

On Sep 18, 2013, at 7:40 AM, Gordon Ross  wrote:

> On the postgresapp.com you can only download the latest version of Postgres 
> for the Mac. Is it possible to download a specific version?

There are tagged builds on github - 
https://github.com/PostgresApp/PostgresApp/releases

Cheers,
  Steve

> 
> Thanks,
> 
> GTG
> -- 
> Gordon Ross
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Constraint exclusion and overlapping range checks

2013-09-07 Thread Steve Atkins

On Sep 7, 2013, at 7:20 AM, Tom Lane  wrote:

> Alban Hertroys  writes:
>> On Sep 7, 2013, at 6:54, Steve Atkins  wrote:
>>>> If I have a partitioned table that has some range constraints that look 
>>>> kinda like they're intended for constraint exclusion, but aren't quite 
>>>> non-overlapping, will that break anything?
> 
>> Next to that, putting data in the tables becomes ambiguous for records that 
>> match both constraints - in which table should the records go? That is 
>> something that you need to do programatically anyway, so with the knowledge 
>> of how to decide which records go where, you could also define your 
>> exclusion constraints to not be ambigous.
> 
>> I don't see any benefit of having ambiguous exclusion constraints - IMHO 
>> you're better off fixing them.

If the check constraints are there for reasons other than partitioning and
exclusion, there isn't anything to fix. But if the constraint exclusion code can
use those overlapping range constraints to reduce queries down to one or two
partitions, that seems like a fairly useful benefit.

> 
> I agree with that advice in principle; but if the true partitioning
> constraint is too complicated, you might defeat the planner's ability to
> prove that particular tables don't need to be scanned as a consequence of
> a particular WHERE clause.  The simple range constraints Steve showed
> should work fine with constraint exclusion.  The proofs are done
> separately for each sub-table, so the fact that the ranges overlap doesn't
> bother the planner.  We might in future have a more efficient partitioning
> method that does assume non-overlapping ranges ... but it's not there
> today.

Constraint exclusion is a global setting, so I was mostly concerned about the
planner misapplying range constraints that are there for other reasons and
breaking queries. That it doesn't - and can even usefully exploit those range
constraints to optimize queries - isn't really surprising, but is reassuring.
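
For anyone wanting to check the same thing, a quick sketch against the
jan/feb/mar children from the original question:

    SET constraint_exclusion = partition;  -- the default

    -- id = 1500 fits only the feb (100-2100) and mar (200-3100) ranges, so
    -- the plan should scan those two children and skip jan entirely.
    EXPLAIN SELECT * FROM foo WHERE id = 1500;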

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Constraint exclusion and overlapping range checks

2013-09-06 Thread Steve Atkins

On Sep 6, 2013, at 9:37 PM, François Beausoleil  wrote:

> 
> Le 2013-09-07 à 00:29, Steve Atkins a écrit :
> 
>> If I have a partitioned table that has some range constraints that look 
>> kinda like they're intended for constraint exclusion, but aren't quite 
>> non-overlapping, will that break anything?
>> 
>> e.g.
>> 
>> create table jan ( …, check(created >= '2013-01-01' and created < 
>> '2013-02-01'), check(id >=0 and id < 1100) ) inherits(foo);
>> create table feb ( …, check(created >= '2013-02-01' and created < 
>> '2013-03-01'), check(id >=100 and id < 2100) ) inherits(foo);
>> create table mar ( …, check(created >= '2013-03-01' and created < 
>> '2013-04-01'), check(id >=200 and id < 3100) ) inherits(foo);
>> 
>> Querying by created should be fine, and take advantage of constraint 
>> exclusion, but will querying by id work? And if it does work, will it take 
>> any advantage of those constraints at all, or just search all the child 
>> partitions?
> 
> I don't know, but I suspect a quick EXPLAIN ANALYZE will tell you, even with 
> empty tables.

Explain suggests it'll work fine and make good use of the constraints to prune
partitions from the plan. But the docs are pretty specific about overlapping
range constraints being a bad thing, so I'm wondering if there's potential for
problems.

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Constraint exclusion and overlapping range checks

2013-09-06 Thread Steve Atkins
If I have a partitioned table that has some range constraints that look kinda 
like they're intended for constraint exclusion, but aren't quite 
non-overlapping, will that break anything?

e.g.

  create table jan ( …, check(created >= '2013-01-01' and created < 
'2013-02-01'), check(id >=0 and id < 1100) ) inherits(foo);
  create table feb ( …, check(created >= '2013-02-01' and created < 
'2013-03-01'), check(id >=100 and id < 2100) ) inherits(foo);
  create table mar ( …, check(created >= '2013-03-01' and created < 
'2013-04-01'), check(id >=200 and id < 3100) ) inherits(foo);

Querying by created should be fine, and take advantage of constraint exclusion, 
but will querying by id work? And if it does work, will it take any advantage 
of those constraints at all, or just search all the child partitions?

Cheers,
  Steve




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] batch insertion

2013-08-24 Thread Steve Atkins

On Aug 24, 2013, at 5:15 PM, Korisk  wrote:

> Hi!
> I want quick insert into db a lot of data (in form of triplets). Data is 
> formed dynamical so "COPY" is not suitable.

COPY works just fine for dynamically generated data, and it's probably the 
right thing to use if you're bulk loading data (it's about as fast as you can 
get for a single threaded load).

Take a look at the PQputCopyData() and PQputCopyEnd() functions.
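
As a sketch of what the stream itself looks like - each PQputCopyData() call
carries one or more tab-separated lines like these, and PQputCopyEnd()
finishes the load (shown here in psql form, against the triplets table from
your example):

    COPY triplets FROM STDIN;
    1	1	1
    3	2	5
    4	5	5
    \.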

Cheers,
  Steve

> I tried batch insert like this:
> 
> insert into triplets values (1,1,1);
> insert into triplets values (1,1,1), (3,2,5), (4,5,5);
> ...
> insert into triplets values (1,1,1), (3,2,5), (4,5,5)  ;
> 
> The more triplets I use the quicker operation is.
> With preparation it looks like this:
> 
> res = PQprepare(conn, "qu", "insert into triplets values ($1::bigint, 
> $2::bigint, $3::float);",3, NULL);
> ...
> res = PQprepare(conn, "qu", "insert into triplets values ($1::bigint, 
> $2::bigint, $3::float), ($4::bigint, $5::bigint, $6::float), ($7::bigint, 
> $8::bigint, $9::float), ($10::bigint, $11::bigint, $12::float);",12, NULL);
> ...
> 
> The question:
> Is there any way to prepare query with any number of triplets without casting 
> such a long string?
> 
> Thank you.
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostrgreSQL Commercial restrictions?

2013-08-07 Thread Steve Atkins

On Aug 7, 2013, at 8:23 AM, Eliseo Viola  wrote:

> Hello.
> I have been reading the -http://opensource.org/licenses/postgresql- to know 
> if i can use PostgreSQL in a Privative Closed Commercial Software  (The worst 
> of the worst :P ). in the company where i work.
> Is there any restriction or limit to do it.?

There isn't, really. It's very liberally licensed, and you're free to 
distribute it.

However, as someone who writes "private closed commercial software" that uses 
postgresql as a backend I'd fairly strongly suggest that you at least provide 
end users the opportunity to use their own installation of postgresql if they 
want to. "Hiding" the existence of postgresql in your package doesn't magically 
make it as low maintenance as sqlite, so users will still know it's there and 
might want to point your app at their supported, tuned installation instead. 
(And may want to fire up psql to see what's in there - you'll get happier 
customers if you put connection info and a database schema diagram in your docs 
than if you try to hide your use of a database).

I no longer distribute postgresql bundled with the app at all. On platforms 
with decent package managers I just ship a package that relies on the 
OS-installed postgresql. For other environments I provide a package for the 
app, and a separate package with postgresql.

If you do decide to distribute postgresql with your app (by bundling the 
enterprisedb installer as part of a windows app installer, say) be very careful 
that it's configured not to clash with a user-installed copy of postgresql.

Cheers,
  Steve

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sharing data directories between machines

2013-08-06 Thread Steve Atkins

On Aug 6, 2013, at 10:45 AM, JD Wong  wrote:

> Hi all!
> 
> I have two servers, which share a large mounted drive.  I would like to share 
> postgres databases between them dynamically so that when one makes changes, 
> they are immediately available in the other.  
> 
> I tried moving the data directory over to the mounted drive, and pointing 
> both postgresql.confs to that one.  I was able to have both access the same 
> databases, but they can't share changes.  It's like they're running on two 
> separate data directories, even though show data_directory reports the same 
> path for each.

That'll likely damage your database, probably irrecoverably.

> How can I make them play nicely?

You can't do it by sharing the disk files, at all. The two instances will trash
each other's data.

If you want multiple database servers for redundancy, or you want to be able to 
offload read access to a second server, take a look at hot standby servers. 
http://www.postgresql.org/docs/9.2/static/high-availability.html

If you really want to be able to write to either database and have it 
replicated to the other one immediately, you should probably rethink what you 
need. It can be done (with multimaster replication) but it's almost never the 
right approach.

Cheers,
  Steve





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why are stored procedures looked on so negatively?

2013-07-25 Thread Steve Atkins

On Jul 25, 2013, at 1:44 AM, Some Developer  wrote:
>> 
> 
> When I was talking about improving speed I was talking about reducing load on 
> the app servers by putting more of the work load on the database server. I 
> know that it won't actually save CPU cycles (one of the machines has to do 
> it) but it will save load on the app servers. As I said above using the 
> asynchronous abilities of libpq helps keep the app servers serving requests 
> whilst the database gets on with its tasks.
> 

App servers don't tend to maintain much global state, so are almost perfectly 
parallelizable. If you run out of CPU there, drop another cheap box in the rack.

Database servers aren't. Once you top out a database server your main options 
are to replace it with a bigger box (increasingly expensive) or rearchitect the 
application (even more expensive).

I'll always put more work on the cheaply scalable app servers if I can reduce 
the load on the database. Moving code to the database server for reasons of CPU 
cost (as opposed to, say, data or business rule consistency) seems an odd 
approach.

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] dataset lock

2013-04-16 Thread Steve Atkins

On Apr 16, 2013, at 7:50 AM, Philipp Kraus  wrote:

> Hello,
> 
> I use a PG database on a HPC system (cluster). My processes get a dataset 
> from the database and change the row, each process is independend.
> My table shows something like: id, status, data
> 
> id = PK a unqiue number
> status a enum value which "open", "waiting", "working", "done"
> 
> So each process calls a SQL statement select * from where status = "waiting", 
> so the process should get the next waiting task, after the process
> gets the task, the status should be changed to "working", so no other process 
> shouldn't get the task. My processes are independed, so it can
> be, that 2 (or more) processes call the select statement at the same time and 
> get in this case equal tasks, so I need some locking. How can
> I do this with Postgres, that each row / task in my table is read / write by 
> one process. On threads I would create a mutex eg:
> 
> lock()
> row = select * from table where status = waiting
> update status = working from table where id = row.id
> unlock()
> 
> do something with row
> 
> Which is the best solution with postgres? should I create a procedure which 
> takes the next job, change it and returns the id, so each process
> calls "select getNextJob()" ?

"select for update" might be the answer to what you're asking for - it'll lock 
the rows matched until the end of the transaction, blocking any other select 
for update on the same rows.
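
A sketch of that pattern against the table you describe (calling it "tasks"
here; a worker that blocks and then comes back with no row, because another
process claimed it first, should simply retry):

    BEGIN;
    -- Grab and lock the next waiting task; concurrent workers block here.
    SELECT id FROM tasks
    WHERE status = 'waiting'
    ORDER BY id LIMIT 1
    FOR UPDATE;
    -- Mark the row returned above as taken (42 is a placeholder for that id).
    UPDATE tasks SET status = 'working' WHERE id = 42;
    COMMIT;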

If performance is important then you might want to look at some of the off the 
shelf queuing systems instead - PgQ or queue_classic, for instance.

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] crash proof for semi-embedded system

2013-04-11 Thread Steve Atkins

On Apr 11, 2013, at 5:11 AM, David Welton  wrote:

> Hi,
> 
> I'm going to be deploying Postgres in a semi-embedded system where end
> users might simply power the thing off from one moment to the next.
> Or the disk might start to go wonky, or any number of other problems.
> Because it's a standalone device, it may well run in an environment
> where we can't ship backups off of the machine.
> 
> I've been reading this:
> 
> http://www.postgresql.org/docs/9.2/static/continuous-archiving.html

You should take a look at http://www.postgresql.org/docs/current/static/wal.html
too.

> 
> And it looks pretty good.  It appears that, since I can't really back
> things up to a separate disk unit, it will "just work" unless the disk
> gets corrupted?  In other words, in the case of someone pulling the
> power plug, it ought to be able to get things up and running more or
> less automatically, correct?  Besides utilizing that, and keeping the
> fsync option set to true, what other steps can I take to make sure
> that data is not lost even in extraordinary circumstances?  Having to
> manually fix things up is acceptable in the use case we're planning
> for, even if it's clearly preferable to not have to intervene.

If you don't do anything extra, postgresql should survive power
being pulled, the disk being pulled and anything else you do, 
as long as the underlying filesystem isn't damaged in the process.
It writes, and commits, all changes to the WAL as they're made,
and uses that to replay changes at startup if needed. That's
all assuming that your storage layer doesn't lie about fsync -
which is something to check, especially on embedded hardware
(there's a tool to do that at the link above).

It does mean that the database might not be immediately
available at system startup, while it's recovering, so your app
should be able to deal with that.

If you only have a single storage device, that's about the best
you can do (though taking a periodic backup for disaster recovery
wouldn't be the worst idea in the world).

Cheers,
  Steve




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] When did this behavior change (and where else might it bite me)?

2013-03-18 Thread Steve Atkins

On Mar 18, 2013, at 9:49 AM, Jeff Amiel  wrote:

> In prepping for an upgrade to 9.2.3, I stumbled across this:
> 
> CREATE TABLE foo
> (
>   myint integer,
>   string1 text,
>   string2 text
> )
> WITH (
>   OIDS=FALSE
> );
> 
> insert into foo values (12345,'Y','N');
> 
>  select * from foo f where f.myint = 12345 or f.name='Y'
> 
> In 9.2.3, this returns:
> ERROR:  column f.name does not exist
> LINE 1:  select * from foo f where myint = 12345 or f.name='Y'
> 
> in 8.4.6 ,this returns no error (and gives me the row from the table)

That's (unintentionally) an attribute-style data type cast - f.name is the
same as name(f), and it tries to cast the composite value f to type "name" (an
internal-use string type).

Try "select foo from foo", "select name(foo::text) from foo" and "select 
name(foo) from foo" to see what's going on.

That was tightened up in 9.1, I think:

Casting

Disallow function-style and attribute-style data type casts for composite 
types (Tom Lane)

For example, disallow composite_value.text and text(composite_value). 
Unintentional uses of this syntax have frequently resulted in bug reports; 
although it was not a bug, it seems better to go back to rejecting such 
expressions. The CAST and :: syntaxes are still available for use when a cast of
an entire composite value is actually intended.

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Testing Technique when using a DB

2013-03-12 Thread Steve Atkins

On Mar 12, 2013, at 8:41 AM, Perry Smith  wrote:
> 
> 
> One choice would be to create the database, use it, and then drop it for each 
> test.  I would create the database from a template that already has data 
> taken from the production database (and probably trimmed down to a small 
> subset of it).  This requires some crafty dancing in the Rails set up since 
> it likes to just attach to a database and run but it could be done.  From 
> first blush, this sounds like it would be really slow but may be not. 

I do this. It's not blindingly fast, but plenty fast enough for automated 
testing as long as your tests aren't too fine-grained and your test database 
isn't too big. It takes no more than two or three seconds on my (slow, 
IO-starved) QA VMs.

By parameterizing the database name you can parallelize tests - each test 
creates its own copy of the template database, runs whatever it needs to run, 
then drops the database. That lets you hide a lot of the setup/teardown latency.
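
The per-test lifecycle is just a sketch like this, assuming a pre-built
template database called app_template (both names are hypothetical):

    -- CREATE DATABASE ... TEMPLATE needs the template to be free of other
    -- connections while the copy is made.
    CREATE DATABASE test_worker_1 TEMPLATE app_template;
    -- ... run the test against test_worker_1, then throw it away ...
    DROP DATABASE test_worker_1;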

> 
> The other choice would be to somehow copy the data to temporary tables before 
> the test run and then copy it back.  The advantage to this is it is not very 
> PostgreSQL specific.  Indeed, if the template database is already set up, 
> then only one copy would be needed at the start of the test. 

You'd also need to undo any other state that was changed. Sequences, for 
instance, if they can affect the meaning of your test or expected results in 
any way.

> 
> The other thought I had is if there is some type of "leaky" transaction.  A 
> transaction where another process can see the data but the roll back would 
> still work and be effective.  Essentially I'm asking if all the protections a 
> database offers could be dropped... but I thought I'd see if that was 
> possible. 

That - or anything else involving rolling back transactions - would only work 
if you were testing an app that didn't use transactions.

> The other thought is perhaps there is a "snap shot" type concept.  I don't 
> see it in the list of SQL commands.  A "snap shot" would do exactly what it 
> sounds like.  It would take a snap shot and save it somehow.  Then a "restore 
> to snap shot" would restore the DB back to that state. 

That's pretty much what creating a database from a template does, other than 
the need to have everybody disconnect from the database before doing the 
drop+create.

Dump / restore will do that too - somewhat slower, but doesn't require 
disconnecting from the DB.

File-system level snapshots are another option, but I'm pretty sure you'd need 
to shut down and restart the database server, which'd cost far more than you'd 
save.

Cheers,
  Steve

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Installing PostgreSQL on OSX Server

2013-01-28 Thread Steve Atkins

On Jan 28, 2013, at 8:47 AM, Stephen Cook  wrote:

> On 1/28/2013 11:15 AM, Steve Atkins wrote:
>> You're not planning on using this in production, I hope? OS X is a very 
>> solid desktop OS, but its server variant is packed full of weird and plain 
>> broken behaviour.
>> 
> 
> Ouch. These are the servers they have and use, I don't really get a say in 
> that.
> 
> Are these problems PostgreSQL-specific? They are already running Apache with 
> PHP and MySQL on these, so if it is a general "broken-ness" I guess they are 
> already used to it?

Not postgresql specific at all, just a lot of very flaky user-space code (and 
it's really just regular desktop OS X with more applications added, not really 
a differently tuned system). You're also going to find that it's really poorly 
suited for remote management via ssh, and the remote management apps have 
ridiculous OS version requirements for the management console. Plan on setting 
up VNC or using screen sharing (which is available on OS X client, 
/System/Library/CoreServices/Screen Sharing.app, or something like that).

If they're already using them in production with apache/php, nothing they're 
doing should break when they switch to postgresql.

Cheers,
  Steve

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Installing PostgreSQL on OSX Server

2013-01-28 Thread Steve Atkins

On Jan 28, 2013, at 6:45 AM, Stephen Cook  wrote:

> Hello!
> 
> I have convinced a client to use PostgreSQL instead of MySQL (hooray), which 
> means it falls on me to install and configure it. I'm planning on doing this 
> from the command line (I have SSH access).
> 
> I have installed and configured PostgreSQL on Windows, FreeBSD, and a few 
> Linux flavors, but never OSX. I have the basic directions 
> (http://www.enterprisedb.com/resources-community/pginst-guide) and found a 
> couple of articles / blogs, but really I'm not an Apple guy so I don't want 
> to miss anything or screw anything up.
> 
> Is there anything I should watch out for? I have some wiggle room (this is a 
> development server at first), but I'd rather not break anything.
> 
> uname -a returns: Darwin xxx.local 10.8.0 Darwin Kernel Version 10.8.0: Tue 
> Jun  7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386
> PostgreSQL version: Latest and greatest - 9.2.2
> 
> Thanks!

You're not planning on using this in production, I hope? OS X is a very solid 
desktop OS, but its server variant is packed full of weird and plain broken 
behaviour.

OS X includes the postgresql clients and libraries, and OS X Server includes 
the postgresql server. You don't want to use the included postgresql server, or 
the included libpq and binaries, but you'll want to make sure that they don't 
clash with the version you're installing - for the server that's not too 
painful, but for the clients you'll want to make sure that the PATH of all 
users is set up to find your installed versions of psql etc. before the ones in 
/usr/bin, and that they're linking with the right libpq.dylib, not the one in 
/usr/lib. "otool -L" is the OS X equivalent to ldd.

OS X doesn't have readline installed; it has libedit. Libedit is poor compared 
to readline, and the OS X installed version of libedit was, for years, 
hideously broken such that tab completion would cause SEGVs. It might have been 
fixed in the latest releases, or it might not. It's worth avoiding anyway.

If you'll be installing using the point-and-click enterprisedb installer it 
should take care of some of the issues for you. If you end up installing from 
source you need to be aware that OS X is a dual-architecture system with fat 
binaries and libraries (for i386 and x86_64 on recent releases). Depending on 
what your developers are doing that may be an issue. The usual way of building 
fat binaries doesn't work for postgresql, or didn't the last time I tried it; 
you may need to build twice, once for each architecture, then glue the results 
together to make fat libraries.

Also, on a non-postgresql note, you'll find that the OS X userspace, 
particularly when it comes to system administration tools, is strange and scary 
compared to the unixalikes you've used. You create users with multiple dscl 
commands, not adduser. Daemons are managed by launchd, not started from 
/etc/init.d.

http://labs.wordtothewise.com/postgresql-osx/ has a few notes on building and 
installing from source that might be useful.

Recent versions of OS X server (10.6 and later, I think) can be installed in 
VMWare, as long as the host is running on Apple hardware (so either VMWare 
Fusion or ESXi running on a mini) if you want to build a play / staging 
environment where you can roll back snapshots.

Cheers,
  Steve

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Where Can I Find...

2013-01-25 Thread Steve Atkins

On Jan 25, 2013, at 8:21 AM, Gene Poole  wrote:

> I'm looking for a  tutorial on how to move a Oracle 11gR2 database 
> that consists on 4 instances with table spaces spread across multiple file 
> systems to PostgreSQL.  I need to do this because most of the open source CMS 
> packages do not play with Oracle.  The amount of data stored in the four 
> instances (or schemas) would fit on 6 single layer DVDs, so it's not a large 
> amount of data.

I doubt you'll find one. http://wiki.postgresql.org/wiki/Oracle probably has 
something useful, and http://ora2pg.darold.net

Given it's not a desperately huge database, your best bet might be to build 
a scratch system or VM (if a dump is less than 30 gigs it's the sort of size you 
can handle on your laptop) and just do it. Convert the schema manually and 
import the exported data, or use ora2pg to automate some of it, then ask 
questions when you hit specific problems.

If you're using stored procedures things get more interesting, and they'll need 
to be rewritten. Once you've got the data ported you'll likely need to rewrite 
some of the queries your apps use, as oracle has some differences.

If you have budget, you might want to take a look at 
http://www.enterprisedb.com.

> Also I don't want to use RPM because I like to control where software is 
> installed. Another piece on information is that I am running on CentOS 5.8. 

Avoiding RPMs is probably not a good operational choice, BTW.

Cheers,
  Steve

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best method to compare subdomains

2013-01-16 Thread Steve Atkins

On Jan 16, 2013, at 12:23 PM, Robert James  wrote:

> Is there a recommended, high performance method to check for subdomains?
> 
> Something like:
> - www.google.com is subdomain of google.com
> - ilikegoogle.com is not subdomain of google.com
> 
> There are many ways to do this (lowercase and reverse the string,
> append a '.' if not there, append a '%', and do a LIKE).  But I'm
> looking for one that will perform well when the master domain list is
> an indexed field in a table, and when the possible subdomain is either
> an individual value, or a field in a table for a join (potentially
> indexed).

If you've already dealt with any punycode encoding then the lowercased,
reversed string works pretty well, either as a (probably trigger-managed)
field or as a functional index.
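
A sketch of that approach, assuming a domains table with a name column (both
names are hypothetical):

    CREATE INDEX domains_rev_idx
        ON domains (reverse(lower(name)) text_pattern_ops);

    -- "Is this a subdomain of google.com?" becomes an indexable prefix match:
    SELECT name
    FROM   domains
    WHERE  reverse(lower(name)) LIKE reverse(lower('google.com')) || '.%';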

If you need to get fancier, you might want to take a look at the approach
https://github.com/dimitri/prefix takes.

Cheers,
  Steve


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Linux Distribution Preferences?

2013-01-13 Thread Steve Atkins

On Jan 13, 2013, at 10:27 AM, Shaun Thomas  wrote:

> Hey guys,
> 
> I'm not sure the last time I saw this discussion, but I was somewhat curious: 
> what would be your ideal Linux distribution for a nice solid PostgreSQL 
> installation? We've kinda bounced back and forth between RHEL, CentOS, and 
> Ubuntu LTS, so I was wondering what everyone else thought.


Either would be fine. RHEL is a bit more Enterprisey - which is either good or 
bad, depending on your use case. They're more conservative with updates than 
Ubuntu - which is good for service stability, but can be painful when you're 
stuck between using ancient versions of some app or stepping into the minefield 
of third party repos. (CentOS is pretty much just RHEL without support and 
without some of the management tools).

Ubuntu LTS is solid, and has good support for running multiple Postgresql 
clusters simultaneously, which is very handy if you're supporting multiple apps 
against the same database server, and they require different releases. I've 
been told that they occasionally make incompatible changes across minor 
releases, which is Bad, but it's never happened anywhere I've noticed - I've no 
idea if it's an actual issue or "Well, back in the 2004 release, they…" 
folklore.

I run both in production, both on VMs and real metal. I tend to use Ubuntu LTS 
for new installations just because I'm marginally more comfortable in the 
Ubuntu CLI environment, but there's really not much to choose between them.

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] VALUES() evaluation order

2013-01-13 Thread Steve Atkins

On Jan 13, 2013, at 2:36 PM, Tom Lane  wrote:

> Steve Atkins  writes:
>> Is the order in which the expressions in a VALUES() clause are evaluated 
>> defined?
>> I'm doing this: INSERT INTO foo (a, b) VALUES (nextval('bar'), 
>> currval('bar'))
> 
>> It works fine, but I'm wondering whether it's guaranteed to work or whether 
>> I'm relying on an artifact of the implementation.
> 
> I'd say it's an artifact.  It probably does work reliably at the moment,
> but if we had a reason to change it we'd not feel much compunction about
> doing so.  (The most obvious potential reason to change it is parallel
> evaluation of expressions, which is a long way off, so you probably
> don't have any near-term reason to worry.  But ...)
> 
> Consider sticking the nextval() into a WITH.

Thanks. WITH it is, then.
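
For the archives, a sketch of that with the table and sequence from the
original question:

    WITH s AS (SELECT nextval('bar') AS v)
    INSERT INTO foo (a, b)
    SELECT v, v FROM s;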

Cheers,
  Steve

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] VALUES() evaluation order

2013-01-13 Thread Steve Atkins
Is the order in which the expressions in a VALUES() clause are evaluated 
defined?

I'm doing this: INSERT INTO foo (a, b) VALUES (nextval('bar'), currval('bar'))

It works fine, but I'm wondering whether it's guaranteed to work or whether I'm 
relying on an artifact of the implementation.

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Securing .pgpass File?

2012-10-01 Thread Steve Atkins

On Oct 1, 2012, at 10:26 AM, Shaun Thomas  wrote:

> On 10/01/2012 12:19 PM, Darren Duncan wrote:
> 
>> You should never put your passwords (or private keys) in source control;
>> it would be better to use the puppet/bcfg option.
> 
> That was kind of my point. Puppet / Bcfg2 have the same problem. About a 
> dozen more people have access to our bcfg2 repo than I would want knowing the 
> contents of .pgpass.
> 
> We have twenty machines. If I ever change that file, I have to change it in 
> 20 places. I'd love to put it in bcfg2, but that necessitates allowing anyone 
> with access to bcfg2 the ability to read it. No go.
> 
> You basically just reiterated my question back to me. ;) I'd like to *stop* 
> manually copying the files around, but can't because they're completely plain 
> text. It doesn't matter if it's source control, puppet, bcfg2, cfengine, or 
> anything else; unauthorized people can read them, and I rather they didn't.
> 
> Encrypted passwords would be nice, but apparently this isn't an option.

If the passwords were encrypted, you'd also need to distribute the key to 
decrypt them.

You could obfuscate the passwords (with something that's somewhat equivalent to 
rot13) which would help with shoulder surfing, but you'd still be distributing 
a secret that's equivalent to a password. That's something you could do without 
any support from postgresql though - just deobfuscate as part of the 
distribution process.

Authentication that isn't based on a secret token would be one way to sidestep 
the issue - source IP based, for instance.
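
For instance, a sketch of a pg_hba.conf entry that trusts a specific app host
rather than requiring a password (the names and address are hypothetical):

    # TYPE  DATABASE  USER     ADDRESS       METHOD
    host    appdb     appuser  10.0.0.5/32   trust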

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Official C++ API for postgresql?

2012-09-17 Thread Steve Atkins

On Sep 17, 2012, at 11:58 AM, niXman  wrote:

> Hello,
> 
> Tell me please, whether there is an official C++ API for postgresql?

http://pqxx.org/development/libpqxx/ is the main C++ specific library. But
you can also use libpq from C++, and for simpler work it's often easier
than libpqxx.

There are also several other C++ bindings - sqlapi++, soci and Qt are
some off the top of my head.

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] "Too far out of the mainstream"

2012-08-31 Thread Steve Atkins

On Aug 31, 2012, at 4:15 PM, Scott Marlowe  wrote:
> 
> Unless things have changed, Andrew Sullivan in this message
> http://archives.postgresql.org/pgsql-advocacy/2002-09/msg00012.php
> says:
> 
> "All interactions with the shared registry system, and any whois
> queries against whois.afilias.net, are served by a PostgreSQL
> database."

That's likely still the case, a decade later.

> So yeah of course direct service of dns lookup is done via bind
> servers operating off harvested data,

dot-org is actually powered by UltraDNS tech (since bought out by
Afilias) rather than bind. And that is directly SQL database backed,
though likely not the database we know and love.

So unless someone from Afilias pops up and tells us they're using
PG there too I'm a little cautious about mentioning PostgreSQL, .org
and DNS together.

> but whois comes right out of a
> pg database, and live updates go right into a pg database.

Yup.

Cheers,
  Steve


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] "Too far out of the mainstream"

2012-08-31 Thread Steve Atkins

On Aug 31, 2012, at 12:45 PM, Scott Marlowe  wrote:

> So do they ever go to a site that ends in .org or .info?  Tell them to
> stop it right now, as they are relying on PostgreSQL for those sites
> to resolve, and PostgreSQL is too far out of the mainstream.  Once
> they've stopped using or visiting .org and .info sites tell them to
> get back to you.

Mmm. Don't push this line of argument too hard. As I understand it,
Postgresql is used by the registry to keep track of their customers -
whois data, effectively.

The actual resolution is handled by a different database, or was back
when I knew the details of that end of .org.

I'm sure there's an Access database somewhere in Facebook, but that
doesn't mean Facebook runs on Access. :)

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

