[GENERAL] Error : Ensure this value has at most 30 characters (it has 31) django

2014-12-10 Thread VENKTESH GUTTEDAR
Hello,

I am using DJango 1.7 and Postgresql 9.3.5.

I am trying to store an email address in the username field of
django.contrib.auth.models.User, but when I try to store more than 30
characters I get this error: Ensure this value has at most 30 characters
(it has 31). I tried changing the max length of the username field of
auth_user to 75, but I am still getting the same error.
Please help me to solve this.

-- 
Regards :
Venktesh Guttedar.


[GENERAL] Re: Error : Ensure this value has at most 30 characters (it has 31) django

2014-12-10 Thread David G Johnston
VENKTESH GUTTEDAR wrote
 Hello,
 
 I am using DJango 1.7 and Postgresql 9.3.5.
 
 I am trying to store email in the username field of
 django.contrib.auth.models.User.
 but when i try to store more than 30 characters i am getting this
 error
 : Ensure this value has at most 30 characters (it has 31) django
 i tried changing the max length to 75 of username field of auth_user.
 but still i am getting the same error.
 Please help me to solve this

This is a validation error your django middleware is issuing - you need to
fix the problem there.  Or maybe there is some other field that is being
checked besides auth_user...
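
A quick way to see every length-limited column that could be involved is to
ask the catalog directly (this assumes the default auth_user table name):

SELECT column_name, character_maximum_length
FROM information_schema.columns
WHERE table_name = 'auth_user'
  AND character_maximum_length IS NOT NULL
ORDER BY ordinal_position;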

David J.




--
View this message in context: 
http://postgresql.nabble.com/Error-Ensure-this-value-has-at-most-30-characters-it-has-31-django-tp5829910p5829917.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Error : Ensure this value has at most 30 characters (it has 31) django

2014-12-10 Thread Adrian Klaver

On 12/10/2014 04:38 AM, VENKTESH GUTTEDAR wrote:

Hello,

 I am using DJango 1.7 and Postgresql 9.3.5.

 I am trying to store email in the username field of
django.contrib.auth.models.User.
 but when i try to store more than 30 characters i am getting this
error : Ensure this value has at most 30 characters (it has 31) django
 i tried changing the max length to 75 of username field of
auth_user. but still i am getting the same error.
 Please help me to solve this


On my install of Django 1.7 the email field is already varchar(75).

Are you updating from a previous version of Django?

Did you run python manage.py syncdb ?

Also where are you trying to insert the data from?

If from a form have you checked the field validation code for the email 
field?




--
Regards :
Venktesh Guttedar.




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: Fwd: Fwd: [GENERAL] Problem with pg_dump and decimal mark

2014-12-10 Thread Adrian Klaver

On 12/10/2014 01:32 AM, Eric Svenson wrote:

So, one more success...

I have taken a part of the backup SQL file which fills the table

COPY dev_my_settings (.) from stdin;
12345  text   text   0   123.345345

This file ALONE works! (without changing ANYTHING!)


Hmm, almost like the encoding/locale is changing in the complete file.



So if I run the first (huge) SQL file and then the second, which fills
the dev_my_settings table, everything is ok.


FYI if you do a pg_dump using the custom format (-Fc) you have more 
control over the restore. You can run pg_restore -l against the dump 
file to get a Table of Contents (TOC), which you can edit by commenting 
out items you do not want to restore and then feed back to pg_restore 
via -L. You also have the option to 'restore' all or part of the custom 
file to a text file using the -f option. The details can be found here:


http://www.postgresql.org/docs/9.3/interactive/app-pgrestore.html


2014-12-10 10:23 GMT+01:00 Eric Svenson esvenso...@googlemail.com



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Idle Sessions inTransaction with RELEASE_EXEC_SVP queries and locks

2014-12-10 Thread Adrian Klaver

On 12/08/2014 02:05 AM, chris.jur...@primesoft.ph wrote:

I am having a problem with idle sessions in transactions. In
pgAdmin Server Status, it is showing RELEASE_EXEC_SVP_XX (the XX
data vary) as its query, and its locks also contain a lot of these
RELEASE_EXEC_SVP_XX entries. What do these mean? These commonly
cause lock blocking problems because these sessions sometimes lock other
rows, but the session status is always Idle in transaction. I do not
know why there are sessions like these. I have reviewed our system and
do not see any possibility of transactions getting started that are not
rolled back or committed and just stay idle, so this must be something
internal to PostgreSQL.

Version of PostgreSQL is 9.2.8 running on Windows using psqlodbc driver
9.0.3.210.


From the psqlodbc source this would seem to indicate SAVEPOINTs are 
involved, so you might want to take a look at this thread:


http://www.postgresql.org/message-id/52dd07051f6b42b09328c339c7b9c...@exch2013.mustinformatique.fr

in particular this message:

http://www.postgresql.org/message-id/54819fac.7030...@vmware.com
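
For context, the pattern those names suggest is a driver-generated savepoint
wrapped around each statement, roughly like this (a sketch only - the real
savepoint names are generated by psqlodbc, and the table here is just a
stand-in):

CREATE TEMP TABLE demo (id int PRIMARY KEY, val int);
INSERT INTO demo VALUES (42, 0);

BEGIN;
SAVEPOINT _EXEC_SVP_0001;
UPDATE demo SET val = val + 1 WHERE id = 42;   -- the application's statement
RELEASE _EXEC_SVP_0001;
-- the session now sits "idle in transaction", holding its row locks, until
-- the application finally issues COMMIT or ROLLBACK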






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] List of shorthand casts

2014-12-10 Thread FarjadFarid(ChkNet)

Is the list of shorthand casts documented somewhere? 
If so can you please direct me to it. A working URL  would be great.

Thank you.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Help Optimizing a Summary Query

2014-12-10 Thread Robert DiFalco
I have users, friends, and friend_requests. I need a query that essentially
returns a summary containing:

* user (name, imageURL, bio, ...)
* Friend status (relative to an active user)
   * Is the user a friend of the active user?
   * Has the user sent a friend request to the active user?
   * Has the user received a friend request from the active user?
* # of mutualFriends
* Exclude the active user from the result set.

So I have mocked this up two ways but both have complicated query plans
that will be problematic with large data sets. I'm thinking that my lack of
deep SQL knowledge is making me miss the obvious choice.

Here's my two query examples:

SELECT u.id, u.name, u.imageURL, u.bio,
   CASE
      WHEN EXISTS(SELECT 1 FROM friends f
                  WHERE f.user_id = 33 AND f.friend_id = u.id) THEN 'isFriend'
      WHEN EXISTS(SELECT 1 FROM friend_requests s
                  WHERE s.to_id = 33 AND s.from_id = u.id) THEN 'hasSentRequest'
      WHEN EXISTS(SELECT 1 FROM friend_requests r
                  WHERE r.to_id = u.id AND r.from_id = 33) THEN 'hasReceivedRequest'
      ELSE 'none'
   END AS friendStatus,
   (SELECT COUNT(1)
    FROM friends f1
    JOIN friends f2 ON f1.friend_id = f2.friend_id
    WHERE f1.user_id = 33 AND f2.user_id = u.id) AS mutualFriends
FROM users u
WHERE u.id != 33 AND u.name LIKE 'John%' ORDER BY u.name;

SELECT u.id, u.name, u.imageURL, u.bio,
   CASE
      WHEN f.friend_id IS NOT NULL THEN 'isFriend'
      WHEN s.to_id IS NOT NULL THEN 'hasSentRequest'
      WHEN r.to_id IS NOT NULL THEN 'hasReceivedRequest'
      ELSE 'none'
   END AS friendStatus,
   (SELECT COUNT(1) AS d
    FROM friends f1
    JOIN friends f2 ON f1.friend_id = f2.friend_id
    WHERE f1.user_id = 33 AND f2.user_id = u.id)
FROM users u
LEFT OUTER JOIN friends f ON f.user_id = 33 AND f.friend_id = u.id
LEFT OUTER JOIN friend_requests s ON s.to_id = 33 AND s.from_id = u.id
LEFT OUTER JOIN friend_requests r ON r.to_id = u.id AND r.from_id = 33
WHERE u.id != 33 AND u.name LIKE 'John%' ORDER BY u.name;

33 is just the id of the active user I am using for testing. The WHERE
clause could be anything. I'm just using u.name here but I'm more
concerned about the construction of the result set than the WHERE clause.
These have more or less similar query plans, nothing that would change
things factorially. Is this the best I can do or am I missing the obvious?

Here are the tables:


CREATE TABLE users (
  id            BIGINT,
  name          VARCHAR,
  imageURL      VARCHAR,
  created       TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  phone_natl    BIGINT,   /* National Phone Number */
  country_e164  SMALLINT, /* E164 country code */
  email         VARCHAR(255),
  PRIMARY KEY (id),
  UNIQUE (email),
  UNIQUE (phone_natl, country_e164)
);


CREATE TABLE friends (
  user_id     BIGINT,
  friend_id   BIGINT,
  PRIMARY KEY (user_id, friend_id),
  FOREIGN KEY (user_id)    REFERENCES users(id) ON DELETE CASCADE,
  FOREIGN KEY (friend_id)  REFERENCES users(id) ON DELETE CASCADE
);
CREATE INDEX idx_friends_friend ON friends(friend_id);

CREATE TABLE friend_requests (
  from_id  BIGINT,
  to_id    BIGINT,
  created  TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (from_id, to_id),
  FOREIGN KEY (from_id)  REFERENCES users(id) ON DELETE CASCADE,
  FOREIGN KEY (to_id)    REFERENCES users(id) ON DELETE CASCADE
);
CREATE INDEX idx_friend_requests_to ON friend_requests(to_id);

Let me know if you guys need anything else.


Re: Fwd: Fwd: [GENERAL] Problem with pg_dump and decimal mark

2014-12-10 Thread Eric Svenson
So, one more success...

I have taken a part of the backup SQL file which fills the table

COPY dev_my_settings (.) from stdin;
12345  text   text   0   123.345345

This file ALONE works! (without changing ANYTHING!)

So if I run the first (huge) SQL file and then the second, which fills the
dev_my_settings table, everything is ok.

2014-12-10 10:23 GMT+01:00 Eric Svenson esvenso...@googlemail.com:

  The restore left you with two empty tables. What happens if you log into
 Postgres  via psql and then INSERT one set of values containing floats
 into say, dev_my_settings?

 SUCCESS! This works OK!

 INSERT INTO dev_my_settings VALUES (123, 'test', 'test', 'test', 123, 123.345);

 Value 123.345 can be read from pg_admin.

 SHOW ALL shows English_United States.1252 for all lc_ settings (but I have
 tried it with German and C locale with same results)

 Regards, Eric

 2014-12-08 22:57 GMT+01:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 12/08/2014 06:53 AM, Eric Svenson wrote:

 Hi Adrian,

 I try to get access to the non-VM machine, at the moment access is not
 possible for me unfortunately.

 You are right, there are more tables in the database which are restored
 correctly but these tables do NOT contain float values. These two tables
 are the only tables in the database which contain floats.

 The errors occur with the first float in the table, the restore process
 seems to terminate with that table and seems to continue with the next
 table. The result are completely empty tables for dev_my_settings and
 file_item.

 There are float values in the table which can be viewed with pg_admin.

 The table definitions for dev_my_settings and file_item contain lots of
 BIGINTS, smallints and integers, and several double precision values.
 All other tables do not contain any double precision values.


 Alright a chance to think some more.

 So:

 The restore left you with two empty tables. What happens if you log into
 Postgres via psql and then INSERT one set of values containing floats into
 say, dev_my_settings?

 While you are in psql, what does SHOW ALL display for the lc_* settings?

 On the Windows server where the Postgres server is running what does SET
 show from the command line?



 Regards,
 Eric




 --
 Adrian Klaver
 adrian.kla...@aklaver.com





Re: Fwd: Fwd: [GENERAL] Problem with pg_dump and decimal mark

2014-12-10 Thread Eric Svenson
 The restore left you with two empty tables. What happens if you log into
Postgres  via psql and then INSERT one set of values containing floats
into say, dev_my_settings?

SUCCESS! This works OK!

INSERT INTO dev_my_settings VALUES (123, 'test', 'test', 'test', 123, 123.345);

Value 123.345 can be read from pg_admin.

SHOW ALL shows English_United States.1252 for all lc_ settings (but I have
tried it with German and C locale with same results)
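
For reference, the same lc_* settings can also be listed with a single query
instead of scanning the SHOW ALL output:

SELECT name, setting FROM pg_settings WHERE name LIKE 'lc%';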

Regards, Eric

2014-12-08 22:57 GMT+01:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 12/08/2014 06:53 AM, Eric Svenson wrote:

 Hi Adrian,

 I try to get access to the non-VM machine, at the moment access is not
 possible for me unfortunately.

 You are right, there are more tables in the database which are restored
 correctly but these tables do NOT contain float values. These two tables
 are the only tables in the database which contain floats.

 The errors occur with the first float in the table, the restore process
 seems to terminate with that table and seems to continue with the next
 table. The result are completely empty tables for dev_my_settings and
 file_item.

 There are float values in the table which can be viewed with pg_admin.

 The table definitions for dev_my_settings and file_item contain lots of
 BIGINTS, smallints and integers, and several double precision values.
 All other tables do not contain any double precision values.


 Alright a chance to think some more.

 So:

 The restore left you with two empty tables. What happens if you log into
 Postgres via psql and then INSERT one set of values containing floats into
 say, dev_my_settings?

 While you are in psql, what does SHOW ALL display for the lc_* settings?

 On the Windows server where the Postgres server is running what does SET
 show from the command line?



 Regards,
 Eric




 --
 Adrian Klaver
 adrian.kla...@aklaver.com



[GENERAL] invalid memory alloc request size

2014-12-10 Thread Gabriel Sánchez Martínez

Hi all,

I am running PostgreSQL 9.3.5 on Ubuntu Server 14.04 64 bit with 64 GB 
of RAM.  When running pg_dump on a specific table, I get the following 
error:


pg_dump: Dumping the contents of table x_2013 failed: 
PQgetResult() failed.
pg_dump: Error message from server: ERROR:  invalid memory alloc request 
size 18446744073709551613

pg_dump: The command was: COPY public.x_2013 (...) TO stdout;
pg_dump: [parallel archiver] a worker process died unexpectedly

If I run a COPY TO file from psql I get the same error.

Is this an indication of corrupted data?  What steps should I take?

Thanks in advance,
Gabriel


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Error : Ensure this value has at most 30 characters (it has 31) django

2014-12-10 Thread Adrian Klaver

CCing list.

On 12/10/2014 08:06 AM, VENKTESH GUTTEDAR wrote:

Yes, I used python manage.py syncdb


Well, per David's suggestion, you might want to check whether it is indeed 
the email field that is triggering the validation error. Both the 
first_name and last_name fields in auth_user have a field length of 30, 
and each would trigger the error mentioned if you entered 31 characters.





On Wed, Dec 10, 2014 at 7:26 PM, Adrian Klaver
adrian.kla...@aklaver.com mailto:adrian.kla...@aklaver.com wrote:

On 12/10/2014 04:38 AM, VENKTESH GUTTEDAR wrote:

Hello,

  I am using DJango 1.7 and Postgresql 9.3.5.

  I am trying to store email in the username field of
django.contrib.auth.models.User.
  but when i try to store more than 30 characters i am
getting this
error : Ensure this value has at most 30 characters (it has 31)
django
  i tried changing the max length to 75 of username field of
auth_user. but still i am getting the same error.
  Please help me to solve this


On my install of Django 1.7 the email field is already varchar(75).

Are you updating from a previous version of Django?

Did you run python manage.py syncdb ?

Also where are you trying to insert the data from?

If from a form have you checked the field validation code for the
email field?


--
Regards :
Venktesh Guttedar.



--
Adrian Klaver
adrian.kla...@aklaver.com mailto:adrian.kla...@aklaver.com




--
Regards :
Venktesh Guttedar.




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] invalid memory alloc request size

2014-12-10 Thread Adrian Klaver

On 12/10/2014 08:07 AM, Gabriel Sánchez Martínez wrote:

Hi all,

I am running PostgreSQL 9.3.5 on Ubuntu Server 14.04 64 bit with 64 GB
of RAM.  When running pg_dump on a specific table, I get the following
error:

pg_dump: Dumping the contents of table x_2013 failed:
PQgetResult() failed.
pg_dump: Error message from server: ERROR:  invalid memory alloc request
size 18446744073709551613
pg_dump: The command was: COPY public.x_2013 (...) TO stdout;
pg_dump: [parallel archiver] a worker process died unexpectedly

If I run a COPY TO file from psql I get the same error.

Is this an indication of corrupted data?  What steps should I take?



What is the data that is being dumped, for example is there binary data 
in there?


What is the table definition?

Would it be possible to show the header and some sample data from the COPY?



Thanks in advance,
Gabriel





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] invalid memory alloc request size

2014-12-10 Thread Gabriel Sánchez Martínez


On 12/10/2014 11:16 AM, Adrian Klaver wrote:

On 12/10/2014 08:07 AM, Gabriel Sánchez Martínez wrote:

Hi all,

I am running PostgreSQL 9.3.5 on Ubuntu Server 14.04 64 bit with 64 GB
of RAM.  When running pg_dump on a specific table, I get the following
error:

pg_dump: Dumping the contents of table x_2013 failed:
PQgetResult() failed.
pg_dump: Error message from server: ERROR:  invalid memory alloc request
size 18446744073709551613
pg_dump: The command was: COPY public.x_2013 (...) TO stdout;
pg_dump: [parallel archiver] a worker process died unexpectedly

If I run a COPY TO file from psql I get the same error.

Is this an indication of corrupted data?  What steps should I take?



What is the data that is being dumped, for example is there binary 
data in there?


The data types are bigserial, integer, and character varying.



What is the table definition?


The data is confidential so I have obfuscated the names:

CREATE TABLE x
(
  c01 bigserial NOT NULL,
  c02 integer NOT NULL,
  c03 integer NOT NULL,
  c04 integer NOT NULL,
  c05 integer,
  c06 integer,
  c07 integer,
  c08 integer,
  c09 integer,
  c10 integer,
  c11 integer,
  c12 integer,
  c13 character varying(8),
  c14 integer,
  c15 integer,
  c16 character varying(8),
  c17 integer,
  c18 character varying(8),
  c19 integer,
  c20 integer,
  c21 integer,
  c22 integer,
  c23 integer,
  c24 integer,
  c25 integer,
  c26 integer,
  c27 integer,
  c28 integer,
  c29 integer,
  c30 integer,
  c31 integer,
  c32 integer,
  c33 integer,
  CONSTRAINT PK_x PRIMARY KEY (c01)
)



Would it be possible to show the header and some sample data from the 
COPY?


Here is the first row using SELECT * LIMIT 1:

776696816;12368;47728024;3959;0;256;765;645;309;336;36;102-1;0;0;0;0;-1;-1;0;0;5;28;;0;37





Thanks in advance,
Gabriel









--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] invalid memory alloc request size

2014-12-10 Thread Adrian Klaver

On 12/10/2014 08:31 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 11:16 AM, Adrian Klaver wrote:

On 12/10/2014 08:07 AM, Gabriel Sánchez Martínez wrote:

Hi all,

I am running PostgreSQL 9.3.5 on Ubuntu Server 14.04 64 bit with 64 GB
of RAM.  When running pg_dump on a specific table, I get the following
error:

pg_dump: Dumping the contents of table x_2013 failed:
PQgetResult() failed.
pg_dump: Error message from server: ERROR:  invalid memory alloc request
size 18446744073709551613
pg_dump: The command was: COPY public.x_2013 (...) TO stdout;
pg_dump: [parallel archiver] a worker process died unexpectedly

If I run a COPY TO file from psql I get the same error.

Is this an indication of corrupted data?  What steps should I take?



What is the data that is being dumped, for example is there binary
data in there?


The data types are bigserial, integer, and character varying.



What is the table definition?


The data is confidential so I have obfuscated the names:

CREATE TABLE x
(
   c01 bigserial NOT NULL,
   c02 integer NOT NULL,
   c03 integer NOT NULL,
   c04 integer NOT NULL,
   c05 integer,
   c06 integer,
   c07 integer,
   c08 integer,
   c09 integer,
   c10 integer,
   c11 integer,
   c12 integer,
   c13 character varying(8),
   c14 integer,
   c15 integer,
   c16 character varying(8),
   c17 integer,
   c18 character varying(8),
   c19 integer,
   c20 integer,
   c21 integer,
   c22 integer,
   c23 integer,
   c24 integer,
   c25 integer,
   c26 integer,
   c27 integer,
   c28 integer,
   c29 integer,
   c30 integer,
   c31 integer,
   c32 integer,
   c33 integer,
   CONSTRAINT PK_x PRIMARY KEY (c01)
)



Would it be possible to show the header and some sample data from the
COPY?


Here is the first row using SELECT * LIMIT 1:

776696816;12368;47728024;3959;0;256;765;645;309;336;36;102-1;0;0;0;0;-1;-1;0;0;5;28;;0;37


How did the database get to this machine?

Was it created from scratch or did it come from somewhere else?

Also how did the Postgres server get installed?









Thanks in advance,
Gabriel










--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Current log files when rotating?

2014-12-10 Thread Alex Shulgin

Sam Mason s...@samason.me.uk writes:

 On Mon, Nov 10, 2008 at 02:30:41PM -0800, Steve Atkins wrote:
 On Nov 10, 2008, at 1:35 PM, Tom Lane wrote:
 Alvaro Herrera alvhe...@commandprompt.com writes:
 It seems that there is enough need for this feature, that it has been
 implemented multiple times -- but most of them will fail in corner
 cases.  Seems an obvious candidate for an in-core function ...
 
 ... which will still fail in corner cases.  Not to mention the race
 condition when the logger has just/is about to switch.
 
 Also, it's going to be a fairly rare app that will need to both
 look at the current logfile and access the database, so providing
 visibility via a sql function seem clunky.

 Indeed, logfiles seem to be in a different world than from inside the
 database.  It seems there are a couple of options; go and ask PG when
 you want to know (pg_ctl seems like a reasonable place IMHO) or have
 PG tell you when it's rotating logs, ala the current archiving of WAL
 files.

 Maybe just have the logger maintain a symlink to the current
 logfile,

 If used; a log archival program could do this.

 or log to a static filename and only use a dynamically
 generated filename when renaming the log file during rotation?

 However, this seems like a simpler solution altogether---no need to even
 ask as you know the answer already!

 This does still leave the question of whether there should be a log
 archival program around to tell you when they actually rotate.  Going
 by the fact that people appear to cope with the current behavior, this
 just seems unnecessarily complicated and can reasonably be delayed for a
 while.

If I may revive this thread...

While generally sane and lightweight, the symlink approach still leaves
the chance for a race condition at the time of the switch.

What if we add an option for the logging collector to duplicate every write
to the current file to a non-blocking fifo or something like that?  This
seems to solve the problem of running tail -f against the current log
stream, which was the original request:

  
http://www.postgresql.org/message-id/pine.gso.4.64.0811101325260.9...@westnet.com

The fifo, however, only allows for one reader connection, if I'm not
missing something.

--
Alex


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] invalid memory alloc request size

2014-12-10 Thread Gabriel Sánchez Martínez


On 12/10/2014 11:49 AM, Adrian Klaver wrote:

On 12/10/2014 08:31 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 11:16 AM, Adrian Klaver wrote:

On 12/10/2014 08:07 AM, Gabriel Sánchez Martínez wrote:

Hi all,

I am running PostgreSQL 9.3.5 on Ubuntu Server 14.04 64 bit with 64 GB
of RAM.  When running pg_dump on a specific table, I get the following
error:

pg_dump: Dumping the contents of table x_2013 failed:
PQgetResult() failed.
pg_dump: Error message from server: ERROR:  invalid memory alloc 
request

size 18446744073709551613
pg_dump: The command was: COPY public.x_2013 (...) TO stdout;
pg_dump: [parallel archiver] a worker process died unexpectedly

If I run a COPY TO file from psql I get the same error.

Is this an indication of corrupted data?  What steps should I take?



What is the data that is being dumped, for example is there binary
data in there?


The data types are bigserial, integer, and character varying.



What is the table definition?


The data is confidential so I have obfuscated the names:

CREATE TABLE x
(
   c01 bigserial NOT NULL,
   c02 integer NOT NULL,
   c03 integer NOT NULL,
   c04 integer NOT NULL,
   c05 integer,
   c06 integer,
   c07 integer,
   c08 integer,
   c09 integer,
   c10 integer,
   c11 integer,
   c12 integer,
   c13 character varying(8),
   c14 integer,
   c15 integer,
   c16 character varying(8),
   c17 integer,
   c18 character varying(8),
   c19 integer,
   c20 integer,
   c21 integer,
   c22 integer,
   c23 integer,
   c24 integer,
   c25 integer,
   c26 integer,
   c27 integer,
   c28 integer,
   c29 integer,
   c30 integer,
   c31 integer,
   c32 integer,
   c33 integer,
   CONSTRAINT PK_x PRIMARY KEY (c01)
)



Would it be possible to show the header and some sample data from the
COPY?


Here is the first row using SELECT * LIMIT 1:

776696816;12368;47728024;3959;0;256;765;645;309;336;36;102-1;0;0;0;0;-1;-1;0;0;5;28;;0;37 



How did the database get to this machine?

Was it created from scratch or did it come from somewhere else?


The database was created from a pg_dump backup using pg_restore. The 
table has not changed since the backup date, so I could try re-creating 
it the same way, but that doesn't solve the mystery.




Also how did the Postgres server get installed?


apt-get install











Thanks in advance,
Gabriel














--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] invalid memory alloc request size

2014-12-10 Thread Adrian Klaver

On 12/10/2014 09:25 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 11:49 AM, Adrian Klaver wrote:

On 12/10/2014 08:31 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 11:16 AM, Adrian Klaver wrote:

On 12/10/2014 08:07 AM, Gabriel Sánchez Martínez wrote:

Hi all,

I am running PostgreSQL 9.3.5 on Ubuntu Server 14.04 64 bit with 64 GB
of RAM.  When running pg_dump on a specific table, I get the following
error:

pg_dump: Dumping the contents of table x_2013 failed:
PQgetResult() failed.
pg_dump: Error message from server: ERROR:  invalid memory alloc
request
size 18446744073709551613
pg_dump: The command was: COPY public.x_2013 (...) TO stdout;
pg_dump: [parallel archiver] a worker process died unexpectedly

If I run a COPY TO file from psql I get the same error.

Is this an indication of corrupted data?  What steps should I take?



What is the data that is being dumped, for example is there binary
data in there?


The data types are bigserial, integer, and character varying.







How did the database get to this machine?

Was it created from scratch or did it come from somewhere else?


The database was created from a pg_dump backup using pg_restore. The
table has not changed since the backup date, so I could try re-creating
it the same way, but that doesn't solve the mystery.


So where did the data get dumped from?

An instance of the same version of Postgres or something different?

Which version(s) of pg_dump/pg_restore were used?





Also how did the Postgres server get installed?


apt-get install


Install from what repo?

What was the exact package installed?

Was there another instance of Postgres already(or currently) installed 
on this machine?














Thanks in advance,
Gabriel




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] invalid memory alloc request size

2014-12-10 Thread Gabriel Sánchez Martínez


On 12/10/2014 12:47 PM, Adrian Klaver wrote:

On 12/10/2014 09:25 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 11:49 AM, Adrian Klaver wrote:

On 12/10/2014 08:31 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 11:16 AM, Adrian Klaver wrote:

On 12/10/2014 08:07 AM, Gabriel Sánchez Martínez wrote:

Hi all,

I am running PostgreSQL 9.3.5 on Ubuntu Server 14.04 64 bit with 
64 GB
of RAM.  When running pg_dump on a specific table, I get the 
following

error:

pg_dump: Dumping the contents of table x_2013 failed:
PQgetResult() failed.
pg_dump: Error message from server: ERROR:  invalid memory alloc
request
size 18446744073709551613
pg_dump: The command was: COPY public.x_2013 (...) TO stdout;
pg_dump: [parallel archiver] a worker process died unexpectedly

If I run a COPY TO file from psql I get the same error.

Is this an indication of corrupted data?  What steps should I take?



What is the data that is being dumped, for example is there binary
data in there?


The data types are bigserial, integer, and character varying.







How did the database get to this machine?

Was it created from scratch or did it come from somewhere else?


The database was created from a pg_dump backup using pg_restore. The
table has not changed since the backup date, so I could try re-creating
it the same way, but that doesn't solve the mystery.


So where did the data get dumped from?

An instance of the same version of Postgres or something different?

Which version(s) of pg_dump/pg_restore were used?





Also how did the Postgres server get installed?


apt-get install


Install from what repo?


The default Ubuntu repository.



What was the exact package installed?


I don't recall, but it was probably postgresql.  The database was 
upgraded to 9.3 when Ubuntu was upgraded to 14.04.  But I ran it for a 
while without issues.  This issue is recent.




Was there another instance of Postgres already(or currently) installed 
on this machine?


No.  There has always been only one instance.















Thanks in advance,
Gabriel








--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] invalid memory alloc request size

2014-12-10 Thread Adrian Klaver

On 12/10/2014 09:54 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 12:47 PM, Adrian Klaver wrote:

On 12/10/2014 09:25 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 11:49 AM, Adrian Klaver wrote:

On 12/10/2014 08:31 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 11:16 AM, Adrian Klaver wrote:

On 12/10/2014 08:07 AM, Gabriel Sánchez Martínez wrote:

Hi all,





Also how did the Postgres server get installed?


apt-get install


Install from what repo?


The default Ubuntu repository.



What was the exact package installed?


I don't recall, but it was probably postgresql.


Try:

dpkg -l|grep postgresql

to get the actual package.

The database was

upgraded to 9.3 when Ubuntu was upgraded to 14.04.  But I ran it for a
while without issues.  This issue is recent.


So what was the version before?

Where I am going with this, is trying to determine whether you have a 
'contaminated' data directory.




Was there another instance of Postgres already(or currently) installed
on this machine?


No.  There has always been only one instance.



















Thanks in advance,
Gabriel









--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] invalid memory alloc request size

2014-12-10 Thread Gabriel Sánchez Martínez


On 12/10/2014 01:00 PM, Adrian Klaver wrote:

On 12/10/2014 09:54 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 12:47 PM, Adrian Klaver wrote:

On 12/10/2014 09:25 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 11:49 AM, Adrian Klaver wrote:

On 12/10/2014 08:31 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 11:16 AM, Adrian Klaver wrote:

On 12/10/2014 08:07 AM, Gabriel Sánchez Martínez wrote:

Hi all,





Also how did the Postgres server get installed?


apt-get install


Install from what repo?


The default Ubuntu repository.



What was the exact package installed?


I don't recall, but it was probably postgresql.


Try:

dpkg -l|grep postgresql

to get the actual package.


Output is :

ii  postgresql                          9.3+154                     all    object-relational SQL database (supported version)
ii  postgresql-9.3                      9.3.5-0ubuntu0.14.04.1      amd64  object-relational SQL database, version 9.3 server
ii  postgresql-9.3-pgrouting            2.0.0-release-ppa1~trusty1  amd64  Routing functionality support for PostgreSQL/PostGIS
ii  postgresql-9.3-postgis-2.1          2.1.2+dfsg-2                amd64  Geographic objects support for PostgreSQL 9.3
ii  postgresql-9.3-postgis-2.1-scripts  2.1.2+dfsg-2                all    PostGIS for PostgreSQL 9.3 -- scripts -- dummy package
ii  postgresql-9.3-postgis-scripts      2.1.2+dfsg-2                all    Geographic objects support for PostgreSQL 9.3 -- scripts
ii  postgresql-client-9.3               9.3.5-0ubuntu0.14.04.1      amd64  front-end programs for PostgreSQL 9.3
ii  postgresql-client-common            154                         all    manager for multiple PostgreSQL client versions
ii  postgresql-common                   154                         all    PostgreSQL database-cluster manager
ii  postgresql-contrib                  9.3+154                     all    additional facilities for PostgreSQL (supported version)
ii  postgresql-contrib-9.3              9.3.5-0ubuntu0.14.04.1      amd64  additional facilities for PostgreSQL
ii  postgresql-server-dev-9.3           9.3.5-0ubuntu0.14.04.1      amd64  development files for PostgreSQL 9.3 server-side programming






The database was

upgraded to 9.3 when Ubuntu was upgraded to 14.04.  But I ran it for a
while without issues.  This issue is recent.


So what was the version before?


9.1.  It has always been whatever was on the Ubuntu repos.



Where I am going with this, is trying to determine whether you have a 
'contaminated' data directory.


I appreciate it!  Perhaps it is worth mentioning that the server had 
some issues with the linux md raid a while back.  Two disks failed in 
quick succession and some data was corrupted.  Those disks have been 
replaced and the RAID has been fine ever since.  The database cluster 
was recreated from scratch after that, from backups of a few months 
before the disks started misbehaving.  This is when that table was 
created with pg_restore.  There were no issues with the restore, and the 
backup was from a few months before the disk issue, so I don't suspect 
corruption in the backup.






Was there another instance of Postgres already(or currently) installed
on this machine?


No.  There has always been only one instance.



















Thanks in advance,
Gabriel













--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] invalid memory alloc request size

2014-12-10 Thread Tomas Vondra
On 10.12.2014 17:07, Gabriel Sánchez Martínez wrote:
 Hi all,
 
 I am running PostgreSQL 9.3.5 on Ubuntu Server 14.04 64 bit with 64 GB
 of RAM.  When running pg_dump on a specific table, I get the following
 error:
 
 pg_dump: Dumping the contents of table x_2013 failed:
 PQgetResult() failed.
 pg_dump: Error message from server: ERROR:  invalid memory alloc request
 size 18446744073709551613
 pg_dump: The command was: COPY public.x_2013 (...) TO stdout;
 pg_dump: [parallel archiver] a worker process died unexpectedly
 
 If I run a COPY TO file from psql I get the same error.
 
 Is this an indication of corrupted data?  What steps should I take?

In my experience, issues like this are caused by a corrupted varlena
header (i.e. corruption in text/varchar/... columns).

How exactly that corruption happened is difficult to say - it might be a
faulty hardware (RAM, controller, storage), it might be a bug (e.g.
piece of memory gets overwritten by random data). Or it might be a
consequence of incorrect hardware configuration (e.g. leaving the
on-disk write cache enabled).

If you have a backup of the data, use that instead of recovering the
data from the current database - it's faster and safer.

However, it might be worth spending some time analyzing the corruption
to identify the cause, so that you can prevent it next time.

There are tools that might help you with that - the pageinspect extension is
a way to look at the data files at a low level. It may be quite tedious,
though, and it may not work with badly broken data.
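
A minimal pageinspect session against the problem table might look like this
(a sketch, using the obfuscated table name from this thread and block 0):

CREATE EXTENSION IF NOT EXISTS pageinspect;

-- page header of the first block
SELECT * FROM page_header(get_raw_page('x_2013', 0));

-- per-tuple line pointers and header fields for the same block
SELECT lp, lp_off, lp_len, t_xmin, t_xmax, t_ctid, t_infomask
FROM heap_page_items(get_raw_page('x_2013', 0));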

Another option is pg_check - an extension I wrote a few years back. It
analyzes the data file and prints info on all corruption occurrences.
It's available at https://github.com/tvondra/pg_check and I just pushed
some minor fixes to make it 9.3-compatible.

regards
Tomas


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] invalid memory alloc request size

2014-12-10 Thread Gabriel Sánchez Martínez


On 12/10/2014 01:48 PM, Tomas Vondra wrote:

On 10.12.2014 17:07, Gabriel Sánchez Martínez wrote:

Hi all,

I am running PostgreSQL 9.3.5 on Ubuntu Server 14.04 64 bit with 64 GB
of RAM.  When running pg_dump on a specific table, I get the following
error:

pg_dump: Dumping the contents of table x_2013 failed:
PQgetResult() failed.
pg_dump: Error message from server: ERROR:  invalid memory alloc request
size 18446744073709551613
pg_dump: The command was: COPY public.x_2013 (...) TO stdout;
pg_dump: [parallel archiver] a worker process died unexpectedly

If I run a COPY TO file from psql I get the same error.

Is this an indication of corrupted data?  What steps should I take?

In my experience, issues like this are caused by a corrupted varlena
header (i.e. corruption in text/varchar/... columns).

How exactly that corruption happened is difficult to say - it might be a
faulty hardware (RAM, controller, storage), it might be a bug (e.g.
piece of memory gets overwritten by random data). Or it might be a
consequence of incorrect hardware configuration (e.g. leaving the
on-disk write cache enabled).

If you have a backup of the data, use that instead of recovering the
data from the current database - it's faster and safer.

However, it might be worth spending some time analyzing the corruption
to identify the cause, so that you can prevent it next time.

There are tools that might help you with that - the pageinspect extension is
a way to look at the data files at a low level. It may be quite tedious,
though, and it may not work with badly broken data.

Another option is pg_check - an extension I wrote a few years back. It
analyzes the data file and prints info on all corruption occurrences.
It's available at https://github.com/tvondra/pg_check and I just pushed
some minor fixes to make it 9.3-compatible.


Thanks.  I'll restore from the earliest available backup, and I'll take 
a look at pg_check.




regards
Tomas




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] new index type with clustering in mind.

2014-12-10 Thread Jack Douglas
 in 9.4, GIN indexes are pretty close to this already

Do I understand correctly that BRIN indexes will be even closer to this?

Kindest regards
Jack

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: 24 May 2014 22:46
To: Martijn van Oosterhout
Cc: Jack Douglas; pgsql-general@postgresql.org
Subject: Re: [GENERAL] new index type with clustering in mind.

Martijn van Oosterhout klep...@svana.org writes:
 On Sat, May 24, 2014 at 05:58:37PM +0100, Jack Douglas wrote:
 Would the following be practical to implement:
 A btree-like index type that points to *pages* rather than individual
rows.

 It's an interesting idea, but, how can you *ever* delete index entries?
 I.e. is there a way to maintain the index without rebuilding it 
 regularly?

The discussions at PGCon pointed out that with the posting-list compression
logic added in 9.4, GIN indexes are pretty close to this already.  Multiple
items on the same heap page will typically only take one byte of index space
per item; but there is an identifiable entry, so you don't get into these
questions of when VACUUM should remove entries, and it's not lossy so you're
not forced to pay the overhead of rechecking every entry on the linked-to
page.

Not to say that 9.4 GIN is necessarily the last word on the subject, but it
would be worth testing it out before deciding that we need something better.
(beta1 is out.  It needs testing.  Hint hint.)

regards, tom lane



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Use cases for lateral that do not involve a set returning function

2014-12-10 Thread Merlin Moncure
On Tue, Dec 9, 2014 at 4:24 AM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
 SELECT ...
 FROM people p
  LEFT JOIN LATERAL (SELECT * FROM names n
 WHERE n.people_id = p.people_id
 AND current_timestamp  n.validfrom
 ORDER BY n.validfrom DESC LIMIT 1) n
 ON TRUE
 WHERE p.id = ...

 With the correct index this touched fewer blocks and worked faster.
 Also, though this is of course a matter of taste, it is more readable.

 Of course this forces a nested loop, but that is not bad as such.
 In my case it was not a problem (I tried to hint at that with the WHERE clause).

I don't know...forcing a nestloop is a dealbreaker for many
distributions of data.

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] invalid memory alloc request size

2014-12-10 Thread Adrian Klaver

On 12/10/2014 10:08 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 01:00 PM, Adrian Klaver wrote:

On 12/10/2014 09:54 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 12:47 PM, Adrian Klaver wrote:

On 12/10/2014 09:25 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 11:49 AM, Adrian Klaver wrote:

On 12/10/2014 08:31 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 11:16 AM, Adrian Klaver wrote:

On 12/10/2014 08:07 AM, Gabriel Sánchez Martínez wrote:

Hi all,





Also how did the Postgres server get installed?


apt-get install


Install from what repo?


The default Ubuntu repository.











The database was

upgraded to 9.3 when Ubuntu was upgraded to 14.04.  But I ran it for a
while without issues.  This issue is recent.


So what was the version before?


9.1.  It has always been whatever was on the Ubuntu repos.



Where I am going with this, is trying to determine whether you have a
'contaminated' data directory.


I appreciate it!  Perhaps it is worth mentioning that the server had
some issues with the linux md raid a while back.  Two disks failed in
quick succession and some data was corrupted.  Those disks have been
replaced and the RAID has been fine ever since.  The database cluster
was recreated from scratch after that, from backups of a few months
before the disks started misbehaving.  This is when that table was
created with pg_restore.  There were no issues with the restore, and the
backup was from a few months before the disk issue, so I don't suspect
corruption in the backup.


I would investigate the possibility that the raid is having problems again.







--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] new index type with clustering in mind.

2014-12-10 Thread Alvaro Herrera
Jack Douglas wrote:
  in 9.4, GIN indexes are pretty close to this already
 
 Do I understand correctly that BRIN indexes will be even closer to this?
 

Yeah, in a way.  You could say they are closer from the opposite end.
There is one index tuple in a BRIN index for each page range (contiguous
set of pages); each index tuple contains a summary of what is in that
page range.  There are no exact entries.  If the values are randomly
scattered, the index is useless; all page ranges will have to be scanned
for possibly matching tuples.  If the values are perfectly clustered,
the index is optimal because you scan the minimal set of pages.
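
For reference, the syntax in the development tree looks roughly like this (a
sketch only - BRIN is not in any released version yet, and the table is
hypothetical):

CREATE TABLE measurements (
    ts    timestamptz NOT NULL,
    value double precision
);

-- one summary tuple (e.g. min/max of ts) per range of 128 heap pages
CREATE INDEX measurements_ts_brin
    ON measurements USING brin (ts) WITH (pages_per_range = 128);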

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] invalid memory alloc request size

2014-12-10 Thread Gabriel Sánchez Martínez


On 12/10/2014 02:34 PM, Adrian Klaver wrote:

On 12/10/2014 10:08 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 01:00 PM, Adrian Klaver wrote:

On 12/10/2014 09:54 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 12:47 PM, Adrian Klaver wrote:

On 12/10/2014 09:25 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 11:49 AM, Adrian Klaver wrote:

On 12/10/2014 08:31 AM, Gabriel Sánchez Martínez wrote:


On 12/10/2014 11:16 AM, Adrian Klaver wrote:

On 12/10/2014 08:07 AM, Gabriel Sánchez Martínez wrote:

Hi all,





Also how did the Postgres server get installed?


apt-get install


Install from what repo?


The default Ubuntu repository.











The database was

upgraded to 9.3 when Ubuntu was upgraded to 14.04.  But I ran it for a
while without issues.  This issue is recent.


So what was the version before?


9.1.  It has always been whatever was on the Ubuntu repos.



Where I am going with this, is trying to determine whether you have a
'contaminated' data directory.


I appreciate it!  Perhaps it is worth mentioning that the server had
some issues with the linux md raid a while back.  Two disks failed in
quick succession and some data was corrupted.  Those disks have been
replaced and the RAID has been fine ever since.  The database cluster
was recreated from scratch after that, from backups of a few months
before the disks started misbehaving.  This is when that table was
created with pg_restore.  There were no issues with the restore, and the
backup was from a few months before the disk issue, so I don't suspect
corruption in the backup.


I would investigate the possibility that the raid is having problems 
again.


Everything looks good right now.  I ran SMART checks on all drives and 
there were no errors.  I will restore the table from an earlier backup 
and let you know if that takes care of the issue.  Thanks for your help, 
and please let me know if there is anything else I could do.  I'll reply 
to the list if there are any developments.














--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] new index type with clustering in mind.

2014-12-10 Thread Jack Douglas
 If the values are perfectly clustered, the index is optimal because you
scan the minimal set of pages.

That's the bit I'm particularly interested in, as my plan would be to keep
the pages well clustered: http://dba.stackexchange.com/a/66293/1396

Do you see any blocker preventing BRIN being used for a continuous
background re-clustering job (in parallel with or as part of vacuum),
similar to the mechanism I experimented with before? If not, is this
something there might be support for adding to the TODO list?

Kindest regards
Jack




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Speeding up an in-progress wraparound-preventing vacuum

2014-12-10 Thread Jeff Janes
On Mon, Dec 8, 2014 at 4:58 PM, Vincent de Phily 
vincent.deph...@mobile-devices.fr wrote:

 On Monday 08 December 2014 10:17:37 Jeff Janes wrote:
  On Mon, Dec 8, 2014 at 4:54 AM, Vincent de Phily 

  I don't think that routine vacuums even attempts to update relfrozenxid,
 or
  at least doesn't try very hard.

 AFAIU it does (the 6th paragraph in 23.1.4 seems to imply it), and
 vacuum_freeze_min_age controls when. Perhaps lowering that value would help
 prepare a lot of the anti-wraparound work.


There are two things that make autovac do an anti-wraparound vacuum.  One
of them is if the table is already going to be vacuumed anyway
and vacuum_freeze_table_age is exceeded.  In that case, the vacuum that was
going to happen anyway gets promoted to be an anti-wraparound vacuum.  The
other case is when autovacuum_freeze_max_age is exceeded.  Then it forces a
vacuum to happen even when there is no other reason for it to occur.  I
think the language in the 6th paragraph is contrasting these two ways to
get an anti-wraparound vacuum, not contrasting ordinary vacuum with
anti-wraparound vacuum.
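
For what it's worth, a quick way to see how close each table is to that forced
threshold (a generic check, not specific to your schema):

SELECT c.relname,
       age(c.relfrozenxid) AS xid_age,
       current_setting('autovacuum_freeze_max_age')::bigint AS freeze_max_age
FROM pg_class c
WHERE c.relkind = 'r'
ORDER BY age(c.relfrozenxid) DESC
LIMIT 10;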



 Pity there's no frozen pages map (or is there?) to keep track of pages with
 all-frozen tuples, it could speed up that anti-wraparound vacuum.


But not speed it up by much.  As you discovered, most of the time is spent
dealing with the indexes.



  Are you sure that routine vacuums have been running to completion on this
  table, as opposed to getting interrupted by something before finishing
 each
  time?

 I checked pg_stat_user_tables.last_{auto_,}vacuum for this table and they
 are
 both null. This is seriously worrying. I've seen autovacuum take a few
 days on
 this table but always assumed that it finished properly. And I'm pretty
 sure
 vacuuming does some work, otherwise my disk usage would grow much faster. I
 have no idea what could cause vacuuming to systematically stop before the
 end.
 Maybe I'll find something in the logs.


If it gets interrupted by another process needing the lock, it will be in
the logs.  But more recent versions than 9.1 will log more useful
information.

If it is interrupted by a database shutdown or restart, it only logs that
it was shut down; it doesn't log what it was in the middle of doing at the
time.  (Perhaps that should be changed?).

Cheers,

Jeff


[GENERAL] ALTER TYPE ADD SEND AND RECEIVE

2014-12-10 Thread Manuel Kniep
I have a custom type and want to add the still-missing SEND and RECEIVE functions.
Is there any way to alter the type definition without dropping and recreating it?

Manuel



Re: [GENERAL] new index type with clustering in mind.

2014-12-10 Thread Alvaro Herrera
Jack Douglas wrote:
  If the values are perfectly clustered, the index is optimal because you
 scan the minimal set of pages.
 
 That's the bit I'm particularly interested in, as my plan would be to keep
 the pages well clustered: http://dba.stackexchange.com/a/66293/1396
 
 Do you see any blocker preventing BRIN being used for a continuous
 background re-clustering job (in parallel with or as part of vacuum),
 similar to the mechanism I experimented with before? If not is this
 something there might be support for adding to the TODO list?

In principle, CLUSTER sucks, and having data clustered is clearly good,
so improvements in this area are certainly welcome.

If you were to propose some general mechanism that works for any index,
I don't see that we would reject having it work specifically for BRIN;
having it for BRIN only would be strange.  (I guess it's good enough if
it works for btree and BRIN.  Not sure about GiST and SP-GiST; GIN
clearly is of no use here.)

Currently, one issue you're going to face is that BRIN doesn't rescan a
range to find the tightest possible summary tuple.  Thinking in min/max
terms, once a tuple with a very high or very low value is inserted, the range
doesn't get any smaller once that tuple is deleted from the range.  You
would need to find a way to fix that.  (The easiest way is to REINDEX
the whole thing, of course, but that processes the whole table and not
just some portion of it.)

Another issue is how to find the best possible ordering.  For minmax
opclasses it's easy, but for other opclass designs it's less clear what
to do.  Even for minmax you need to find some way to communicate to the
system what's the order to follow ...

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ALTER TYPE ADD SEND AND RECEIVE

2014-12-10 Thread Tom Lane
Manuel Kniep m.kn...@web.de writes:
 I have a custom type and want to add the yet missing  SEND and RECEIVE 
 functions
 is there anyway to alter the type definition without dropping and recreating 
 it?

There's no supported way to do that.  As an unsupported way, you could
consider a manual UPDATE on the type's pg_type row.
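
Something along these lines, for example (entirely unsupported, so try it on a
scratch database first; the type and function names below are hypothetical,
the send/receive functions must already exist, and you need to be superuser):

BEGIN;
UPDATE pg_type
   SET typsend    = 'my_type_send'::regproc,   -- my_type_send(my_type) RETURNS bytea
       typreceive = 'my_type_recv'::regproc    -- my_type_recv(internal) RETURNS my_type
 WHERE typname = 'my_type';
COMMIT;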

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Integrating C++ singletons into postgresql extensions???

2014-12-10 Thread vicky
Tom Lane-2 wrote
 FWIW, if you are using the logging collector (highly recommended), output
 to a backend process's stdout or stderr will be caught and included in the
 log, though it won't have a log_line_prefix.  This might be a usable
 substitute for adapting your code.
 
   regards, tom lane
 
 
 -- 
 Sent via pgsql-general mailing list (

 pgsql-general@

 )
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general

How do you use the logging collector from C++ code?

#include "utils/elog.h"

At linking time:
undefined reference to `elog_start(char const*, int, char const*)'

Vicky




--
View this message in context: 
http://postgresql.nabble.com/Integrating-C-singletons-into-postgresql-extensions-tp5823506p5830010.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Stored procedure workflow question

2014-12-10 Thread Israel Brewster
Currently, when I need to create/edit a stored procedure in Postgresql, my workflow goes like the following:

- Create/edit the desired function in my "DB Commands" text file
- Copy and paste function into my development database
- Test
- repeat above until it works as desired
- Copy and paste function into my production DB.

To edit an existing function, the workflow is basically the same as above, but I first have to find the function in my file.

This whole workflow just feels kludgy to me. Is there a better way? Or is that essentially the recommended procedure? Thanks.

---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---




Re: [GENERAL] Stored procedure workflow question

2014-12-10 Thread Gavin Flower

On 11/12/14 13:53, Israel Brewster wrote:
Currently, when I need to create/edit a stored procedure in 
Postgresql, my workflow goes like the following:


- Create/edit the desired function in my DB Commands text file
- Copy and paste function into my development database
- Test
- repeat above until it works as desired
- Copy and paste function into my production DB.

To edit an existing function, the workflow is basically the same as 
above, but I first have to find the function in my file.


This whole workflow just feels kludgy to me. Is there a better way? Or 
is that essentially the recommended procedure? Thanks.

---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---





I create an SQL file using a text editor, and then execute it in psql 
using the '\i' command from the appropriate directory:


gavin= \i bus.sql

In your case I would test it in one environment and copy it to another.

You could use git to track versions of the file and the nature of changes.

Though, I am sure there are sophisticated ways of doing this!


Cheers,
Gavin



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Stored procedure workflow question

2014-12-10 Thread Adrian Klaver

On 12/10/2014 04:53 PM, Israel Brewster wrote:

Currently, when I need to create/edit a stored procedure in Postgresql,
my workflow goes like the following:

- Create/edit the desired function in my DB Commands text file
- Copy and paste function into my development database
- Test
- repeat above until it works as desired
- Copy and paste function into my production DB.

To edit an existing function, the workflow is basically the same as
above, but I first have to find the function in my file.


For above use \ef in psql. See here for more details:

http://www.postgresql.org/docs/9.3/interactive/app-psql.html

You can save the file outside the database if needed.



This whole workflow just feels kludgy to me. Is there a better way? Or
is that essentially the recommended procedure? Thanks.


Keep the functions in separate files. Do either:

psql -d some_database -U some_user -f function_file.sql

or from within psql

\i some/path/to_file/function_file.sql



---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---








--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Stored procedure workflow question

2014-12-10 Thread Rob Sargent

On 12/10/2014 05:53 PM, Israel Brewster wrote:
Currently, when I need to create/edit a stored procedure in 
Postgresql, my workflow goes like the following:


- Create/edit the desired function in my DB Commands text file
- Copy and paste function into my development database
- Test
- repeat above until it works as desired
- Copy and paste function into my production DB.

To edit an existing function, the workflow is basically the same as 
above, but I first have to find the function in my file.


This whole workflow just feels kludgy to me. Is there a better way? Or 
is that essentially the recommended procedure? Thanks.

---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---





I don't quite follow "cut and paste it into some database", but your 
method doesn't sound any more kludgy than the code-compile-test-install 
workflow in other coding environments. I will say emacs makes this a 
snap, but I won't say how ;)




Re: service allowing arbitrary relations was Re: [GENERAL] hstore, but with fast range comparisons?

2014-12-10 Thread Guyren Howe


 I want to do something that is perfectly satisfied by an hstore column. 
 *Except* that I want to be able to do fast (ie indexed) <, > etc 
 comparisons, not just equality.
 
 From what I can tell, there isn’t really any way to get hstore to do this, 
 so I’ll have to go to a key-value table. But I thought I’d ask just in case 
 I’m missing something.
 
 
 I think you're missing something.
 
 Is it one field in the hstore?  Did you try an expression index?
 
 create index cars_mph on cars ( (data->'mph') );
 
 that's a btree index, which should support < and >.  (Although I've never 
 tried it)

Sorry I wasn’t clear. I need a fast comparison on whatever keys someone chooses 
to put in the hstore.

I’m creating a web service where you can post whatever keys and values you 
wish. I am leaning toward having folks declare the relations they want to store 
and the field types, so I could just create a table for each one, but I want 
this to be able to scale to a lot of data for a lot of people.

Perhaps I could give everyone their own schema and just create tables for their 
relations. How heavyweight are schemas?

But if I don’t want to do that, I’m leaning toward a table with a relation name 
and a hstore, and then separate k-v tables for different types. I was thinking 
of keeping the hstore because they will often be searching on fewer fields than 
they’re retrieving, so this would avoid having to do a join for every field 
they need.

Regards,

Guyren G Howe
Relevant Logic LLC
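For what it's worth, a rough sketch of the separate key-value table described above, for numeric values only (table and column names are invented for the example); a plain btree on (relation, key, value) gives indexed equality on the key plus <, > and range comparisons on the value:

    CREATE TABLE kv_numeric (
        relation text             NOT NULL,  -- caller-declared relation name
        row_id   bigint           NOT NULL,  -- logical row the value belongs to
        key      text             NOT NULL,
        value    double precision NOT NULL
    );

    CREATE INDEX kv_numeric_lookup ON kv_numeric (relation, key, value);

    -- example range lookup that can use the index
    SELECT row_id
      FROM kv_numeric
     WHERE relation = 'cars'
       AND key = 'mph'
       AND value > 100;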




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: [GENERAL] I did some testing of GIST/GIN vs BTree indexing…

2014-12-10 Thread Guyren Howe
On Dec 6, 2014, at 12:38 , Bruce Momjian br...@momjian.us wrote:
 
 On Wed, Dec  3, 2014 at 01:15:50AM -0800, Guyren Howe wrote:
 GIN is certainly not the “three times” size suggested in the docs, but 
 perhaps
 that just hasn’t been updated for the 9.4 improvements. Certainly, there 
 isn’t
 sufficient difference here to make the BTree advantage compelling in most
 applications.
 
 I am sure the docs need updating for 9.4 — any suggestions?

I want to get to the point where I can make fairly definitive statements about 
indexing regular fields with GIST or GIN. When I do, I’ll be happy to write 
something for the docs. If folks here can help me get to that point, all to the 
better of all… :-)

 Given the futility of database benchmarking in general, I didn’t want to go 
 any
 further with this. What I was interested in was whether it might be worth
 switching from BTree to GIST/GIN indexes with regular sorts of data. It 
 appears
 to be the case that GIST and GIN are often better than BTree in general, and
 given their much greater flexibility in satisfying queries on different
 columns, it might even be the case that one should recommend a single GIST or
 GIN index on the frequently-searched columns of a table in most cases?
 
 What GiST and GIN ops did you use for the testing?  Was it
 contrib/btree_gist and contrib/btree_gin?

Sorry; yes. I didn’t realize there was any practical alternative. Is there 
another option I should test?

 You might want to look at my presentation on indexing:
 
   http://momjian.us/main/presentations/features.html#indexing
 
 It is my understanding that btree is best for single-match indexes like
 unique indexes, or range queries (not range data types), while GIN is
 best for indexes with many duplicates.  GiST is more of an indexing
  framework and I am unclear where it is best except in cases where it is the
 only option, like geometry and perhaps range (shared with SP-GiST). 
 With the 9.4 GIN improvements I am unclear if GiST is ever better for
 full text indexing compared to GIN.

Thanks for this. I will look at your presentation.

As I say, if folks can help me work out the definitive answer to all this, I’d 
love to contribute it to the docs.

My starting point was this: given that GIN (and GIST, maybe, the docs sort-of 
say “sort of”) can use arbitrary index fields, rather than left to right, if 
you’re in a situation of wanting to query arbitrary subsets of some of the 
fields on a table, it seems likely that a GIN index might be called for. Is 
that right? The description I’ve been able to find (that it’s a BTree with more 
sophisticated handling of duplicates) would surely entail otherwise, but this 
is clearly what the docs say.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Stored procedure workflow question

2014-12-10 Thread Adrian Klaver

On 12/10/2014 05:03 PM, Gavin Flower wrote:

On 11/12/14 13:53, Israel Brewster wrote:

Currently, when I need to create/edit a stored procedure in
Postgresql, my workflow goes like the following:

- Create/edit the desired function in my DB Commands text file
- Copy and paste function into my development database
- Test
- repeat above until it works as desired
- Copy and paste function into my production DB.

To edit an existing function, the workflow is basically the same as
above, but I first have to find the function in my file.

This whole workflow just feels kludgy to me. Is there a better way? Or
is that essentially the recommended procedure? Thanks.
---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---






I create an SQL file using a text editor, and then execute it in psql
using the '\i' command from the appropriate directory:

gavin= \i bus.sql

In your case I would test it in one environment and copy it to another.

You could use git to track versions of the file and the nature of changes.

Though, I am sure there are sophisticated ways of doing this!


Two that come to mind:

Sqitch

http://sqitch.org/

Alembic

https://alembic.readthedocs.org/en/latest/




Cheers,
Gavin






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Stored procedure workflow question

2014-12-10 Thread Paul Jungwirth
How do you handle DDL changes in general? I would treat stored
procedures the same way. For instance Ruby on Rails has database
migrations where you write one method to apply the DDL change and
another to revert it, like this:

def up
  add_column :employees, :manager_id, :integer
  add_index :employees, :manager_id
end

def down
  remove_column :employees, :manager_id
end

You could create stored procedures like:

def up
  connection.execute <<-EOQ
CREATE OR REPLACE FUNCTION
...
  EOQ
end

or even:

def up
  connection.execute File.read(Rails.root + 'db' + 'procs' +
'my_function.sql')
end

That's how I'd do it in Rails. Maybe your development context has
something similar?

Paul



On Wed, Dec 10, 2014 at 5:53 PM, Adrian Klaver
adrian.kla...@aklaver.com wrote:
 On 12/10/2014 05:03 PM, Gavin Flower wrote:

 On 11/12/14 13:53, Israel Brewster wrote:

 Currently, when I need to create/edit a stored procedure in
 Postgresql, my workflow goes like the following:

 - Create/edit the desired function in my DB Commands text file
 - Copy and paste function into my development database
 - Test
 - repeat above until it works as desired
 - Copy and paste function into my production DB.

 To edit an existing function, the workflow is basically the same as
 above, but I first have to find the function in my file.

 This whole workflow just feels kludgy to me. Is there a better way? Or
 is that essentially the recommended procedure? Thanks.
 ---
 Israel Brewster
 Systems Analyst II
 Ravn Alaska
 5245 Airport Industrial Rd
 Fairbanks, AK 99709
 (907) 450-7293
 ---





  I create an SQL file using a text editor, and then execute it in psql
 using the '\i' command from the appropriate directory:

 gavin= \i bus.sql

  In your case I would test it in one environment and copy it to another.

 You could use git to track versions of the file and the nature of changes.

 Though, I am sure there are sophisticated ways of doing this!


 Two that come to mind:

 Sqitch

 http://sqitch.org/

 Alembic

 https://alembic.readthedocs.org/en/latest/



 Cheers,
 Gavin





 --
 Adrian Klaver
 adrian.kla...@aklaver.com



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



-- 
_
Pulchritudo splendor veritatis.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] I did some testing of GIST/GIN vs BTree indexing…

2014-12-10 Thread Bruce Momjian
On Wed, Dec 10, 2014 at 05:27:16PM -0800, Guyren Howe wrote:
  Given the futility of database benchmarking in general, I didn’t
  want to go any further with this. What I was interested in was
  whether it might be worth switching from BTree to GIST/GIN indexes
  with regular sorts of data. It appears to be the case that GIST and
  GIN are often better than BTree in general, and given their much
  greater flexibility in satisfying queries on different columns, it
  might even be the case that one should recommend a single GIST or
  GIN index on the frequently-searched columns of a table in most
  cases?
 
  What GiST and GIN ops did you use for the testing?  Was it
  contrib/btree_gist and contrib/btree_gin?

 Sorry; yes. I didn’t realize there was any practical alternative. Is
 there another option I should test?

Well, GIN and GiST are usually used for data that btree can't index,
e.g. JSONB, 2-dimensional points.  I thought the only win for
contrib/btree_gist and contrib/btree_gin would be for indexes with many
duplicates.

  You might want to look at my presentation on indexing:
 
  http://momjian.us/main/presentations/features.html#indexing
 
  It is my understanding that btree is best for single-match indexes
  like unique indexes, or range queries (not range data types), while
  GIN is best for indexes with many duplicates.  GiST is more of an
  indexing framework and I am unclear where it is best except in cases
  where it is the only option, like geometry and perhaps range (shared
  with SP-GiST).  With the 9.4 GIN improvements I am unclear if GiST
  is ever better for full text indexing compared to GIN.

 Thanks for this. I will look at your presentation.

 As I say, if folks can help me work out the definitive answer to all
 this, I’d love to contribute it to the docs.

Great, thanks.

 My starting point was this: given that GIN (and GIST, maybe, the docs
 sort-of say “sort of”) can use arbitrary index fields, rather
 than left to right, if you’re in a situation of wanting to query
 arbitrary subsets of some of the fields on a table, it seems likely
 that a GIN index might be called for. Is that right? The description
 I’ve been able to find (that it’s a BTree with more sophisticated
 handling of duplicates) would surely entail otherwise, but this is
 clearly what the docs say.

Are you saying when you use a GIN index on a,b,c fields, you can do
lookups on them independently, like 'c'?  I was not aware that works,
but it might.  I know it doesn't work for traditional btree as the index
is hierarchical.  You can look up things like a,c and it will skip over
'b', but doing 'c' alone doesn't make any sense for traditional btree.

It would be interesting if that was true, though, and something we
should more clearly document.  Your testing is very useful here.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +
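As a small illustration of the point in question (table and column names are invented), contrib/btree_gin lets ordinary scalar columns go into a multicolumn GIN index, and such an index can be used for conditions on any subset of its columns, unlike a btree, which is only efficient when the leading column(s) are constrained:

    CREATE EXTENSION IF NOT EXISTS btree_gin;

    CREATE TABLE t (a int, b int, c text);

    CREATE INDEX t_gin_abc ON t USING gin (a, b, c);

    -- unlike a btree on (a, b, c), this index can be used for a
    -- condition on c alone (whether the planner actually picks it
    -- depends on the table's data and statistics):
    EXPLAIN SELECT * FROM t WHERE c = 'foo';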


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Defining functions for arrays of any number type

2014-12-10 Thread Paul Jungwirth
Hello,

I'm working on a package of functions that compute statistics on
arrays of numbers. For example this one computes a histogram from a
bunch of values plus some bucket characteristics:

CREATE OR REPLACE FUNCTION
array_to_hist(double precision[], double precision, double precision, int)
RETURNS int[]
AS 'aggs_for_arrays', 'array_to_hist'
LANGUAGE c;

Now suppose I wanted this function to accept not just arrays of double
precision values, but arrays of any numeric type. I don't see any
pseudotype like that in this list:

http://www.postgresql.org/docs/9.3/static/datatype-pseudo.html

So how would you declare this function? Should I use anyarray for the
first parameter and anyelement for the next two (start of first bucket
and bucket width), and then just have the implementation complain if
the type isn't numeric? Or is there a better way than that?

Is it legal to define a bunch of functions all called `array_to_hist`
for the different numeric types, and have them all implemented by the
same C function?

Thanks,
Paul

-- 
_
Pulchritudo splendor veritatis.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Stored procedure workflow question

2014-12-10 Thread Melvin Davidson
I suggest you download and install PgAdmin.

http://www.pgadmin.org/index.php

It makes review of functions and other database objects, as well as
maintenance, a lot easier.

Otherwise, you can just use psql

eg:
psql your_database
\o /some_dir/your_proc_filename
\sf+ your_proc
\q

Your function definition will now be in /some_dir/your_proc_filename


On Wed, Dec 10, 2014 at 10:07 PM, Paul Jungwirth 
p...@illuminatedcomputing.com wrote:

 How do you handle DDL changes in general? I would treat stored
 procedures the same way. For instance Ruby on Rails has database
 migrations where you write one method to apply the DDL change and
 another to revert it, like this:

 def up
   add_column :employees, :manager_id, :integer
   add_index :employees, :manager_id
 end

 def down
   remove_column :employees, :manager_id
 end

 You could create stored procedures like:

 def up
   connection.execute <<-EOQ
 CREATE OR REPLACE FUNCTION
 ...
   EOQ
 end

 or even:

 def up
   connection.execute File.read(Rails.root + 'db' + 'procs' +
 'my_function.sql')
 end

 That's how I'd do it in Rails. Maybe your development context has
 something similar?

 Paul



 On Wed, Dec 10, 2014 at 5:53 PM, Adrian Klaver
 adrian.kla...@aklaver.com wrote:
  On 12/10/2014 05:03 PM, Gavin Flower wrote:
 
  On 11/12/14 13:53, Israel Brewster wrote:
 
  Currently, when I need to create/edit a stored procedure in
  Postgresql, my workflow goes like the following:
 
  - Create/edit the desired function in my DB Commands text file
  - Copy and paste function into my development database
  - Test
  - repeat above until it works as desired
  - Copy and paste function into my production DB.
 
  To edit an existing function, the workflow is basically the same as
  above, but I first have to find the function in my file.
 
  This whole workflow just feels kludgy to me. Is there a better way? Or
  is that essentially the recommended procedure? Thanks.
  ---
  Israel Brewster
  Systems Analyst II
  Ravn Alaska
  5245 Airport Industrial Rd
  Fairbanks, AK 99709
  (907) 450-7293
  ---
 
 
 
 
 
   I create an SQL file using a text editor, and then execute it in psql
  using the '\i' command from the appropriate directory:
 
  gavin= \i bus.sql
 
   In your case I would test it in one environment and copy it to another.
 
  You could use git to track versions of the file and the nature of
 changes.
 
  Though, I am sure there are sophisticated ways of doing this!
 
 
  Two that come to mind:
 
  Sqitch
 
  http://sqitch.org/
 
  Alembic
 
  https://alembic.readthedocs.org/en/latest/
 
 
 
  Cheers,
  Gavin
 
 
 
 
 
  --
  Adrian Klaver
  adrian.kla...@aklaver.com
 
 
 
  --
  Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
  To make changes to your subscription:
  http://www.postgresql.org/mailpref/pgsql-general



 --
 _
 Pulchritudo splendor veritatis.


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Defining functions for arrays of any number type

2014-12-10 Thread Tom Lane
Paul Jungwirth p...@illuminatedcomputing.com writes:
 Is it legal to define a bunch of functions all called `array_to_hist`
 for the different numeric types, and have them all implemented by the
 same C function?

Sure.

(There is a regression test that objects if we try to do that with
built-in functions, but it's meant to catch accidental errors rather
than intentional overloading.  As long as the C code is prepared to
handle all the cases, you can do what you like.)

regards, tom lane
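Following the declaration quoted in the original question, the overloads might look roughly like this (a sketch only; which element types to support is up to the implementation, and the shared C function has to check what it actually receives):

    CREATE OR REPLACE FUNCTION
    array_to_hist(double precision[], double precision, double precision, int)
    RETURNS int[]
    AS 'aggs_for_arrays', 'array_to_hist'
    LANGUAGE c;

    CREATE OR REPLACE FUNCTION
    array_to_hist(numeric[], numeric, numeric, int)
    RETURNS int[]
    AS 'aggs_for_arrays', 'array_to_hist'
    LANGUAGE c;

    CREATE OR REPLACE FUNCTION
    array_to_hist(integer[], integer, integer, int)
    RETURNS int[]
    AS 'aggs_for_arrays', 'array_to_hist'
    LANGUAGE c;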


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general