Re: [GENERAL] Serialization exception : Who else was involved?

2014-12-02 Thread Olivier MATROT
Hello,

 

I pushed the log level up to DEBUG5, with no luck.

 

So I decided to take a look at the code. 

Serialization conflict detection is done in 
src/backend/storage/lmgr/predicate.c, where transactions that are doomed to 
fail are marked as such with the SXACT_FLAG_DOOMED flag.

 

I simply added elog() calls at the DEBUG1 level each time the flag is set,
compiled the code, and gave it a try.

 

The results are amazing for me, because this simple modification allows me to
see which query is marking other running transactions as doomed to fail.

Without this information, it is really difficult to understand what's going
on and, eventually, to modify my program to make my transactions run
successfully more often.

 

What is the correct way to suggest this improvement for a future version of
PostgreSQL?

 

Regards.

 

Olivier.

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On behalf of Olivier MATROT
Sent: Tuesday, November 25, 2014 13:00
To: pgsql-general@postgresql.org
Subject: [GENERAL] Serialization exception : Who else was involved?

 

I'm using PostgreSQL 9.2.8 on Windows x64.

 

When a transaction at the SERIALIZABLE isolation level fails, is there a way
to know which connection(s), and thus transaction(s), were involved in the
overall process?

 

I'm working on an accounting database. I know that we should retry the
transaction, and this is what we do. I've encountered an issue where a
transaction has to be run 10 times before it succeeds on a busy system.

 

There must be a serious problem here. I'm wondering if there is a
configuration parameter that would make the system report, in the exception
detail, the other connections involved in the failure.

 

The PostgreSQL wiki page on SSI has no such information.

 

Thanks in advance.

 

Olivier.



Re: [GENERAL] Serialization exception : Who else was involved?

2014-12-02 Thread Albe Laurenz
Olivier MATROT wrote:
 Serialization conflict detection is done in 
 src/backend/storage/lmgr/predicate.c, where transactions
 that are doomed to fail are marked as such with the SXACT_FLAG_DOOMED flag.
 
 I simply added elog() calls at the DEBUG1 level each time the flag is set,
 compiled the code and gave it a try.
 
 The results are amazing for me, because this simple modification allows me to 
 see which query is
 marking other running transactions to fail.
 
 Without this information, it is really difficult to understand what's going
 on and, eventually,
 modify my program to make my transactions run successfully more often.
 
 What is the correct way to suggest this improvement for a future version of 
 PostgreSQL ?

First you should make this suggestion on the -hackers list; if you have
a patch against HEAD, attach it.

If you get positive or encouraging feedback, add the patch to the next 
commitfest.
People who contribute code are also expected to review code.

Read the Developer FAQ:
https://wiki.postgresql.org/wiki/Developer_FAQ

Another helpful article:
https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F

Yours,
Laurenz Albe



Re: [GENERAL] Partitioning of a dependent table not based on date

2014-12-02 Thread Herouth Maoz

On 01/12/2014, at 19:26, Andy Colson wrote:

 On 12/1/2014 11:14 AM, Herouth Maoz wrote:
 I am currently in the process of creating a huge archive database that
 contains data from all of our systems, going back for almost a decade.
 
 Most of the tables fall into one of two categories:
 
 1. Static tables, which are rarely updated, such as lookup tables or
 user lists. I don't intend to partition these, I'll just refresh them
 periodically from production.
 2. Transaction tables, that have a timestamp field, for which I have the
 data archived in COPY format by month. Of course a monolithic table over
 a decade is not feasible, so I am partitioning these by month.
 
 (I don't mean transaction in the database sense, but in the sense that
 the data represents historical activity, e.g. message sent, file
 downloaded etc.)
 
 I have one table, though, that doesn't fall into this pattern. It's a
 many-to-one table relating to one of the transaction tables. So on one
 hand, it doesn't have a time stamp field, and on the other hand, it has
 accumulated lots of data over the last decade so I can't keep it
 unpartitioned.
 
 
 Lets stop here.  One big table with lots of rows (and a good index) isn't a 
 problem.  As long as you are not table scanning everything, there isn't a 
 reason to partition the table.
 
 Lots and lots of rows isn't a problem except for a few usage patterns:
 1) delete from bigtable where (some huge percent of the rows)
 2) select * from bigtable where (lots and lots of table scanning and can't
 really index)
 
 If your index is selective enough, you'll be fine.


Hmm. I suppose you're right. I planned the whole partitioning thing in the
first place because most of my transaction tables are still alive, so I'll need
to continue bulk-inserting data every month, and inserting into a fresh
partition is better than inserting into a huge table.
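
For reference, a minimal sketch of the inheritance-based monthly partitioning
pattern I mean (untested; table and column names here are illustrative only,
not my real schema):

-- Parent table; names are made up for the example.
CREATE TABLE transactions (
    id      bigint,
    sent_at timestamptz NOT NULL
);

-- One child table per month, constrained so the planner can prune it:
CREATE TABLE transactions_2014_11 (
    CHECK (sent_at >= DATE '2014-11-01' AND sent_at < DATE '2014-12-01')
) INHERITS (transactions);

-- A monthly bulk load then targets the child table directly, e.g.:
-- COPY transactions_2014_11 FROM '/archive/2014-11.copy';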

But in this case, since we stopped working on this application in January,
there will be no fresh inserts, so it's not as important. We just need the
archive for legal purposes.

One thing, though: I noticed on my other system (a reports system that holds a
year's worth of data) that after I partitioned the largest tables, backup
time dropped. I suppose pg_dump of a single huge table is not as fast as
pg_dump of multiple smaller ones.
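
For what it's worth, 9.3's directory-format dump can also parallelize across
tables, which favors many smaller tables over one monolithic one (a sketch;
the database name and job count are illustrative):

pg_dump -Fd -j 4 -f /backups/archive_dump archivedb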

Herouth



[GENERAL] Auto vacuum not running -- Could not bind socket for statistics collector

2014-12-02 Thread Tim Schäfer
Dear list,


I am having trouble running PostgreSQL 9.3 under openSUSE because autovacuum
does not seem to work.

Here are the details on my version:

# select version();
PostgreSQL 9.3.5 on x86_64-suse-linux-gnu, compiled by gcc (SUSE Linux) 4.8.1
20130909 [gcc-4_8-branch revision 202388], 64-bit

After starting the server with pg_ctl start, I get the following entries in the
logs:

2014-12-02 15:27:36 CET LOG:  could not bind socket for statistics
collector: Cannot assign requested address
2014-12-02 15:27:36 CET LOG:  disabling statistics collector for lack of
working socket
2014-12-02 15:27:36 CET WARNING:  autovacuum not started because of
misconfiguration
2014-12-02 15:27:36 CET HINT:  Enable the track_counts option.


BUT: track_counts is set to on in the postgresql.conf file (and so is
autovacuum).
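
For what it's worth, the values the running server is actually using (as
opposed to what the file says) can be checked directly:

SHOW track_counts;

SELECT name, setting, source
FROM pg_settings
WHERE name IN ('track_counts', 'autovacuum');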


I found some older threads via Google where the advice given was to check
that the listen addresses resolve to the proper IP addresses, but that is
already the case for me:
 grep listen_address /var/lib/pgsql/data/postgresql.conf
listen_addresses = '127.0.0.1, 192.168.185.41'  # what IP address(es) to
listen on;

 /sbin/ifconfig | grep eth0 -C 2
eth0  Link encap:Ethernet  HWaddr 00:25:90:5A:B0:42
  inet addr:192.168.185.41  Bcast:192.168.185.255  Mask:255.255.255.0
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

 ping localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.026 ms
...

Some threads claim this is only a warning and that AV would still be running,
but this is not the case:
# SELECT
  schemaname, relname,
  last_vacuum, last_autovacuum,
  vacuum_count, autovacuum_count
FROM pg_stat_user_tables;

 schemaname |           relname            | last_vacuum | last_autovacuum | vacuum_count | autovacuum_count
------------+------------------------------+-------------+-----------------+--------------+------------------
 public     | plcc_motiftype               |             |                 |            0 |                0
 public     | plcc_ssecontact_complexgraph |             |                 |            0 |                0
 public     | plcc_nm_ssetoproteingraph    |             |                 |            0 |                0
 public     | plcc_ssetypes                |             |                 |            0 |                0
 public     | plcc_contact                 |             |                 |            0 |                0
 public     | plcc_complexcontacttypes     |             |                 |            0 |                0
 public     | plcc_protein                 |             |                 |            0 |                0
 public     | plcc_contacttypes            |             |                 |            0 |                0
 public     | plcc_graphtypes              |             |                 |            0 |                0
 public     | plcc_sse                     |             |                 |            0 |                0
 public     | plcc_secondat                |             |                 |            0 |                0
 public     | plcc_nm_ssetofoldinggraph    |             |                 |            0 |                0
 public     | plcc_fglinnot                |             |                 |            0 |                0
 public     | plcc_complex_contact         |             |                 |            0 |                0
 public     | plcc_foldinggraph            |             |                 |            0 |                0
 public     | plcc_ligand                  |             |                 |            0 |                0
 public     | plcc_nm_ligandtochain        |             |                 |            0 |                0
 public     | plcc_graph                   |             |                 |            0 |                0
 public     | plcc_graphlets               |             |                 |            0 |                0
 public     | plcc_motif                   |             |                 |            0 |                0
 public     | plcc_chain                   |             |                 |            0 |                0
 public     | plcc_complexgraph            |             |                 |            0 |                0
 public     | plcc_nm_chaintomotif         |             |                 |            0 |                0
 public     | plcc_graphletsimilarity      |             |                 |            0 |                0


At the moment, 64 parallel instances of a custom Java application write a lot
of data into this database server (this is a computer cluster), so it SHOULD
vacuum.

Doing simple SELECTs takes a long time after some hours (a 'SELECT count(*)'
from a table with 5.5M rows takes 4 secs). When I run VACUUM manually, it
works (takes very long though), and afterwards the SELECTs are fast again
(2ms). But this changes 

Re: [GENERAL] Programmatic access to interval units

2014-12-02 Thread Nelson Green
On Mon, Dec 1, 2014 at 2:14 PM, Merlin Moncure mmonc...@gmail.com wrote:

 On Mon, Dec 1, 2014 at 10:42 AM, Nelson Green nelsongree...@gmail.com
 wrote:
  Good morning list,
 
  According to the documentation for interval data type inputs, the unit
 can
  be one of microsecond, millisecond, second, minute, hour, day, week,
 month,
  year, decade, century, or millennium. Are these units stored in a catalog
  somewhere? I would like to access them programmatically if possible, to
  validate input for a function I am developing.

 if you're writing C, you can use libpqtypes to do this. It exposes the
 interval as a C structure.

 typedef struct
 {
 int years;
 int mons;
 int days;
 int hours;
 int mins;
 int secs;
 int usecs;
 } PGinterval;


 merlin


Thanks Merlin. I am not writing this in C; rather, I am using PL/pgSQL.
Apologies for not mentioning that up front. I was hoping to do a
SELECT ... WHERE IN query from a catalog relation.

That being said, maybe it is time for me to get back into C? I haven't done
much in C in many years, but this simple validation function might not be a
bad jumping-off point. If I do not get the response I was hoping for, I may
just do that.

Regards,
Nelson


Re: [GENERAL] Auto vacuum not running -- Could not bind socket for statistics collector

2014-12-02 Thread Tom Lane
Tim Schäfer ts...@rcmd.org writes:
 After starting the server with pg_ctl start, I get the following entries in 
 the
 logs:

 2014-12-02 15:27:36 CET LOG:  could not bind socket for statistics
 collector: Cannot assign requested address
 2014-12-02 15:27:36 CET LOG:  disabling statistics collector for lack of
 working socket

Yes, this will break autovacuum, because it won't have any way to find out
what it should vacuum.  The cause probably is a DNS issue: localhost
isn't resolving to anything sensible.  dig localhost on the command line
might offer some insight.

 I found some older threads via Google where the advice given was to check
 that the listen addresses resolve to the proper IP addresses, but that is
 already the case for me:
 grep listen_address /var/lib/pgsql/data/postgresql.conf

listen_addresses is not related to this.  The stats code tries to bind
to whatever localhost resolves as, independently of that.
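
For reference, localhost is normally resolved from /etc/hosts; entries like
the following are what the statistics socket needs:

127.0.0.1   localhost
::1         localhost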

regards, tom lane




[GENERAL] Postgres 9.2 PITR testing to before DROP DATABASE ends up removing file system files

2014-12-02 Thread Joshua Boyd
I am testing out point in time recovery from a hot physical backup in a
disaster recovery situation - I turned on archiving of files, created a hot
physical backup, then (after letting it run for a few days) issued a DROP
DATABASE.  The pg_log file shows the DROP DATABASE command was issued at
'2014-11-28 10:20:00.010 PST'.  I shut down the server, moved the pgdata
directory to pgdata_backup ... restored the files in the hot physical
backup I made, copied the wal archive files from pgdata_backup to the (new)
pgdata archive, cleared out the new pg_xlog dir and copied the files from
the old pg_xlog into the new..  Set up a recovery.conf file as such:

restore_command = 'gunzip -c /home/pg2dev/joshtest/pgdata/archive/%f.gz > %p'
recovery_target_time = '2014-11-28 10:20:00.010 PST'
recovery_target_inclusive = false

then I started the server up.  the pg_log shows the following:

2014-11-28 14:22:55.059 PST LOG:  database system was interrupted; last
known up at 2014-11-24 11:34:14 PST
2014-11-28 14:22:55.060 PST LOG:  starting point-in-time recovery to
2014-11-28 10:20:00.01-08
2014-11-28 14:22:55.239 PST LOG:  restored log file
000100A0 from archive
2014-11-28 14:22:55.243 PST LOG:  redo starts at 0/A080
2014-11-28 14:22:55.244 PST LOG:  consistent recovery state reached at
0/A100
2014-11-28 14:22:55.412 PST LOG:  restored log file
000100A1 from archive
2014-11-28 14:22:55.674 PST LOG:  restored log file
000100A2 from archive
2014-11-28 14:22:55.777 PST LOG:  recovery stopping before commit of
transaction 235078, time 2014-11-28 10:20:00.179303-08
2014-11-28 14:22:55.777 PST LOG:  redo done at 0/A2F0F4B0
2014-11-28 14:22:55.777 PST LOG:  last completed transaction was at log
time 2014-11-28 09:47:07.132608-08

Which looks fine and dandy.  I connect to database with psql and list the
databases with \l ... the database that WAS dropped is listed, which looks
fine and dandy.

I try to connect to the database and it gives:

psql: FATAL:  database jasperserver_restore does not exist
DETAIL:  The database subdirectory base/907110 is missing.

And then I look in pgdata/base .. and sure enough, that directory is
missing.  I examine my hot physical backup file and that directory exists
within it.

So even though the recovery SAYS "recovery stopping before commit of
transaction 235078" ... it doesn't appear that it's 100% accurate. It
didn't commit the transaction, clearly, because the database is still
listed in the data dictionary; however, the filesystem files are gone.
Please - am I doing something wrong, or would this be considered a bug?

-- 
Joshua Boyd


Re: [GENERAL] Problem with pg_dump and decimal mark

2014-12-02 Thread Eric Svenson
Seems you have a locale mismatch issue. The dump is coming from a locale
 where a '.' is the decimal mark and is being restored to a locale where
 ',' is the mark. Look at what the locales are the machines that work and
 the one that does not.



I have already done that and found something strange:

On the PC where the backup was done with pg_dump, all locale settings of
Postgres were English/United States. (LC_COLLATE, LC_CTYPE, LC_MONETARY,
LC_NUMERIC in postgresql.conf)

On the first PC on which I tried to load the backup file with psql, all
locale settings of Postgres were German_Germany. Everything was ok; the SQL
file with '.' as decimal point was accepted without a problem.

On the second PC (Virtual Machine) I had the SAME settings in
postgresql.conf (German_Germany)

- no success

I tried to change all the settings to English/United States, restart
postgres

- still no success

Changed all Windows settings to English / United States

- still no success.

So what I am searching for (at the moment without success) is the 'switch'
that decides which decimal separator psql expects.


 That's what it sounds like all right, but how could that be?  The behavior
 of float8in/float8out is not supposed to be locale-dependent.

 float8in does depend on strtod(), whose behavior is locale-dependent
 according to POSIX, but we keep LC_NUMERIC set to C to force it to
 only believe that . is decimal point.


 Not sure if this makes a difference but if I am reading the original post
 correctly the OP was trying a plain text restore via psql.


This is correct.


regards and thanks for your support,
Eric Svenson


[GENERAL] update several columns from function returning several values

2014-12-02 Thread Rémi Cura
Hey,
a trivial question I guess, but I can't make it work.

If I define a function returning several values, I can't use it to update
multiple columns of a table at once.
I __don't__ want to use a CTE or a subquery,
and of course I don't want to compute the function several times.

CREATE TABLE test_update_m_values (
gid int,
gid_plus_1 int,
gid_minus_1 int
);

INSERT INTO test_update_m_values VALUES (1,0,0) ;

CREATE OR REPLACE FUNCTION rc_test_update_m_values( gid int,OUT gid_plus_1
int, OUT gid_minus_1 int)
AS $$  -- @brief : test function, can be deleted
BEGIN
SELECT gid+1,gid-1 INTO gid_plus_1, gid_minus_1;
RETURN ;
END;
$$ LANGUAGE plpgsql IMMUTABLE STRICT;


UPDATE test_update_m_values SET (gid_plus_1,gid_minus_1) =
(rc_test_update_m_values(gid)); --doesn't work

Does somebody know if this is possible?

Cheers,
Rémi-C


[GENERAL] Trying to get SSPI/JDBC working

2014-12-02 Thread Dave Rosckes
I am new to Postgresql.  I have a postgres server running on a windows
platform.  The DBs will not have any remote access, only users logged onto
the localhost will have access.  SSPI seems like a great authentication
protocol to use in this case.

I believe I have followed all the instructions on how to get this to work:

Postgresql: postgresql-9.3.4-4-windows-x64

Updated pg_hba.conf to have the following lines:
host    all             all             127.0.0.1/32            sspi
host    all             all             ::1/128                 md5

Created role that matches userid via pgAdmin III

JDBC connection url:
jdbc:postgresql://localhost/postgresDB?integratedSecurity=true
JDBC jar: postgresql-9.3-1100.jdbc4.jar

Looking in the pglog all I see is:

CST FATAL:  SSPI authentication failed for user roscked
CST DETAIL:  Connection matched pg_hba.conf line 80: host all all
127.0.0.1/32 sspi

The exception I see:

org.postgresql.util.PSQLException: GSS Authentication failed

Any guidance on how to get more information on the failure, or suggestion
on what I am doing wrong would be greatly appreciated.


Re: [GENERAL] Programmatic access to interval units

2014-12-02 Thread Melvin Davidson
I'm pretty sure the interval values are buried in the code, but there is
nothing to prevent you from creating your own reference table. :)

CREATE TABLE time_intervals
(
    time_interval_name varchar(15) NOT NULL,
    CONSTRAINT time_intervals_pk PRIMARY KEY (time_interval_name)
);

INSERT INTO time_intervals
VALUES ('microsecond'), ('millisecond'), ('second'), ('minute'),
       ('hour'), ('day'), ('week'), ('month'), ('year'),
       ('decade'), ('century'), ('millennium');

SELECT * FROM time_intervals;

On Tue, Dec 2, 2014 at 10:48 AM, Nelson Green nelsongree...@gmail.com
wrote:

 On Mon, Dec 1, 2014 at 2:14 PM, Merlin Moncure mmonc...@gmail.com wrote:

 On Mon, Dec 1, 2014 at 10:42 AM, Nelson Green nelsongree...@gmail.com
 wrote:
  Good morning list,
 
  According to the documentation for interval data type inputs, the unit
 can
  be one of microsecond, millisecond, second, minute, hour, day, week,
 month,
  year, decade, century, or millennium. Are these units stored in a
 catalog
  somewhere? I would like to access them programmatically if possible, to
  validate input for a function I am developing.

 if you're writing C, you can use libpqtypes to do this. It exposes the
 interval as a C structure.

 typedef struct
 {
 int years;
 int mons;
 int days;
 int hours;
 int mins;
 int secs;
 int usecs;
 } PGinterval;


 merlin


 Thanks Merlin. I am not writing this in C, rather I am using Pl/pgSQL.
 Apologies
 for not mentioning that up front. I was hoping to do a SELECT ... WHERE IN
 query from a catalog relation.

 That being said, maybe it is time for me to get back into C? I haven't
 done much
 in C in many years, but this simple validation function might not be a bad
 jumping off point. If I do not get the response I was hoping for I may
 just do
 that.

 Regards,
 Nelson




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] update several columns from function returning several values

2014-12-02 Thread Tom Lane
Rémi Cura remi.c...@gmail.com writes:
 If I define a function returning several values, I can't use it to update
 multiple columns of a table at once.
 ...
 UPDATE test_update_m_values SET (gid_plus_1,gid_minus_1) =
 (rc_test_update_m_values(gid)); --doesn't work

 Somebody now if this is possible?

Not currently.  In 9.5 it'll be possible to do

UPDATE test_update_m_values SET (gid_plus_1,gid_minus_1) =
(select * from rc_test_update_m_values(gid)); 

but the syntax you were trying will never work, because it would be
ambiguous with the case of assigning a composite value to a single
composite column.
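
An untested sketch of a workaround for existing releases: call the function
once in a FROM subquery and expand the composite result with the (row).field
syntax:

-- Sketch only: compute the function once per row in a subquery, then
-- expand the composite result into the individual target columns.
UPDATE test_update_m_values t
SET gid_plus_1  = (f.r).gid_plus_1,
    gid_minus_1 = (f.r).gid_minus_1
FROM (SELECT gid, rc_test_update_m_values(gid) AS r
      FROM test_update_m_values) f
WHERE t.gid = f.gid;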

regards, tom lane




Re: [GENERAL] Programmatic access to interval units

2014-12-02 Thread Merlin Moncure
On Tue, Dec 2, 2014 at 9:48 AM, Nelson Green nelsongree...@gmail.com wrote:
 On Mon, Dec 1, 2014 at 2:14 PM, Merlin Moncure mmonc...@gmail.com wrote:
 On Mon, Dec 1, 2014 at 10:42 AM, Nelson Green nelsongree...@gmail.com
 wrote:
  Good morning list,
 
  According to the documentation for interval data type inputs, the unit
  can
  be one of microsecond, millisecond, second, minute, hour, day, week,
  month,
  year, decade, century, or millennium. Are these units stored in a
  catalog
  somewhere? I would like to access them programmatically if possible, to
  validate input for a function I am developing.

 if you're writing C, you can use libpqtypes to do this. It exposes the
 interval as a C structure.

 typedef struct
 {
 int years;
 int mons;
 int days;
 int hours;
 int mins;
 int secs;
 int usecs;
 } PGinterval;


 Thanks Merlin. I am not writing this in C, rather I am using Pl/pgSQL.
 Apologies
 for not mentioning that up front. I was hoping to do a SELECT ... WHERE IN
 query from a catalog relation.

 That being said, maybe it is time for me to get back into C? I haven't done
 much

well, maybe: that's a different question.  I wasn't sure what exactly
you wanted to verify and how.  The database is coded in C, so having a
client-side library that exposes the server-side data with minimal
translation is pretty valuable.

For an SQL solution, you probably want something like this.  It isn't
perfect, because there is some extra calculation happening vs what the
server actually stores, but it might suffice:

create or replace function parse_interval(
  _i interval,
  years OUT INT,
  mons OUT INT,
  days OUT INT,
  hours OUT INT,
  mins OUT INT,
  secs OUT INT,
  usecs OUT INT) returns record as
$$
  select
extract('years' from _i)::INT,
extract('months' from _i)::INT,
extract('days' from _i)::INT,
extract('hours' from _i)::INT,
extract('minutes' from _i)::INT,
extract('seconds' from _i)::INT,
extract('microseconds' from _i)::INT;
$$ language sql immutable;


postgres=# select * from parse_interval('412342 years 5.2314321 months');
 years  │ mons │ days │ hours │ mins │ secs │  usecs
┼──┼──┼───┼──┼──┼──
 412342 │5 │6 │22 │   37 │   52 │ 52003200

merlin




Re: [GENERAL] Programmatic access to interval units

2014-12-02 Thread Nelson Green
On Tue, Dec 2, 2014 at 10:16 AM, Melvin Davidson melvin6...@gmail.com
wrote:


 I'm pretty sure the interval values are buried in the code, but there is
 nothing to prevent you from creating your own reference table. :)

 CREATE TABLE time_intervals
 (
     time_interval_name varchar(15) NOT NULL,
     CONSTRAINT time_intervals_pk PRIMARY KEY (time_interval_name)
 );

 INSERT INTO time_intervals
 VALUES ('microsecond'), ('millisecond'), ('second'), ('minute'),
        ('hour'), ('day'), ('week'), ('month'), ('year'),
        ('decade'), ('century'), ('millennium');

 SELECT * FROM time_intervals;


Thanks Melvin,

Actually I've already hard-coded a temporary table into the function so
that I
can move forward with the development, but wanted to make that part more
dynamic, which is what prompted my first question.

Regards,
Nelson



 On Tue, Dec 2, 2014 at 10:48 AM, Nelson Green nelsongree...@gmail.com
 wrote:

 On Mon, Dec 1, 2014 at 2:14 PM, Merlin Moncure mmonc...@gmail.com
 wrote:

 On Mon, Dec 1, 2014 at 10:42 AM, Nelson Green nelsongree...@gmail.com
 wrote:
  Good morning list,
 
  According to the documentation for interval data type inputs, the unit
 can
  be one of microsecond, millisecond, second, minute, hour, day, week,
 month,
  year, decade, century, or millennium. Are these units stored in a
 catalog
  somewhere? I would like to access them programmatically if possible, to
  validate input for a function I am developing.

 if you're writing C, you can use libpqtypes to do this. It exposes the
 interval as a C structure.

 typedef struct
 {
 int years;
 int mons;
 int days;
 int hours;
 int mins;
 int secs;
 int usecs;
 } PGinterval;


 merlin


 Thanks Merlin. I am not writing this in C, rather I am using Pl/pgSQL.
 Apologies
 for not mentioning that up front. I was hoping to do a SELECT ... WHERE IN
 query from a catalog relation.

 That being said, maybe it is time for me to get back into C? I haven't
 done much
 in C in many years, but this simple validation function might not be a bad
 jumping off point. If I do not get the response I was hoping for I may
 just do
 that.

 Regards,
 Nelson




 --
 *Melvin Davidson*
 I reserve the right to fantasize.  Whether or not you
 wish to share my fantasy is entirely up to you.



Re: [GENERAL] JSON_AGG produces extra square brackets

2014-12-02 Thread Davide S
This is a small testcase that reproduces the problem on my machine.


  DB SETUP  

createdb --username=myuser --owner=myuser --encoding=UTF8 testcase


CREATE TABLE thing_template (
id   serial   PRIMARY KEY
);
INSERT INTO thing_template VALUES ( 1 );


CREATE TABLE thing (
idserialPRIMARY KEY,
template_id   integer   REFERENCES thing_template   NOT NULL
);
INSERT INTO thing VALUES ( 1, 1 );
INSERT INTO thing VALUES ( 2, 1 );


CREATE TABLE tag (
id serial   PRIMARY KEY,
name   text
);
INSERT INTO tag VALUES ( 1, 'tag 1' );
INSERT INTO tag VALUES ( 2, 'tag 2' );


CREATE TABLE thing_tag (
thing_id   integer   REFERENCES thing   NOT NULL,
tag_id integer   REFERENCES tag NOT NULL,
PRIMARY KEY ( thing_id, tag_id )
);
INSERT INTO thing_tag VALUES ( 1, 1 );
INSERT INTO thing_tag VALUES ( 1, 2 );
INSERT INTO thing_tag VALUES ( 2, 1 );
INSERT INTO thing_tag VALUES ( 2, 2 );


CREATE TABLE summary_status (
id serialPRIMARY KEY,
severity   integer
);
INSERT INTO summary_status VALUES ( 1, 10 );
INSERT INTO summary_status VALUES ( 2, 20 );


CREATE TABLE thing_state (
thing_template_id   integer   REFERENCES thing_template   NOT NULL,
summary_status_id   integer   REFERENCES summary_status   NOT NULL,
image_url   text,
PRIMARY KEY ( thing_template_id, summary_status_id )
);
INSERT INTO thing_state VALUES ( 1, 1, 'img1.jpg' );
INSERT INTO thing_state VALUES ( 1, 2, 'img2.jpg' );



 QUERY  

SELECT
thing.id,
tags,
xtst.states
FROM
thing,
(SELECT thing_tag.thing_id AS thid, JSON_AGG( tag.name ) AS tags FROM
thing_tag, tag WHERE (thing_tag.tag_id = tag.id) GROUP BY
thing_tag.thing_id) xtg,
(SELECT thing_state.thing_template_id, JSON_AGG( ROW_TO_JSON( (SELECT q
FROM (SELECT thing_state.image_url, summary_status.severity) q) ) ) AS
states FROM thing_state, summary_status WHERE
(thing_state.summary_status_id = summary_status.id) GROUP BY
thing_state.thing_template_id) xtst
WHERE
(xtg.thid = thing.id) AND
(xtst.thing_template_id = thing.template_id) AND
(thing.id IN (1, 2));



  RESULT  

 id |        tags        |                                      states
----+--------------------+------------------------------------------------------------------------------------
  1 | ["tag 1", "tag 2"] | [{"image_url":"img1.jpg","severity":10}, {"image_url":"img2.jpg","severity":20}]
  2 | ["tag 1", "tag 2"] | [{"image_url":"img1.jpg","severity":10}, {"image_url":"img2.jpg","severity":20}]]
(2 rows)



Note the ']]' at the end of the second row (the third would have 3
brackets, and so on).


Some info on my system (debian testing, updated a maybe 10 days ago):
$ uname -r
3.16.0-4-amd64
$ psql -V
psql (PostgreSQL) 9.4beta3



Thanks!




On Sun, Nov 30, 2014 at 11:41 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Davide S swept.along.by.eve...@gmail.com writes:
  Note that the tags are just fine, but the arrays with the states have an
  increasing number of square brackets at the end: the first has 1 (correct),
  the second has 2, the third has 3, etc., which is invalid json.

 Could you provide a self-contained test case for that?

 regards, tom lane



Re: [GENERAL] JSON_AGG produces extra square brackets

2014-12-02 Thread Tom Lane
Davide S swept.along.by.eve...@gmail.com writes:
 This is a small testcase that reproduces the problem on my machine.

Ah, I see it: json_agg_finalfn is violating the rule that an aggregate
final function can't scribble on the aggregate state.  Will fix, thanks
for the report!

regards, tom lane




Re: [GENERAL] Programmatic access to interval units

2014-12-02 Thread Nelson Green
On Tue, Dec 2, 2014 at 11:57 AM, Merlin Moncure mmonc...@gmail.com wrote:

 On Tue, Dec 2, 2014 at 9:48 AM, Nelson Green nelsongree...@gmail.com
 wrote:
  On Mon, Dec 1, 2014 at 2:14 PM, Merlin Moncure mmonc...@gmail.com
 wrote:
  On Mon, Dec 1, 2014 at 10:42 AM, Nelson Green nelsongree...@gmail.com
  wrote:
   Good morning list,
  
   According to the documentation for interval data type inputs, the unit
   can
   be one of microsecond, millisecond, second, minute, hour, day, week,
   month,
   year, decade, century, or millennium. Are these units stored in a
   catalog
   somewhere? I would like to access them programmatically if possible,
 to
   validate input for a function I am developing.
 
  if you're writing C, you can use libpqtypes to do this. It exposes the
  interval as a C structure.
 
  typedef struct
  {
  int years;
  int mons;
  int days;
  int hours;
  int mins;
  int secs;
  int usecs;
  } PGinterval;
 
 
  Thanks Merlin. I am not writing this in C, rather I am using Pl/pgSQL.
  Apologies
  for not mentioning that up front. I was hoping to do a SELECT ... WHERE
 IN
  query from a catalog relation.
 
  That being said, maybe it is time for me to get back into C? I haven't
 done
  much

 well, maybe: that's a different question.  I wasn't sure what exactly
 you wanted to verify and how.


Hi Merlin,

I'm afraid I'm only confusing things, so let me give an example of what I am
trying to do:

-- Example

CREATE OR REPLACE FUNCTION check_interval(_period TEXT, _unit TEXT)
RETURNS INTERVAL
AS $$
DECLARE
    _DEFAULT_INTERVAL INTERVAL := '1 HOUR';
BEGIN
    -- Create a temporary table that maintains the time intervals:
    CREATE TEMPORARY TABLE interval_period
    (
        interval_unit TEXT NOT NULL
    );

    INSERT INTO interval_period
    VALUES
        ('microsecond'), ('microseconds'),
        ('millisecond'), ('milliseconds'),
        ('second'), ('seconds'),
        ('minute'), ('minutes'),
        ('hour'), ('hours'),
        ('day'), ('days'),
        ('week'), ('weeks'),
        ('month'), ('months'),
        ('year'), ('years'),
        ('decade'), ('decades'),
        ('century'), ('centurys'),
        ('millennium'), ('millenniums');

    IF _period !~ '[1-9]\d*'
    THEN
        DROP TABLE interval_period;
        RETURN _DEFAULT_INTERVAL;
    END IF;

    IF LOWER(_unit) NOT IN (SELECT interval_unit
                            FROM interval_period)
    THEN
        DROP TABLE interval_period;
        RETURN _DEFAULT_INTERVAL;
    END IF;

    DROP TABLE interval_period;
    RETURN CAST(CONCAT(_period, _unit) AS INTERVAL);
END;
$$
LANGUAGE PLPGSQL;
-- End Example


In the line IF LOWER(_unit) NOT IN (SELECT interval_unit ...), I would rather
query a catalog table for the interval unit names if possible. That would then
compensate for any changes to those values in the future.

When I mentioned doing this in C, I was referring to rewriting this function
in C instead of PL/pgSQL.

I hope this helps you understand what I am asking, and apologies for not
being more specific up front.

Regards,
Nelson

merlin



[GENERAL] segmentation fault postgres 9.3.5 core dump perlu related ?

2014-12-02 Thread Day, David
We are developing on and running Postgres 9.3.5 on FreeBSD 10.0-p12.
We have been experiencing an intermittent postgres core dump which
seems primarily to be associated with the two functions below.

The area of interest is based on the content of the postgres log file, which
often indicates:

2014-12-01T14:37:41.559725-05:00 puertorico local0 info postgres[30154]: [3-1] 
LOG:  server process (PID 30187) was terminated by signal 11: Segmentation fault
2014-12-01T14:37:41.559787-05:00 puertorico local0 info postgres[30154]: [3-2] 
DETAIL:  Failed process was running: SELECT * FROM 
cc.get_port_and_registration_data($1, $2, $3, $4, $5)
2014-12-01T14:37:41.559794-05:00 puertorico local0 info postgres[30154]: [4-1] 
LOG:  terminating any other active server processes

And the core file backtrace may show an association with the perl libraries,
of which we currently have only two possibilities, and this is the most
relevant logic.

Given the onset of this problem, we suspect it has something to do with the
addition of a DNS lookup within our plperlu function cc.get_sip_id(...).
I would note that we have captured the details of the arguments to
cc.get_port_and_registration_data at the time of a core and can repeat
the same query after the core event without incident. Currently we are
testing for an absence of the core event by commenting out the DNS perl
function logic, and we have rebuilt postgres with debugging symbols. An
example of this output is below (prior to the function alteration).

I am usually debugging simpler program errors without such a bad impact on
the postgres server. I would appreciate any comment on potential issues or
bad practices in the suspect functions and/or additional details that could
be gathered from the core files that might assist in resolving this matter.


Thanks

Dave Day



CREATE OR REPLACE FUNCTION cc.get_port_and_registration_data(cca character 
varying, tgrp character varying, dhost character varying, usr character 
varying[], orig_flag boolean)
  RETURNS SETOF cc.port_type_tbl AS
$BODY$
--  The inputs to this overloaded function are sip parameters.
DECLARE pid INTEGER;
DECLARE uid INTEGER;
DECLARE modeCHARACTER VARYING;
DECLARE sql_result record;

BEGIN

  SELECT * FROM cc.get_sip_id($1,$2,$3, $4) INTO pid LIMIT 1;   -- Perl 
invocation

  FOR sql_result IN
    SELECT cc.get_db_refhndl($5) AS db_ref_hndl, * FROM cc.port_info t1
      LEFT JOIN (SELECT translator_id, mgcp_digit_map FROM
cc.translator_sys) t2 USING (translator_id)
      LEFT JOIN cc.register_port USING (port_id)
    WHERE port_id = pid AND op_mode = 'default'
    ORDER BY expiration DESC
  LOOP
RETURN NEXT sql_result;
  END LOOP;
  RETURN;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100
  ROWS 1000;
ALTER FUNCTION cc.get_port_and_registration_data(character varying, character 
varying, character varying, character varying[], boolean)
  OWNER TO redcom;


CREATE OR REPLACE FUNCTION cc.get_sip_id(cca character varying, tgrp character 
varying, dhost character varying, usr character varying[])
  RETURNS integer AS
$BODY$

use Socket qw(getaddrinfo getnameinfo
  PF_UNSPEC SOCK_STREAM AI_NUMERICHOST NI_NAMEREQD NIx_NOSERV);
use URI;

sub is_local {
my $host = shift(@_);
my $result = 0;
open my $fh, "/sbin/route get $host |";
while (<$fh>) {
if (m/interface/) {
chomp;
my @fields = split /\s+/;
if ($fields[2] eq "lo0") {
$result = 1;
}
last;
}
}
close $fh;
return $result;
}

my ($cca, $tgrp, $dhost, $usr) = @_;

$do_dns_lookup = 1;
{
my $query = qq{
SELECT sip_dns_lookup_on_incoming_requests FROM admin.system_options;
};
my $rv = spi_exec_query($query, 1);
if ($rv->{status} =~ /^SPI_OK/ && $rv->{processed} > 0) {
$do_dns_lookup = $rv->{rows}[0]->{sip_dns_lookup_on_incoming_requests};
}
}

if ($tgrp ne '') {
my $query = qq{
SELECT port_id FROM cc.port_info WHERE destination_group_id = '$tgrp';
};
my $rv = spi_exec_query($query, 1);
if ($rv->{status} =~ /^SPI_OK/ && $rv->{processed} > 0) {
return $rv->{rows}[0]->{port_id};
}
}

if ($cca ne '') {
my $query = qq{
SELECT port_id FROM cc.port_info WHERE call_control_agent = '$cca';
};
my $rv = spi_exec_query($query, 1);
if ($rv->{status} =~ /^SPI_OK/ && $rv->{processed} > 0) {
return $rv->{rows}[0]->{port_id};
}
}

for my $uristr (@$usr) {
if ($uristr ne '') {
my $uri = URI->new($uristr);
if (is_local($uri->host)) {
$dhost = '';
my $name = $uri->user;
if ($name ne '') {
my $query = qq{
SELECT port_id FROM cc.port_info
WHERE registration_user = '$name';
};
my $rv = spi_exec_query($query, 1);
if ($rv->{status} =~ /^SPI_OK/ && $rv->{processed} > 0) {
  

Re: [GENERAL] Programmatic access to interval units

2014-12-02 Thread Adrian Klaver

On 12/02/2014 10:40 AM, Nelson Green wrote:

On Tue, Dec 2, 2014 at 11:57 AM, Merlin Moncure mmonc...@gmail.com
mailto:mmonc...@gmail.com wrote:




Hi Merlin,

I'm afraid I'm only confusing things, so let me give an example of what I am
trying to do:

-- Example

CREATE OR REPLACE FUNCTION check_interval(_period TEXT, _unit TEXT)
RETURNS INTERVAL
AS $$
DECLARE
   _DEFAULT_INTERVAL INTERVAL:= '1 HOUR';

   BEGIN
  -- Create a temporary table that maintains the time intervals:
  CREATE TEMPORARY TABLE interval_period
  (
 interval_unitTEXT  NOT NULL
  );

  INSERT INTO interval_period
  VALUES
  ('microsecond'),
  ('microseconds'),
  ('millisecond'),
  ('milliseconds'),
  ('second'),
  ('seconds'),
  ('minute'),
  ('minutes'),
  ('hour'),
  ('hours'),
  ('day'),
  ('days'),
  ('week'),
  ('weeks'),
  ('month'),
  ('months'),
  ('year'),
  ('years'),
  ('decade'),
  ('decades'),
  ('century'),
  ('centurys'),
  ('millennium'),
  ('millenniums');

  IF _period !~ '[1-9]\d*'
  THEN
 DROP TABLE interval_period;
 RETURN _DEFAULT_INTERVAL;
  END IF;

  IF LOWER(_unit) NOT IN (SELECT interval_unit
  FROM interval_period)
  THEN
 DROP TABLE interval_period;
 RETURN _DEFAULT_INTERVAL;
  END IF;

  DROP TABLE interval_period;
  RETURN CAST(CONCAT(_period, _unit) AS INTERVAL);

   END;
$$
LANGUAGE PLPGSQL;
-- End Example


In the line: IF LOWER(_unit) NOT IN (SELECT interval_unit ..., I would
rather
query a catalog table for the interval unit names if possible. That
would then
compensate for any changes to those values in the future.

When I meant do this in C, I was referring to rewriting this function in C
instead of Pl/pgSQL.

I hope this helps you understand what I am asking, and apologies for not
being
more specific up front.


Would it not be easier to just try the CAST and then catch the exception 
and handle it:


http://www.postgresql.org/docs/9.3/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
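
A minimal, untested sketch of that approach, using a broad WHEN OTHERS
handler (a narrower condition such as invalid_datetime_format could be used
instead):

CREATE OR REPLACE FUNCTION check_interval(_period TEXT, _unit TEXT)
RETURNS INTERVAL
AS $$
BEGIN
    -- Let the cast itself do the validation:
    RETURN CAST(CONCAT(_period, ' ', _unit) AS INTERVAL);
EXCEPTION
    WHEN OTHERS THEN
        -- Fall back to the default on any input the cast rejects:
        RETURN INTERVAL '1 HOUR';
END;
$$ LANGUAGE plpgsql;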



Regards,
Nelson

merlin





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] JSON_AGG produces extra square brackets

2014-12-02 Thread Davide S
Thank you!
Glad to have helped!

On Tue, Dec 2, 2014 at 7:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Davide S swept.along.by.eve...@gmail.com writes:
  This is a small testcase that reproduces the problem on my machine.

 Ah, I see it: json_agg_finalfn is violating the rule that an aggregate
 final function can't scribble on the aggregate state.  Will fix, thanks
 for the report!

 regards, tom lane



Re: [GENERAL] Trying to get SSPI/JDBC working

2014-12-02 Thread Adrian Klaver

On 12/02/2014 08:10 AM, Dave Rosckes wrote:

I am new to Postgresql.  I have a postgres server running on a windows
platform.  The DBs will not have any remote access, only users logged
onto the localhost will have access.  SSPI seems like a great
authentication protocol to use in this case.

I believe I have followed all the instructions on how to get this to work:

Postgresql: postgresql-9.3.4-4-windows-x64

Updated pg_hba.conf to have the following lines:
host    all             all             127.0.0.1/32            sspi
hostall all ::1/128 md5

Created role that matches userid via pgAdmin III

JDBC connection url:
jdbc:postgresql://localhost/postgresDB?integratedSecurity=true
JDBC jar: postgresql-9.3-1100.jdbc4.jar

Looking in the pglog all I see is:

CST FATAL:  SSPI authentication failed for user roscked
CST DETAIL:  Connection matched pg_hba.conf line 80: host all all
127.0.0.1/32 sspi

The exception I see:

org.postgresql.util.PSQLException: GSS Authentication failed


Is there more to the exception, like maybe a traceback?

Is there anything in the Windows system logs that would help?



Any guidance on how to get more information on the failure, or
suggestion on what I am doing wrong would be greatly appreciated.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] [Solved] Programmatic access to interval units

2014-12-02 Thread Nelson Green
On Tue, Dec 2, 2014 at 2:25 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 12/02/2014 10:40 AM, Nelson Green wrote:

 On Tue, Dec 2, 2014 at 11:57 AM, Merlin Moncure mmonc...@gmail.com
 mailto:mmonc...@gmail.com wrote:



 Hi Merlin,

 I'm afraid I'm only confusing things, so let me give an example of what I
 am
 trying to do:

 -- Example
 
 CREATE OR REPLACE FUNCTION check_interval(_period TEXT, _unit TEXT)
 RETURNS INTERVAL
 AS $$
 DECLARE
_DEFAULT_INTERVAL INTERVAL:= '1 HOUR';

BEGIN
   -- Create a temporary table that maintains the time intervals:
   CREATE TEMPORARY TABLE interval_period
   (
  interval_unitTEXT  NOT NULL
   );

   INSERT INTO interval_period
   VALUES
   ('microsecond'),
   ('microseconds'),
   ('millisecond'),
   ('milliseconds'),
   ('second'),
   ('seconds'),
   ('minute'),
   ('minutes'),
   ('hour'),
   ('hours'),
   ('day'),
   ('days'),
   ('week'),
   ('weeks'),
   ('month'),
   ('months'),
   ('year'),
   ('years'),
   ('decade'),
   ('decades'),
   ('century'),
   ('centurys'),
   ('millennium'),
   ('millenniums');

   IF _period !~ '[1-9]\d*'
   THEN
  DROP TABLE interval_period;
  RETURN _DEFAULT_INTERVAL;
   END IF;

   IF LOWER(_unit) NOT IN (SELECT interval_unit
   FROM interval_period)
   THEN
  DROP TABLE interval_period;
  RETURN _DEFAULT_INTERVAL;
   END IF;

   DROP TABLE interval_period;
   RETURN CAST(CONCAT(_period, _unit) AS INTERVAL);

END;
 $$
 LANGUAGE PLPGSQL;
 -- End Example
 

 In the line: IF LOWER(_unit) NOT IN (SELECT interval_unit ..., I would
 rather
 query a catalog table for the interval unit names if possible. That
 would then
 compensate for any changes to those values in the future.

 When I meant do this in C, I was referring to rewriting this function in C
 instead of Pl/pgSQL.

 I hope this helps you understand what I am asking, and apologies for not
 being
 more specific up front.


 Would it not be easier to just try the CAST and then catch the exception
 and handle it:

 http://www.postgresql.org/docs/9.3/interactive/plpgsql-
 control-structures.html#PLPGSQL-ERROR-TRAPPING


Thanks Adrian, for putting my head back on straight.

Not only would that be at least as easy, I have done similar error trapping
in other functions. Not too sure how I got off on this tangent and then stuck
with it. Guess I was trying to make this way harder than it needed to be, or
I had way too much turkey over the past holiday?

And a big thanks to everyone that took time to work with me too.

Regards,
Nelson




 Regards,
 Nelson

 merlin




 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: [GENERAL] Programmatic access to interval units

2014-12-02 Thread Merlin Moncure
On Tue, Dec 2, 2014 at 12:40 PM, Nelson Green nelsongree...@gmail.com wrote:
 In the line: IF LOWER(_unit) NOT IN (SELECT interval_unit ..., I would
 rather
 query a catalog table for the interval unit names if possible. That would
 then
 compensate for any changes to those values in the future.

 When I meant do this in C, I was referring to rewriting this function in C
 instead of Pl/pgSQL.

 I hope this helps you understand what I am asking, and apologies for not
 being
 more specific up front.

I was the one that was confused -- heh.  I misunderstood the original
email and thought you were trying to validate interval output vs
interval input.

merlin




Re: [GENERAL] Problem with pg_dump and decimal mark

2014-12-02 Thread Adrian Klaver

On 11/29/2014 12:25 AM, Eric Svenson wrote:




Seems you have a locale mismatch issue. The dump is coming
from a locale
where a '.' is the decimal mark and is being restored to a
locale where
',' is the mark. Look at what the locales are the machines
that work and
the one that does not.



I have already done that and found something strange:

On the PC where the backup was done with pg_dump, all locale settings of
Postgres were English/United States. (LC_COLLATE, LC_CTYPE, LC_MONETARY,
LC_NUMERIC in postgresql.conf)


OS and OS version?
Postgres version?
What was the pg_dump command used?



On the first PC on which I tried to load the backup file with psql, all
locale settings of Postgres were German_Germany. Everything was ok; the
SQL file with '.' as decimal point was accepted without a problem.


OS and OS version?
Postgres version?
How was it loaded via psql?
Was the psql on the same machine and from the same version of Postgres?



On the second PC (Virtual Machine) I had the SAME settings in
postgresql.conf (German_Germany)


OS and OS version?
Postgres version?
How was it loaded via psql?
Did you use the psql on the VM or did you use the psql on the host?
Was the Postgres/psql on the host the same as the VM?
What are you using for virtualization?
What is host OS?



- no success

I tried to change all the settings to English/United States, restart
postgres

- still no success

Changed all Windows settings to English / United States

- still no success.

So what I am searching for (at the moment without success) is the
'switch' that decides which decimal separator psql expects.


Well, according to below, flipping a switch is not necessary. There is a
mismatch occurring, which is why I posed all the questions above. To
narrow the possibilities it would help to have a fuller picture of what
the environment is in each situation.




That's what it sounds like all right, but how could that be?
The behavior
of float8in/float8out is not supposed to be locale-dependent.

float8in does depend on strtod(), whose behavior is locale-dependent
according to POSIX, but we keep LC_NUMERIC set to C to force it to
only believe that . is decimal point.


Not sure if this makes a difference but if I am reading the original
post correctly the OP was trying a plain text restore via psql.


This is correct.


regards and thanks for your support,
Eric Svenson




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Postgres 9.2 PITR testing to before DROP DATABASE ends up removing file system files

2014-12-02 Thread Adrian Klaver

On 11/28/2014 02:29 PM, Joshua Boyd wrote:

I am testing out point in time recovery from a hot physical backup in a
disaster recovery situation - I turned on archiving of files, created a
hot physical backup,


How did you take the backup?

Archiving how and to where?

then (after letting it run for a few days) issued a

DROP DATABASE.  The pg_log file shows the DROP DATABASE command was
issued at '2014-11-28 10:20:00.010 PST'.  I shut down the server, moved
the pgdata directory to pgdata_backup ... restored the files in the hot
physical backup I made, copied the wal archive files from pgdata_backup
to the (new) pgdata archive,


The above I do not understand.
You were archiving the WALs in your pgdata directory?

Restored the backup how?

 cleared out the new pg_xlog dir and copied

the files from the old pg_xlog into the new..  Set up a recovery.conf


All the files or only the unarchived ones?


file as such:

restore_command = 'gunzip -c /home/pg2dev/joshtest/pgdata/archive/%f.gz > %p'
recovery_target_time = '2014-11-28 10:20:00.010 PST'
recovery_target_inclusive = false

then I started the server up.  the pg_log shows the following:




And then I look in pgdata/base .. and sure enough, that directory is
missing.  I examine my hot physical backup file and that directory
exists within it.

So  even though the recovery SAYS recovery stopping before commit
of transaction 235078 ... it doesn't appear that it's 100% accurate.
It didn't commit the transaction, clearly, because the database is still
listed in the data dictionary ... however, the filesystem files are
gone.  Please - am I doing something wrong, or would this be considered
a bug?

--
Joshua Boyd



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Programmatic access to interval units

2014-12-02 Thread Nelson Green
On Tue, Dec 2, 2014 at 3:48 PM, Merlin Moncure mmonc...@gmail.com wrote:

 On Tue, Dec 2, 2014 at 12:40 PM, Nelson Green nelsongree...@gmail.com
 wrote:
  In the line: IF LOWER(_unit) NOT IN (SELECT interval_unit ..., I would
  rather
  query a catalog table for the interval unit names if possible. That would
  then
  compensate for any changes to those values in the future.
 
  When I meant do this in C, I was referring to rewriting this function in
 C
  instead of Pl/pgSQL.
 
  I hope this helps you understand what I am asking, and apologies for not
  being
  more specific up front.

 I was the one that was confused -- heh.  I mis-understood the original
 email and thought you were trying to validate interval output vs
 interval input.

 merlin


But you took time to work with me, and I appreciate that.

Thanks,
Nelson


Re: [GENERAL] Merge rows based on Levenshtein distance

2014-12-02 Thread mongoose
David,

Thank you for your prompt reply. I believe your answer helped a lot, but it
seems I was not clear enough in my description. Basically I want a counter
(id) to show if two or more names are similar (i.e. Levenshtein distance
less than 3). So in the previous example:

From this table: 

Name, City 
Booob, NYC 
Alex, Washington 
Alexj2, Washington 
Bob, NYC 
Aleex1, Washington 

to get this table:

id, Name, City 
1,Alex, Washington 
1,Aleex1, Washington 
1,Alexj2, Washington 
2,Bob, NYC 
2,Booob, NYC 

So basically the id is a counter that starts from 1 and increments only
when there is a different name. Please note that the table has its names
in a completely random order.









Re: [GENERAL] Is WITH () UPDATE Thread Safe ?

2014-12-02 Thread Paul GOERGLER
Ok thanks.
-- 
Paul GOERGLER

From: Albe Laurenz laurenz.a...@wien.gv.at
Reply-To: Albe Laurenz laurenz.a...@wien.gv.at
Date: December 1, 2014 at 13:21:07
To: Paul GOERGLER *EXTERN* pgoerg...@gmail.com, pgsql-general@postgresql.org 
pgsql-general@postgresql.org
Subject:  RE: [GENERAL] Is WITH () UPDATE Thread Safe ?  

Paul GOERGLER wrote:  
 I have a lot of tickets, i need to take a batch of tickets and process them.  
 So the process is :  
 SELECT ONLY 100 tickets  
 PROCESS ticket  
 MARK THEM AS « done »  
  
 I’m selecting the tickets with :  
  
 WITH t0 AS (  
 SELECT t.id,  
 RANDOM() AS rank,  
 EXTRACT(EPOCH FROM NOW() - b.created_at) as elapsed  
 FROM tickets AS t  
 LEFT JOIN batch as b ON b.id = t.batch_id  
 WHERE (  
 t.status = 'waiting' OR  
 (t.status = 'processing' AND t.locked_until IS NOT NULL AND t.locked_until <=  
 NOW())  
 ) AND t.send_at <= NOW()  
 AND (t.send_before IS NULL OR t.send_before > NOW())  
 ORDER BY  
 t.priority DESC,  
 rank ASC  
 LIMIT 100  
 FOR UPDATE OF t  
 )  
 UPDATE tickets AS t1  
 SET status = 'processing',  
 locked_until = NOW() + '1 HOUR',  
 extra = t1.extra || hstore('elapsed', t0.elapsed || '') || hstore('rank', 
 rank || '')  
 FROM t0  
 WHERE t1.id = t0.id  
 RETURNING t1.*;  
  
  
 I wonder if this query is thread safe, Can a ticket be updated between the 
 SELECT part (t0) and the  
 UPDATE part ?  
 If this query is not « thread safe » how can i do this ?  

There is no race condition in your query because you used SELECT ... FOR 
UPDATE.  

That causes the rows found in the WITH clause to be locked against concurrent 
modification.  

So you should be fine.  

Yours,  
Laurenz Albe  


Re: [GENERAL] Is WITH () UPDATE Thread Safe ?

2014-12-02 Thread David G Johnston
Albe Laurenz *EXTERN* wrote
 Paul GOERGLER wrote:
 I have a lot of tickets, i need to take a batch of tickets and process
 them.
 So the process is :
 SELECT ONLY 100 tickets
 PROCESS ticket
 MARK THEM AS « done »
 
 I’m selecting the tickets with :
 
 WITH t0 AS (
 SELECT t.id,
 RANDOM() AS rank,
 EXTRACT(EPOCH FROM NOW() - b.created_at) as elapsed
 FROM tickets AS t
 LEFT JOIN batch as b ON b.id = t.batch_id
 WHERE (
 t.status = 'waiting' OR
 (t.status = 'processing' AND t.locked_until IS NOT NULL AND
 t.locked_until <= NOW())
 ) AND t.send_at <= NOW()
 AND (t.send_before IS NULL OR t.send_before > NOW())
 ORDER BY
 t.priority DESC,
 rank ASC
 LIMIT 100
 FOR UPDATE OF t
 )
 UPDATE tickets AS t1
 SET status = 'processing',
 locked_until = NOW() + '1 HOUR',
 extra = t1.extra || hstore('elapsed', t0.elapsed || '') ||
 hstore('rank', rank || '')
 FROM t0
 WHERE t1.id = t0.id
 RETURNING t1.*;
 
 
 I wonder if this query is thread safe, Can a ticket be updated between
 the SELECT part (t0) and the
 UPDATE part ?
 If this query is not « thread safe » how can i do this ?
 
 There is no race condition in your query because you used SELECT ... FOR
 UPDATE.
 
 That causes the rows found in the WITH clause to be locked against
 concurrent modification.
 
 So you should be fine.
 
 Yours,
 Laurenz Albe

I was under the impression that the presence of FOR UPDATE in this situation
is unnecessary, since the execution of the update occurs in the same
statement as the select and thus the relevant data will be locked at
execution time.

The FOR UPDATE is for situations where other code needs to intervene between
the select and a subsequent update.

The documentation is silent on this distinction, but...

Note that the use of a CTE in this example is a convenience and that the top
level command is still UPDATE, not SELECT.

It may be worthwhile to update the UPDATE page's WITH commentary to note
that (if correct) there is no need for a FOR UPDATE clause in the contained
subquery (yes, that was quite a mouthful...)

David J.









Re: [GENERAL] Merge rows based on Levenshtein distance

2014-12-02 Thread David G Johnston
On Tuesday, December 2, 2014, mongoose [via PostgreSQL] 
ml-node+s1045698n5829030...@n5.nabble.com wrote:

 David,

 Thank you for your prompt reply. I believe your answer helped a lot but it
 seems I was not clear enough on my description. Basically I want a counter
 (id) to show if two or more names are similar (i.e. levenshtein distance
 less than 3) So in the previous example:

 From this table:

 Name, City
 Booob, NYC
 Alex, Washington
 Alexj2, Washington
 Bob, NYC
 Aleex1, Washington

 to get this table:

 id, Name, City
 1,Alex, Washington
 1,Aleex1, Washington
 1,Alexj2, Washington
 2,Bob, NYC
 2,Booob, NYC

 So basically the id is a counter that starts from 1 and increments only
 when there is a different name. Please notice that the table has its names
 in a completely random order.


Write and combine a few subqueries that use window functions (namely lag
and row_number) to identify groups, label them, and assign rows to each
group (using a between condition on a join).

Pondering some more (not tested): if you identify the boundary records in a
subquery, you can assign them a value of 1 while all others take on NULL.
In the outer query you should be able to assign group ids by applying the
sum function as a window over the entire result, so that at each boundary
the 1 increments the running sum while the NULL rows carry the sum value
from the prior row. A rough sketch of that approach follows.
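
Something along these lines (untested; assumes the fuzzystrmatch extension
for levenshtein() and an illustrative table people(name, city), with a
distance threshold of 3):

WITH ordered AS (
    SELECT name, city,
           CASE
               WHEN levenshtein(name, lag(name) OVER (ORDER BY name)) < 3
               THEN NULL   -- close to the previous name: same group
               ELSE 1      -- boundary row: starts a new group
           END AS is_boundary
    FROM people
)
SELECT sum(is_boundary) OVER (ORDER BY name) AS id,
       name, city
FROM ordered
ORDER BY id, name;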

David J.



