Re: [GENERAL] ID column naming convention

2015-10-15 Thread Karsten Hilbert
On Fri, Oct 16, 2015 at 02:28:25PM +1300, Gavin Flower wrote:

> Since 'id' is only used to indicate a PRIMARY KEY, there is less confusion
> in joins, and it is clear when something is a foreign key rather than a
> PRIMARY KEY.

Given that "id" often has meaning outside the database, I much
prefer naming my primary keys "pk" and foreign keys "fk_TABLENAME":

line_item.fk_stock = stock.pk

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] question

2015-10-15 Thread Guillaume Lelarge
2015-10-15 23:05 GMT+02:00 Adrian Klaver :

> On 10/15/2015 01:35 PM, anj patnaik wrote:
>
>> Hello all,
>> I will experiment with -Fc (custom). The file is already growing very
>> large.
>>
>> I am running this:
>> ./pg_dump -t RECORDER  -Fc postgres |  gzip > /tmp/dump
>>
>> Are there any other options for large tables to run faster and occupy
>> less disk space?
>>
>
> Yes, do not double compress. -Fc already compresses the file.
>
>
Right. But I'd say "use custom format but do not compress with pg_dump".
Use the -Z0 option to disable compression, and use an external
multi-threaded tool such as pigz or pbzip2 to get faster and better
compression.
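Put together, the advice in this subthread amounts to a pipeline like the following. This is an illustrative sketch only (it needs a reachable PostgreSQL server and assumes pigz is installed); the table and file names are the ones used earlier in the thread.

```shell
# Custom-format dump with pg_dump's own compression disabled (-Z0),
# piped through a parallel compressor instead.
pg_dump -t RECORDER -Fc -Z0 postgres | pigz > /tmp/dump.fc.gz

# To restore, decompress back into pg_restore.
pigz -dc /tmp/dump.fc.gz | pg_restore -d postgres
```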


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [GENERAL] ID column naming convention

2015-10-15 Thread Gavin Flower

On 16/10/15 13:09, Jim Nasby wrote:

On 10/13/15 2:34 PM, Gavin Flower wrote:



My practice is to name the PRIMARY KEY 'id', and foreign keys with the
original table name plus the suffix '_id'.

Leaving the table name off the primary key name, and just using 'id',
makes it more obvious that it is a primary key (plus it seems redundant
to prefix the primary key name with its own table name!).


There are two things that are ugly about that, though:

Joins become MUCH easier to screw up. When you have 5 different fields 
that are all called 'id', it's trivial to mix them up. With full names 
it's much harder to accidentally write something like 
'blah.person_id = foo.invoice_id' without noticing.


The other issue is common to all "bare word" names (id, name, 
description, etc): it becomes completely impossible to find all 
occurrences of something in code. If you grep your entire codebase for 
'person_id', you know you'll find exactly what you want. Grepping for 
'id' OTOH would be useless.
It would seem very dodgy to use a join based on the apparently very 
different semantics implied by 'blah.person_id = foo.invoice_id'!!! :-)


Just because 2 fields in different tables have the same name does not 
necessarily mean they have the same semantics. For example, 2 tables 
could each have a field named 'start_date', but the one in a table called 
'employment' would have different semantics from the one in 'project'.


Since 'id' is only used to indicate a PRIMARY KEY, there is less 
confusion in joins, and it is clear when something is a foreign key 
rather than a PRIMARY KEY.  For example, if two tables both refer to the 
same human, you can join using a.human_id = b.human_id - and it is 
clearer when you are joining a child to a parent table, for example 
line_item.stock_id = stock.id.
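The convention described above can be sketched as DDL (table names taken from the examples in this thread; column types are illustrative assumptions):

```sql
-- Bare 'id' is the primary key; '<table>_id' marks a foreign key.
CREATE TABLE stock (
    id          int PRIMARY KEY,
    description text
);

CREATE TABLE line_item (
    id       int PRIMARY KEY,
    stock_id int NOT NULL REFERENCES stock (id)
);

-- Child-to-parent join: the foreign-key name makes the direction obvious.
SELECT li.id, s.description
FROM   line_item AS li
JOIN   stock     AS s ON li.stock_id = s.id;
```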


Adopting your convention would result in picking up not only foreign 
key references but also the primary keys - which may, or may not, be 
helpful!


It would be very rare to have a join such as project.id = task.id; it is 
usually a mistake to join tables on their primary keys - so using just 
'id' as the PRIMARY KEY name is a bonus.


I once devised a stored procedure in Sybase with over 3,000 lines of SQL 
(I would have broken it up into smaller units, but that was not practicable 
in that development environment).  It had 7 temporary tables: 5 used 
'id' as the PRIMARY KEY, and 2 used the name of the PRIMARY KEY of an 
existing table ('tcs_id' & 'perorg_seq'), because that made more sense, 
as they had the same semantic meaning. I did not design the 2 
databases I queried, but I suspect sometimes I might decide it best to 
use something other than just 'id' - though it would be very rare (I won't 
say never!) that I'd use the table name as a prefix for the primary key.


Searching on bare-word names can be useful when the fields have 
similar, related semantics.  In a real database, I'd be very unlikely to 
use 'name' for a field, though using 'description' might be valid.  
In general, though, I would agree that using several words in a name is 
normally preferable. It would also be better to define appropriate 
DOMAINs rather than just using bare types like 'text' & 'int', to 
better document the semantics and make it easier to change things in a 
more controlled way.


If you were grepping for occurrences of the use of the PRIMARY KEY of 
the table human, you would look for 'human_id'; you would only grep for 
'id' if you wanted to find the use of PRIMARY KEYs in general.


No naming convention is perfect in all situations, and I'll adapt mine 
as appropriate.  In my experience, my convention (well to be honest, I 
adopted it from others - so I can't claim to have originated it!) seems 
to be better in general.


Essentially it is a guideline; I won't insist that you have your 
computers confiscated if you use a different convention!









Re: [GENERAL] problems with copy from file

2015-10-15 Thread Jim Nasby

On 10/14/15 11:40 AM, Andreas Kretschmer wrote:

test=*# \copy httpd_log (data) from '~/test.log';
ERROR:  invalid byte sequence for encoding "UTF8": 0xb1
CONTEXT:  COPY httpd_log, line 3: "other-domain bb.243.xx.yyy - - [06/Nov/2014:00:48:22 +0100] 
"\x16\x03\x01\x01\xb1\x01" 501 10..."


I don't think it's possible with COPY. The issue is that COPY is passing 
the string to text_in() to process, and text_in is treating that as an 
escape sequence (which it is...).


You could maybe create a raw_text type that had a different input 
function and define a view that had that type and a trigger to put the 
data in the table. You wouldn't want to use raw_text in a table though, 
because you wouldn't be able to safely dump and restore it.
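A possible workaround not raised in the thread, offered only as a hedged sketch to experiment with: in csv format COPY does not interpret backslash escapes, so a line containing "\x16\x03..." would be loaded literally instead of being decoded into raw bytes. The control characters chosen for QUOTE and DELIMITER here are assumptions; they must be bytes that cannot occur in the log file.

```sql
-- Untested sketch: csv-format COPY takes backslashes literally.
\copy httpd_log (data) from '~/test.log' with (format csv, quote e'\x01', delimiter e'\x02')
```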

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [GENERAL] ID column naming convention

2015-10-15 Thread Jim Nasby

On 10/13/15 2:34 PM, Gavin Flower wrote:



My practice is to name the PRIMARY KEY 'id', and foreign keys with the
original table name plus the suffix '_id'.

Leaving the table name off the primary key name, and just using 'id',
makes it more obvious that it is a primary key (plus it seems redundant
to prefix the primary key name with its own table name!).


There are two things that are ugly about that, though:

Joins become MUCH easier to screw up. When you have 5 different fields 
that are all called 'id', it's trivial to mix them up. With full names 
it's much harder to accidentally write something like 
'blah.person_id = foo.invoice_id' without noticing.


The other issue is common to all "bare word" names (id, name, 
description, etc): it becomes completely impossible to find all 
occurrences of something in code. If you grep your entire codebase for 
'person_id', you know you'll find exactly what you want. Grepping for 
'id' OTOH would be useless.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [GENERAL] using postgresql for session

2015-10-15 Thread Jim Nasby

On 10/14/15 8:57 PM, Tiger Nassau wrote:

maybe we will just use beaker with our bottle framework - thought it was
duplicative to have redis since we have postgres and lookup speed should
be  trivial since session only has a couple of small fields like account
id and role


The problem with sessions in Postgres tends to be the update rate, since 
every update results in a new table tuple, and possibly new index 
tuples. That gets painful really fast for high-update workloads.
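Two mitigations sometimes used for this pattern, added here as a hedged sketch (they are not from the thread): an UNLOGGED table avoids WAL traffic for data you can afford to lose on a crash, and a low fillfactor leaves free space on each page so updates can stay HOT and skip the index-tuple churn. Table and column names are hypothetical.

```sql
CREATE UNLOGGED TABLE session (
    sid        text PRIMARY KEY,
    account_id int,
    role       text,
    expires_at timestamptz
) WITH (fillfactor = 50);
```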

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [GENERAL] pgpool ssl handshake failure

2015-10-15 Thread Tatsuo Ishii
> Hi,
> 
> I am using pgpool-II version 3.4.3 (tataraboshi).
> Where my database is Postgresql 8.4.
> 
> I am trying to configure ssl mode from client and between pgpool and
> database it is non-ssl.
> I configured as document and now I am getting this in my log:
> 
>>
>> *2015-10-13 22:17:58: pid 1857: LOG:  new connection received*
>> *2015-10-13 22:17:58: pid 1857: DETAIL:  connecting host=10.0.0.5
>> port=65326*
>> *2015-10-13 22:17:58: pid 1857: LOG:  pool_ssl: "SSL_read": "ssl handshake
>> failure"*
>> *2015-10-13 22:17:58: pid 1857: ERROR:  unable to read data from 
>> frontend**2015-10-13
>> 22:17:58: pid 1857: DETAIL:  socket read failed with an error "Success"*
> 
> Please let me know what I am doing wrong.

Works for me using psql coming with PostgreSQL 9.4.5 and pgpool-II 3.4.3.
(This is Ubuntu 14.04. PostgreSQL and pgpool-II are compiled from the
source code).

$ psql -p 11000 -h localhost test
psql (9.4.5)
SSL connection (protocol: TLSv1, cipher: AES256-SHA, bits: 256, compression: 
off)
Type "help" for help.

I don't think your old PostgreSQL 8.4 server is related to your
problem, because you are trying to enable SSL between the client and
pgpool, not between pgpool and the PostgreSQL server. However, the psql
coming with PostgreSQL 8.4 might be related to the problem. Why don't
you try a newer version of psql (more precisely, a newer libpq)?

I assume your SSL settings are correct. If you are not sure, please take
a look at the FAQ:

http://pgpool.net/mediawiki/index.php/FAQ#How_can_I_set_up_SSL_for_pgpool-II.3F

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp




Re: [GENERAL] Standby pg_dump Conflict with Recovery

2015-10-15 Thread Louis Battuello

> On Oct 15, 2015, at 6:16 PM, Adrian Klaver  wrote:
> 
> On 10/15/2015 03:03 PM, Louis Battuello wrote:
>> Hello All,
>> 
>> I’ve got a confusing issue with dumping data from a standby PostgreSQL
>> 9.4.5 database.
>> 
>> At night, on a nearly completely idle server, I run a pg_dump of a
>> database that contains numerous small tables and one 3GB table. The
>> dump consistently fails when reaching the 3GB table with this message:
>> 
>> pg_dump: Dumping the contents of table “" failed: PQgetResult()
>> failed.
>> pg_dump: Error message from server: ERROR:  canceling statement due to
>> conflict with recovery
>> DETAIL:  User query might have needed to see row versions that must be
>> removed.
>> pg_dump: The command was: COPY  (...) TO stdout;
>> 
>> I have replication slots enabled on the primary (“repmgr_slot_3" for the
>> standby pg_dump source), and I’m using hot_standby_feedback. After
>> getting the failure a couple times, I temporarily set
>> max_standby_archive_delay and max_standby_streaming_delay to -1 to allow
>> infinite delay on the standby,  just to see if I could get the dump to
>> complete. I still encountered the above error.
> 
> How did you set and temporarily enable the settings

I changed the settings in the postgresql.conf file, restarted the standby 
server, checked that there wasn't any activity on the primary or the standby, 
and ran the pg_dump on the standby again - which failed. I watched the xmin 
value on the primary pg_replication_slots, which held steady until the dump 
failed.

Then, I changed the delay settings back to the defaults and restarted the 
standby so I wouldn’t affect the replication during the next business day.


> 
>> 
>> 
>> postgres=# select * from pg_replication_slots ;
>>    slot_name   | plugin | slot_type | datoid | database | active |  xmin   | catalog_xmin | restart_lsn
>> ---------------+--------+-----------+--------+----------+--------+---------+--------------+-------------
>>  repmgr_slot_2 |        | physical  |        |          | t      |         |              | A/C6502880
>>  repmgr_slot_3 |        | physical  |        |          | t      | 1356283 |              | A/C6502880
>> (2 rows)
>> 
>> Is there some other configuration setting I’m forgetting?
>> 
>> Thanks,
>> Louis
>> 
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com
> 
> 


Re: [GENERAL] Standby pg_dump Conflict with Recovery

2015-10-15 Thread Adrian Klaver

On 10/15/2015 03:03 PM, Louis Battuello wrote:

Hello All,

I’ve got a confusing issue with dumping data from a standby PostgreSQL
9.4.5 database.

At night, on a nearly completely idle server, I run a pg_dump of a
database that contains numerous small tables and one 3GB table. The
dump consistently fails when reaching the 3GB table with this message:

pg_dump: Dumping the contents of table “" failed: PQgetResult()
failed.
pg_dump: Error message from server: ERROR:  canceling statement due to
conflict with recovery
DETAIL:  User query might have needed to see row versions that must be
removed.
pg_dump: The command was: COPY  (...) TO stdout;

I have replication slots enabled on the primary (“repmgr_slot_3" for the
standby pg_dump source), and I’m using hot_standby_feedback. After
getting the failure a couple times, I temporarily set
max_standby_archive_delay and max_standby_streaming_delay to -1 to allow
infinite delay on the standby,  just to see if I could get the dump to
complete. I still encountered the above error.


How did you set and temporarily enable the settings?




postgres=# select * from pg_replication_slots ;
   slot_name   | plugin | slot_type | datoid | database | active |  xmin   | catalog_xmin | restart_lsn
---------------+--------+-----------+--------+----------+--------+---------+--------------+-------------
 repmgr_slot_2 |        | physical  |        |          | t      |         |              | A/C6502880
 repmgr_slot_3 |        | physical  |        |          | t      | 1356283 |              | A/C6502880
(2 rows)

Is there some other configuration setting I’m forgetting?

Thanks,
Louis




--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Standby pg_dump Conflict with Recovery

2015-10-15 Thread Louis Battuello
Hello All,

I’ve got a confusing issue with dumping data from a standby PostgreSQL 9.4.5 
database.

At night, on a nearly completely idle server, I run a pg_dump of a database 
that contains numerous small tables and one 3GB table. The dump consistently 
fails when reaching the 3GB table with this message:

pg_dump: Dumping the contents of table “" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR:  canceling statement due to conflict 
with recovery
DETAIL:  User query might have needed to see row versions that must be removed.
pg_dump: The command was: COPY  (...) TO stdout;

I have replication slots enabled on the primary (“repmgr_slot_3" for the 
standby pg_dump source), and I’m using hot_standby_feedback. After getting the 
failure a couple times, I temporarily set max_standby_archive_delay and 
max_standby_streaming_delay to -1 to allow infinite delay on the standby,  just 
to see if I could get the dump to complete. I still encountered the above error.


postgres=# select * from pg_replication_slots ;
   slot_name   | plugin | slot_type | datoid | database | active |  xmin   | catalog_xmin | restart_lsn
---------------+--------+-----------+--------+----------+--------+---------+--------------+-------------
 repmgr_slot_2 |        | physical  |        |          | t      |         |              | A/C6502880
 repmgr_slot_3 |        | physical  |        |          | t      | 1356283 |              | A/C6502880
(2 rows)

Is there some other configuration setting I’m forgetting?

Thanks,
Louis



Re: [GENERAL] postgres function

2015-10-15 Thread Torsten Förtsch
On 15/10/15 14:32, Ramesh T wrote:
>  select position('-' in '123-987-123');
>   position
>  ----------
>          4
>
> But I want the second occurrence:
>   position
>  ----------
>          8
>
> plz any help..?


For instance:

# select char_length(substring('123-987-123' from '^[^-]*-[^-]*-'));
 char_length
-------------
           8
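The same arithmetic can be checked without a database; here is a small awk sketch of the "first dash, then next dash in the remainder" idea:

```shell
# Position (1-based) of the second '-' in '123-987-123'.
echo '123-987-123' | awk '{
  first = index($0, "-")          # position of the first dash (4)
  rest  = substr($0, first + 1)   # the remainder: "987-123"
  print first + index(rest, "-")  # 4 + 4 = 8
}'
```

Inside PostgreSQL the same decomposition could be written with strpos() and substr(), but the regexp version above is the more compact form.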

Best,
Torsten




Re: [GENERAL] question

2015-10-15 Thread Adrian Klaver

On 10/15/2015 01:35 PM, anj patnaik wrote:

Hello all,
I will experiment with -Fc (custom). The file is already growing very large.

I am running this:
./pg_dump -t RECORDER  -Fc postgres |  gzip > /tmp/dump

Are there any other options for large tables to run faster and occupy
less disk space?


Yes, do not double compress. -Fc already compresses the file.

This information and a lot more can be found here:

http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html



Below is memory info:

[root@onxl5179 tmp]# cat /proc/meminfo
MemTotal:   16333720 kB
MemFree:  187736 kB
Buffers:   79696 kB
Cached: 11176616 kB
SwapCached: 2024 kB
Active: 11028784 kB
Inactive:4561616 kB
Active(anon):3839656 kB
Inactive(anon):   642416 kB
Active(file):7189128 kB
Inactive(file):  3919200 kB
Unevictable:   0 kB
Mlocked:   0 kB
SwapTotal:  33456120 kB
SwapFree:   33428960 kB
Dirty: 33892 kB
Writeback: 0 kB
AnonPages:   4332408 kB
Mapped:   201388 kB
Shmem:147980 kB
Slab: 365380 kB
SReclaimable: 296732 kB
SUnreclaim:68648 kB
KernelStack:5888 kB
PageTables:37720 kB
NFS_Unstable:  0 kB
Bounce:0 kB
WritebackTmp:  0 kB
CommitLimit:41622980 kB
Committed_AS:7148392 kB
VmallocTotal:   34359738367 kB
VmallocUsed:  179848 kB
VmallocChunk:   34359548476 kB
HardwareCorrupted: 0 kB
AnonHugePages:   3950592 kB
HugePages_Total:   0
HugePages_Free:0
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
DirectMap4k:   10240 kB
DirectMap2M:16766976 kB


# CPUs=8
RHEL 6.5

The PG shared memory info is the defaults as I've not touched the .conf
file. I am not a DBA, just a test tools developer who needs to backup
the table efficiently. I am fairly new to PG and not an expert at Linux.

Also if there are recommended backup scripts/cron that you recommend,
please point them to me.

Thanks!!

On Thu, Oct 15, 2015 at 3:59 PM, Scott Mead <sco...@openscg.com> wrote:


On Thu, Oct 15, 2015 at 3:55 PM, Guillaume Lelarge <guilla...@lelarge.info> wrote:

2015-10-15 20:40 GMT+02:00 anj patnaik <patn...@gmail.com>:

It's a Linux machine with 8 CPUs. I don't have the other
details.

I get archive member too large for tar format.

Is there a recommended command/options when dealing with
very large tables, aka 150K rows and half of the rows have
data being inserted with 22MB?


Don't use tar format? I never understood the interest of this one. You
would be better off using the custom format.


+ 1

  Use -F c


--
Scott Mead
Sr. Architect
/OpenSCG/
PostgreSQL, Java & Linux Experts


http://openscg.com


-bash-4.1$ ./pg_dump -t RECORDER postgres --format=t -w  >
/tmp/dump
pg_dump: [archiver (db)] connection to database "postgres"
failed: fe_sendauth: no password supplied
-bash-4.1$ ./pg_dump -t RECORDER postgres --format=t   >
/tmp/dump
Password:
pg_dump: [tar archiver] archive member too large for tar format
-bash-4.1$ pg_dumpall | gzip > \tmp\db.out-`date
+\%Y\%m\%d\%H`.gz
-bash: pg_dumpall: command not found
-bash: tmpdb.out-2015101510.gz: Permission denied
-bash-4.1$ ./pg_dumpall | gzip > \tmp\db.out-`date
+\%Y\%m\%d\%H`.gz


Thank you so much for replying and accepting my post to this NG.

On Thu, Oct 15, 2015 at 11:17 AM, Melvin Davidson <melvin6...@gmail.com> wrote:

In addition to exactly what you mean by "a long time" to
pg_dump 77k of your table,

What is your O/S and how much memory is on your system?
How many CPU's are in your system?
Also, what is your hard disk configuration?
What other applications are running simultaneously with
pg_dump?
What is the value of shared_memory &
maintenance_work_mem in postgresql.conf?

On Thu, Oct 15, 2015 at 11:04 AM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

On 10/14/2015 06:39 PM, anj patnaik wrote:

Hello,

I recently downloaded postgres 9.4 and I have a
client application that
runs in Tcl that inserts to the db and fetches
records.

For the majority of the time, the app will
connect to the server to do
insert/fetch.

For occasional use,

Re: [GENERAL] question

2015-10-15 Thread Melvin Davidson
The PostgreSQL default configuration is very conservative so as to ensure
it will work on almost any system.
However, based on your latest information, you should definitely adjust
shared_buffers = 4GB
maintenance_work_mem = 512MB

Note that you will need to restart PostgreSQL for this to take effect.
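As a sketch, those two settings can be applied either by editing postgresql.conf directly or, on 9.4 and later, with ALTER SYSTEM (which writes them to postgresql.auto.conf). The values are Melvin's suggestions for this particular 16 GB machine, not universal defaults.

```sql
ALTER SYSTEM SET shared_buffers = '4GB';
ALTER SYSTEM SET maintenance_work_mem = '512MB';
-- Restart the server afterwards: shared_buffers cannot be changed on reload.
```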

On Thu, Oct 15, 2015 at 4:35 PM, anj patnaik  wrote:

> Hello all,
> I will experiment with -Fc (custom). The file is already growing very
> large.
>
> I am running this:
> ./pg_dump -t RECORDER  -Fc postgres |  gzip > /tmp/dump
>
> Are there any other options for large tables to run faster and occupy less
> disk space?
>
> Below is memory info:
>
> [root@onxl5179 tmp]# cat /proc/meminfo
> MemTotal:   16333720 kB
> MemFree:  187736 kB
> Buffers:   79696 kB
> Cached: 11176616 kB
> SwapCached: 2024 kB
> Active: 11028784 kB
> Inactive:4561616 kB
> Active(anon):3839656 kB
> Inactive(anon):   642416 kB
> Active(file):7189128 kB
> Inactive(file):  3919200 kB
> Unevictable:   0 kB
> Mlocked:   0 kB
> SwapTotal:  33456120 kB
> SwapFree:   33428960 kB
> Dirty: 33892 kB
> Writeback: 0 kB
> AnonPages:   4332408 kB
> Mapped:   201388 kB
> Shmem:147980 kB
> Slab: 365380 kB
> SReclaimable: 296732 kB
> SUnreclaim:68648 kB
> KernelStack:5888 kB
> PageTables:37720 kB
> NFS_Unstable:  0 kB
> Bounce:0 kB
> WritebackTmp:  0 kB
> CommitLimit:41622980 kB
> Committed_AS:7148392 kB
> VmallocTotal:   34359738367 kB
> VmallocUsed:  179848 kB
> VmallocChunk:   34359548476 kB
> HardwareCorrupted: 0 kB
> AnonHugePages:   3950592 kB
> HugePages_Total:   0
> HugePages_Free:0
> HugePages_Rsvd:0
> HugePages_Surp:0
> Hugepagesize:   2048 kB
> DirectMap4k:   10240 kB
> DirectMap2M:16766976 kB
>
>
> # CPUs=8
> RHEL 6.5
>
> The PG shared memory info is the defaults as I've not touched the .conf
> file. I am not a DBA, just a test tools developer who needs to backup the
> table efficiently. I am fairly new to PG and not an expert at Linux.
>
> Also if there are recommended backup scripts/cron that you recommend,
> please point them to me.
>
> Thanks!!
>
> On Thu, Oct 15, 2015 at 3:59 PM, Scott Mead  wrote:
>
>>
>> On Thu, Oct 15, 2015 at 3:55 PM, Guillaume Lelarge <
>> guilla...@lelarge.info> wrote:
>>
>>> 2015-10-15 20:40 GMT+02:00 anj patnaik :
>>>
 It's a Linux machine with 8 CPUs. I don't have the other details.

 I get archive member too large for tar format.

 Is there a recommended command/options when dealing with very large
 tables, aka 150K rows and half of the rows have data being inserted with
 22MB?


>>> Don't use tar format? I never understood the interest of this one. You
>>> would be better off using the custom format.
>>>
>>
>> + 1
>>
>>  Use -F c
>>
>>
>> --
>> Scott Mead
>> Sr. Architect
>> *OpenSCG*
>> PostgreSQL, Java & Linux Experts
>>
>>
>> http://openscg.com
>>
>>
>>>
>>>
 -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t -w  > /tmp/dump
 pg_dump: [archiver (db)] connection to database "postgres" failed:
 fe_sendauth: no password supplied
 -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t   > /tmp/dump
 Password:
 pg_dump: [tar archiver] archive member too large for tar format
 -bash-4.1$ pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
 -bash: pg_dumpall: command not found
 -bash: tmpdb.out-2015101510.gz: Permission denied
 -bash-4.1$ ./pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz


 Thank you so much for replying and accepting my post to this NG.

 On Thu, Oct 15, 2015 at 11:17 AM, Melvin Davidson >>> > wrote:

> In addition to exactly what you mean by "a long time" to pg_dump 77k
> of your table,
>
> What is your O/S and how much memory is on your system?
> How many CPU's are in your system?
> Also, what is your hard disk configuration?
> What other applications are running simultaneously with pg_dump?
> What is the value of shared_memory & maintenance_work_mem in
> postgresql.conf?
>
> On Thu, Oct 15, 2015 at 11:04 AM, Adrian Klaver <
> adrian.kla...@aklaver.com> wrote:
>
>> On 10/14/2015 06:39 PM, anj patnaik wrote:
>>
>>> Hello,
>>>
>>> I recently downloaded postgres 9.4 and I have a client application
>>> that
>>> runs in Tcl that inserts to the db and fetches records.
>>>
>>> For the majority of the time, the app will connect to the server to
>>> do
>>> insert/fetch.
>>>
>>> For occasional use, we want to remove the requirement to have a
>>> server
>>> db and just have the application retrieve data from a local file.
>>>
>>> I know I can use pg_dump to export the tables. The questions are:
>>>
>>> 1) is there an in

Re: [GENERAL] question

2015-10-15 Thread Scott Mead
On Thu, Oct 15, 2015 at 3:55 PM, Guillaume Lelarge 
wrote:

> 2015-10-15 20:40 GMT+02:00 anj patnaik :
>
>> It's a Linux machine with 8 CPUs. I don't have the other details.
>>
>> I get archive member too large for tar format.
>>
>> Is there a recommended command/options when dealing with very large
>> tables, aka 150K rows and half of the rows have data being inserted with
>> 22MB?
>>
>>
> Don't use tar format? I never understood the interest of this one. You
> would be better off using the custom format.
>

+ 1

 Use -F c


--
Scott Mead
Sr. Architect
*OpenSCG*
PostgreSQL, Java & Linux Experts


http://openscg.com


>
>
>> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t -w  > /tmp/dump
>> pg_dump: [archiver (db)] connection to database "postgres" failed:
>> fe_sendauth: no password supplied
>> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t   > /tmp/dump
>> Password:
>> pg_dump: [tar archiver] archive member too large for tar format
>> -bash-4.1$ pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
>> -bash: pg_dumpall: command not found
>> -bash: tmpdb.out-2015101510.gz: Permission denied
>> -bash-4.1$ ./pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
>>
>>
>> Thank you so much for replying and accepting my post to this NG.
>>
>> On Thu, Oct 15, 2015 at 11:17 AM, Melvin Davidson 
>> wrote:
>>
>>> In addition to exactly what you mean by "a long time" to pg_dump 77k of
>>> your table,
>>>
>>> What is your O/S and how much memory is on your system?
>>> How many CPU's are in your system?
>>> Also, what is your hard disk configuration?
>>> What other applications are running simultaneously with pg_dump?
>>> What is the value of shared_memory & maintenance_work_mem in
>>> postgresql.conf?
>>>
>>> On Thu, Oct 15, 2015 at 11:04 AM, Adrian Klaver <
>>> adrian.kla...@aklaver.com> wrote:
>>>
 On 10/14/2015 06:39 PM, anj patnaik wrote:

> Hello,
>
> I recently downloaded postgres 9.4 and I have a client application that
> runs in Tcl that inserts to the db and fetches records.
>
> For the majority of the time, the app will connect to the server to do
> insert/fetch.
>
> For occasional use, we want to remove the requirement to have a server
> db and just have the application retrieve data from a local file.
>
> I know I can use pg_dump to export the tables. The questions are:
>
> 1) is there an in-memory db instance or file based I can create that is
> loaded with the dump file? This way the app code doesn't have to
> change.
>

 No.


> 2) does pg support embedded db?
>

 No.

 3) Or is my best option to convert the dump to sqlite and then import the
> sqlite and have the app read that embedded db.
>

 Sqlite tends to follow Postgres conventions, so you might be able to
 use the pg_dump output directly if you use --inserts or --column-inserts:

 http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html


> Finally, I am noticing pg_dump takes a lot of time to create a dump of
> my table. right now, the table  has 77K rows. Are there any ways to
> create automated batch files to create dumps overnight and do so
> quickly?
>

 Define long time.

 What is the pg_dump command you are using?

 Sure use a cron job.


> Thanks for your inputs!
>


 --
 Adrian Klaver
 adrian.kla...@aklaver.com



>>>
>>>
>>>
>>> --
>>> *Melvin Davidson*
>>> I reserve the right to fantasize.  Whether or not you
>>> wish to share my fantasy is entirely up to you.
>>>
>>
>>
>
>
> --
> Guillaume.
>   http://blog.guillaume.lelarge.info
>   http://www.dalibo.com
>


Re: [GENERAL] question

2015-10-15 Thread Melvin Davidson
You stated you wanted to dump just one table, but your command is dumping
the whole database!

So if you truly want to dump just a single table, then change your command
to:

pg_dump -t RECORDER postgres --format=t -t your_table_name -w  > /tmp/dump

Also, please explain why you cannot provide the other required information.
Are you not the DBA? If that is the case, then I can only encourage you to
consult with him/her.

On Thu, Oct 15, 2015 at 2:40 PM, anj patnaik  wrote:

> It's a Linux machine with 8 CPUs. I don't have the other details.
>
> I get archive member too large for tar format.
>
> Is there a recommended command/options when dealing with very large
> tables, aka 150K rows and half of the rows have data being inserted with
> 22MB?
>
> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t -w  > /tmp/dump
> pg_dump: [archiver (db)] connection to database "postgres" failed:
> fe_sendauth: no password supplied
> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t   > /tmp/dump
> Password:
> pg_dump: [tar archiver] archive member too large for tar format
> -bash-4.1$ pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
> -bash: pg_dumpall: command not found
> -bash: tmpdb.out-2015101510.gz: Permission denied
> -bash-4.1$ ./pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
>
>
> Thank you so much for replying and accepting my post to this NG.
>
> On Thu, Oct 15, 2015 at 11:17 AM, Melvin Davidson 
> wrote:
>
>> In addition to exactly what you mean by "a long time" to pg_dump 77k of
>> your table,
>>
>> What is your O/S and how much memory is on your system?
>> How many CPU's are in your system?
>> Also, what is your hard disk configuration?
>> What other applications are running simultaneously with pg_dump?
>> What is the value of shared_memory & maintenance_work_mem in
>> postgresql.conf?
>>
>> On Thu, Oct 15, 2015 at 11:04 AM, Adrian Klaver <
>> adrian.kla...@aklaver.com> wrote:
>>
>>> On 10/14/2015 06:39 PM, anj patnaik wrote:
>>>
 Hello,

 I recently downloaded postgres 9.4 and I have a client application that
 runs in Tcl that inserts to the db and fetches records.

 For the majority of the time, the app will connect to the server to do
 insert/fetch.

 For occasional use, we want to remove the requirement to have a server
 db and just have the application retrieve data from a local file.

 I know I can use pg_dump to export the tables. The questions are:

 1) is there an in-memory db instance or file based I can create that is
 loaded with the dump file? This way the app code doesn't have to change.

>>>
>>> No.
>>>
>>>
 2) does pg support embedded db?

>>>
>>> No.
>>>
>>> 3) Or is my best option to convert the dump to sqlite and the import the
 sqlite and have the app read that embedded db.

>>>
>>> Sqlite tends to follow Postgres conventions, so you might be able to use
>>> the pg_dump output directly if you use --inserts or --column-inserts:
>>>
>>> http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
>>>
>>>
 Finally, I am noticing pg_dump takes a lot of time to create a dump of
 my table. right now, the table  has 77K rows. Are there any ways to
 create automated batch files to create dumps overnight and do so
 quickly?

>>>
>>> Define long time.
>>>
>>> What is the pg_dump command you are using?
>>>
>>> Sure use a cron job.
>>>
>>>
 Thanks for your inputs!

>>>
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>>>
>>>
>>>
>>
>>
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] question

2015-10-15 Thread Guillaume Lelarge
2015-10-15 20:40 GMT+02:00 anj patnaik :

> It's a Linux machine with 8 CPUs. I don't have the other details.
>
> I get archive member too large for tar format.
>
> Is there a recommended command/options when dealing with very large
> tables, aka 150K rows and half of the rows have data being inserted with
> 22MB?
>
>
Don't use the tar format? I never understood the appeal of that one. You
would be better off using the custom format.
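A minimal sketch of that approach (table name and paths follow the earlier
examples; pigz is assumed to be installed):

```
# Custom-format dump with pg_dump's own compression disabled (-Z0),
# compressed in parallel by pigz instead
pg_dump -t RECORDER -Fc -Z0 postgres | pigz > /tmp/recorder.dump.gz

# Restore later by feeding the decompressed stream to pg_restore
pigz -dc /tmp/recorder.dump.gz | pg_restore -d postgres
```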


> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t -w  > /tmp/dump
> pg_dump: [archiver (db)] connection to database "postgres" failed:
> fe_sendauth: no password supplied
> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t   > /tmp/dump
> Password:
> pg_dump: [tar archiver] archive member too large for tar format
> -bash-4.1$ pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
> -bash: pg_dumpall: command not found
> -bash: tmpdb.out-2015101510.gz: Permission denied
> -bash-4.1$ ./pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
>
>
> Thank you so much for replying and accepting my post to this NG.
>
> On Thu, Oct 15, 2015 at 11:17 AM, Melvin Davidson 
> wrote:
>
>> In addition to exactly what you mean by "a long time" to pg_dump 77k of
>> your table,
>>
>> What is your O/S and how much memory is on your system?
>> How many CPU's are in your system?
>> Also, what is your hard disk configuration?
>> What other applications are running simultaneously with pg_dump?
>> What is the value of shared_memory & maintenance_work_mem in
>> postgresql.conf?
>>
>> On Thu, Oct 15, 2015 at 11:04 AM, Adrian Klaver <
>> adrian.kla...@aklaver.com> wrote:
>>
>>> On 10/14/2015 06:39 PM, anj patnaik wrote:
>>>
 Hello,

 I recently downloaded postgres 9.4 and I have a client application that
 runs in Tcl that inserts to the db and fetches records.

 For the majority of the time, the app will connect to the server to do
 insert/fetch.

 For occasional use, we want to remove the requirement to have a server
 db and just have the application retrieve data from a local file.

 I know I can use pg_dump to export the tables. The questions are:

 1) is there an in-memory db instance or file based I can create that is
 loaded with the dump file? This way the app code doesn't have to change.

>>>
>>> No.
>>>
>>>
 2) does pg support embedded db?

>>>
>>> No.
>>>
>>> 3) Or is my best option to convert the dump to sqlite and the import the
 sqlite and have the app read that embedded db.

>>>
>>> Sqlite tends to follow Postgres conventions, so you might be able to use
>>> the pg_dump output directly if you use --inserts or --column-inserts:
>>>
>>> http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
>>>
>>>
 Finally, I am noticing pg_dump takes a lot of time to create a dump of
 my table. right now, the table  has 77K rows. Are there any ways to
 create automated batch files to create dumps overnight and do so
 quickly?

>>>
>>> Define long time.
>>>
>>> What is the pg_dump command you are using?
>>>
>>> Sure use a cron job.
>>>
>>>
 Thanks for your inputs!

>>>
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>>>
>>>
>>>
>>
>>
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>


-- 
Guillaume.
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


Re: [GENERAL] postgres function

2015-10-15 Thread Ramesh T
Yes, David gave the correct solution,

but the value I'm using is a column in the table, and sometimes the value
may be '123-987-123' or '123-987-123-13-87'.

If I pass a value like below, it must return the else condition, 0:



select case when select split_part('123-987-123','-',4) >0
then 1 else 0 end
but it returns an error like "integer needed"...
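The error most likely comes from comparing split_part()'s text result with
the integer 0; since split_part() returns '' for a missing field, testing
against the empty string (something like split_part(...) <> '' in SQL) is
the usual fix. A rough shell model of that logic, using cut as a stand-in
for split_part():

```shell
# emulate: case when the 4th '-'-separated field is non-empty then 1 else 0 end
f4=$(echo '123-987-123' | cut -d- -f4)
if [ -n "$f4" ]; then echo 1; else echo 0; fi        # prints 0
f4=$(echo '123-987-123-13-87' | cut -d- -f4)
if [ -n "$f4" ]; then echo 1; else echo 0; fi        # prints 1
```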



On Thu, Oct 15, 2015 at 8:50 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Thu, Oct 15, 2015 at 10:05 AM, Ramesh T 
> wrote:
>
>> '123-987-123' it is not fixed some times it may be '1233-9873-123-098'
>> as you said it's fixed,
>>
>> changes the values in middle of the -
>>
>> sometimes times i need 1233 and 098 or 9873,first position  i'll find
>> direct for second variable we don't know where it's end with -
>>
>> i.e ,
>> i need to find second postition of the variable between the '-'
>>
>
> ​While I and others are likely inclined to provide you a working solution
> to do so you need to state your data and requirement more clearly.​  Given
> the apparent language dynamic I'd suggest supplying 5-10 example data
> values along with their expected result.
>
> ​Otherwise, regular expressions almost certainly will let you solve your
> problem (though, like Joe Conway indicated, split_​part may be possible)
> once you learn how to construct them.  regexp_matches(...) is the access
> point to using them.
>
> David J.
>
>


Re: [GENERAL] question

2015-10-15 Thread anj patnaik
It's a Linux machine with 8 CPUs. I don't have the other details.

I get archive member too large for tar format.

Is there a recommended command/options when dealing with very large tables,
aka 150K rows and half of the rows have data being inserted with 22MB?

-bash-4.1$ ./pg_dump -t RECORDER postgres --format=t -w  > /tmp/dump
pg_dump: [archiver (db)] connection to database "postgres" failed:
fe_sendauth: no password supplied
-bash-4.1$ ./pg_dump -t RECORDER postgres --format=t   > /tmp/dump
Password:
pg_dump: [tar archiver] archive member too large for tar format
-bash-4.1$ pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
-bash: pg_dumpall: command not found
-bash: tmpdb.out-2015101510.gz: Permission denied
-bash-4.1$ ./pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz


Thank you so much for replying and accepting my post to this NG.

On Thu, Oct 15, 2015 at 11:17 AM, Melvin Davidson 
wrote:

> In addition to exactly what you mean by "a long time" to pg_dump 77k of
> your table,
>
> What is your O/S and how much memory is on your system?
> How many CPU's are in your system?
> Also, what is your hard disk configuration?
> What other applications are running simultaneously with pg_dump?
> What is the value of shared_memory & maintenance_work_mem in
> postgresql.conf?
>
> On Thu, Oct 15, 2015 at 11:04 AM, Adrian Klaver wrote:
>
>> On 10/14/2015 06:39 PM, anj patnaik wrote:
>>
>>> Hello,
>>>
>>> I recently downloaded postgres 9.4 and I have a client application that
>>> runs in Tcl that inserts to the db and fetches records.
>>>
>>> For the majority of the time, the app will connect to the server to do
>>> insert/fetch.
>>>
>>> For occasional use, we want to remove the requirement to have a server
>>> db and just have the application retrieve data from a local file.
>>>
>>> I know I can use pg_dump to export the tables. The questions are:
>>>
>>> 1) is there an in-memory db instance or file based I can create that is
>>> loaded with the dump file? This way the app code doesn't have to change.
>>>
>>
>> No.
>>
>>
>>> 2) does pg support embedded db?
>>>
>>
>> No.
>>
>> 3) Or is my best option to convert the dump to sqlite and the import the
>>> sqlite and have the app read that embedded db.
>>>
>>
>> Sqlite tends to follow Postgres conventions, so you might be able to use
>> the pg_dump output directly if you use --inserts or --column-inserts:
>>
>> http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
>>
>>
>>> Finally, I am noticing pg_dump takes a lot of time to create a dump of
>>> my table. right now, the table  has 77K rows. Are there any ways to
>>> create automated batch files to create dumps overnight and do so quickly?
>>>
>>
>> Define long time.
>>
>> What is the pg_dump command you are using?
>>
>> Sure use a cron job.
>>
>>
>>> Thanks for your inputs!
>>>
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>>
>>
>
>
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>


Re: [GENERAL] postgres function

2015-10-15 Thread David G. Johnston
On Thu, Oct 15, 2015 at 3:15 PM, Ramesh T 
wrote:

> yes David gave correct solution
>
> but , the value I'm using  and  it's column in the table sometimes value
>  may be '123-987-123' or '123-987-123-13-87'
>
>
So adapt the answer provided to match your data.

if pass like below must return else condiion 0,
>
> select case when select split_part('123-987-123','-',4) >0
> then 1 else 0 end
> it's return error like integer need...
>
>
I have no clue what you are trying to say here...

David J.


Re: [GENERAL] Simple way to load xml into table

2015-10-15 Thread David G. Johnston
On Thu, Oct 15, 2015 at 1:38 PM, Emi  wrote:

> Hello,
>
> For psql 8.3, is there a simple way to load xml file into table please?
>
> E.g.,
>
>   <row>
>     <c1>True</c1>
>     <c2>test1</c2>
>     <c3>e1</c3>
>   </row>
>   <row>
>     <c1>false</c1>
>     <c2>test2</c2>
>   </row>
>
> Results:
> t1 (c1 text, c2 text, c3 text):
>
> c1    | c2    | c3
> ------+-------+------
> true  | test1 | e1
> false | test2 | null
> ..
>
> Thanks a lot!
>
>
For "simple" I don't see anything built-in better than xpath_table(...) in
the xml2 contrib module.

http://www.postgresql.org/docs/8.3/static/xml2.html

This applies to any release, since the standard SQL/XML stuff that we
implemented in 8.3+ doesn't appear to cover this particular capability.

David J.


Re: [GENERAL] Simple way to load xml into table

2015-10-15 Thread Rob Sargent

On 10/15/2015 11:38 AM, Emi wrote:

Hello,

For psql 8.3, is there a simple way to load xml file into table please?

E.g.,

  <row>
    <c1>True</c1>
    <c2>test1</c2>
    <c3>e1</c3>
  </row>
  <row>
    <c1>false</c1>
    <c2>test2</c2>
  </row>

Results:
t1 (c1 text, c2 text, c3 text):

c1    | c2    | c3
------+-------+------
true  | test1 | e1
false | test2 | null
..

Thanks a lot!



Shame on Concordia! 8.3.  Really? (Send this up the chain)


[GENERAL] Simple way to load xml into table

2015-10-15 Thread Emi

Hello,

For psql 8.3, is there a simple way to load xml file into table please?

E.g.,

  <row>
    <c1>True</c1>
    <c2>test1</c2>
    <c3>e1</c3>
  </row>
  <row>
    <c1>false</c1>
    <c2>test2</c2>
  </row>

Results:
t1 (c1 text, c2 text, c3 text):

c1    | c2    | c3
------+-------+------
true  | test1 | e1
false | test2 | null
..

Thanks a lot!




Re: [GENERAL] pgpool ssl handshake failure

2015-10-15 Thread Adrian Klaver

On 10/15/2015 09:36 AM, AI Rumman wrote:

I configured Postgresql 9.4 and still getting the same error.


Configured what?

Or more to the point what is ssl_renegotiation_limit set to?



Thanks.

On Thu, Oct 15, 2015 at 7:16 AM, Adrian Klaver
<adrian.kla...@aklaver.com> wrote:

On 10/15/2015 06:59 AM, AI Rumman wrote:

Hi,

I am using pgpool-II version 3.4.3 (tataraboshi).
Where my database is Postgresql 8.4.


Probably already know, but 8.4 is approximately 1.25 years beyond EOL:

http://www.postgresql.org/support/versioning/


I am trying to configure ssl mode from client and between pgpool and
database it is non-ssl.


What is non-ssl, the database or pgpool?

I configured as document and now I am getting this in my log:

 /2015-10-13 22:17:58: pid 1857: LOG:  new connection received
 //2015-10-13 22:17:58: pid 1857: DETAIL:  connecting
host=10.0.0.5
 port=65326
 //2015-10-13 22:17:58: pid 1857: LOG:  pool_ssl:
"SSL_read": "ssl
 handshake failure"
 //2015-10-13 22:17:58: pid 1857: ERROR:  unable to read
data from
 frontend
 //2015-10-13 22:17:58: pid 1857: DETAIL:  socket read
failed with an
 error "Success"/

Please let me know what wrong I am doing.


Not quite sure but given the below from the 9.5 Release Notes:

"
Remove server configuration parameter ssl_renegotiation_limit, which
was deprecated in earlier releases (Andres Freund)

While SSL renegotiation is a good idea in theory, it has caused
enough bugs to be considered a net negative in practice, and it is
due to be removed from future versions of the relevant standards. We
have therefore removed support for it from PostgreSQL."

I would check to see what  ssl_renegotiation_limit is set to:

http://www.postgresql.org/docs/8.4/static/runtime-config-connection.html

and if it is not set to 0, then try that.
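For a pre-9.5 server that is a one-line change (an illustrative
postgresql.conf fragment; the parameter no longer exists in 9.5):

```
# postgresql.conf -- disable SSL renegotiation
ssl_renegotiation_limit = 0
```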



Thanks & Regards.



--
Adrian Klaver
adrian.kla...@aklaver.com 





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pgpool ssl handshake failure

2015-10-15 Thread AI Rumman
I configured Postgresql 9.4 and still getting the same error.

Thanks.

On Thu, Oct 15, 2015 at 7:16 AM, Adrian Klaver 
wrote:

> On 10/15/2015 06:59 AM, AI Rumman wrote:
>
>> Hi,
>>
>> I am using pgpool-II version 3.4.3 (tataraboshi).
>> Where my database is Postgresql 8.4.
>>
>
> Probably already know, but 8.4 is approximately 1.25 years beyond EOL:
>
> http://www.postgresql.org/support/versioning/
>
>
>> I am trying to configure ssl mode from client and between pgpool and
>> database it is non-ssl.
>>
>
> What is non-ssl, the database or pgpool?
>
> I configured as document and now I am getting this in my log:
>>
>> /2015-10-13 22:17:58: pid 1857: LOG:  new connection received
>> //2015-10-13 22:17:58: pid 1857: DETAIL:  connecting host=10.0.0.5
>> port=65326
>> //2015-10-13 22:17:58: pid 1857: LOG:  pool_ssl: "SSL_read": "ssl
>> handshake failure"
>> //2015-10-13 22:17:58: pid 1857: ERROR:  unable to read data from
>> frontend
>> //2015-10-13 22:17:58: pid 1857: DETAIL:  socket read failed with an
>> error "Success"/
>>
>> Please let me know what wrong I am doing.
>>
>
> Not quite sure but given the below from the 9.5 Release Notes:
>
> "
> Remove server configuration parameter ssl_renegotiation_limit, which was
> deprecated in earlier releases (Andres Freund)
>
> While SSL renegotiation is a good idea in theory, it has caused enough
> bugs to be considered a net negative in practice, and it is due to be
> removed from future versions of the relevant standards. We have therefore
> removed support for it from PostgreSQL."
>
> I would check to see what  ssl_renegotiation_limit is set to:
>
> http://www.postgresql.org/docs/8.4/static/runtime-config-connection.html
>
> and if it is not set to 0, then try that.
>
>
>
>> Thanks & Regards.
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] Installing plpython3u

2015-10-15 Thread Dario Beraldi
On 15 October 2015 at 16:29, Dario Beraldi  wrote:

>
>
>
>
>
> On 15 October 2015 at 16:23, Tom Lane  wrote:
>
>> Dario Beraldi  writes:
>> >> It might be worth cd'ing into the src/pl/plpython subdirectory and
>> >> manually doing "make install" there to see what it prints.
>>
>> > Here we go:
>> > cd
>> >
>> /data/sblab-home/berald01/applications/postgresql/postgresql-9.3.5/src/pl/plpython
>> > make install
>> > make: Nothing to be done for `install'.
>>
>> That, and the fact that your "ls" shows no derived files, means that the
>> Makefile is choosing not to do anything, which a look at the Makefile
>> says must be because shared_libpython isn't getting set.  (As of 9.5
>> we've changed that to not fail silently, but in 9.3 this is what it does.)
>>
>> There are two possibilities here: either your python3 installation does
>> not include a shared-library version of libpython, or it does but the
>> configure+Make process is failing to detect that.  Probably should
>> establish which of those it is before going further.
>>
>> regards, tom lane
>>
>
> Ahh, I guess this answers the question then:
>
> cd
> /data/sblab-home/berald01/applications/postgresql/postgresql-9.3.5/src/pl/plpython/
> make
>
> *** Cannot build PL/Python because libpython is not a shared library.
> *** You might have to rebuild your Python installation.  Refer to
> *** the documentation for details.
>

Ok, this seems to have done the trick:

# Get python and install as shared library:
wget http://python.org/ftp/python/3.5.0/Python-3.5.0.tar.xz
tar xf Python-3.5.0.tar.xz
cd Python-3.5.0

./configure --enable-shared \
--prefix=$HOME \
LDFLAGS="-Wl,--rpath=$HOME/lib"
make
make altinstall

# Re-configure postgres
cd /Users/berald01/applications/postgresql/postgresql-9.3.5/
./configure --prefix=$HOME --with-python PYTHON=~/bin/python3.5
make
make install

# Create python3 lang:
createlang plpython3u sblab


Re: [GENERAL] Installing plpython3u

2015-10-15 Thread Dario Beraldi
On 15 October 2015 at 16:23, Tom Lane  wrote:

> Dario Beraldi  writes:
> >> It might be worth cd'ing into the src/pl/plpython subdirectory and
> >> manually doing "make install" there to see what it prints.
>
> > Here we go:
> > cd
> >
> /data/sblab-home/berald01/applications/postgresql/postgresql-9.3.5/src/pl/plpython
> > make install
> > make: Nothing to be done for `install'.
>
> That, and the fact that your "ls" shows no derived files, means that the
> Makefile is choosing not to do anything, which a look at the Makefile
> says must be because shared_libpython isn't getting set.  (As of 9.5
> we've changed that to not fail silently, but in 9.3 this is what it does.)
>
> There are two possibilities here: either your python3 installation does
> not include a shared-library version of libpython, or it does but the
> configure+Make process is failing to detect that.  Probably should
> establish which of those it is before going further.
>
> regards, tom lane
>

Ahh, I guess this answers the question then:

cd
/data/sblab-home/berald01/applications/postgresql/postgresql-9.3.5/src/pl/plpython/
make

*** Cannot build PL/Python because libpython is not a shared library.
*** You might have to rebuild your Python installation.  Refer to
*** the documentation for details.

Right, it looks like I have to rebuild python then.
Thanks guys!
Dario


Re: [GENERAL] Installing plpython3u

2015-10-15 Thread Tom Lane
Dario Beraldi  writes:
>> It might be worth cd'ing into the src/pl/plpython subdirectory and
>> manually doing "make install" there to see what it prints.

> Here we go:
> cd
> /data/sblab-home/berald01/applications/postgresql/postgresql-9.3.5/src/pl/plpython
> make install
> make: Nothing to be done for `install'.

That, and the fact that your "ls" shows no derived files, means that the
Makefile is choosing not to do anything, which a look at the Makefile
says must be because shared_libpython isn't getting set.  (As of 9.5
we've changed that to not fail silently, but in 9.3 this is what it does.)

There are two possibilities here: either your python3 installation does
not include a shared-library version of libpython, or it does but the
configure+Make process is failing to detect that.  Probably should
establish which of those it is before going further.
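One quick way to tell those two cases apart, assuming the python3 on the
PATH is the interpreter PostgreSQL was configured against:

```shell
# Prints 1 if this Python was built with --enable-shared (shared libpython),
# 0 if libpython is only available as a static library
python3 -c "import sysconfig; print(sysconfig.get_config_var('Py_ENABLE_SHARED'))"
```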

regards, tom lane




Re: [GENERAL] postgres function

2015-10-15 Thread David G. Johnston
On Thu, Oct 15, 2015 at 10:05 AM, Ramesh T 
wrote:

> '123-987-123' it is not fixed some times it may be '1233-9873-123-098'
> as you said it's fixed,
>
> changes the values in middle of the -
>
> sometimes times i need 1233 and 098 or 9873,first position  i'll find
> direct for second variable we don't know where it's end with -
>
> i.e ,
> i need to find second postition of the variable between the '-'
>

While I and others are likely inclined to provide you a working solution,
to do so you need to state your data and requirements more clearly. Given
the apparent language dynamic I'd suggest supplying 5-10 example data
values along with their expected results.

Otherwise, regular expressions almost certainly will let you solve your
problem (though, as Joe Conway indicated, split_part may be sufficient)
once you learn how to construct them. regexp_matches(...) is the access
point to using them.

David J.


Re: [GENERAL] question

2015-10-15 Thread Melvin Davidson
In addition to exactly what you mean by "a long time" to pg_dump 77k of
your table,

What is your O/S and how much memory is on your system?
How many CPU's are in your system?
Also, what is your hard disk configuration?
What other applications are running simultaneously with pg_dump?
What is the value of shared_memory & maintenance_work_mem in
postgresql.conf?

On Thu, Oct 15, 2015 at 11:04 AM, Adrian Klaver 
wrote:

> On 10/14/2015 06:39 PM, anj patnaik wrote:
>
>> Hello,
>>
>> I recently downloaded postgres 9.4 and I have a client application that
>> runs in Tcl that inserts to the db and fetches records.
>>
>> For the majority of the time, the app will connect to the server to do
>> insert/fetch.
>>
>> For occasional use, we want to remove the requirement to have a server
>> db and just have the application retrieve data from a local file.
>>
>> I know I can use pg_dump to export the tables. The questions are:
>>
>> 1) is there an in-memory db instance or file based I can create that is
>> loaded with the dump file? This way the app code doesn't have to change.
>>
>
> No.
>
>
>> 2) does pg support embedded db?
>>
>
> No.
>
> 3) Or is my best option to convert the dump to sqlite and the import the
>> sqlite and have the app read that embedded db.
>>
>
> Sqlite tends to follow Postgres conventions, so you might be able to use
> the pg_dump output directly if you use --inserts or --column-inserts:
>
> http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
>
>
>> Finally, I am noticing pg_dump takes a lot of time to create a dump of
>> my table. right now, the table  has 77K rows. Are there any ways to
>> create automated batch files to create dumps overnight and do so quickly?
>>
>
> Define long time.
>
> What is the pg_dump command you are using?
>
> Sure use a cron job.
>
>
>> Thanks for your inputs!
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Installing plpython3u

2015-10-15 Thread Adrian Klaver

On 10/15/2015 07:49 AM, Dario Beraldi wrote:



It might be worth cd'ing into the src/pl/plpython subdirectory and
manually doing "make install" there to see what it prints.


By the way, that's what I see in src/pl/plpython:


So it does not look like it actually ran the make, I see no *.o files.

What happens if you run make in?:

/data/sblab-home/berald01/applications/postgresql/postgresql-9.3.5/src/pl/plpython



ls -l
/data/sblab-home/berald01/applications/postgresql/postgresql-9.3.5/src/pl/plpython
total 292
drwxrwx--- 2 berald01 sblab  4096 Jul 21  2014 expected
-rw-r- 1 berald01 sblab  1002 Jul 21  2014 generate-spiexceptions.pl

-rw-r- 1 berald01 sblab  6154 Jul 21  2014 Makefile
-rw-r- 1 berald01 sblab   648 Jul 21  2014 nls.mk 
-rw-r- 1 berald01 sblab 10623 Jul 21  2014 plpy_cursorobject.c
-rw-r- 1 berald01 sblab   394 Jul 21  2014 plpy_cursorobject.h
-rw-r- 1 berald01 sblab 10841 Jul 21  2014 plpy_elog.c
-rw-r- 1 berald01 sblab   699 Jul 21  2014 plpy_elog.h
-rw-r- 1 berald01 sblab 22176 Jul 21  2014 plpy_exec.c
-rw-r- 1 berald01 sblab   294 Jul 21  2014 plpy_exec.h
-rw-r- 1 berald01 sblab 10407 Jul 21  2014 plpy_main.c
-rw-r- 1 berald01 sblab   789 Jul 21  2014 plpy_main.h
-rw-r- 1 berald01 sblab  2476 Jul 21  2014 plpy_planobject.c
-rw-r- 1 berald01 sblab   456 Jul 21  2014 plpy_planobject.h
-rw-r- 1 berald01 sblab  9942 Jul 21  2014 plpy_plpymodule.c
-rw-r- 1 berald01 sblab   365 Jul 21  2014 plpy_plpymodule.h
-rw-r- 1 berald01 sblab 13374 Jul 21  2014 plpy_procedure.c
-rw-r- 1 berald01 sblab  1596 Jul 21  2014 plpy_procedure.h
-rw-r- 1 berald01 sblab  6980 Jul 21  2014 plpy_resultobject.c
-rw-r- 1 berald01 sblab   573 Jul 21  2014 plpy_resultobject.h
-rw-r- 1 berald01 sblab 13793 Jul 21  2014 plpy_spi.c
-rw-r- 1 berald01 sblab   780 Jul 21  2014 plpy_spi.h
-rw-r- 1 berald01 sblab  5490 Jul 21  2014 plpy_subxactobject.c
-rw-r- 1 berald01 sblab   673 Jul 21  2014 plpy_subxactobject.h
-rw-r- 1 berald01 sblab   351 Jul 21  2014 plpython2u--1.0.sql
-rw-r- 1 berald01 sblab   196 Jul 21  2014 plpython2u.control
-rw-r- 1 berald01 sblab   402 Jul 21  2014
plpython2u--unpackaged--1.0.sql
-rw-r- 1 berald01 sblab   351 Jul 21  2014 plpython3u--1.0.sql
-rw-r- 1 berald01 sblab   196 Jul 21  2014 plpython3u.control
-rw-r- 1 berald01 sblab   402 Jul 21  2014
plpython3u--unpackaged--1.0.sql
-rw-r- 1 berald01 sblab  4071 Jul 21  2014 plpython.h
-rw-r- 1 berald01 sblab   347 Jul 21  2014 plpythonu--1.0.sql
-rw-r- 1 berald01 sblab   194 Jul 21  2014 plpythonu.control
-rw-r- 1 berald01 sblab   393 Jul 21  2014
plpythonu--unpackaged--1.0.sql
-rw-r- 1 berald01 sblab 27349 Jul 21  2014 plpy_typeio.c
-rw-r- 1 berald01 sblab  2659 Jul 21  2014 plpy_typeio.h
-rw-r- 1 berald01 sblab  3548 Jul 21  2014 plpy_util.c
-rw-r- 1 berald01 sblab   511 Jul 21  2014 plpy_util.h
drwxrwx--- 2 berald01 sblab   144 Jul 21  2014 po
-rw-r- 1 berald01 sblab 22857 Jul 21  2014 spiexceptions.h
drwxrwx--- 2 berald01 sblab  4096 Jul 21  2014 sql




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] How can I use crosstab functons in PostgreSQL 9.3?

2015-10-15 Thread David G. Johnston
On Thu, Oct 15, 2015 at 10:48 AM, Rob Richardson 
wrote:

> By George, I think I've got it!
>
> When I ran CREATE EXTENSION tablefunc WITH SCHEMA public, I got the
> crosstab methods and my sample query worked.
>

I would suggest learning about search_path(s) instead of placing everything
into the one schema that happens to be in the default search_path.
Otherwise your "public" is going to be a mess to scan through.

David J.


Re: [GENERAL] postgres function

2015-10-15 Thread Geoff Winkless
Well you could use

SELECT LENGTH(REGEXP_REPLACE('123-987-123', '(([^-]*-){2}).*', '\1'));

Not pretty, but it works.
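The same expression can be exercised from the shell (GNU or BSD sed
assumed), which makes it easy to test against other inputs:

```shell
# keep everything up to and including the second '-', then count characters
echo '123-987-123' | sed -E 's/(([^-]*-){2}).*/\1/' | tr -d '\n' | wc -c
# prints 8, the position of the second '-'
```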

Geoff

On 15 October 2015 at 15:05, Ramesh T  wrote:

> '123-987-123' it is not fixed some times it may be '1233-9873-123-098'
> as you said it's fixed,
>
> changes the values in middle of the -
>
> sometimes times i need 1233 and 098 or 9873,first position  i'll find
> direct for second variable we don't know where it's end with -
>
> i.e ,
> i need to find second postition of the variable between the '-'
>
>
>
> On Thu, Oct 15, 2015 at 6:32 PM, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Thu, Oct 15, 2015 at 8:32 AM, Ramesh T 
>> wrote:
>>
>>>  select position('-' in '123-987-123')
>>> position
>>> ---
>>> 4
>>> But I want second occurrence,
>>> position
>>> -
>>> 8
>>>
>>> plz any help..?
>>>
>>>
>> SELECT length((regexp_matches('123-987-123', '(\d{3}-\d{3}-)\d{3}'))[1])
>>
>> David J.
>>
>>
>


Re: [GENERAL] question

2015-10-15 Thread Adrian Klaver

On 10/14/2015 06:39 PM, anj patnaik wrote:

Hello,

I recently downloaded postgres 9.4 and I have a client application that
runs in Tcl that inserts to the db and fetches records.

For the majority of the time, the app will connect to the server to do
insert/fetch.

For occasional use, we want to remove the requirement to have a server
db and just have the application retrieve data from a local file.

I know I can use pg_dump to export the tables. The questions are:

1) is there an in-memory db instance or file based I can create that is
loaded with the dump file? This way the app code doesn't have to change.


No.



2) does pg support embedded db?


No.


3) Or is my best option to convert the dump to sqlite and the import the
sqlite and have the app read that embedded db.


Sqlite tends to follow Postgres conventions, so you might be able to use 
the pg_dump output directly if you use --inserts or --column-inserts:


http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html



Finally, I am noticing pg_dump takes a lot of time to create a dump of
my table. right now, the table  has 77K rows. Are there any ways to
create automated batch files to create dumps overnight and do so quickly?


Define long time.

What is the pg_dump command you are using?

Sure use a cron job.
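For the overnight part, a crontab entry along these lines would do (paths
and schedule are illustrative):

```
# m h dom mon dow  command -- nightly custom-format dump at 02:30
# (% must be escaped inside crontab entries)
30 2 * * * /usr/pgsql-9.4/bin/pg_dump -Fc -t RECORDER postgres > /backups/recorder-$(date +\%Y\%m\%d).dump
```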



Thanks for your inputs!



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] postgres function

2015-10-15 Thread Joe Conway
On 10/15/2015 07:05 AM, Ramesh T wrote:
> '123-987-123' it is not fixed some times it may be '1233-9873-123-098'
> as you said it's fixed,
> 
> changes the values in middle of the -
> 
> sometimes times i need 1233 and 098 or 9873,first position  i'll find
> direct for second variable we don't know where it's end with -
> 
> i.e ,
> i need to find second postition of the variable between the '-'

Are you looking for the position or the actual variable? If you really
want the latter you can do:

select split_part('123-987-123','-',2);
select split_part('1233-9873-123-098','-',2);
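A shell analogue for readers who want to experiment outside psql (GNU cut
assumed):

```shell
# cut splits on a delimiter and selects 1-based fields, much like split_part()
echo '123-987-123' | cut -d- -f2          # prints 987
echo '1233-9873-123-098' | cut -d- -f2    # prints 9873
# a field number past the end yields an empty string, as split_part() does
echo '123-987-123' | cut -d- -f4          # prints an empty line
```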

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: [GENERAL] Installing plpython3u

2015-10-15 Thread Adrian Klaver

On 10/15/2015 07:42 AM, Dario Beraldi wrote:


createlang plpython3u sblab
ERROR:  could not open extension control file

"/data/sblab-home/berald01/share/postgresql/extension/plpython3u.control":
No such file or directory
STATEMENT:  CREATE EXTENSION "plpython3u";


Hmm, what files *do* you have in that directory


Here's what I see:

ls -l /data/sblab-home/berald01/share/postgresql/extension/
total 12
-rw-r--r-- 1 berald01 sblab 332 Oct 15 15:30 plpgsql--1.0.sql
-rw-r--r-- 1 berald01 sblab 179 Oct 15 15:30 plpgsql.control
-rw-r--r-- 1 berald01 sblab 381 Oct 15 15:30 plpgsql--unpackaged--1.0.sql

There seems to be a discrepancy in paths:

ERROR:  could not open extension control file
"/data/sblab-home/berald01/share/postgresql/extension/plpython3u.control"

configure_args=' '\''--prefix=/Users/berald01'\''

So is there something mapping /Users/berald01 to /data/sblab-home/ ?


It *should* be fine /Users/berald01 and /data/sblab-home/berald01 point
to the same space. I.e. "ls  /Users/berald01" is the same as "ls
/data/sblab-home/berald01"


Just for grins try:

ls -al /Users/berald01/share/postgresql/extension/




It might be worth cd'ing into the src/pl/plpython subdirectory and
manually doing "make install" there to see what it prints.



Here we go:
cd
/data/sblab-home/berald01/applications/postgresql/postgresql-9.3.5/src/pl/plpython
make install
make: Nothing to be done for `install'.

Any clue?

(Thanks a ton for your assistance!)


--
Adrian Klaver
adrian.kla...@aklaver.com 





--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] question

2015-10-15 Thread anj patnaik
Hello,

I recently downloaded postgres 9.4 and I have a client application that
runs in Tcl that inserts to the db and fetches records.

For the majority of the time, the app will connect to the server to do
insert/fetch.

For occasional use, we want to remove the requirement to have a server db
and just have the application retrieve data from a local file.

I know I can use pg_dump to export the tables. The questions are:

1) is there an in-memory db instance or file based I can create that is
loaded with the dump file? This way the app code doesn't have to change.

2) does pg support embedded db?
3) Or is my best option to convert the dump to SQLite, import it, and have
the app read that embedded db?

Finally, I am noticing pg_dump takes a lot of time to create a dump of my
table. Right now, the table has 77K rows. Are there any ways to create
automated batch files to create dumps overnight and do so quickly?

Thanks for your inputs!


Re: [GENERAL] postgres function

2015-10-15 Thread Ramesh T
 select position('-' in '123-987-123')
position
---
4
But I want the second occurrence:
position
-
8

Please, any help?
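(One way to get that second position in plain SQL — a sketch, not taken from the thread — is to search again starting just past the first match:)

```sql
-- second dash = offset of the first '-' plus the offset of the
-- next '-' within the remainder of the string
SELECT position('-' in s)
     + position('-' in substr(s, position('-' in s) + 1)) AS second_dash
FROM (VALUES ('123-987-123'), ('1233-9873-123-098')) AS v(s);
-- returns 8 and 10 respectively
```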



On Thu, Oct 15, 2015 at 12:54 AM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wed, Oct 14, 2015 at 9:38 AM, Ramesh T 
> wrote:
>
>> Hi All,
>>   Do we have  function like  regexp_substr in postgres..?
>>
>> in oracle this function seach the - from 1 to 2 and return result,
>> regexp_substr(PART_CATG_DESC,'[^-]+', 1, 2)
>>
>
> ​Maybe one of the functions on this page will get you what you need.
>
> http://www.postgresql.org/docs/devel/static/functions-string.html
>
> David J.
>
> ​
>
>


Re: [GENERAL] postgres function

2015-10-15 Thread Ramesh T
'123-987-123' is not fixed; sometimes it may be '1233-9873-123-098',
not fixed as you assumed.

The values between the '-' separators change.

Sometimes I need 1233 and 098, or 9873. I can find the first position
directly, but for the second value I don't know where it ends.

i.e.,
I need to find the second position of the value between the '-'



On Thu, Oct 15, 2015 at 6:32 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Thu, Oct 15, 2015 at 8:32 AM, Ramesh T 
> wrote:
>
>>  select position('-' in '123-987-123')
>> position
>> ---
>> 4
>> But I want the second occurrence:
>> position
>> -
>> 8
>>
>> Please, any help?
>>
>>
> ​
> SELECT length((regexp_matches('123-987-123', '(\d{3}-\d{3}-)\d{3}'))[1])
> ​
>
> David J.
>
>


Re: [GENERAL] Installing plpython3u

2015-10-15 Thread Dario Beraldi
>
> It might be worth cd'ing into the src/pl/plpython subdirectory and
>> manually doing "make install" there to see what it prints.
>>
>>
>> By the way, that's what I see in src/pl/plpython:

ls -l
/data/sblab-home/berald01/applications/postgresql/postgresql-9.3.5/src/pl/plpython
total 292
drwxrwx--- 2 berald01 sblab  4096 Jul 21  2014 expected
-rw-r- 1 berald01 sblab  1002 Jul 21  2014 generate-spiexceptions.pl
-rw-r- 1 berald01 sblab  6154 Jul 21  2014 Makefile
-rw-r- 1 berald01 sblab   648 Jul 21  2014 nls.mk
-rw-r- 1 berald01 sblab 10623 Jul 21  2014 plpy_cursorobject.c
-rw-r- 1 berald01 sblab   394 Jul 21  2014 plpy_cursorobject.h
-rw-r- 1 berald01 sblab 10841 Jul 21  2014 plpy_elog.c
-rw-r- 1 berald01 sblab   699 Jul 21  2014 plpy_elog.h
-rw-r- 1 berald01 sblab 22176 Jul 21  2014 plpy_exec.c
-rw-r- 1 berald01 sblab   294 Jul 21  2014 plpy_exec.h
-rw-r- 1 berald01 sblab 10407 Jul 21  2014 plpy_main.c
-rw-r- 1 berald01 sblab   789 Jul 21  2014 plpy_main.h
-rw-r- 1 berald01 sblab  2476 Jul 21  2014 plpy_planobject.c
-rw-r- 1 berald01 sblab   456 Jul 21  2014 plpy_planobject.h
-rw-r- 1 berald01 sblab  9942 Jul 21  2014 plpy_plpymodule.c
-rw-r- 1 berald01 sblab   365 Jul 21  2014 plpy_plpymodule.h
-rw-r- 1 berald01 sblab 13374 Jul 21  2014 plpy_procedure.c
-rw-r- 1 berald01 sblab  1596 Jul 21  2014 plpy_procedure.h
-rw-r- 1 berald01 sblab  6980 Jul 21  2014 plpy_resultobject.c
-rw-r- 1 berald01 sblab   573 Jul 21  2014 plpy_resultobject.h
-rw-r- 1 berald01 sblab 13793 Jul 21  2014 plpy_spi.c
-rw-r- 1 berald01 sblab   780 Jul 21  2014 plpy_spi.h
-rw-r- 1 berald01 sblab  5490 Jul 21  2014 plpy_subxactobject.c
-rw-r- 1 berald01 sblab   673 Jul 21  2014 plpy_subxactobject.h
-rw-r- 1 berald01 sblab   351 Jul 21  2014 plpython2u--1.0.sql
-rw-r- 1 berald01 sblab   196 Jul 21  2014 plpython2u.control
-rw-r- 1 berald01 sblab   402 Jul 21  2014 plpython2u--unpackaged--1.0.sql
-rw-r- 1 berald01 sblab   351 Jul 21  2014 plpython3u--1.0.sql
-rw-r- 1 berald01 sblab   196 Jul 21  2014 plpython3u.control
-rw-r- 1 berald01 sblab   402 Jul 21  2014 plpython3u--unpackaged--1.0.sql
-rw-r- 1 berald01 sblab  4071 Jul 21  2014 plpython.h
-rw-r- 1 berald01 sblab   347 Jul 21  2014 plpythonu--1.0.sql
-rw-r- 1 berald01 sblab   194 Jul 21  2014 plpythonu.control
-rw-r- 1 berald01 sblab   393 Jul 21  2014 plpythonu--unpackaged--1.0.sql
-rw-r- 1 berald01 sblab 27349 Jul 21  2014 plpy_typeio.c
-rw-r- 1 berald01 sblab  2659 Jul 21  2014 plpy_typeio.h
-rw-r- 1 berald01 sblab  3548 Jul 21  2014 plpy_util.c
-rw-r- 1 berald01 sblab   511 Jul 21  2014 plpy_util.h
drwxrwx--- 2 berald01 sblab   144 Jul 21  2014 po
-rw-r- 1 berald01 sblab 22857 Jul 21  2014 spiexceptions.h
drwxrwx--- 2 berald01 sblab  4096 Jul 21  2014 sql


Re: [GENERAL] How can I use crosstab functons in PostgreSQL 9.3?

2015-10-15 Thread Rob Richardson
By George, I think I've got it!

When I ran CREATE EXTENSION tablefunc WITH SCHEMA public, I got the crosstab 
methods and my sample query worked.
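(A note: if the extension already exists in some other schema, it can also be moved there rather than dropped and recreated — tablefunc is relocatable, so ALTER EXTENSION ... SET SCHEMA works. A sketch of both routes:)

```sql
-- move an already-installed tablefunc into public
ALTER EXTENSION tablefunc SET SCHEMA public;

-- or install it fresh into public, as above:
CREATE EXTENSION tablefunc WITH SCHEMA public;
```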

RobR




Re: [GENERAL] Installing plpython3u

2015-10-15 Thread Dario Beraldi
> createlang plpython3u sblab
>>> ERROR:  could not open extension control file
>>>
>>> "/data/sblab-home/berald01/share/postgresql/extension/plpython3u.control":
>>> No such file or directory
>>> STATEMENT:  CREATE EXTENSION "plpython3u";
>>>
>>
>> Hmm, what files *do* you have in that directory
>
>
Here's what I see:

ls -l /data/sblab-home/berald01/share/postgresql/extension/
total 12
-rw-r--r-- 1 berald01 sblab 332 Oct 15 15:30 plpgsql--1.0.sql
-rw-r--r-- 1 berald01 sblab 179 Oct 15 15:30 plpgsql.control
-rw-r--r-- 1 berald01 sblab 381 Oct 15 15:30 plpgsql--unpackaged--1.0.sql



> There seems to be a discrepancy in paths:
>
> ERROR:  could not open extension control file
> "/data/sblab-home/berald01/share/postgresql/extension/plpython3u.control"
>
> configure_args=' '\''--prefix=/Users/berald01'\''
>
> So is there something mapping /Users/berald01 to /data/sblab-home/ ?
>

It *should* be fine: /Users/berald01 and /data/sblab-home/berald01 point to
the same space, i.e. "ls /Users/berald01" is the same as "ls
/data/sblab-home/berald01".


>
>> It might be worth cd'ing into the src/pl/plpython subdirectory and
>> manually doing "make install" there to see what it prints.
>>
>

Here we go:
cd
/data/sblab-home/berald01/applications/postgresql/postgresql-9.3.5/src/pl/plpython
make install
make: Nothing to be done for `install'.

Any clue?

(Thanks a ton for your assistance!)

>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


[GENERAL] Cast hstore type to bytea (and later to hex possibly)

2015-10-15 Thread Igor Stassiy
Hello,

I would like to achieve something like the following:

COPY (select 'a=>x, b=>y'::hstore::bytea) TO STDOUT;

I have implemented an hstore value iterator that works with
pqxx::result::field.c_str() (which has its own binary serialisation format)
and I would like to reuse it to work with records that comes from stdin (in
hex format for example).

How can this be done?

Thank you,
Igor
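(One possible approach — a sketch, assuming the hstore extension is installed: there is no direct hstore-to-bytea cast, but a type's binary send function can be called explicitly, and encode() turns the resulting bytea into hex:)

```sql
-- hstore_send is hstore's binary output function; calling it directly
-- yields the same bytea that binary COPY would emit for the value
SELECT encode(hstore_send('a=>x, b=>y'::hstore), 'hex');
```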


Re: [GENERAL] How can I use crosstab functons in PostgreSQL 9.3?

2015-10-15 Thread Rob Richardson
I should have mentioned (twice now) that I'm running under Windows 7.

RobR

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Thursday, October 15, 2015 10:19 AM
To: Rob Richardson
Cc: pgsql-general General
Subject: Re: [GENERAL] How can I use crosstab functons in PostgreSQL 9.3?

Rob Richardson  writes:
> I am trying to learn about crosstab functions in PostgreSQL 9.3, but none of 
> the examples I’ve found are working.  I get errors claiming the functions 
> are unknown, but when I try running CREATE EXTENSION tablefunc, I am told 
> that its methods already exist.

This looks like a search_path problem.  You could try "\dx+ tablefunc"
to see which schema its functions are in, then adjust your search_path to 
include that, or else schema-qualify the function names.

regards, tom lane



Re: [GENERAL] How can I use crosstab functons in PostgreSQL 9.3?

2015-10-15 Thread Rob Richardson
Tim,

Thank you, but I think I already did that.  The query is a dollar-quoted 
string, so there should be no need to do anything with the single quote marks 
within it; I would have thought the query engine would already know that 
it's text.  But after seeing the first error message, I explicitly cast it 
using "::text".  The error message that time said that crosstab(text) was not 
found, so that doesn't seem to be the problem.

RobR

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Tim Clarke
Sent: Thursday, October 15, 2015 10:31 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] How can I use crosstab functons in PostgreSQL 9.3?

Looks to me like argument types possibly? The article creates various 
combinations of crosstab() function but you are passing in a query. Wrap your 
query in quotes (and then escape those within it). Then you'll be passing in a 
"text" type not an "unknown" as the error clearly shows.

Tim Clarke




Re: [GENERAL] How can I use crosstab functons in PostgreSQL 9.3?

2015-10-15 Thread Tim Clarke
Looks to me like argument types possibly? The article creates various
combinations of crosstab() function but you are passing in a query. Wrap
your query in quotes (and then escape those within it). Then you'll be
passing in a "text" type not an "unknown" as the error clearly shows.

Tim Clarke

On 15/10/15 15:19, Tom Lane wrote:
> Rob Richardson  writes:
>> I am trying to learn about crosstab functions in PostgreSQL 9.3, but none of 
>> the examples I’ve found are working.  I get errors claiming the functions 
>> are unknown, but when I try running CREATE EXTENSION tablefunc, I am told 
>> that its methods already exist.
> This looks like a search_path problem.  You could try "\dx+ tablefunc"
> to see which schema its functions are in, then adjust your search_path
> to include that, or else schema-qualify the function names.
>
>   regards, tom lane
>
>





Re: [GENERAL] Installing plpython3u

2015-10-15 Thread Adrian Klaver

On 10/15/2015 07:16 AM, Tom Lane wrote:

Dario Beraldi  writes:

Sorry guys... I executed



./configure --prefix=$HOME --with-python PYTHON=/usr/local/bin/python3
make
make install


That looks sane from here ...


and it completed fine (see also below output from 'grep -i 'PYTHON'
config.log'). Still after restarting postgres I get:



createlang plpython3u sblab
ERROR:  could not open extension control file
"/data/sblab-home/berald01/share/postgresql/extension/plpython3u.control":
No such file or directory
STATEMENT:  CREATE EXTENSION "plpython3u";


Hmm, what files *do* you have in that directory?


There seems to be a discrepancy in paths:

ERROR:  could not open extension control file 
"/data/sblab-home/berald01/share/postgresql/extension/plpython3u.control"


configure_args=' '\''--prefix=/Users/berald01'\''

So is there something mapping /Users/berald01 to /data/sblab-home/ ?



It might be worth cd'ing into the src/pl/plpython subdirectory and
manually doing "make install" there to see what it prints.

regards, tom lane




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] How can I use crosstab functons in PostgreSQL 9.3?

2015-10-15 Thread Tom Lane
Rob Richardson  writes:
> I am trying to learn about crosstab functions in PostgreSQL 9.3, but none of 
> the examples I’ve found are working.  I get errors claiming the functions 
> are unknown, but when I try running CREATE EXTENSION tablefunc, I am told 
> that its methods already exist.

This looks like a search_path problem.  You could try "\dx+ tablefunc"
to see which schema its functions are in, then adjust your search_path
to include that, or else schema-qualify the function names.
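A sketch of both fixes (the schema name "contrib" here is hypothetical; substitute whatever "\dx+ tablefunc" reports):

```sql
-- Option 1: put the extension's schema on the search path
SET search_path = public, contrib;

-- Option 2: schema-qualify the call instead
SELECT *
FROM contrib.crosstab(
  $$SELECT rowid, attribute, value FROM ct ORDER BY 1, 2$$)
AS ct(row_name text, category_1 text, category_2 text, category_3 text);
```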

regards, tom lane




Re: [GENERAL] Installing plpython3u

2015-10-15 Thread Tom Lane
Dario Beraldi  writes:
> Sorry guys... I executed

> ./configure --prefix=$HOME --with-python PYTHON=/usr/local/bin/python3
> make
> make install

That looks sane from here ...

> and it completed fine (see also below output from 'grep -i 'PYTHON'
> config.log'). Still after restarting postgres I get:

> createlang plpython3u sblab
> ERROR:  could not open extension control file
> "/data/sblab-home/berald01/share/postgresql/extension/plpython3u.control":
> No such file or directory
> STATEMENT:  CREATE EXTENSION "plpython3u";

Hmm, what files *do* you have in that directory?

It might be worth cd'ing into the src/pl/plpython subdirectory and
manually doing "make install" there to see what it prints.

regards, tom lane




Re: [GENERAL] pgpool ssl handshake failure

2015-10-15 Thread Adrian Klaver

On 10/15/2015 06:59 AM, AI Rumman wrote:

Hi,

I am using pgpool-II version 3.4.3 (tataraboshi).
Where my database is Postgresql 8.4.


Probably already know, but 8.4 is approximately 1.25 years beyond EOL:

http://www.postgresql.org/support/versioning/



I am trying to configure SSL from the client, while between pgpool and the
database it is non-SSL.


What is non-ssl, the database or pgpool?


I configured as document and now I am getting this in my log:

/2015-10-13 22:17:58: pid 1857: LOG:  new connection received
//2015-10-13 22:17:58: pid 1857: DETAIL:  connecting host=10.0.0.5
port=65326
//2015-10-13 22:17:58: pid 1857: LOG:  pool_ssl: "SSL_read": "ssl
handshake failure"
//2015-10-13 22:17:58: pid 1857: ERROR:  unable to read data from
frontend
//2015-10-13 22:17:58: pid 1857: DETAIL:  socket read failed with an
error "Success"/

Please let me know what wrong I am doing.


Not quite sure but given the below from the 9.5 Release Notes:

"
Remove server configuration parameter ssl_renegotiation_limit, which was 
deprecated in earlier releases (Andres Freund)


While SSL renegotiation is a good idea in theory, it has caused enough 
bugs to be considered a net negative in practice, and it is due to be 
removed from future versions of the relevant standards. We have 
therefore removed support for it from PostgreSQL."


I would check to see what  ssl_renegotiation_limit is set to:

http://www.postgresql.org/docs/8.4/static/runtime-config-connection.html

and if it is not set to 0, then try that.
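For example, in the 8.4 server's postgresql.conf (followed by a reload):

```
# postgresql.conf — setting the limit to 0 disables SSL renegotiation,
# which has tripped up a number of SSL stacks in practice
ssl_renegotiation_limit = 0
```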




Thanks & Regards.




--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] How can I use crosstab functons in PostgreSQL 9.3?

2015-10-15 Thread Rob Richardson
I am trying to learn about crosstab functions in PostgreSQL 9.3, but none of the 
examples I’ve found are working.  I get errors claiming the functions are 
unknown, but when I try running CREATE EXTENSION tablefunc, I am told that its 
methods already exist.

For example, I am trying to run the code contained on this page: 
https://learnerspeak.wordpress.com/2012/09/02/97/ .  After adjusting quotation 
marks, my crosstab query from that example is:

SELECT *
FROM crosstab(
  $$select rowid, attribute, value
from ct
where attribute = 'att2' or attribute = 'att3'
order by 1,2$$)
AS ct(row_name text, category_1 text, category_2 text, category_3 text);

That query gives me the following error message:
ERROR:  function crosstab(unknown) does not exist
LINE 2: FROM crosstab(
 ^
HINT:  No function matches the given name and argument types. You might need to 
add explicit type casts.
** Error **

ERROR: function crosstab(unknown) does not exist
SQL state: 42883
Hint: No function matches the given name and argument types. You might need to 
add explicit type casts.
Character: 15

I don’t know why it thinks the argument’s type is unknown.  But if I explicitly 
cast it to text, I get:
ERROR:  function crosstab(text) does not exist
LINE 2: FROM crosstab(
 ^
HINT:  No function matches the given name and argument types. You might need to 
add explicit type casts.
** Error **

ERROR: function crosstab(text) does not exist
SQL state: 42883
Hint: No function matches the given name and argument types. You might need to 
add explicit type casts.
Character: 15

Thank you for your help.

RobR


Re: [GENERAL] Installing plpython3u

2015-10-15 Thread Dario Beraldi
On 15 October 2015 at 14:46, Tom Lane  wrote:

> Dario Beraldi  writes:
> > Thanks for your answer. Just checking before I screw things up... About
> > "the source tree has to be configured and built twice", does it mean
> that I
> > have to execute again
>
> > ./configure --prefix=$HOME;
> > make;
> > make install
>
> > And should I enable any particular option in ./configure? I see there is
> a
> > "--with-python" option (not specific to python3 though).
>
> Indeed --- you have not built any version of plpython here.  You need
> --with-python, and you need to make sure the PYTHON environment variable
> is set (else you'll get whatever version is invoked by "python", which is
> most likely python2).  See the build instructions in the documentation.
> Also watch the output from configure, which will show you which python
> it selected.
>
> regards, tom lane
>

Sorry guys... I executed

./configure --prefix=$HOME --with-python PYTHON=/usr/local/bin/python3
make
make install

and it completed fine (see also below output from 'grep -i 'PYTHON'
config.log'). Still after restarting postgres I get:

createlang plpython3u sblab
ERROR:  could not open extension control file
"/data/sblab-home/berald01/share/postgresql/extension/plpython3u.control":
No such file or directory
STATEMENT:  CREATE EXTENSION "plpython3u";

## From config.log
grep -i 'PYTHON' config.log

  $ ./configure --prefix=/Users/berald01 --with-python
PYTHON=/usr/local/bin/python3
PATH: /opt/rh/python27/root/usr/bin
configure:5399: checking whether to build Python modules
configure:7499: checking for python
configure:7529: result: /usr/local/bin/python3
configure:7544: checking for Python distutils module
configure:7557: checking Python configuration directory
configure:7562: result: /usr/local/lib/python3.4/config-3.4m
configure:7565: checking Python include directories
configure:7575: result: -I/usr/local/include/python3.4m
configure:7580: checking how to link an embedded Python application
configure:7607: result: -L/usr/local/lib/python3.4/config-3.4m -lpython3.4m
-lpthread -ldl  -lutil -lm
configure:7612: checking whether Python is compiled with thread support
configure:29636: checking Python.h usability
configure:29653: gcc -c -O2 -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
-Wformat-security -fno-strict-aliasing -fwrapv
-I/usr/local/include/python3.4m  -D_GNU_SOURCE  conftest.c >&5
configure:29678: checking Python.h presence
configure:29693: gcc -E -I/usr/local/include/python3.4m  -D_GNU_SOURCE
conftest.c
configure:29747: checking for Python.h
ac_cv_header_Python_h=yes
ac_cv_path_PYTHON=/usr/local/bin/python3
PYTHON='/usr/local/bin/python3'
configure_args=' '\''--prefix=/Users/berald01'\'' '\''--with-python'\''
'\''PYTHON=/usr/local/bin/python3'\'''
python_additional_libs='-lpthread -ldl  -lutil -lm'
python_enable_shared='0'
python_includespec='-I/usr/local/include/python3.4m'
python_libdir='/usr/local/lib/python3.4/config-3.4m'
python_libspec='-L/usr/local/lib/python3.4/config-3.4m -lpython3.4m'
python_majorversion='3'
python_version='3.4'
with_python='yes'


[GENERAL] pgpool ssl handshake failure

2015-10-15 Thread AI Rumman
Hi,

I am using pgpool-II version 3.4.3 (tataraboshi).
Where my database is Postgresql 8.4.

I am trying to configure SSL from the client, while between pgpool and the
database it is non-SSL.
I configured as document and now I am getting this in my log:

>
> *2015-10-13 22:17:58: pid 1857: LOG:  new connection received*
> *2015-10-13 22:17:58: pid 1857: DETAIL:  connecting host=10.0.0.5
> port=65326*
> *2015-10-13 22:17:58: pid 1857: LOG:  pool_ssl: "SSL_read": "ssl handshake
> failure"*
> *2015-10-13 22:17:58: pid 1857: ERROR:  unable to read data from frontend*
> *2015-10-13 22:17:58: pid 1857: DETAIL:  socket read failed with an error "Success"*

Please let me know what wrong I am doing.

Thanks & Regards.


Re: [GENERAL] Installing plpython3u

2015-10-15 Thread Tom Lane
Dario Beraldi  writes:
> Thanks for your answer. Just checking before I screw things up... About
> "the source tree has to be configured and built twice", does it mean that I
> have to execute again

> ./configure --prefix=$HOME;
> make;
> make install

> And should I enable any particular option in ./configure? I see there is a
> "--with-python" option (not specific to python3 though).

Indeed --- you have not built any version of plpython here.  You need
--with-python, and you need to make sure the PYTHON environment variable
is set (else you'll get whatever version is invoked by "python", which is
most likely python2).  See the build instructions in the documentation.
Also watch the output from configure, which will show you which python
it selected.

regards, tom lane




Re: [GENERAL] Installing plpython3u

2015-10-15 Thread Dario Beraldi
Hi Adrian,

Thanks for your answer. Just checking before I screw things up... About
"the source tree has to be configured and built twice", does it mean that I
have to execute again

./configure --prefix=$HOME;
make;
make install

And should I enable any particular option in ./configure? I see there is a
"--with-python" option (not specific to python3 though). If it matters, my
python 3 is in /usr/local/bin/python3.

On 15 October 2015 at 14:20, Adrian Klaver 
wrote:

> On 10/15/2015 03:21 AM, Dario Beraldi wrote:
>
>> Hello,
>>
>> I'm having problems installing plpython3u, this is my situation:
>> I have installed postgresql-9.3.5 in my home directory, from source. I
>> used (from my memory, it might not be exact)
>>
>> ./configure --prefix=$HOME;
>> make;
>> make install
>>
>> Now I need to upload a database which requires plpython3u, and this is
>> what happens:
>>
>> pg_restore -U berald01 -d sblab -h localhost -1
>> current_pg_sblab.backup.tar
>>
>> pg_restore: [archiver (db)] Error while PROCESSING TOC:
>> pg_restore: [archiver (db)] Error from TOC entry 1590; 2612 24721
>> PROCEDURAL LANGUAGE plpython3u dberaldi
>> pg_restore: [archiver (db)] could not execute query: ERROR:  could not
>> access file "$libdir/plpython3": No such file or directory
>>  Command was: CREATE OR REPLACE PROCEDURAL LANGUAGE plpython3u;
>>
>> If I try to create plpython3u I get:
>>
>> createlang plpython3u sblab
>> createlang: language installation failed: ERROR:  could not open
>> extension control file
>> "/data/sblab-home/berald01/share/postgresql/extension/plpython3u.control":
>> No such file or directory
>>
>> I'm a bit at a loss, how do I add plpython3u?
>>
>
> See here:
>
> http://www.postgresql.org/docs/9.3/interactive/plpython-python23.html
>
> "Tip: The built variant depends on which Python version was found during
> the installation or which version was explicitly set using the PYTHON
> environment variable; see Section 15.4. To make both variants of PL/Python
> available in one installation, the source tree has to be configured and
> built twice."
>
>
>> My OS is CentOS release 6.
>>
>> Thanks!
>> Dario
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] Installing plpython3u

2015-10-15 Thread Adrian Klaver

On 10/15/2015 03:21 AM, Dario Beraldi wrote:

Hello,

I'm having problems installing plpython3u, this is my situation:
I have installed postgresql-9.3.5 in my home directory, from source. I
used (from my memory, it might not be exact)

./configure --prefix=$HOME;
make;
make install

Now I need to upload a database which requires plpython3u, and this is
what happens:

pg_restore -U berald01 -d sblab -h localhost -1 current_pg_sblab.backup.tar

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 1590; 2612 24721
PROCEDURAL LANGUAGE plpython3u dberaldi
pg_restore: [archiver (db)] could not execute query: ERROR:  could not
access file "$libdir/plpython3": No such file or directory
 Command was: CREATE OR REPLACE PROCEDURAL LANGUAGE plpython3u;

If I try to create plpython3u I get:

createlang plpython3u sblab
createlang: language installation failed: ERROR:  could not open
extension control file
"/data/sblab-home/berald01/share/postgresql/extension/plpython3u.control":
No such file or directory

I'm a bit at a loss, how do I add plpython3u?


See here:

http://www.postgresql.org/docs/9.3/interactive/plpython-python23.html

"Tip: The built variant depends on which Python version was found during 
the installation or which version was explicitly set using the PYTHON 
environment variable; see Section 15.4. To make both variants of 
PL/Python available in one installation, the source tree has to be 
configured and built twice."




My OS is CentOS release 6.

Thanks!
Dario



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] postgres function

2015-10-15 Thread David G. Johnston
On Thu, Oct 15, 2015 at 8:32 AM, Ramesh T 
wrote:

>  select position('-' in '123-987-123')
> position
> ---
> 4
> But I want second occurrence,
> position
> -
> 8
>
> plz any help..?
>
>
​
SELECT length((regexp_matches('123-987-123', '(\d{3}-\d{3}-)\d{3}'))[1])
​

David J.
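(Since the follow-up says the field widths are not fixed, a width-independent variant of the same idea — a sketch: capture everything up to and including the second '-' and take its length:)

```sql
-- '[^-]*' matches a run of non-dash characters of any width
SELECT length((regexp_matches('1233-9873-123-098', '^([^-]*-[^-]*-)'))[1]);
-- returns 10, the position of the second '-'
```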


[GENERAL] Installing plpython3u

2015-10-15 Thread Dario Beraldi
Hello,

I'm having problems installing plpython3u, this is my situation:
I have installed postgresql-9.3.5 in my home directory, from source. I used
(from my memory, it might not be exact)

./configure --prefix=$HOME;
make;
make install

Now I need to upload a database which requires plpython3u, and this is what
happens:

pg_restore -U berald01 -d sblab -h localhost -1 current_pg_sblab.backup.tar

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 1590; 2612 24721
PROCEDURAL LANGUAGE plpython3u dberaldi
pg_restore: [archiver (db)] could not execute query: ERROR:  could not
access file "$libdir/plpython3": No such file or directory
Command was: CREATE OR REPLACE PROCEDURAL LANGUAGE plpython3u;

If I try to create plpython3u I get:

createlang plpython3u sblab
createlang: language installation failed: ERROR:  could not open extension
control file
"/data/sblab-home/berald01/share/postgresql/extension/plpython3u.control":
No such file or directory

I'm a bit at a loss, how do I add plpython3u?

My OS is CentOS release 6.

Thanks!
Dario


Re: [GENERAL] Serialization errors despite KEY SHARE/NO KEY UPDATE

2015-10-15 Thread Olivier Dony

On 10/12/2015 03:59 PM, Jim Nasby wrote:

On 10/6/15 12:18 PM, Olivier Dony wrote:


We would happily skip the micro-transactions (as a perf workaround) if
there was a way to detect this situation, but we couldn't find a way to
do that in 9.3. <9.3 we used SELECT FOR UPDATE NOWAIT to guard similar
cases.

If there is any way I could help to make the back-patch happen, please
let me know!


I'd say you should probably open a bug about this to make sure it's
visible if you want it fixed. Or start a thread on -hackers.


Good point, I've submitted bug #13681, thanks! :-)

