Re: [GENERAL] Migration from Symfoware to PostgreSQL-Constructor functions

2013-08-05 Thread raghu ram
how to see the list of system tables, functions, and views, and of user-defined tables, functions, and views? You can get more details from the URL below: Extracting META information from PostgreSQL (INFORMATION_SCHEMA): http://www.alberton.info/postgresql_meta_info.html Thanks & Regards, Raghu Ram
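
A minimal sketch of the catalog queries that page covers, using only the standard information_schema and pg_catalog views (the schemas filtered out here are just the built-in ones):

  -- user-defined tables and views
  SELECT table_schema, table_name, table_type
  FROM information_schema.tables
  WHERE table_schema NOT IN ('pg_catalog', 'information_schema');

  -- user-defined functions
  SELECT n.nspname AS schema, p.proname AS function_name
  FROM pg_proc p JOIN pg_namespace n ON n.oid = p.pronamespace
  WHERE n.nspname NOT IN ('pg_catalog', 'information_schema');

  -- system tables and views
  SELECT table_schema, table_name
  FROM information_schema.tables
  WHERE table_schema IN ('pg_catalog', 'information_schema');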

[GENERAL] Bottlenecks with large number of relation segment files

2013-08-05 Thread Amit Langote
Hello, I am looking at the effect of having a large number of relation files under $PGDATA/base/ (for example, in cases where I choose a lower segment size using --with-segsize). Consider a case where I am working with a large database with large relations, for example a database similar in size to what

[GENERAL] Re: Postgres 9.2.4 for Windows (Vista) Dell Vostro 400, re-installation failure PLEASE CAN SOMEONE HELP!! (nearly fixed)

2013-08-05 Thread Thomas Kellerer
Stephen Brearley, 02.08.2013 11:40: I can change my registry reference to point to D:\..\9.2\data or D:\..\9.2\data_old without any (apparently) undesirable consequences, but I just can’t see my tables in the system, which makes me think I need to do more than this. Why don't you use the

Re: [GENERAL] Bottlenecks with large number of relation segment files

2013-08-05 Thread KONDO Mitsumasa
Hi Amit, (2013/08/05 15:23), Amit Langote wrote: May the routines in fd.c become a bottleneck with a large number of concurrent connections to the above database, say something like pgbench -j 8 -c 128? Is there any other place I should be paying attention to? What kind of file system did you use?

Re: [GENERAL] Bottlenecks with large number of relation segment files

2013-08-05 Thread Amit Langote
On Mon, Aug 5, 2013 at 5:01 PM, KONDO Mitsumasa kondo.mitsum...@lab.ntt.co.jp wrote: Hi Amit, (2013/08/05 15:23), Amit Langote wrote: May the routines in fd.c become a bottleneck with a large number of concurrent connections to the above database, say something like pgbench -j 8 -c 128? Is there

Re: [GENERAL] Bottlenecks with large number of relation segment files

2013-08-05 Thread John R Pierce
On 8/5/2013 1:01 AM, KONDO Mitsumasa wrote: When we open a file, the ext3 or ext4 file system seems to do a sequential search of the directory to find the file's inode. No, ext3/4 uses H-tree structures to search directories over 1 block long quite efficiently. -- john r pierce

[GENERAL] dblink / Insert several records into remote table

2013-08-05 Thread P. Broennimann
Hello I'd like to use dblink to insert records (several) from a local table to a remote table. Is it possible to do this in one statement? Something like: insert into remotetable select col1, col2, col3 from localtable Can someone give me an example or tell me what commands to use? What I
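
One possible dblink approach, sketched here with placeholder connection details and table/column names: build the INSERT statements from the local rows and send them through dblink_exec().

  -- one INSERT per local row, executed on the remote server
  SELECT dblink_exec(
           'host=remotehost dbname=remotedb user=me password=secret',
           format('INSERT INTO remotetable (col1, col2, col3) VALUES (%L, %L, %L)',
                  col1, col2, col3))
  FROM localtable;

Opening a named connection once with dblink_connect() and reusing it would avoid reconnecting for every row; building a single multi-row INSERT with string_agg() is another option.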

Re: [GENERAL] Bottlenecks with large number of relation segment files

2013-08-05 Thread KONDO Mitsumasa
(2013/08/05 17:14), Amit Langote wrote: So, within the limits of max_files_per_process, the routines of file.c should not become a bottleneck? It may not become a bottleneck. One FD consumes 160 bytes on a 64-bit system; see the Linux manual page for epoll. Regards, -- Mitsumasa KONDO NTT Open Source Software

Re: [GENERAL] incremental dumps

2013-08-05 Thread Luca Ferrari
On Fri, Aug 2, 2013 at 6:55 PM, haman...@t-online.de wrote: thanks for the hint - this is probably one of the things to do. I have something else in mind, but at present I just suspect that this might happen: when I modify data and select _without an ordering_, I am pretty sure to get the
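
If the aim is diffable dumps between runs, forcing a deterministic order is the usual remedy; a sketch assuming a hypothetical table with primary key id:

  COPY (SELECT * FROM mytable ORDER BY id) TO '/tmp/mytable_sorted.copy';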

[GENERAL] [tsearch2] Problem with case sensitivity (or with creating own dictionary)

2013-08-05 Thread Krzysztof xaru Rajda
Hello, I encountered the following problem: my goal is to extract links from text using tsearch2. Everything seemed to work well until I got some YouTube links - there are lowercase and uppercase letters inside, and the tsearch parser lowercases everything (from http://youtube.com/Y6dsHDX I got

Re: [GENERAL] Bottlenecks with large number of relation segment files

2013-08-05 Thread Andres Freund
On 2013-08-05 18:40:10 +0900, KONDO Mitsumasa wrote: (2013/08/05 17:14), Amit Langote wrote: So, within the limits of max_files_per_process, the routines of file.c should not become a bottleneck? It may not become a bottleneck. One FD consumes 160 bytes on a 64-bit system; see the Linux manual page for epoll.

Re: [GENERAL] incremental dumps

2013-08-05 Thread hamann . w
Luca Ferrari wrote: On Fri, Aug 2, 2013 at 6:55 PM, haman...@t-online.de wrote: thanks for the hint - this is probably one of the things to do. I have something else in mind, but at present I just suspect that this might happen: when I modify data and select _without an ordering_, I am

Re: [GENERAL] Bottlenecks with large number of relation segment files

2013-08-05 Thread Florian Weimer
On 08/05/2013 10:42 AM, John R Pierce wrote: On 8/5/2013 1:01 AM, KONDO Mitsumasa wrote: When we open a file, the ext3 or ext4 file system seems to do a sequential search of the directory to find the file's inode. No, ext3/4 uses H-tree structures to search directories over 1 block long quite

Re: [GENERAL] Bottlenecks with large number of relation segment files

2013-08-05 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes: ... Also, there are global limits to the number of filehandles that can be open simultaneously on a system. Yeah. Raising max_files_per_process puts you at serious risk that everything else on the box will start falling over for lack of available FD
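
For illustration only (the right numbers depend on the kernel's open-file limits and the expected backend count), the settings being discussed live in postgresql.conf:

  max_files_per_process = 1000   # the default; each backend may keep up to this many files open
  max_connections = 128

With 128 backends that already allows on the order of 128,000 simultaneously open descriptors, which is why raising max_files_per_process much further risks the system-wide exhaustion described above.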

Re: [GENERAL] Why are stored procedures looked on so negatively?

2013-08-05 Thread Chris Travers
On Sun, Aug 4, 2013 at 7:01 PM, Craig Ringer cr...@2ndquadrant.com wrote: I think part of the issue is that people tend to consider stored procedures part of the application's internal implementation where you just change all the call sites when you change the function. Normally stored

Re: [GENERAL] Exit code -1073741819

2013-08-05 Thread Carlos Henrique Reimer
Hi, Yes, I agree with you that it must be upgraded to a supported version, but as the developer has not homologated the system for newer PG versions yet, I need to find some way to fix it with 8.2. Will try to install PG on another Windows box, copying the data directories over the network

Re: [GENERAL] Exit code -1073741819

2013-08-05 Thread Adrian Klaver
On 08/05/2013 06:24 AM, Carlos Henrique Reimer wrote: Hi, Yes, I agree with you that it must be upgraded to a supported version, but as the developer has not homologated the system for newer PG versions yet, I need to find some way to fix it with 8.2. Will try to install PG on another

[GENERAL] Seamless replacement to MySQL's GROUP_CONCAT function...

2013-08-05 Thread immersive.ex...@gmail.com
I needed a GROUP_CONCAT to port some queries to postgres. In discussions online, I found repeated advice for rewriting the queries, but no solid way to formulate the GROUP_CONCAT as a postgres function. Rewrite perhaps hundreds of queries that happen to be in the
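
On PostgreSQL 9.0 and later the built-in string_agg() usually covers this; a sketch with hypothetical table and column names:

  -- MySQL: SELECT dept_id, GROUP_CONCAT(emp_name) FROM emp GROUP BY dept_id;
  SELECT dept_id, string_agg(emp_name, ',') AS emp_names
  FROM emp
  GROUP BY dept_id;

Non-text columns need an explicit cast (for example emp_id::text) before aggregation, which is part of what the thread below is working around.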

Re: [GENERAL] [tsearch2] Problem with case sensitivity (or with creating own dictionary)

2013-08-05 Thread Oleg Bartunov
Please take a look at contrib/dict_int and create your own dict_noop. It should be easy. I think you could document it and share it with people (wiki.postgresql.org?), since there were other people interested in a noop dictionary. Also, don't forget to modify your configuration - use ts_debug(),
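
ts_debug() is the quickest way to see which parser token and dictionary are producing the lowercased lexeme; a sketch against the default english configuration:

  SELECT alias, token, dictionaries, lexemes
  FROM ts_debug('english', 'see http://youtube.com/Y6dsHDX for the clip');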

Re: [GENERAL] Seamless replacement to MySQL's GROUP_CONCAT function...

2013-08-05 Thread Alvaro Herrera
immersive.ex...@gmail.com wrote: Note: I found some close-but-no-cigar aggregates shared online, but they would not accept integer arguments, nor would they handle the optionally furnished delimiter. People were suggesting casting the argument to the pseudo-GROUP_CONCAT.

[GENERAL] Connecting Multiple LibreOffice Base Users to a Remote Postgres Database

2013-08-05 Thread Don Parris
Hi all, I have been posting on my blog about getting LibreOffice Base and PostgreSQL working together. My latest actually focuses on getting multiple users connecting to a single, remote, stand-alone PostgreSQL server. You can set this up fairly quickly and - in my case, anyway - with no
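
The server-side pieces that usually need touching for remote clients such as LibreOffice Base, sketched with placeholder names and addresses:

  # postgresql.conf
  listen_addresses = '*'                     # or a specific interface

  # pg_hba.conf: let the office subnet reach one database with password auth
  host    mydb    all    192.168.1.0/24    md5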

Re: [GENERAL] Seamless replacement to MySQL's GROUP_CONCAT function...

2013-08-05 Thread Richard Broersma
For posterity, that finalized function could be posted here: http://okbob.blogspot.com/2009/08/mysql-functions-for-postgresql.html There's already a GROUP_CONCAT listed there, but I guess this one was lacking in some way. On Mon, Aug 5, 2013 at 10:04 AM, Alvaro Herrera

Re: [GENERAL] Re: Postgres 9.2.4 for Windows (Vista) Dell Vostro 400, re-installation failure PLEASE CAN SOMEONE HELP!! (nearly fixed)

2013-08-05 Thread Stephen Brearley
Thanks Blade-of-Light (mysterious person). As you’ll see from my later post, I managed to fix things, largely by understanding that there was a Postgres service I needed to stop before I could make the last change I needed. However, there is a lot of useful stuff in your post, which I will

Re: [GENERAL] Seamless replacement to MySQL's GROUP_CONCAT function...

2013-08-05 Thread immersive.ex...@gmail.com
Ahh! I was looking for that in the documentation!! I believe that is not mentioned on the function documentation page. I had tried ANY, which works in other SQL dialects, but it barfed on it. Let me test it and post the more efficient code, if it works... On 08/05/2013

Re: [GENERAL] Seamless replacement to MySQL's GROUP_CONCAT function...

2013-08-05 Thread immersive.ex...@gmail.com
I believe his VARIADIC might cover the optional delimiter use, but he suggests using an array aggregation function instead - which does not return a string list, so you would still have to rework the original queries. Now you could write a GROUP_CONCAT aggregate that

Re: [GENERAL] Re: Postgres 9.2.4 for Windows (Vista) Dell Vostro 400, re-installation failure PLEASE CAN SOMEONE HELP!! (nearly fixed)

2013-08-05 Thread BladeOfLight16
On Mon, Aug 5, 2013 at 2:21 AM, Thomas Kellerer spam_ea...@gmx.net wrote: Stephen Brearley, 02.08.2013 11:40: Why don't you use the official way to re-configure the Windows service and use pg_ctl unregister and pg_ctl register to make it use the correct data directory. There is no need to
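
The pg_ctl sequence being referred to, sketched with a placeholder service name and data directory:

  pg_ctl unregister -N postgresql-x64-9.2
  pg_ctl register -N postgresql-x64-9.2 -D "D:\PostgreSQL\9.2\data"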

[GENERAL] Possible bug with row_to_json

2013-08-05 Thread Jack Christensen
When using a subquery as a source for row_to_json, depending on the order of arguments it may ignore renaming a column. jack=# create table player( jack(# player_id serial primary key, jack(# name varchar not null unique jack(# ); NOTICE: CREATE TABLE will create implicit sequence
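
A sketch of the kind of query the report describes, using the player table shown above; whether the alias given in the subquery is honored is exactly what is in question:

  SELECT row_to_json(t)
  FROM (SELECT player_id, name AS player_name FROM player) AS t;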

Re: [GENERAL] Possible bug with row_to_json

2013-08-05 Thread Merlin Moncure
On Mon, Aug 5, 2013 at 5:15 PM, Jack Christensen j...@jackchristensen.com wrote: When using a subquery as a source for row_to_json, depending on the order of arguments it may ignore renaming a column. jack=# create table player( jack(# player_id serial primary key, jack(# name varchar

Re: [GENERAL] dblink / Insert several records into remote table

2013-08-05 Thread Michael Paquier
On Mon, Aug 5, 2013 at 6:22 PM, P. Broennimann peter.broennim...@gmail.com wrote: Hello I'd like to use dblink to insert records (several) from a local table to a remote table. Is it possible to do this in one statement? Something like: insert into remotetable select col1, col2, col3

[GENERAL] pg_stat_replication became empty suddenly

2013-08-05 Thread ascot.m...@gmail.com
Hi, I am running some stress tests against a pair of PG servers to monitor pg_stat_replication; during the test, pg_stat_replication suddenly became empty. PG version: 9.2.4 O/S: Ubuntu 12.04 Since I need to monitor the replication lag from time to time, if pg_stat_replication becomes
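
A typical 9.2-era lag check run on the master, shown only as a sketch:

  SELECT client_addr, state,
         pg_xlog_location_diff(pg_current_xlog_location(), replay_location) AS replay_lag_bytes
  FROM pg_stat_replication;

When the view is empty this simply returns zero rows, so a monitoring script should treat "no rows" as "standby not connected" rather than "no lag".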

Re: [GENERAL] Seamless replacement to MySQL's GROUP_CONCAT function...

2013-08-05 Thread immersive.ex...@gmail.com
Well after testing, this is what I found: When you try to use ANYELEMENT parameters, and even just a VARIADIC TEXT[] parameter to support the optional delimiter argument: FUNCTION GROUP_CONCAT_ATOM(ANYELEMENT, ANYELEMENT, VARIADIC delimiters TEXT[]) when you go to create the aggregates,

Re: [GENERAL] Seamless replacement to MySQL's GROUP_CONCAT function...

2013-08-05 Thread immersive.ex...@gmail.com
No luck on posting at that blog; comments are limited to X characters. Here is the final version with the minor update to the variable names and comments: -- group_concat.sql -- permutation of GROUP_CONCAT parameter types with delimiter parameter furnished: CREATE OR REPLACE FUNCTION

Re: [GENERAL] pg_stat_replication became empty suddenly

2013-08-05 Thread ascot.m...@gmail.com
Hi, I found the problem is likely because I tried to clear the RAM cache on the slave by running sync; echo 3 > /proc/sys/vm/drop_caches, which caused the slave's WAL receiver to go away. ps -ef | grep receiver postgres 6182 6178 0 12:11 ? 00:00:06 postgres: wal receiver process

Re: [GENERAL] Seamless replacement to MySQL's GROUP_CONCAT function...

2013-08-05 Thread Pavel Stehule
Hello 2013/8/3 immersive.ex...@gmail.com immersive.ex...@gmail.com: I needed a GROUP_CONCAT to port some queries to postgres. In discussions online, I found repeated advice for rewriting the queries, but no solid way to formulate the GROUP_CONCAT as a postgres function. Rewrite perhaps

Re: [GENERAL] Seamless replacement to MySQL's GROUP_CONCAT function...

2013-08-05 Thread Alvaro Herrera
Pavel Stehule wrote: your code can be translated to CREATE OR REPLACE FUNCTION GROUP_CONCAT_ATOM(field1 TEXT, field2 TEXT, delimiter TEXT) RETURNS TEXT AS $$ BEGIN RETURN COALESCE(field1||delimiter||field2, field2, field1); END; $$ LANGUAGE plpgsql; Actually you don't even
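
Building the aggregate on top of a transition function like the one quoted would look roughly like this (a sketch; the table and column names are placeholders, and the delimiter is supplied as a second aggregate argument):

  CREATE AGGREGATE GROUP_CONCAT(TEXT, TEXT) (
    SFUNC = GROUP_CONCAT_ATOM,
    STYPE = TEXT
  );

  -- usage
  SELECT dept_id, GROUP_CONCAT(emp_name, ', ') FROM emp GROUP BY dept_id;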

Re: [GENERAL] Seamless replacement to MySQL's GROUP_CONCAT function...

2013-08-05 Thread Pavel Stehule
2013/8/6 Alvaro Herrera alvhe...@2ndquadrant.com: Pavel Stehule wrote: your code can be translated to CREATE OR REPLACE FUNCTION GROUP_CONCAT_ATOM(field1 TEXT, field2 TEXT, delimiter TEXT) RETURNS TEXT AS $$ BEGIN RETURN COALESCE(field1||delimiter||field2, field2, field1);