Re: [GENERAL] problem with partitioned table and indexed json field

2013-11-08 Thread Raphael Bauduin
On Thu, Nov 7, 2013 at 7:24 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 I wrote:
  It looks like the problem is we're building a MergeAppend plan and not
  getting the targetlist for the MergeAppend node right.

 Found it --- simple oversight in building optimized min/max plans.
 If you need a patch now, see

 http://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=5d0731da521f090f80ea39529fe274ac6d6bffa1


Wow, the patch is available the same day I supplied the steps to reproduce
the bug! I don't think it's possible to be faster :-)
Thanks a lot!

Raph


regards, tom lane




-- 
Web database: http://www.myowndb.com
Free Software Developers Meeting: http://www.fosdem.org


Re: [GENERAL] Breaking up a PostgreSQL COPY command into chunks?

2013-11-08 Thread Victor Hooi
Hi,

@Francisco - Yeah, the file is around 600 Mb currently, uncompressed.

You're right, our internet connection is going to be the limiting factor.

Essentially, the PostgreSQL server is in a datacentre, the server we're
dumping to is in the office.

Running a script on the PostgreSQL server in the datacentre is going to be
tricky (not so much technically, just from a procedures/security point of
view).

Dumping to a spare table seems like an interesting point - so we'd just
create the table, COPY the results to that table, then use LIMIT/OFFSET to
paginate through that, then drop the table afterwards?

Currently, I'm doing a quick hack where we download an ordered list of the
ids (auto-incrementing integer) into Python, chunk it up into groups of
ids, then use a WHERE IN clause to download each chunk via COPY.
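
For concreteness, each chunk ends up as a COPY along these lines (table and
column names here are hypothetical):

    COPY (SELECT *
          FROM measurements
          WHERE id IN (10001, 10002, 10003))  -- one chunk of the ordered id list
        TO STDOUT WITH CSV;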

Would dumping to a spare table and paginating be a better approach? Reasons?
(Not challenging it, I just want to understand everything).

Cheers,
Victor


On Fri, Nov 8, 2013 at 6:36 PM, Francisco Olarte fola...@peoplecall.com wrote:

 On Fri, Nov 8, 2013 at 5:09 AM, Victor Hooi victorh...@yahoo.com wrote:
  They think that it might be limited by the network, and how fast the
  PostgreSQL server can push the data across the internet. (The Postgres
  server and the box running the query are connected over the internet).

 You previously said you had 600Mb. Over the internet. Is it a very
 fat pipe? Because otherwise the limiting factor is probably not the
 speed at which postgres can push the results, but the throughput of
 your link.

 If, as you stated, you need a single transaction to get a 600Mb
 snapshot, I would recommend dumping it to disk, compressing on the fly
 ( you should easily get a four or five fold reduction on a CSV file using
 any decent compressor ), and then sending the file. If you do not have
 disk for the dump but can run programs near the server, you can try
 compressing on the fly. If you have none of this but do have
 space for a spare table, use a select into, paginate this output and
 drop it after. Or just look at the configs and set longer query times;
 if your app NEEDS two-hour queries, they can be enabled. But anyway,
 doing a long transaction over the internet does not seem like a good
 idea to me.

 Francisco Olarte



[GENERAL] Re: changing port numbers so pgbouncer can read geoserver and postgres

2013-11-08 Thread si24
Would any of this help make it clearer what I seem to be doing wrong?

pgbouncer=# show config;
            key            |                      value                      | changeable
---------------------------+-------------------------------------------------+------------
 job_name                  | pgbouncer                                       | no
 service_name              | pgbouncer                                       | no
 conffile                  | C:\Program Files\PostgreSQL\share\pgbouncer.ini | yes
 logfile                   | C:\Program Files\PostgreSQL\log\pgbouncer.log   | yes
 pidfile                   | C:\Program Files\PostgreSQL\log\pgbouncer.pid   | no
 listen_addr               | *                                               | no
 listen_port               | 5432                                            | no
 listen_backlog            | 128                                             | no
 auth_type                 | md5                                             | yes
 auth_file                 | C:\Program Files\PostgreSQL\etc\userlist.txt    | yes
 pool_mode                 | transaction                                     | yes
 max_client_conn           | 400                                             | yes
 default_pool_size         | 100                                             | yes
 min_pool_size             | 0                                               | yes
 reserve_pool_size         | 0                                               | yes
 reserve_pool_timeout      | 5                                               | yes
 syslog                    | 0                                               | yes
 syslog_facility           | daemon                                          | yes
 syslog_ident              | pgbouncer                                       | yes
 autodb_idle_timeout       | 3600                                            | yes
 server_reset_query        | DISCARD ALL                                     | yes
 server_check_query        | select 1                                        | yes
 server_check_delay        | 30                                              | yes
 query_timeout             | 0                                               | yes
 query_wait_timeout        | 0                                               | yes
 client_idle_timeout       | 0                                               | yes
 client_login_timeout      | 60                                              | yes
 idle_transaction_timeout  | 0                                               | yes
 server_lifetime           | 1200                                            | yes
 server_idle_timeout       | 60                                              | yes
 server_connect_timeout    | 15                                              | yes
 server_login_retry        | 15                                              | yes
 server_round_robin        | 0                                               | yes
 suspend_timeout           | 10                                              | yes
 ignore_startup_parameters | application_name,extra_float_digits             | yes
 disable_pqexec            | 0                                               | no
 dns_max_ttl               | 15                                              | yes
 dns_zone_check_period     | 0                                               | yes
 max_packet_size           | 2147483647                                      | yes
 pkt_buf                   | 2048                                            | no
 sbuf_loopcnt              | 5                                               | yes
 tcp_defer_accept          | 0                                               | yes
 tcp_socket_buffer         | 0                                               | yes
 tcp_keepalive             | 1                                               | yes
 tcp_keepcnt               | 0                                               | yes
 tcp_keepidle              | 0                                               | yes
 tcp_keepintvl             | 0                                               | yes
 verbose                   | 0                                               | yes
 admin_users               | postgres                                        | yes
 stats_users               | postgres                                        | yes
 stats_period              | 60                                              | yes
 log_connections           | 1                                               | yes
 log_disconnections        | 1                                               | yes
 log_pooler_errors         | 1                                               | yes
(54 rows)




[GENERAL] pg_upgrade / Checking for presence of required libraries

2013-11-08 Thread Marc Mamin
Hello,
we are trying pg_upgrade on an old test instance (9.1 -> 9.3).

The check fails for a library that we don't manage to locate in the old system.
(We suspect it comes from a very old attempt to implement a custom FTS parser.)

So my question: can we find out where the pg_upgrade checker found this
reference?

($libdir/hello.so does not exist in the 9.1 instance)


  Performing Consistency Checks
  -----------------------------
  ...
  Checking for presence of required libraries fatal

  Could not load library $libdir/hello.so
  ERROR:  could not access file $libdir/hello.so: No such file or directory

many thanks,

Marc Mamin




Re: [GENERAL] Explanantion on pgbouncer please

2013-11-08 Thread si24
Could it have to do with the DNS? I have checked on the admin console and
received this:

pgbouncer=# show dns_hosts;
 hostname | ttl | addrs
--+-+---
(0 rows)


pgbouncer=# show dns_zones;
 zonename | serial | count
--++---
(0 rows)

Not too sure if this helps.







[GENERAL] Re: changing port numbers so pgbouncer can read geoserver and postgres

2013-11-08 Thread si24
Could it have to do with the DNS? I have checked on the admin console and
received this:

pgbouncer=# show dns_hosts; 
 hostname | ttl | addrs 
--+-+--- 
(0 rows) 


pgbouncer=# show dns_zones; 
 zonename | serial | count 
--++--- 
(0 rows) 

Not too sure if this helps.







[GENERAL] Re: pg_upgrade / Checking for presence of required libraries (SOLVED)

2013-11-08 Thread Marc Mamin
 Hello,
 we are trying pg_upgrade on an old test instance (9.1 -> 9.3).

 The check fails for a library that we don't manage to locate in the old
 system.
 (We suspect it comes from a very old attempt to implement a custom FTS
 parser.)

 So my question: can we find out where the pg_upgrade checker found
 this reference?

  found it in a function body:

  select pg_get_functiondef(oid)
  from pg_proc
  where pg_get_functiondef(oid) ~ 'hello'
  and not proisagg  -- seems that you can't call pg_get_functiondef on
                    -- aggregate functions, at least in 9.1
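
  An alternative that should also have worked here: for C-language functions
  the library file is recorded in pg_proc.probin, so this finds such a
  dangling reference directly:

  select proname, probin
  from pg_proc
  where probin like '%hello%';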


 
 ($libdir/hello.so does not exist in the 9.1 instance)
 
 
   Performing Consistency Checks
   -----------------------------
   ...
   Checking for presence of required libraries fatal
 
   Could not load library $libdir/hello.so
   ERROR:  could not access file $libdir/hello.so: No such file or
 directory
 
 many thanks,
 
 Marc Mamin
 
 




Re: [GENERAL] Breaking up a PostgreSQL COPY command into chunks?

2013-11-08 Thread Francisco Olarte
Hi Victor:

 You're right, our internet connection is going to be the limiting factor.
...

Good to know this. Then you have one restriction: your bandwidth...

 Essentially, the PostgreSQL server is in a datacentre, the server we're
 dumping to is in the office.
 Running a script on the PostgreSQL server in the datacentre is going to be
 tricky (not so much technically, just from a procedures/security point of
 view).

... another restriction, you cannot run code on the server.

An aside here: you know postgres can do server-side COPY, and even
pipe the results to a program, so if you have 600Mb of spare disk, or
about 150Mb and access to gzip, or access to an ssh CLIENT on the server
and an ssh SERVER in the office, you could transfer the file easily. I.e.,
if you have a directory which you can access from the server account,
you could do

COPY the_table_or_query TO '/the/dir/the/filename' FORMAT ...;

or

COPY the_table_or_query TO PROGRAM 'gzip -c > /the/dir/the/filename.czip' FORMAT ...;

and then transfer the appropriate file to the office at your leisure
using whichever method you are used to.

In fact, if you have an ssh server in your office accessible from the
server you could transfer it there directly, or use a listening socket
in your program and netcat, but in this case you'll run into the same
timeout problems ( except you can compress and reduce the transmission
time ).

 Dumping to a spare table seems like an interesting point - so we'd just
 create the table, COPY the results to that table, then use LIMIT/OFFSET to
 paginate through that, then drop the table afterwards?

That is more or less it. You do not copy, just create the table with
the proper data, i.e., instead of COPY (my query) ... you do a
'CREATE TABLE tmpxxx as SELECT...'
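
For example, something along these lines ( all names hypothetical ):

    -- one short transaction to take the snapshot:
    CREATE TABLE export_snapshot AS SELECT * FROM my_big_table;

    -- then page through it, one small query per transaction:
    COPY (SELECT * FROM export_snapshot ORDER BY id LIMIT 10000 OFFSET 0)
        TO STDOUT WITH CSV;
    COPY (SELECT * FROM export_snapshot ORDER BY id LIMIT 10000 OFFSET 10000)
        TO STDOUT WITH CSV;
    -- ...advance OFFSET until fewer than 10000 rows come back...

    DROP TABLE export_snapshot;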

 Currently, I'm doing a quick hack where we download an ordered list of the
 ids (auto-incrementing integer) into Python, chunk it up into groups of ids,
 then use a WHERE IN clause to download each chunk via COPY.

This has a problem: you need to wrap everything in a single
transaction to avoid the update / delete / insert in the middle
problem you mentioned previously, so your transaction time is going to
be even bigger. Also, if the integers are autoincrementing, you can do
better. First get min and max ( I do not remember if postgres already
optimizes these or you'll need the 'order by ... limit 1' trick ). Then
just loop in appropriately sized steps ( I suppose the table is dense, but
you can go to the stats, and you should know your data patterns ).
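
A sketch of that loop ( same hypothetical names, with a chunk width of
50000 chosen arbitrarily ):

    SELECT min(id), max(id) FROM export_snapshot;

    -- one range per COPY; each runs in its own transaction, and any
    -- failed chunk can simply be retried:
    COPY (SELECT * FROM export_snapshot WHERE id >= 1 AND id < 50001)
        TO STDOUT WITH CSV;
    COPY (SELECT * FROM export_snapshot WHERE id >= 50001 AND id < 100001)
        TO STDOUT WITH CSV;
    -- ...and so on up to max(id).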

 Would dumping to a spare table and paginating a better approach? Reasons?
 (Not challenging it, I just want to understand everything).

The table is created as the single operation on a single transaction,
which has no delay problems as it does not generate output.

From there on the table is read-only, so you can use a transaction for
each chunk, and you know the ids do not change. Even if a chunk copy
fails due to the internet connection farting a bit, you just retry it.

If you are going to use limit/offset you'll need an index on the field
anyway, but you can do the index on a separate transaction after
copying ( so you do not interfere with anyone ).

You can even do everything in autocommit mode, saving server roundtrips.

Anyway, this can be done, but if you can do the server-side copy and
ssh thing, that is much better. I do not know how to do it on other
OSs, but on unix, if you have an ssh-accessible account with enough
quota on the server, you could do something like this: create a directory
in the user home, give the server user adequate access to it, and let the
server dump the data there.

Francisco Olarte.




Re: [GENERAL] Explanantion on pgbouncer please

2013-11-08 Thread si24
Does any of this help? This is the config I get when I type 'show config'
in the pgbouncer admin console:

pgbouncer=# show config;
            key            |                      value                      | changeable
---------------------------+-------------------------------------------------+------------
 job_name                  | pgbouncer                                       | no
 service_name              | pgbouncer                                       | no
 conffile                  | C:\Program Files\PostgreSQL\share\pgbouncer.ini | yes
 logfile                   | C:\Program Files\PostgreSQL\log\pgbouncer.log   | yes
 pidfile                   | C:\Program Files\PostgreSQL\log\pgbouncer.pid   | no
 listen_addr               | *                                               | no
 listen_port               | 5432                                            | no
 listen_backlog            | 128                                             | no
 auth_type                 | md5                                             | yes
 auth_file                 | C:\Program Files\PostgreSQL\etc\userlist.txt    | yes
 pool_mode                 | transaction                                     | yes
 max_client_conn           | 400                                             | yes
 default_pool_size         | 100                                             | yes
 min_pool_size             | 0                                               | yes
 reserve_pool_size         | 0                                               | yes
 reserve_pool_timeout      | 5                                               | yes
 syslog                    | 0                                               | yes
 syslog_facility           | daemon                                          | yes
 syslog_ident              | pgbouncer                                       | yes
 autodb_idle_timeout       | 3600                                            | yes
 server_reset_query        | DISCARD ALL                                     | yes
 server_check_query        | select 1                                        | yes
 server_check_delay        | 30                                              | yes
 query_timeout             | 0                                               | yes
 query_wait_timeout        | 0                                               | yes
 client_idle_timeout       | 0                                               | yes
 client_login_timeout      | 60                                              | yes
 idle_transaction_timeout  | 0                                               | yes
 server_lifetime           | 1200                                            | yes
 server_idle_timeout       | 60                                              | yes
 server_connect_timeout    | 15                                              | yes
 server_login_retry        | 15                                              | yes
 server_round_robin        | 0                                               | yes
 suspend_timeout           | 10                                              | yes
 ignore_startup_parameters | application_name,extra_float_digits             | yes
 disable_pqexec            | 0                                               | no
 dns_max_ttl               | 15                                              | yes
 dns_zone_check_period     | 0                                               | yes
 max_packet_size           | 2147483647                                      | yes
 pkt_buf                   | 2048                                            | no
 sbuf_loopcnt              | 5                                               | yes
 tcp_defer_accept          | 0                                               | yes
 tcp_socket_buffer         | 0                                               | yes
 tcp_keepalive             | 1                                               | yes
 tcp_keepcnt               | 0                                               | yes
 tcp_keepidle              | 0                                               | yes
 tcp_keepintvl             | 0                                               | yes
 verbose                   | 0                                               | yes
 admin_users               | postgres                                        | yes
 stats_users               | postgres                                        | yes
 stats_period              | 60                                              | yes
 log_connections           | 1                                               | yes
 log_disconnections        | 1                                               | yes
 log_pooler_errors         | 1                                               | yes
(54 rows)





[GENERAL] Re: changing port numbers so pgbouncer can read geoserver and postgres

2013-11-08 Thread si24
OK, now I seem to have gotten it working by changing the store port numbers
to pgbouncer, because it's using port no. 5432 and postgres is now using 6432.

So now it seems to be giving me the server details that are being used when
the map gets run:

pgbouncer=# show servers;
 type |   user   | database | state |   addr    | port | local_addr | local_port |    connect_time     |    request_time     |   ptr    | link
------+----------+----------+-------+-----------+------+------------+------------+---------------------+---------------------+----------+------
 S    | postgres | manifold | idle  | 127.0.0.1 | 6432 | 127.0.0.1  |      62261 | 2013-11-08 15:08:37 | 2013-11-08 15:08:48 | 0197da20 |
 S    | postgres | manifold | idle  | 127.0.0.1 | 6432 | 127.0.0.1  |      62259 | 2013-11-08 15:08:37 | 2013-11-08 15:08:47 | 0197d930 |
(2 rows)

and connections have gone down from 100 to 5 when I check on pgadmin for the
connections.

Is it because I put the default pool size from 20 to 2 ( I was trying to
test something )?

Is it meant to show that it's idle, or that it's being used, in the server
list at the top?







Re: [GENERAL] Where can I find the 9.3 beta1 rpm now ?

2013-11-08 Thread Adrian Klaver

On 11/06/2013 12:56 AM, Massimo Ortensi wrote:

Hi everybody. I downloaded and tested 9.3 beta 1 back in June and used
it for a while.
Today I upgraded to 9.3.1 but just discovered that the database cannot be
directly used, so I need to switch back to the beta version in order to
dump the data.
Is there any site where I can download the old beta rpms?


A couple of suggestions.

1) Contact the maintainers directly:

http://yum.postgresql.org/contact.php

2) Clone 9.3beta1 from the git repo and build from source:

http://git.postgresql.org/gitweb/?p=postgresql.git


Thanks in advance







--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Re: changing port numbers so pgbouncer can read geoserver and postgres

2013-11-08 Thread Adrian Klaver

On 11/08/2013 05:41 AM, si24 wrote:

OK, now I seem to have gotten it working by changing the store port numbers
to pgbouncer, because it's using port no. 5432 and postgres is now using 6432.

So now it seems to be giving me the server details that are being used when
the map gets run:

pgbouncer=# show servers;
 type |   user   | database | state |   addr    | port | local_addr | local_port |    connect_time     |    request_time     |   ptr    | link
------+----------+----------+-------+-----------+------+------------+------------+---------------------+---------------------+----------+------
 S    | postgres | manifold | idle  | 127.0.0.1 | 6432 | 127.0.0.1  |      62261 | 2013-11-08 15:08:37 | 2013-11-08 15:08:48 | 0197da20 |
 S    | postgres | manifold | idle  | 127.0.0.1 | 6432 | 127.0.0.1  |      62259 | 2013-11-08 15:08:37 | 2013-11-08 15:08:47 | 0197d930 |
(2 rows)

and connections have gone down from 100 to 5 when I check on pgadmin for the
connections.

Is it because I put the default pool size from 20 to 2 ( I was trying to
test something )?

Is it meant to show that it's idle, or that it's being used, in the server
list at the top?



In order to get answers you will need to provide more, and consistent, 
information. For instance, in the config file you sent earlier the default 
pool size was 100, then you mention it was 20 and now it is 2. Also, what is 
the store and how does it fit into the scheme of things? More generally, 
what is the scheme of things? In other words, detail how your 
application(s) and server(s) are laid out relative to each other when it 
comes to the flow of information. As to what the above means, it would 
depend on what 'when the map gets run' means. Some of that could be 
found out by tailing the Postgres log directly and seeing what is 
happening at the same time.



--
Adrian Klaver
adrian.kla...@gmail.com




[GENERAL] psql variable interpolation with subsequent underscore

2013-11-08 Thread Tim Kane
Hi all,

I'm having difficulty using variable interpolation within psql, where that
variable is within a table name…


Like so..

=# \set MM 201310
=# select :MM;
 ?column?
----------
   201310
(1 row)


=# alter table my_table rename to my_table_:MM_raw;
ERROR:  syntax error at or near ":"
LINE 1: …my_table rename to my_table_:MM_ra...
                                      ^

The problem is that psql tries to interpret 'MM_raw' as the variable
name, but my intention is for it to interpret only 'MM', followed by a
literal underscore.

I can't find any other way to encapsulate the variable name in this way…  Is
there a trick to it?

I suspect I'll need to work around this by altering the naming convention
such that the MM is at the 'end' of the table name.  Maybe..   Thoughts?


Tim







Re: [GENERAL] psql variable interpolation with subsequent underscore

2013-11-08 Thread Rémi Cura
Maybe a stupid answer, but why not use another language (plpgsql? python?).
Is it because of a transaction issue?
Sorry for not being more helpful.
Cheers,
Rémi-C


2013/11/8 Tim Kane tim.k...@gmail.com

 Hi all,

 I’m having difficulty using variable interpolation within psql, where that
 variable is within a table name…


 Like so..

 =# \set MM 201310
 =# select :MM;
  ?column?
 ----------
    201310
 (1 row)


 =# alter table my_table rename to my_table_:MM_raw;
 ERROR:  syntax error at or near ":"
 LINE 1: …my_table rename to my_table_:MM_ra...
                                       ^

 The problem is that psql tries to interpret ‘MM_raw’ as the variable
 name, but my intention is to only interpret ‘MM’ followed by a literal
 underscore.

 I can’t find any other way to encapsulate the variable name in this way…
  Is there a trick to it?

 I suspect I’ll need to work around this by altering the naming convention
 such that the MM is at the ‘end’ of the table name.  Maybe..   Thoughts?


 Tim






Re: [GENERAL] upgrading to 9.3

2013-11-08 Thread zach cruise
thanks andy!

a little off-topic, but about database reorganization - is it recommended
to group all sequences and domains under the public schema? or is a
sequence tied to one table and better kept in that table's schema?

what about replication options for x64 systems since slony is not an option?


Re: [GENERAL] Strange result with SELECT ... ORDER BY random() LIMIT 1 and JOINs

2013-11-08 Thread Tom Lane
Etienne Dube etd...@gmail.com writes:
 This query yields unexpected results (tested under 9.2.4):

 SELECT
  s.car_id,
  s.color_id AS subquery_color_id,
  co.color_id AS join_color_id,
  co.color_name
 FROM
  (
  SELECT
  ca.car_id,
  (
  SELECT color_id
  FROM color
		WHERE ca.car_id = ca.car_id  -- dependency added to avoid getting the same value for every row in the output
  ORDER BY random()
  LIMIT 1
  ) AS color_id
  FROM
  car ca
  ) s
  LEFT JOIN color co ON co.color_id = s.color_id;

 We can see the equality defined in the LEFT JOIN does not hold true for 
 the subquery_color_id and join_color_id column aliases in the output. 
 EXPLAIN also shows that the subplan for the inner subquery used to pick 
 a random row from the color table appears twice.

I've committed patches to prevent duplication of subplans containing
volatile functions.  Thanks for the test case!

regards, tom lane




Re: [GENERAL] psql variable interpolation with subsequent underscore

2013-11-08 Thread Adrian Klaver

On 11/08/2013 08:08 AM, Tim Kane wrote:

Hi all,

I’m having difficulty using variable interpolation within psql, where
that variable is within a table name…


Like so..

=# \set MM 201310
=# select :MM;
  ?column?
----------
   201310
(1 row)


=# alter table my_table rename to my_table_:MM_raw;
ERROR:  syntax error at or near ":"
LINE 1: …my_table rename to my_table_:MM_ra...
                                      ^

The problem is that psql tries to interpret ‘MM_raw’ as the variable
name, but my intention is to only interpret ‘MM’ followed by a
literal underscore.

I can’t find any other way to encapsulate the variable name in this way…
  Is there a trick to it?

I suspect I’ll need to work around this by altering the naming
convention such that the MM is at the ‘end’ of the table name.
  Maybe..   Thoughts?



test=> \set tbl_name 'my_table_':MM'_raw'

test=> alter table my_table rename to :tbl_name;
ALTER TABLE

test=> \d my_table_201310_raw
Table "public.my_table_201310_raw"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer |



Tim






--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] psql variable interpolation with subsequent underscore

2013-11-08 Thread Tim Kane

On 08/11/2013 16:47, Adrian Klaver adrian.kla...@gmail.com wrote:


test=> \set tbl_name 'my_table_':MM'_raw'

test=> alter table my_table rename to :tbl_name;
ALTER TABLE

test=> \d my_table_201310_raw
Table "public.my_table_201310_raw"
  Column |  Type   | Modifiers
---------+---------+-----------
  id     | integer |



Ooh, sneaky.  Thanks Adrian.


Remi, to answer your question - this is being called as part of a shell
script. Implementing it in another language would be overkill for what I'm
trying to do.

Cheers,

Tim






Re: [GENERAL] psql variable interpolation with subsequent underscore

2013-11-08 Thread John R Pierce

On 11/8/2013 9:57 AM, Bosco Rama wrote:

On 11/08/13 09:30, Tim Kane wrote:


Remi, to answer your question - this is being called as part of a shell
script.

In that case your options expand greatly...



I'd still suggest perl for this, especially if it is ever going to get 
even a little bit more complicated.




--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] psql variable interpolation with subsequent underscore

2013-11-08 Thread Bosco Rama
On 11/08/13 09:30, Tim Kane wrote:
 
 Remi, to answer your question - this is being called as part of a shell
 script.

In that case your options expand greatly...

For example:
   tbl_name=MM

   psql -c "alter table my_table rename to my_table_${tbl_name}_raw;"

or for more complex stuff:
   tbl_name=MM

   psql -f - <<EOF
   alter table my_table rename to my_table_${tbl_name}_raw;
   ... other DML/DDL
EOF

Or even:
   (
   bunch of commands generating SQL DML/DDL

   ) | psql -f -

Or ...

The sky's the limit once you remove the restriction of working 100%
*within* psql itself.

HTH

Bosco.




[GENERAL] database redesign

2013-11-08 Thread zach cruise
my response hasn't shown up on
http://postgresql.1045698.n5.nabble.com/upgrading-to-9-3-td5777291.html so
trying again. sorry if both show up.

anyway, on database reorganization - is it recommended to group all
sequences and domains under one public schema? or is a sequence tied to a
table as its counter?

what are some replication choices for x64 systems since slony is not an
option?


Re: [GENERAL] database redesign

2013-11-08 Thread John R Pierce

On 11/8/2013 11:44 AM, zach cruise wrote:
my response hasn't shown up on 
http://postgresql.1045698.n5.nabble.com/upgrading-to-9-3-td5777291.html so 
trying again. sorry if both show up.


anyway, on database reorganization - is it recommended to group all 
sequences and domains under one public schema? or is a sequence tied 
to a table as its counter?




I would keep sequences in the same schema as the related table; anything 
else is chaotic.   If a domain is used by all the schemas, then putting 
it in public makes sense; otherwise, if it's just used by one schema, it 
should logically be part of that schema.
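
For instance, a minimal sketch ( schema and names purely hypothetical ):

    CREATE SCHEMA sales;
    CREATE TABLE sales.invoice (id integer NOT NULL, total numeric);
    -- the sequence lives beside its table and is tied to the column:
    CREATE SEQUENCE sales.invoice_id_seq OWNED BY sales.invoice.id;
    ALTER TABLE sales.invoice
        ALTER COLUMN id SET DEFAULT nextval('sales.invoice_id_seq');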


what are some replication choices for x64 systems since slony is not 
an option?


the built in streaming replication is the usual first choice.

--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] database redesign

2013-11-08 Thread Sergey Konoplev
On Fri, Nov 8, 2013 at 12:09 PM, John R Pierce pie...@hogranch.com wrote:
 On 11/8/2013 11:44 AM, zach cruise wrote:
 anyway, on database reorganization - is it recommended to group all
 sequences and domains under one public schema? or is a sequence tied to a
 table as its counter?

 I would keep sequences in the same schema as the related table. anything
 else is chaotic.   if a domain is used by all the schemas, then putting it
 in public makes sense, otherwise, if its just used by one schema, it should
 logically be part of that schema.

I would also like to suggest using the serial/bigserial types instead of
integer/bigint + sequence. This will automatically create a sequence
that is dependent on the table.
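
A minimal illustration ( hypothetical table name ):

    CREATE TABLE invoice (id bigserial PRIMARY KEY, total numeric);
    -- behind the scenes this creates invoice_id_seq, sets the column
    -- default to nextval('invoice_id_seq'::regclass), and marks the
    -- sequence OWNED BY invoice.id, so it is dropped with the table.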

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [GENERAL] Unary Operators

2013-11-08 Thread Tom Lane
Andreas Ulbrich andreas.ulbr...@matheversum.de writes:
 In the documentation I can't find anything saying that / is not possible
 as a unary operator.
 Are there any restrictions?

Yeah, if you look in the bison grammar you'll find out that the operators
with special precedence are hard-wired as to which syntaxes are allowed:

	+ -            prefix or infix
	* / % ^ < > =  infix only

It's possible that if we just added some more productions these could be
used in nonstandard ways.  But I think interest in that is really pretty
low, and it might lead to behavior that would be surprising to people
who aren't interested in using them in weird ways.
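
A quick illustration ( the prefix operator and its support function here
are made-up examples ):

    SELECT -5;    -- fine: - is hard-wired as prefix or infix
    SELECT / 5;   -- ERROR:  syntax error at or near "/"

    -- a user-defined prefix operator must use a name outside the
    -- hard-wired set, for instance:
    CREATE FUNCTION recip(float8) RETURNS float8
        AS 'SELECT 1.0 / $1' LANGUAGE sql;
    CREATE OPERATOR // (PROCEDURE = recip, RIGHTARG = float8);
    SELECT // 4.0;    -- 0.25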

regards, tom lane




[GENERAL] what could cause postgres to crash?

2013-11-08 Thread Sandeep Gupta
Hi,

 My postgres sessions, after being idle for 5-6 hours, crash on their own,
sometimes with error messages, sometimes without. The message I get is
appended below. I was looking for suggestions to narrow down what could have
caused this problem. The system log doesn't show anything.

Thanks.
Sandeep



LOG:  statistics collector process (PID 6631) was terminated by signal 9:
Killed
LOG:  server process (PID 15710) was terminated by signal 9: Killed
DETAIL:  Failed process was running: COMMIT PREPARED 'T13199'
LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the
current transaction and exit, because another server process exited
abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and
repeat your command.
LOG:  all server processes terminated; reinitializing


Re: [GENERAL] what could cause postgres to crash?

2013-11-08 Thread Tom Lane
Sandeep Gupta gupta.sand...@gmail.com writes:
  My postgres sessions, after being idle for 5 --6 hrs, crash on their own.
 Sometimes with error messages sometimes without. The message I get appended
 below. I was looking for suggestion to narrow down as to what could have
 caused this problem. System log doesn't show anything.

 LOG:  statistics collector process (PID 6631) was terminated by signal 9:
 Killed

Signal 9 is a kill.  If you didn't manually kill the process, it's almost
certainly the infamous Linux OOM killer that did it.  And yes, that would
be recorded in the kernel log ...

regards, tom lane




[GENERAL] Documents/ppts/pdfs

2013-11-08 Thread Jayadevan M
Hi,

When I search for material on PostgreSQL, once in a while I see reasonably
old but very good content in files such as
http://www.postgresql.org/files/developer/transactions.pdf
How can I browse to a top-level folder
(http://www.postgresql.org/files/developer) and see what else is available?
I can't find any links on postgresql.org or in the documentation pointing to
files like this.

Regards,
Jayadevan