Re: [GENERAL] Timezone difference between Oracle SYSDATE and PostgreSQL timestamp functions

2014-08-29 Thread Albe Laurenz
Vinayak wrote:
 We have converted Oracle SYSDATE to PostgreSQL statement_timestamp(), but
 there is a difference in timezone.
 SYSDATE returns the time on the server where the database instance is
 running (i.e. the operating system time), so the time depends on the OS
 timezone setting,
 while the timezone of the PostgreSQL
 statement_timestamp()/now()/clock_timestamp() functions depends on the DBMS setting.
 So I think the timezone settings differ between the DBMS and the OS.

[...]

 Any idea how can we set OS timezone on PostgreSQL?

If you mean the *server's* OS timezone, I guess you'd have to write
a C function that does something similar to identify_system_timezone()
in bin/initdb/findtimezone.c.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres 9.1 issues running data directory from VMware shared folder

2014-08-29 Thread Jacob Bunk Nielsen
Arze, Cesar ca...@som.umaryland.edu writes:

 creating template1 database in /mnt/pg_data/base/1 ... FATAL:  could
 not open file pg_xlog/00010001 (log file 0, segment
 1): No such file or directory

We've seen something slightly similar when running PostgreSQL in a Linux
container. See this thread for more details:
http://www.postgresql.org/message-id/spamdrop+87ha31kxrc@atom.bunk.cc

We have not solved this problem yet, but currently I'm leaning towards
blaming the container layer, so next time we experience problems I think
we'll try to remove the virtualization.

Best regards

Jacob



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help related to Postgresql for RHEL 6.5

2014-08-29 Thread David G Johnston
Yogesh. Sharma wrote
 Dear David,
 
 Are you currently using PostgreSQL?
 Currently we are using PostgreSQL version 8.1.18 on RHEL 5.8.
 Now we plan to upgrade to PostgreSQL 9.0 with RHEL 6.5, since in
 version 9.0 I found the fewest compatibility issues.
 
 So, please guide me.
 
 Regards,

Guidance is why we write documentation. If you have specific questions or
concerns after reading the documentation, you can ask here.

David J.





--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Help-related-to-Postgresql-for-RHEL-6-5-tp5816742p5816876.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Single Table Report With Calculated Column

2014-08-29 Thread Rich Shepard

  I've read some on table partitioning and using nested select statements
with group by, but have not found the syntax to produce the needed results.

  From a table I extract row counts grouped by three columns:

select stream, sampdate, func_feed_grp, count(*) from benthos group
by stream, sampdate, func_feed_grp order by stream, sampdate,
func_feed_grp;

And I want to include the proportion of each count based on the total rows
for each stream and sampdate. The totals are obtained with this statement:

select stream, sampdate, count(*) as tot_cnt from benthos group by stream,
sampdate order by stream, sampdate;

  What I do not know how to do is combine the two so the resulting table
contains the columns stream, sampdate, count, proportion. I want to learn
how to build the sub-select to get this result. Joe Celko's 'SQL for
Smarties, 4th Ed.' has a close example in the chapter on table partitioning,
but I could not apply that model to my table.

TIA,

Rich


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Single Table Report With Calculated Column

2014-08-29 Thread David G Johnston
Rich Shepard wrote
 I've read some on table partitioning and using nested select statements
 with group by, but have not found the syntax to produce the needed
 results.
 
From a table I extract row counts grouped by three columns:
 
 select stream, sampdate, func_feed_grp, count(*) from benthos group
 by stream, sampdate, func_feed_grp order by stream, sampdate,
 func_feed_grp;
 
 And I want to include the proportion of each count based on the total rows
 for each stream and sampdate. The totals are obtained with this statement:
 
 select stream, sampdate, count(*) as tot_cnt from benthos group by stream,
 sampdate order by stream, sampdate;
 
What I do not know how to do is combine the two so the resulting table
 contains the columns stream, sampdate, count, proportion. I want to learn
 how to build the sub-select to get this result. Joe Celko's 'SQL for
 Smarties, 4th Ed.' has a close example in the chapter on table
 partitioning,
 but I could not apply that model to my table.

You want to use a window clause/function.

Add the following to the first query, in the select-list:

Sum(count(*)) over (partition by stream, sampdate) as stream_date_total

Your function counts can then be divided by this.

The count(*) is because of the outer group by
The sum(...) is the window function
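
To illustrate, a rough sketch of the combined query using that window
function (assuming the benthos table and column names from the original
post; untested):

select stream, sampdate, func_feed_grp,
       count(*) as cnt,
       -- proportion = group count / total rows for that stream and sampdate
       count(*)::numeric
         / sum(count(*)) over (partition by stream, sampdate) as proportion
from benthos
group by stream, sampdate, func_feed_grp
order by stream, sampdate, func_feed_grp;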

You could also just put both your queries into a with/cte (2 items) and do a
normal inner join...

Or 

Select ... From (first query) fq join (second query) sq on (...)
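
For completeness, a sketch of that join variant (same assumed names; the
with/cte form would look similar):

-- totals per stream/sampdate computed once, then joined back
select b.stream, b.sampdate, b.func_feed_grp,
       count(*) as cnt,
       count(*)::numeric / t.tot_cnt as proportion
from benthos b
join (select stream, sampdate, count(*) as tot_cnt
      from benthos
      group by stream, sampdate) t
  on t.stream = b.stream and t.sampdate = b.sampdate
group by b.stream, b.sampdate, b.func_feed_grp, t.tot_cnt
order by b.stream, b.sampdate, b.func_feed_grp;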

David J.






--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Single-Table-Report-With-Calculated-Column-tp5816880p5816886.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Transforming pg_dump output to be compatible with SQLite 3.x

2014-08-29 Thread Kynn Jones
Greetings!

I'm looking for tools/resources/ideas for making pg_dump's output
compatible with SQLite v. 3.1.3.

Ideally, I'd love to be able to do something like this (Unix):

  % rm -f mydatabase.db
  % pg_dump --no-owner --inserts mydatabase | pg_dump2sqlite3 | sqlite3
mydatabase.db

...where pg_dump2sqlite3 stands for some program (or pipeline) that
transforms the output of pg_dump as needed so that sqlite3 can digest it.

Among the tasks that the hypothetical pg_dump2sqlite3 program has to carry out,
IMO the hardest one to implement is to compute the foreign-key dependencies
among the tables, and from this compute the sequential order in which the
tables will be created and populated[1].

Am I correct?  Is there a way around this?

TIA!

kj

[1] In pg_dump's output, the sequential ordering of the CREATE TABLE
statements and of the COPY blocks that respectively define and populate the
tables does not take into account dependencies, because the specification
of these dependencies comes after all the CREATE TABLE and COPY commands,
in the form of ALTER TABLE statements.  AFAIK, however, sqlite3 does not
allow adding foreign key constraints after the table has been created.
This means that both the ordering of table creation and population must
respect the dependencies among the tables.


Re: [GENERAL] Transforming pg_dump output to be compatible with SQLite 3.x

2014-08-29 Thread John McKown
On Fri, Aug 29, 2014 at 9:06 AM, Kynn Jones kyn...@gmail.com wrote:
 Greetings!

 I'm looking for tools/resources/ideas for making pg_dump's output compatible
 with SQLite v. 3.1.3.

 Ideally, I'd love to be able to do something like this (Unix):

   % rm -f mydatabase.db
   % pg_dump --no-owner --inserts mydatabase | pg_dump2sqlite3 | sqlite3
 mydatabase.db

 ...where pg_dump2sqlite3 stands for some program (or pipeline) that
 transforms the output of pg_dump as needed so that sqlite3 can digest it.

 Among the tasks that the hypothetical pg_dump2sqlite3 program has to carry,
 IMO the hardest one to implement is to compute the foreign-key dependencies
 among the tables, and from this compute the sequential order in which the
 tables will be created and populated[1].

 Am I correct?  Is there a way around this?

 TIA!

 kj

 [1] In pg_dump's output, the sequential ordering of the CREATE TABLE
 statements and of the COPY blocks that respectively define and populate the
 tables does not take into account dependencies, because the specification of
 these dependencies comes after all the CREATE TABLE and COPY commands, in
 the form of ALTER TABLE statements.  AFAIK, however, sqlite3 does not allow
 adding foreign key constraints after the table has been created.  This means
 that both the ordering of table creation and population must respect the
 dependencies among the tables.

Read down in the man page for pg_dump. There are parameters such as
--inserts and --column-inserts which will help. And you might want
--quote-all-identifiers just in case some attribute (column name) is
an SQLite key word.

Example transcript:

pg_dump -c -t datedata -O -x --column-inserts --quote-all-identifiers
-d PostgreSQL_db | egrep -v '^SET '
--
-- PostgreSQL database dump
--

DROP TABLE public.datedata;

--
-- Name: datedata; Type: TABLE; Schema: public; Owner: -; Tablespace:
--

CREATE TABLE datedata (
id text,
date date,
value text
);


--
-- Data for Name: datedata; Type: TABLE DATA; Schema: public; Owner: -
--

INSERT INTO datedata (id, date, value) VALUES ('a', '2000-01-01', 'x');
INSERT INTO datedata (id, date, value) VALUES ('a', '2000-03-01', 'x');
INSERT INTO datedata (id, date, value) VALUES ('b', '2000-11-11', 'w');
INSERT INTO datedata (id, date, value) VALUES ('c', '2000-11-11', 'y');
INSERT INTO datedata (id, date, value) VALUES ('c', '2000-10-01', 'y');
INSERT INTO datedata (id, date, value) VALUES ('c', '2000-09-10', 'y');
INSERT INTO datedata (id, date, value) VALUES ('c', '2000-12-12', 'z');
INSERT INTO datedata (id, date, value) VALUES ('c', '2000-10-11', 'z');
INSERT INTO datedata (id, date, value) VALUES ('d', '2000-11-11', 'w');
INSERT INTO datedata (id, date, value) VALUES ('d', '2000-11-10', 'w');


--
-- PostgreSQL database dump complete
--


-- 
There is nothing more pleasant than traveling and meeting new people!
Genghis Khan

Maranatha! 
John McKown


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Transforming pg_dump output to be compatible with SQLite 3.x

2014-08-29 Thread Adrian Klaver

On 08/29/2014 07:40 AM, John McKown wrote:

On Fri, Aug 29, 2014 at 9:06 AM, Kynn Jones kyn...@gmail.com wrote:

Greetings!

I'm looking for tools/resources/ideas for making pg_dump's output compatible
with SQLite v. 3.1.3.

Ideally, I'd love to be able to do something like this (Unix):

   % rm -f mydatabase.db
   % pg_dump --no-owner --inserts mydatabase | pg_dump2sqlite3 | sqlite3
mydatabase.db

...where pg_dump2sqlite3 stands for some program (or pipeline) that
transforms the output of pg_dump as needed so that sqlite3 can digest it.

Among the tasks that the hypothetical pg_dump2sqlite3 program has to carry,
IMO the hardest one to implement is to compute the foreign-key dependencies
among the tables, and from this compute the sequential order in which the
tables will be created and populated[1].

Am I correct?  Is there a way around this?

TIA!

kj

[1] In pg_dump's output, the sequential ordering of the CREATE TABLE
statements and of the COPY blocks that respectively define and populate the
tables does not take into account dependencies, because the specification of
these dependencies comes after all the CREATE TABLE and COPY commands, in
the form of ALTER TABLE statements.  AFAIK, however, sqlite3 does not allow
adding foreign key constraints after the table has been created.  This means
that both the ordering of table creation and population must respect the
dependencies among the tables.


Read down in the man page for pg_dump. There are parameters such as
--inserts and --column-inserts which will help. And you might want
--quote-all-identifiers just in case some attribute (column name) is
an SQLite key word.



Well, I think the issue Kynn is referring to is SQLite's limited ability 
to do ALTER TABLE. In a Postgres dump the basic structure of the table 
is laid out using CREATE TABLE, and then later ALTER TABLE commands are 
used to finish adding the bells and whistles. SQLite does not understand 
those ALTER TABLE commands and fails on them. So to
get a dump to work you would need to create a complete CREATE TABLE 
definition. The FOREIGN KEY ordering issue could be gotten around (I 
believe) by toggling the foreign_keys PRAGMA in sqlite.
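
For example, a minimal sketch of what the load could look like (assuming
the converted dump already folds the foreign key clauses into the CREATE
TABLE statements):

PRAGMA foreign_keys = OFF;  -- enforcement off for this connection, so load order doesn't matter
-- ... CREATE TABLE / INSERT statements from the converted dump ...
PRAGMA foreign_keys = ON;   -- re-enable enforcement for this connection afterwards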





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Transforming pg_dump output to be compatible with SQLite 3.x

2014-08-29 Thread John McKown
You're correct. It is Friday leading to a 3 day weekend here. And it
is a short work day too. So my brain has definitely already left the
building. Thanks for pointing that out. I use SQLite some, but just
for very basic stuff and am not really familiar with it. Perhaps Kynn
could show what, in particular, is causing his problem(s).

On Fri, Aug 29, 2014 at 9:58 AM, Adrian Klaver
adrian.kla...@aklaver.com wrote:
 On 08/29/2014 07:40 AM, John McKown wrote:

 On Fri, Aug 29, 2014 at 9:06 AM, Kynn Jones kyn...@gmail.com wrote:

 Greetings!

 I'm looking for tools/resources/ideas for making pg_dump's output
 compatible
 with SQLite v. 3.1.3.

 Ideally, I'd love to be able to do something like this (Unix):

% rm -f mydatabase.db
% pg_dump --no-owner --inserts mydatabase | pg_dump2sqlite3 | sqlite3
 mydatabase.db

 ...where pg_dump2sqlite3 stands for some program (or pipeline) that
 transforms the output of pg_dump as needed so that sqlite3 can digest it.

 Among the tasks that the hypothetical pg_dump2sqlite3 program has to
 carry,
 IMO the hardest one to implement is to compute the foreign-key
 dependencies
 among the tables, and from this compute the sequential order in which the
 tables will be created and populated[1].

 Am I correct?  Is there a way around this?

 TIA!

 kj

 [1] In pg_dump's output, the sequential ordering of the CREATE TABLE
 statements and of the COPY blocks that respectively define and populate
 the
 tables does not take into account dependencies, because the specification
 of
 these dependencies comes after all the CREATE TABLE and COPY commands, in
 the form of ALTER TABLE statements.  AFAIK, however, sqlite3 does not
 allow
 adding foreign key constraints after the table has been created.  This
 means
 that both the ordering of table creation and population must respect the
 dependencies among the tables.


 Read down in the man page for pg_dump. There are parameters such as
 --inserts and --column-inserts which will help. And you might want
 --quote-all-identifiers just in case some attribute (column name) is
 an SQLite key word.


 Well I think the issue Kynn is referring to is Sqlites limited ability to do
 ALTER TABLE. In a Postgres dump the basic structure of the table is laid out
 using CREATE TABLE and then later ALTER TABLE commands are used to finish
 adding the bells and whistles. Sqlite does not understand those ALTER TABLE
 commands and fails on them. So to
 get a dump to work you would need to create a complete CREATE TABLE
 definition. The FOREIGN KEY ordering issue could be gotten around(I believe)
 by toggling the foreign_keys PRAGMA in sqlite.




 --
 Adrian Klaver
 adrian.kla...@aklaver.com



-- 
There is nothing more pleasant than traveling and meeting new people!
Genghis Khan

Maranatha! 
John McKown


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help related to Postgresql for RHEL 6.5

2014-08-29 Thread Adrian Klaver

On 08/28/2014 09:14 PM, Yogesh. Sharma wrote:

Dear David,


Are you currently using PostgreSQL?

Currently we are using PostgreSQL version 8.1.18 on RHEL 5.8.
Now we plan to upgrade to PostgreSQL 9.0 with RHEL 6.5, since in
version 9.0 I found the fewest compatibility issues.


So what are the things you are concerned about?



So, please guide me.


If you are going from 8.1 to any supported version, you will be dealing 
with the type casting changes introduced in 8.3:


http://www.postgresql.org/docs/9.3/interactive/release-8-3.html

E.97.2.1. General

Non-character data types are no longer automatically cast to TEXT 
(Peter, Tom)


I would spend some time testing that.
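
A quick illustration of the kind of breakage to look for (hypothetical
query, not from the original application):

SELECT length(42);        -- worked on 8.1/8.2 via the implicit cast to text;
                          -- on 8.3+ it fails with "function length(integer) does not exist"
SELECT length(42::text);  -- explicit cast, works on both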




Regards,
Yogesh








--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Timezone difference between Oracle SYSDATE and PostgreSQL timestamp functions

2014-08-29 Thread Steve Crawford

On 08/28/2014 10:06 PM, Vinayak wrote:

Hello,
We have converted Oracle SYSDATE to PostgreSQL statement_timestamp(), but
there is a difference in timezone.
SYSDATE returns the time on the server where the database instance is
running (i.e. the operating system time), so the time depends on the OS
timezone setting,
while the timezone of the PostgreSQL
statement_timestamp()/now()/clock_timestamp() functions depends on the DBMS setting.
So I think the timezone settings differ between the DBMS and the OS.

Any idea how can we set OS timezone on PostgreSQL?

If you mean setting the default time zone for interpreting non-qualified 
input and displaying output, start with the 'timezone' setting in 
postgresql.conf. Most installs have that default to 'localtime', which 
means to use the server's local timezone, but you can set it to whatever 
timezone you prefer. Absent an override by the client, this will be the 
default.


Next, the PGTZ environment variable can set a local default for clients 
reading that variable. Finally, that can then be overridden within a 
connection through the 'set time zone...' statement.
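
For example (the zone name is only an illustration):

SHOW timezone;                        -- current default for this session
SET TIME ZONE 'America/Los_Angeles';  -- per-connection override
SELECT statement_timestamp();         -- now reported in that zone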


Cheers,
Steve



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Performance issue with postgres_fdw

2014-08-29 Thread Emmanuel Medernach

Hello,

I use Postgres version 9.3.5 and spotted a performance issue
with postgres_fdw.

I have a table object_003_xyz with 275,000 rows, which is
exported to the master node as master_object_003_xyz.

(The following query is only part of an automatically
generated complex query.)

On the master:

SELECT * FROM master_object_003_xyz AS o1, master_object_003_xyz AS o2 WHERE 
o1.objectid  o2.objectid  AND cos(radians(o1.ra_PS))  * 
cos(radians(o1.decl_PS)) BETWEEN cos(radians(o2.ra_PS))  * 
cos(radians(o2.decl_PS))  - sin(radians(1.5e-5  / 2))  * 2 AND 
cos(radians(o2.ra_PS))  * cos(radians(o2.decl_PS))  + sin(radians(1.5e-5  / 2)) 
 * 2  AND sin(radians(o1.ra_PS))  * cos(radians(o1.decl_PS)) BETWEEN 
sin(radians(o2.ra_PS))  * cos(radians(o2.decl_PS))  - sin(radians(1.5e-5  / 2)) 
 * 2 AND sin(radians(o2.ra_PS))  * cos(radians(o2.decl_PS))  + 
sin(radians(1.5e-5  / 2))  * 2 AND sin(radians(o1.decl_PS)) BETWEEN 
sin(radians(o2.decl_PS))  - sin(radians(1.5e-5  / 2))  * 2 AND 
sin(radians(o2.decl_PS))  + sin(radians(1.5e-5  / 2))  * 2 AND 
degrees(asin(sqrt(power(sin(radians((o2.decl_PS  - o1.decl_PS)  / 2)), 2)  + 
power(sin(radians((o2.ra_PS  - o1.ra_PS)  / 2)), 2)  * cos(radians(o1.decl_PS)) 
* cos(radians(o2.decl_PS  * 2) = 1.5e-5

(4 rows)
Time: 513711.684 ms

Here is the plan used:

 Nested Loop  (cost=200.70..44187032.64 rows=34518880 width=2168)
   -  Foreign Scan on master_object_003_xyz o2  (cost=100.00..24294.47 
rows=275449 width=1084)
   -  Foreign Scan on master_object_003_xyz o1  (cost=100.70..160.32 rows=1 
width=1084)



On the pool:

SELECT * FROM object_003_xyz AS o1, object_003_xyz AS o2 WHERE o1.objectid  
o2.objectid  AND cos(radians(o1.ra_PS))  * cos(radians(o1.decl_PS)) BETWEEN 
cos(radians(o2.ra_PS))  * cos(radians(o2.decl_PS))  - sin(radians(1.5e-5  / 2)) 
 * 2 AND cos(radians(o2.ra_PS))  * cos(radians(o2.decl_PS))  + 
sin(radians(1.5e-5  / 2))  * 2  AND sin(radians(o1.ra_PS))  * 
cos(radians(o1.decl_PS)) BETWEEN sin(radians(o2.ra_PS))  * 
cos(radians(o2.decl_PS))  - sin(radians(1.5e-5  / 2))  * 2 AND 
sin(radians(o2.ra_PS))  * cos(radians(o2.decl_PS))  + sin(radians(1.5e-5  / 2)) 
 * 2 AND sin(radians(o1.decl_PS)) BETWEEN sin(radians(o2.decl_PS))  - 
sin(radians(1.5e-5  / 2))  * 2 AND sin(radians(o2.decl_PS))  + 
sin(radians(1.5e-5  / 2))  * 2 AND
degrees(asin(sqrt(power(sin(radians((o2.decl_PS  - o1.decl_PS)  / 2)), 2)  + 
power(sin(radians((o2.ra_PS  - o1.ra_PS)  / 2)), 2)  * cos(radians(o1.decl_PS)) 
* cos(radians(o2.decl_PS  * 2) = 1.5e-5

(4 rows)
Time: 2738.217 ms

It is much faster because it uses an available index:

 Nested Loop  (cost=0.56..360279717.93 rows=34692216 width=2168)
   -  Seq Scan on object_003_xyz o2  (cost=0.00..18685.49 rows=275449 
width=1084)
   -  Index Scan using object_003_xyz_idx_xyz on object_003_xyz o1 
(cost=0.56..1306.64 rows=126 width=1084)
 Index Cond: (((cos(radians(ra_ps)) * cos(radians(decl_ps))) = 
((cos(radians(o2.ra_ps)) * cos(radians(o2.decl_ps))) - 
2.61799387799149e-07::double precision)) AND ((cos(radians(ra_ps)) * 
cos(radians(decl_ps))) = ((cos(radians(o2.ra_ps)) * cos(radians(o2.decl_ps))) + 
2.61799387799149e-07::double precision)) AND ((sin(radians(ra_ps)) * 
cos(radians(decl_ps))) = ((sin(radians(o2.ra_ps)) * cos(radians(o2.decl_ps))) - 
2.61799387799149e-07::double precision)) AND ((sin(radians(ra_ps)) * 
cos(radians(decl_ps))) = ((sin(radians(o2.ra_ps)) * cos(radians(o2.decl_ps))) + 
2.61799387799149e-07::double precision)) AND (sin(radians(decl_ps)) = 
(sin(radians(o2.decl_ps)) - 2.61799387799149e-07::double precision)) AND 
(sin(radians(decl_ps)) = (sin(radians(o2.decl_ps)) + 
2.61799387799149e-07::double precision)))
 Filter: ((objectid  o2.objectid) AND 
(degrees((asin(sqrt((power(sin(radians(((o2.decl_ps - decl_ps) / 2::double 
precision))), 2::double precision) + ((power(sin(radians(((o2.ra_ps - ra_ps) / 
2::double precision))), 2::double precision) * cos(radians(decl_ps))) * 
cos(radians(o2.decl_ps)) * 2::double precision)) = 1.5e-05::double precision))



Would it be possible to avoid doing a nested loop of foreign
scans when dealing with tables on the same pool? And to
automatically export the query directly in that case?

What can I do for now?

Thanks for your help and best regards,
--
Emmanuel Medernach


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Single Table Report With Calculated Column

2014-08-29 Thread Rich Shepard

On Fri, 29 Aug 2014, David G Johnston wrote:


You want to use window clause/function.


David,

  I read about this, but did not absorb everything.


Add the following to the first query, in the select-list:
Sum(count(*)) over (partition by stream, sampdate) as stream_date_total
You function counts can then be divided into this.
The count(*) is because of the outer group by
The sum(...) is the window function


  So this is a way of combining aggregates; I tried something like the
above but postfix did not like my syntax; it was not using partitions.


You could also just put both your queries into a with/cte (2 items) and do a
normal inner join...

Or

Select ... From (first query) fq join (second query) sq on (...)


  Both of which need more study on my part. I read about CTEs and did not
then see how to apply them to my needs.

Many thanks,

Rich



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Single Table Report With Calculated Column

2014-08-29 Thread Adrian Klaver

On 08/29/2014 09:50 AM, Rich Shepard wrote:

On Fri, 29 Aug 2014, David G Johnston wrote:


You want to use window clause/function.


David,

   I read about this, but did not absorb everything.


Add the following to the first query, in the select-list:
Sum(count(*)) over (partition by stream, sampdate) as stream_date_total
You function counts can then be divided into this.
The count(*) is because of the outer group by
The sum(...) is the window function


   So this is a way of combining aggregates; I tried something like the
above but postfix did not like my syntax; it was not using partitions.


I am going to assume you mean Postgres did not like the syntax.

What was the error message you got back?




You could also just put both your queries into a with/cte (2 items)
and do a
normal inner join...

Or

Select ... From (first query) fq join (second query) sq on (...)


   Both of which need more study on my part. I read about CTEs and did not
then see how to apply them to my needs.

Many thanks,

Rich






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Single Table Report With Calculated Column

2014-08-29 Thread Rich Shepard

On Fri, 29 Aug 2014, Adrian Klaver wrote:


I am going to assume you mean Postgres did not like the syntax.


Adrian,

  Oops! Mea culpa. Yes, postgres.


What was the error message you got back?


  I don't recall. It was yesterday afternoon and I flushed it from memory
when it did not work.

Rich


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Is there a function to save schema history internally?

2014-08-29 Thread Patrick Dung
Hello Postgresql users,

Is there a function to save schema history internally?
By keeping the schema history inside the DB, we can keep track of what
is changed in the schema and when.

From searching Google, it seems this is a limitation of the audit trigger:

https://wiki.postgresql.org/wiki/Audit_trigger#Limitation:_Cannot_audit_DDL


Thanks and regards,
Patrick


[GENERAL] alter column to varchar without view drop/re-creation

2014-08-29 Thread Emi Lu

  
  
Hello list,

May I know, is there a way to "alter column type to varchar"
(previously varchar(***)) without view drop/re-creation?

Basically, I am looking for a way to change the column without having to
drop/re-create dependent views.

varchar(***) to varchar, and no date/numeric changes.

Thanks a lot!
Emi
---
PostgreSQL 8.3.18 on x86_64
  




Re: [GENERAL] alter column to varchar without view drop/re-creation

2014-08-29 Thread Emi Lu

  
  



  
Hello list,

 May I know, is there a way to "alter column type to varchar"
 (previously varchar(***)) without view drop/re-creation?

 Basically, I am looking for a way to change the column without having to
 drop/re-create dependent views.

 varchar(***) to varchar, and no date/numeric changes.

I saw the docs mention updating pg_attribute. May I know:

. Will dependent views be updated automatically, or might there be
potential problems?
. If it's fine, is the following SQL enough to change a column from
varchar(***) to varchar?

 update pg_attribute set atttypmod = -1
 where attrelid = 'oid' ;

Thanks a lot!

---
PostgreSQL 8.3.18 on x86_64



  




Re: [GENERAL] alter column to varchar without view drop/re-creation

2014-08-29 Thread Adrian Klaver

On 08/29/2014 12:09 PM, Emi Lu wrote:



Hello list,

May I know is there a way to alter column type to varchar (previous
is varchar(***)) without view drop/re-creation?

Basically, looking for a way to change column without have to
drop/re-create dependent views.

varchar(***) to varchar and no date/numeric changes.


I saw the docs mention updating pg_attribute. May I know:

. Will dependent views be updated automatically, or might there be potential
problems?
. If it's fine, is the following SQL enough to change a column from
varchar(***) to varchar?

   update pg_attribute set atttypmod =-1
   where  attrelid = 'oid' ;


Here is what I did. I would definitely test first and run in a transaction:

test=# SELECT version();
                                   version
--------------------------------------------------------------------------
 PostgreSQL 8.3.23 on i686-pc-linux-gnu, compiled by GCC gcc (SUSE
 Linux) 4.7.2 20130108 [gcc-4_7-branch revision 195012]



test=# create TABLE base_tbl (id integer, vc_fld varchar(10));
CREATE TABLE
test=# CREATE view v_test as SELECT * from base_tbl ;
CREATE VIEW
test=# insert INTO base_tbl VALUES(1, 'one');
INSERT 0 1
test=# insert INTO base_tbl VALUES(2, 'two');
INSERT 0 1
test=# \d base_tbl
          Table "public.base_tbl"
 Column |         Type          | Modifiers
--------+-----------------------+-----------
 id     | integer               |
 vc_fld | character varying(10) |

test=# \d v_test
           View "public.v_test"
 Column |         Type          | Modifiers
--------+-----------------------+-----------
 id     | integer               |
 vc_fld | character varying(10) |
View definition:
 SELECT base_tbl.id, base_tbl.vc_fld
   FROM base_tbl;

test=# UPDATE pg_attribute SET atttypmod = -1 WHERE attrelid = 
'base_tbl'::regclass AND attname = 'vc_fld';

UPDATE 1
test=# UPDATE pg_attribute SET atttypmod = -1 WHERE attrelid = 
'v_test'::regclass AND attname = 'vc_fld';

UPDATE 1
test=# \d base_tbl
       Table "public.base_tbl"
 Column |       Type        | Modifiers
--------+-------------------+-----------
 id     | integer           |
 vc_fld | character varying |

test=# \d v_test
        View "public.v_test"
 Column |       Type        | Modifiers
--------+-------------------+-----------
 id     | integer           |
 vc_fld | character varying |
View definition:
 SELECT base_tbl.id, base_tbl.vc_fld
   FROM base_tbl;

test=# insert INTO base_tbl VALUES(3, '123456789012345678901234567890');
INSERT 0 1
test=# SELECT * from base_tbl ;
 id | vc_fld
----+--------------------------------
  1 | one
  2 | two
  3 | 123456789012345678901234567890
(3 rows)

test=# SELECT * from v_test ;
 id | vc_fld
----+--------------------------------
  1 | one
  2 | two
  3 | 123456789012345678901234567890
(3 rows)




Thanks a lot!


---
*PostgreSQL 8.3.18 on x86_64*






--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Single Table Report With Calculated Column

2014-08-29 Thread Adrian Klaver

On 08/29/2014 10:15 AM, Rich Shepard wrote:

On Fri, 29 Aug 2014, Adrian Klaver wrote:


I am going to assume you mean Postgres did not like the syntax.


Adrian,

   Oops! Mea culpa. Yes, postgres.


What was the error message you got back?


   I don't recall. It was yesterday afternoon and I flushed it from memory
when it did not work.


Assuming you did this in psql, looking in ~/.psql_history might be a 
good way to retrieve what you did and then use that to rerun the query.




Rich





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] WAL receive process dies

2014-08-29 Thread Patrick Krecker
Hi Craig -- Sorry for the late response, I've been tied up with some other
things for the last day. Just to give some context, this is a machine that
sits in our office and replicates from another read slave in production via
a tunnel set up with spiped. The spiped tunnel is working and postgres is
still stuck (it has been stuck since 8-25).

The last moment that replication was working was  2014-08-25
22:06:05.03972. We have a table called replication_time with one column and
one row that has a timestamp that is updated every second, so it's easy to
tell the last time this machine was in sync with production.

recovery.conf: http://pastie.org/private/dfmystgf0wxgtmahiita
logs: http://pastie.org/private/qt1ixycayvdsxafrzj0l0q

Currently the WAL receive process is still not running. Interestingly,
another pg instance running on the same machine is replicating just fine.

A note about that: there is another instance running on that machine and a
definite race condition with restore_wal_s3.py, which writes the file to
/tmp before copying it to the destination requested by postgres (I just
discovered this today, this is not generally how we run our servers). So,
if both are restoring at the same time, they will step on the WAL archives
being unzipped in /tmp and bad things will happen. But, interestingly, I
checked the logs for the other machine and there is no activity on that
day. It does not appear that the WAL replay was invoked or that the WAL
receive timed out.

As for enabling the core dump, it seems that it needs to be done when
Postgres starts, and I thought I would leave it running in its stuck state
for now. However, if you know how to enable it on a running process, let me
know. We are running Ubuntu 13.10.


On Wed, Aug 27, 2014 at 11:30 PM, Craig Ringer cr...@2ndquadrant.com
wrote:

 On 08/28/2014 09:39 AM, Patrick Krecker wrote:
  We have a periodic network connectivity issue (unrelated to Postgres)
  that is causing the replication to fail.
 
  We are running Postgres 9.3 using streaming replication. We also have
  WAL archives available to be replayed with restore_command. Typically
  when I bring up a slave it copies over WAL archives for a while before
  connecting via streaming replication.
 
  When I notice the machine is behind in replication, I also notice that
  the WAL receiver process has died. There didn't seem to be any
  information in the logs about it.

 What did you search for?

 Do you have core dumps enabled? That'd be a good first step. (Exactly
 how to do this depends on the OS/distro/version, but you basically want
 to set ulimit -c unlimited on some ancestor of the postmaster).

  1. It seems that Postgres does not fall back to copying WAL archives
  with its restore_command. I just want to confirm that this is what
  Postgres is supposed to do when its connection via streaming replication
  times out.

 It should fall back.

  2. Is it possible to restart replication after the WAL receiver process
  has died without restarting Postgres?

 PostgreSQL should do so itself.

 Please show your recovery.conf (appropriately redacted) and
 postgresql.conf for the replica, and complete logs for the time period
 of interest. You'll want to upload the logs somewhere then link to them,
 do not attach them to an email to the list.

 --
  Craig Ringer   http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training  Services



Re: [GENERAL] WAL receive process dies

2014-08-29 Thread Andres Freund
On 2014-08-29 13:04:43 -0700, Patrick Krecker wrote:
 Hi Craig -- Sorry for the late response, I've been tied up with some other
 things for the last day. Just to give some context, this is a machine that
 sits in our office and replicates from another read slave in production via
 a tunnel set up with spiped. The spiped tunnel is working and postgres is
 still stuck (it has been stuck since 8-25).
 
 The last moment that replication was working was  2014-08-25
 22:06:05.03972. We have a table called replication_time with one column and
 one row that has a timestamp that is updated every second, so it's easy to
 tell the last time this machine was in sync with production.
 
 recovery.conf: http://pastie.org/private/dfmystgf0wxgtmahiita
 logs: http://pastie.org/private/qt1ixycayvdsxafrzj0l0q

The problem is this log entry:

2014-08-27 18:44:27 PDT ERROR:  requested starting point 175/2800 is
ahead of the WAL flush position of this server 174/B76D16A8

That's the walreceiver connecting to the upstream node and asking for
WAL. Somehow the standby has gotten *considerably* ahead of the node
it's trying to receive WAL from.

Are you sure primary_conninfo / the spiped tunnel connects to the right
postgres instance? Did you possibly have a failover or something like that?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is there a function to save schema history internally?

2014-08-29 Thread Adrian Klaver

On 08/29/2014 11:23 AM, Patrick Dung wrote:

Hello Postgresql users,

Is there a function to save schema history internally?
By keeping the schema history inside the DB, we can keep track of what
and when is changed in the schema.

While searching google. It seems it is a limitation with the audit trigger:
https://wiki.postgresql.org/wiki/Audit_trigger#Limitation:_Cannot_audit_DDL


Well going forward, 9.4+,  there are EVENT TRIGGERS

http://www.postgresql.org/docs/9.4/static/event-triggers.html

For a recent thread on what is possible or not see:

http://www.postgresql.org/message-id/20ee50f73664e744af948f0106fe6dfa585a7...@seambx01.sea.corp.int.untd.com
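
As an illustration, a minimal sketch of logging DDL command tags with an
event trigger (table and function names are made up; adapt to your version):

CREATE TABLE ddl_history (
    happened_at  timestamptz DEFAULT now(),
    username     text        DEFAULT current_user,
    command_tag  text
);

CREATE FUNCTION log_ddl() RETURNS event_trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- TG_TAG holds the command tag, e.g. 'CREATE TABLE' or 'ALTER TABLE'
    INSERT INTO ddl_history (command_tag) VALUES (tg_tag);
END;
$$;

CREATE EVENT TRIGGER track_ddl ON ddl_command_end
    EXECUTE PROCEDURE log_ddl();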




Thanks and regards,
Patrick



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] alter column to varchar without view drop/re-creation

2014-08-29 Thread Emi Lu

  
  
Hello,

On 08/29/2014 03:16 PM, Adrian Klaver wrote:

 May I know, is there a way to "alter column type to varchar"
 (previously varchar(***)) without view drop/re-creation?

 Basically, I am looking for a way to change the column without having to
 drop/re-create dependent views.

 varchar(***) to varchar, and no date/numeric changes.

 I saw the docs mention updating pg_attribute. May I know:

 . Will dependent views be updated automatically, or might there be
 potential problems?
 . If it's fine, is the following SQL enough to change a column from
 varchar(***) to varchar?

  update pg_attribute set atttypmod = -1
  where attrelid = 'oid' ;

 Here is what I did. I would definitely test first and run in a
 transaction:

 [...]

 test=# UPDATE pg_attribute SET atttypmod = -1 WHERE attrelid =
 'base_tbl'::regclass AND attname = 'vc_fld';
 UPDATE 1
 test=# UPDATE pg_attribute SET atttypmod = -1 WHERE attrelid =
 'v_test'::regclass AND attname = 'vc_fld';
 UPDATE 1

 [...]


This is exactly what I plan to do.
So, according to the test result, can we conclude that updating
pg_attribute will automatically take care of all dependent views?

 Here is what I did. I would definitely test first and run
 in a transaction:

It seems that no transaction block is needed? The one-line
command is:

UPDATE pg_attribute SET atttypmod = -1 WHERE attrelid =
'table_name'::regclass AND attname = 'col1';

Isn't it?

As for the "definitely test", do you mean checking the view after the
change? Would there be any other potential problems with this approach?

If not, I will adopt this approach, since we have many view
dependencies and it seems to be the best way to avoid view
drop/re-creation for now. If there are other ways, please do let me
know.

Thanks a lot!
Emi
  




Re: [GENERAL] WAL receive process dies

2014-08-29 Thread Patrick Krecker
On Fri, Aug 29, 2014 at 2:11 PM, Andres Freund and...@2ndquadrant.com
wrote:

 On 2014-08-29 13:04:43 -0700, Patrick Krecker wrote:
  Hi Craig -- Sorry for the late response, I've been tied up with some
 other
  things for the last day. Just to give some context, this is a machine
 that
  sits in our office and replicates from another read slave in production
 via
  a tunnel set up with spiped. The spiped tunnel is working and postgres is
  still stuck (it has been stuck since 8-25).
 
  The last moment that replication was working was  2014-08-25
  22:06:05.03972. We have a table called replication_time with one column
 and
  one row that has a timestamp that is updated every second, so it's easy
 to
  tell the last time this machine was in sync with production.
 
  recovery.conf: http://pastie.org/private/dfmystgf0wxgtmahiita
  logs: http://pastie.org/private/qt1ixycayvdsxafrzj0l0q

 The problem is this log entry:

 2014-08-27 18:44:27 PDT ERROR:  requested starting point 175/2800 is
 ahead of the WAL flush position of this server 174/B76D16A8

 That's the walreceiver connecting to the upstream node and askign for
 wAL. Somehow the standby has gotten *considerably* ahead of the node
 it's trying to receive WAL from.

 Are you sure primary_conninfo / the spiped tunnel connects to the right
 postgres instance? Did you possibly have a failover or something like that?

 Greetings,

 Andres Freund

 --
  Andres Freund http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training  Services


We didn't have a failover. We just have one master and replica in different
AZs on Amazon EC2.

I ran the following on the local endpoint of spiped:

while [ true ]; do psql -h localhost -p 5445 judicata -U marbury -c "select
modtime, pg_last_xlog_receive_location(), pg_last_xlog_replay_location()
from replication_time"; done;

And the same command on production and I was able to verify that the xlogs
for a given point in time were the same (modtime is updated every second by
an upstart job):

spiped from office -> production:
          modtime           | pg_last_xlog_receive_location | pg_last_xlog_replay_location
----------------------------+-------------------------------+------------------------------
 2014-08-29 15:23:25.563766 | 177/2E80C9F8                  | 177/2E80C9F8

Ran directly on production replica:
          modtime           | pg_last_xlog_receive_location | pg_last_xlog_replay_location
----------------------------+-------------------------------+------------------------------
 2014-08-29 15:23:25.563766 | 177/2E80C9F8                  | 177/2E80C9F8

To me, this is sufficient proof that spiped is indeed talking to the
machine I think it's talking to (also lsof reports the correct hostname).

I created another basebackup from the currently stuck postgres instance on
another machine and I also get this error:

2014-08-29 15:27:30 PDT FATAL:  could not receive data from WAL stream:
ERROR:  requested starting point 177/2D00 is ahead of the WAL flush
position of this server 174/B76D16A8

However, this new instance is able to fetch logs from S3 and replay them
without issue.

Is it possible that the data dir on the stuck instance is just corrupt? It
is not impossible for this to have happened at some point in the past due
to the race condition in fetching logs from S3 I mentioned above.


Re: [GENERAL] WAL receive process dies

2014-08-29 Thread Andres Freund
[FWIW: proper quoting makes answering easier and thus more likely]

On 2014-08-29 15:37:51 -0700, Patrick Krecker wrote:
 I ran the following on the local endpoint of spiped:
 
 while [ true ]; do psql -h localhost -p 5445 judicata -U marbury -c select
 modtime, pg_last_xlog_receive_location(), pg_last_xlog_replay_location()
 from replication_time;  done;
 
 And the same command on production and I was able to verify that the xlogs
 for a given point in time were the same (modtime is updated every second by
 an upstart job):
 
 spiped from office - production:
   modtime   | pg_last_xlog_receive_location |
 pg_last_xlog_replay_location
 +---+--
  2014-08-29 15:23:25.563766 | 177/2E80C9F8  | 177/2E80C9F8
 
 Ran directly on production replica:
   modtime   | pg_last_xlog_receive_location |
 pg_last_xlog_replay_location
 +---+--
  2014-08-29 15:23:25.563766 | 177/2E80C9F8  | 177/2E80C9F8
 
 To me, this is sufficient proof that spiped is indeed talking to the
 machine I think it's talking to (also lsof reports the correct hostname).
 
 I created another basebackup from the currently stuck postgres intance on
 another machine and I also get this error:
 
 2014-08-29 15:27:30 PDT FATAL:  could not receive data from WAL stream:
 ERROR:  requested starting point 177/2D00 is ahead of the WAL flush
 position of this server 174/B76D16A8

Uh, this indicates that the machine you're talking to is *not* one of
the above as it has a flush position of '174/B76D16A8' - not something
that's really possible when the node actually is at '177/2E80C9F8'.

Could you run, on the standby that's having problems, the following
command:
psql 'host=127.0.0.1 port=5445 user=XXX password=XXX' -c 'IDENTIFY_SYSTEM;'

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] WAL receive process dies

2014-08-29 Thread Patrick Krecker
On Fri, Aug 29, 2014 at 3:46 PM, Andres Freund and...@2ndquadrant.com
wrote:

 [FWIW: proper quoting makes answering easier and thus more likely]

 On 2014-08-29 15:37:51 -0700, Patrick Krecker wrote:
  I ran the following on the local endpoint of spiped:
 
  while [ true ]; do psql -h localhost -p 5445 judicata -U marbury -c
 select
  modtime, pg_last_xlog_receive_location(), pg_last_xlog_replay_location()
  from replication_time;  done;
 
  And the same command on production and I was able to verify that the
 xlogs
  for a given point in time were the same (modtime is updated every second
 by
  an upstart job):
 
  spiped from office - production:
modtime   | pg_last_xlog_receive_location |
  pg_last_xlog_replay_location
 
 +---+--
   2014-08-29 15:23:25.563766 | 177/2E80C9F8  |
 177/2E80C9F8
 
  Ran directly on production replica:
modtime   | pg_last_xlog_receive_location |
  pg_last_xlog_replay_location
 
 +---+--
   2014-08-29 15:23:25.563766 | 177/2E80C9F8  |
 177/2E80C9F8
 
  To me, this is sufficient proof that spiped is indeed talking to the
  machine I think it's talking to (also lsof reports the correct hostname).
 
  I created another basebackup from the currently stuck postgres intance on
  another machine and I also get this error:
 
  2014-08-29 15:27:30 PDT FATAL:  could not receive data from WAL stream:
  ERROR:  requested starting point 177/2D00 is ahead of the WAL flush
  position of this server 174/B76D16A8

 Uh. this indicates that the machine you're talking to is *not* one of
 the above as it has a flush position of '174/B76D16A8' - not something
 that's really possible when the node actually is at '177/2E80C9F8'.

 Could you run, on the standby that's having problems, the following
 command:
 psql 'host=127.0.0.1 port=5445 user=XXX password=XXX' -c 'IDENTIFY_SYSTEM;'

 Greetings,

 Andres Freund

 --
  Andres Freund http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training  Services


RE: quoting, I wonder if Gmail is messing it up somehow? Or am I doing
something else wrong? Sorry :(

First, I apologize for the misleading information, but when I made another
basebackup and tried to use it, I configured the machine to cascade from
the stuck replica, *not* from the spiped endpoint. When I properly
connected it to the spiped endpoint it synced up fine, giving this log line:

2014-08-29 16:16:21 PDT LOG:  started streaming WAL from primary at
177/4F00 on timeline 1

The command as you gave it reported a syntax error as is, but I googled a
little bit and ran this one:

psql 'replication=1 dbname=XXX host=127.0.0.1 port=5445 user=XXX
password=XXX' -c 'IDENTIFY_SYSTEM;'

And it gave me this output:

       systemid      | timeline |   xlogpos
---------------------+----------+--------------
 5964163898407843711 |        1 | 177/53091990


Re: [GENERAL] Is there a function to save schema history internally?

2014-08-29 Thread Patrick Dung
Hi Adrian,

Thanks for the info.

Thanks and regards,
Patrick



On Saturday, August 30, 2014 5:28 AM, Adrian Klaver adrian.kla...@aklaver.com 
wrote:
 


On 08/29/2014 11:23 AM, Patrick Dung wrote:
 Hello Postgresql users,

 Is there a function to save schema history internally?
 By keeping the schema history inside the DB, we can keep track of what
 and when is changed in the schema.

 While searching google. It seems it is a limitation with the audit trigger:
 https://wiki.postgresql.org/wiki/Audit_trigger#Limitation:_Cannot_audit_DDL

Well going forward, 9.4+,  there are EVENT TRIGGERS

http://www.postgresql.org/docs/9.4/static/event-triggers.html

For a recent thread on what is possible or not see:

http://www.postgresql.org/message-id/20ee50f73664e744af948f0106fe6dfa585a7...@seambx01.sea.corp.int.untd.com




 Thanks and regards,
 Patrick


-- 
Adrian Klaver
adrian.kla...@aklaver.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[GENERAL] How to make use of partitioned table for faster query?

2014-08-29 Thread Patrick Dung
Hello Postgresql users,

Suppose the table 'attendance' is very large:
id bigint
student_name varchar
late boolean
record_timestamp timestamp

The table is already partitioned by year (attendance_2012p, attendance_2013p,
...).
I would like to count the number of lates by year.

Instead of specifying the partition table names:
select count(*) from attendance_2012p where student_name='Student A' and late='true';
select count(*) from attendance_2013p where student_name='Student A' and late='true';
select count(*) from attendance_2014p where student_name='Student A' and late='true';
...

Is it possible to query the master table (attendance), and could the query make
use of the partitioned tables for a faster query?

Thanks and regards,
Patrick


Re: [GENERAL] How to make use of partitioned table for faster query?

2014-08-29 Thread John R Pierce

On 8/29/2014 9:38 PM, Patrick Dung wrote:

Suppose the table 'attendance' is very large:
id bigint
student_name varchar
late boolean
record_timestamp timestamp

The table is already partitioned by year (attendance_2012p, 
attendance_2013p, ...).

I would like to count the number of lates by year.

Instead of specifying the partition tables name:
select count(*) from attendance_2012p where student_name=Student A 
and late='true';
select count(*) from attendance_2013p where student_name=Student A 
and late='true';
select count(*) from attendance_2014p where student_name=Student A 
and late='true';

...

Is it possible to query the master table attendance), and the query 
could make use of the partitioned table for faster query?



select student_name as student,extract(year from record_timestamp) as 
year, count(*) as count_lates from attendance where late group by 1,2;


now, if your partitioning is by school year, that will be somewhat 
trickier.   what is your partitioning expression?


as far as faster, well, your query has to read from all of the tables; 
there won't be any speedup from partition pruning...






--
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [GENERAL] How to make use of partitioned table for faster query?

2014-08-29 Thread Patrick Dung
Thanks for the reply.

The constraint is like:

  ADD CONSTRAINT attandence_2014p_record_timestamp_check CHECK 
(record_timestamp >= '2014-01-01 00:00:00'::timestamp without time zone AND 
record_timestamp < '2015-01-01 00:00:00'::timestamp without time zone);

Let us assume it is a complete calendar year (Jan-Dec) instead of a school year.
I thought the data in the 2014 partition only needs to be checked against the
2014 partition; it does not need to check the other partitions. Same for the
other partitions.
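
As a sketch (names taken from this thread): if the query itself filters on
the partition key, the planner can use those CHECK constraints to skip the
other partitions, provided constraint_exclusion is 'partition' (the default)
or 'on':

SELECT count(*)
FROM attendance
WHERE student_name = 'Student A'
  AND late
  AND record_timestamp >= '2014-01-01'
  AND record_timestamp <  '2015-01-01';
-- EXPLAIN should show only attendance_2014p (plus the empty parent) being scanned.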



On Saturday, August 30, 2014 12:52 PM, John R Pierce pie...@hogranch.com 
wrote:
 


On 8/29/2014 9:38 PM, Patrick Dung wrote:

Suppose the table 'attendance' is very large:
id bigint
student_name varchar

late boolean

record_timestamp timestamp



The table is already partitioned by year (attendance_2012p, attendance_2013p, 
...).
I would like to count the number of lates by year.


Instead of specifying the partition tables name:
select count(*) from attendance_2012p where student_name=Student A and 
late='true';
select count(*) from attendance_2013p where student_name=Student A and 
late='true';
select count(*) from attendance_2014p where student_name=Student
  A and late='true';
...


Is it possible to query the master table attendance), and the query could make 
use of the partitioned table for faster query?

select student_name as student,extract(year from record_timestamp)
as year, count(*) as count_lates from attendance where late group by
1,2;

now, if your partitioning is by school year, that will be somewhat
trickier.   what are your partitioning expression ?

as far as faster, well, your query has to read from all of the
tables.   there won't be any speedup from partition pruning...






-- 
john r pierce  37N 122W
somewhere on the middle of the left coast