Re: [GENERAL] inequality testing in jsonb query

2014-07-21 Thread vibhor.ku...@enterprisedb.com

On Jul 21, 2014, at 9:06 PM, Larry White  wrote:

> Is it possible to query a table with a jsonb column to find values that were 
> in some range? For example, if I have a document like this (from the PG 
> documentation):
> 
> {
> "guid": "9c36adc1-7fb5-4d5b-83b4-90356a46061a",
> "name": "Angela Barton",
> "is_active": true,
> "company": "Magnafone",
> "address": "178 Howard Place, Gulf, Washington, 702",
> "registered": "2009-11-07T08:53:22 +08:00",
> "latitude": 19.793713,
> "longitude": 86.513373,
> "tags": [
> "enim",
> "aliquip",
> "qui"
> ]
> }
> 
> Could I modify the following query to find those records where the date 
> "registered" is between November 1, 2009 and November 30, 2009? 
> 
> SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"company": 
> "Magnafone"}';


Yes, you can try something like the query given below:
SELECT jdoc->'guid', jdoc->'name', (jdoc->>'registered')::timestamptz
FROM api 
   WHERE (jdoc->>'registered')::timestamptz BETWEEN '2009-11-01'::date AND 
'2009-11-30'::date;
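
If you filter on "registered" a lot, an expression index can make the
BETWEEN test indexable. A minimal sketch with made-up names; note that a
direct cast to timestamptz is not IMMUTABLE (it depends on the TimeZone
setting), so the wrapper below is only safe to mark IMMUTABLE if the stored
values are always interpreted the same way:

CREATE FUNCTION jdoc_registered(j jsonb) RETURNS timestamptz AS
  $$ SELECT (j->>'registered')::timestamptz $$
  LANGUAGE sql IMMUTABLE;

CREATE INDEX api_registered_idx ON api (jdoc_registered(jdoc));

SELECT jdoc->'guid', jdoc->'name'
FROM api
WHERE jdoc_registered(jdoc) BETWEEN '2009-11-01' AND '2009-11-30';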


Thanks & Regards,
Vibhor Kumar
(EDB) EnterpriseDB Corporation
The Postgres Database Company
Blog:http://vibhork.blogspot.com



Re: [GENERAL] Need r_constraint_name

2014-07-21 Thread Adrian Klaver

On 07/19/2014 12:26 PM, Ramesh T wrote:

Hi,
In Oracle I got constraint details using user_constraints.

But in Postgres, how do I get the r_constraint_name and constraint_name of a
particular table?

Mainly I need the r_constraint_name for a table. How do I get it? Please let
me know.




From psql:

test=> CREATE TABLE parent_tbl(id serial primary key, fld_1 text);
NOTICE:  CREATE TABLE will create implicit sequence "parent_tbl_id_seq" 
for serial column "parent_tbl.id"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index 
"parent_tbl_pkey" for table "parent_tbl"

CREATE TABLE

test=> CREATE TABLE child_tbl (id serial primary key, fk_fld integer 
references parent_tbl, fld_2 text);
NOTICE:  CREATE TABLE will create implicit sequence "child_tbl_id_seq" 
for serial column "child_tbl.id"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index 
"child_tbl_pkey" for table "child_tbl"

CREATE TABLE

test=> \d parent_tbl
                           Table "public.parent_tbl"
 Column |  Type   |                        Modifiers
--------+---------+----------------------------------------------------------
 id     | integer | not null default nextval('parent_tbl_id_seq'::regclass)
 fld_1  | text    |
Indexes:
    "parent_tbl_pkey" PRIMARY KEY, btree (id)
Referenced by:
    TABLE "child_tbl" CONSTRAINT "child_tbl_fk_fld_fkey" FOREIGN KEY (fk_fld) REFERENCES parent_tbl(id)


test=> \d child_tbl
                           Table "public.child_tbl"
 Column |  Type   |                       Modifiers
--------+---------+---------------------------------------------------------
 id     | integer | not null default nextval('child_tbl_id_seq'::regclass)
 fk_fld | integer |
 fld_2  | text    |
Indexes:
    "child_tbl_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
    "child_tbl_fk_fld_fkey" FOREIGN KEY (fk_fld) REFERENCES parent_tbl(id)


If you want to know what queries psql uses to get this information, start 
psql with -E; it will show you that the queries are:



To get the child key that references the parent from the parent:

test=> SELECT conname, conrelid::pg_catalog.regclass,
  pg_catalog.pg_get_constraintdef(c.oid, true) as condef
FROM pg_catalog.pg_constraint c
WHERE c.confrelid = 'parent_tbl'::regclass AND c.contype = 'f' ORDER BY 1
;

        conname        | conrelid  |                     condef
-----------------------+-----------+------------------------------------------------
 child_tbl_fk_fld_fkey | child_tbl | FOREIGN KEY (fk_fld) REFERENCES parent_tbl(id)



To get the information from the child table:

test=> SELECT conname,
  pg_catalog.pg_get_constraintdef(r.oid, true) as condef
FROM pg_catalog.pg_constraint r
WHERE r.conrelid = 'child_tbl'::regclass AND r.contype = 'f' ORDER BY 1
;
        conname        |                     condef
-----------------------+------------------------------------------------
 child_tbl_fk_fld_fkey | FOREIGN KEY (fk_fld) REFERENCES parent_tbl(id)


I used the regclass cast to convert the table names to the OIDs the queries 
expect. In the -E output from psql you will see the actual numbers.
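
If you specifically want something shaped like Oracle's user_constraints
r_constraint_name, the information_schema views may be closer to what you
are used to; a rough sketch along these lines (untested):

SELECT tc.constraint_name,
       rc.unique_constraint_name AS r_constraint_name
FROM information_schema.table_constraints tc
JOIN information_schema.referential_constraints rc
  ON rc.constraint_name = tc.constraint_name
 AND rc.constraint_schema = tc.constraint_schema
WHERE tc.table_name = 'child_tbl'
  AND tc.constraint_type = 'FOREIGN KEY';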










--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] inequality testing in jsonb query

2014-07-21 Thread Larry White
Hi,

Is it possible to query a table with a jsonb column to find values that were
in some range? For example, if I have a document like this (from the PG
documentation):

{
"guid": "9c36adc1-7fb5-4d5b-83b4-90356a46061a",
"name": "Angela Barton",
"is_active": true,
"company": "Magnafone",
"address": "178 Howard Place, Gulf, Washington, 702",
"registered": "2009-11-07T08:53:22 +08:00",
"latitude": 19.793713,
"longitude": 86.513373,
"tags": [
"enim",
"aliquip",
"qui"
]
}

Could I modify the following query to find those records where the date
"registered" is between November 1, 2009 and November 30, 2009?

SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @> '{"company":
"Magnafone"}';


Thank you for your help.

larry


Re: [GENERAL] Question regarding log files in postgresql

2014-07-21 Thread Adrian Klaver

On 07/21/2014 02:21 PM, lucas.g...@gmail.com wrote:



I've rolled postgres to 'production', so box is in prod, but the
applications aren't active yet.

When I rolled the new box I'm seeing 2 log files:

postgresql-YYYY-MM-dd_hhmmss.log
AND
postgresql-9.3-main.log


Are they both in the same location?

I'm going to say you used a Debian/Ubuntu package correct?



The 'main' log doesn't appear to be used, however the time stamped file is.


That is the log file set up by the packaging system.



Does this make sense?  Is it normal?


My guess is either:

1) You are running a Debian style package and went into postgresql.conf 
and changed the log settings so you are not using the package logging.


or

2) You have/had a Debian package installed and you did a source (or other) 
install, and you have two logs as a result.



I think it would be a good idea to investigate further, if for no other 
reason than to ensure that you really do not have competing systems at 
work. It might save you some headaches down the road.
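
A quick way to see which logging setup is actually in effect is to ask the
running server; a sketch (adjust to your install):

SHOW logging_collector;
SHOW log_destination;
SHOW log_directory;
SHOW log_filename;
SELECT name, setting, source, sourcefile
FROM pg_settings
WHERE name LIKE 'log%' AND source <> 'default';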




My logrotate entry makes some assumptions that may be incorrect now.


Another reason to verify what is going on.



Gary



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Question regarding log files in postgresql

2014-07-21 Thread David G Johnston
lucas.g...@gmail.com wrote
> I've rolled postgres to 'production', so box is in prod, but the
> applications aren't active yet.
> 
> When I rolled the new box I'm seeing 2 log files:
> 
> postgresql-YYYY-MM-dd_hhmmss.log
> AND
> postgresql-9.3-main.log
> 
> The 'main' log doesn't appear to be used, however the time stamped file
> is.
> 
> Does this make sense?  Is it normal?
> 
> My logrotate entry makes some assumptions that may be incorrect now.
> 
> Gary

Likely the main log is the one used before any log file settings in
postgresql.conf are read and applied.  Then, once the date-tagged format in
postgresql.conf is processed, that file is used and the main file is no
longer needed.

David J.





--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Question-regarding-log-files-in-postgresql-tp5812284p5812292.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Question regarding log files in postgresql

2014-07-21 Thread lucas.g...@gmail.com
I've rolled postgres to 'production', so box is in prod, but the
applications aren't active yet.

When I rolled the new box I'm seeing 2 log files:

postgresql-YYYY-MM-dd_hhmmss.log
AND
postgresql-9.3-main.log

The 'main' log doesn't appear to be used, however the time stamped file is.

Does this make sense?  Is it normal?

My logrotate entry makes some assumptions that may be incorrect now.

Gary


Re: [GENERAL] Checkpoint_segments optimal value

2014-07-21 Thread John R Pierce

On 7/21/2014 1:51 PM, Kevin Grittner wrote:

The above might help, but I think the biggest problem may be your
VM.  You show very low disk I/O numbers, but a pretty significant
fraction of the time waiting for I/O.  The previously-suggested
iostat output may help nail it down more specifically, but
basically you seem to have a big problem with bandwidth for storage
access.  It's pretty disturbing to see lines in vmstat output which
show zero disk in or out, but over 10% of CPU time waiting for
storage?!?


that would suggest to me a large number of VMs sharing a single SATA 
drive, or similar.




--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Checkpoint_segments optimal value

2014-07-21 Thread Kevin Grittner
Prabhjot Sheena  wrote:

> I m running postgresql 8.3

That is out of support.  If you are at all concerned about
performance, you would do well to upgrade to a recent and
supported major release.

http://www.postgresql.org/support/versioning/

Anyway, it is always a good idea to show the output of:

SELECT version();

> temp_buffers = 64MB

You don't say what you have set for max_connections, but
temp_buffers is allocated at need *separately for each connection*
and once allocated to a connection it is never released until the
connection is closed.  So, hypothetically, if you had 100
connections open and they each had used temporary tables at some
time or other, even when all are idle they could have 6.4GB of
your 12GB of RAM tied up.

> work_mem = 512MB

Unlike temp_buffers, this one is released at the end of the
command (or earlier), but it is possible to have multiple
allocations for a single connection.  If you had 100 queries
active, each using one work_mem allocation, that would allocate
about 51.2GB of RAM, which looks like it is more than you have.  I
generally recommend setting max_connections only to what is needed
(with a small reserve) and setting work_mem to 25% of machine (or
VM) RAM divided by max_connections.
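
As a rough worked example (the 100-connection figure is only an
assumption), with the 12GB of RAM mentioned above that rule gives:

    work_mem = 0.25 * 12GB / 100 connections ~ 30MB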

On the face of it, neither of the above seems to be causing
problems at the time you ran vmstat, but they worry me.

> max_fsm_pages = 809600

Since the EXPLAIN ANALYZE output you later posted shows estimates 
which were off by several orders of magnitude, quite possibly 
leading to a suboptimal plan, you should probably run queries to 
check for bloat and update statistics.  If you find badly bloated 
tables you should run CLUSTER or VACUUM FULL on them to eliminate 
bloat.  Any bloated indexes, and any indexes on bloated tables you 
cleaned up using VACUUM FULL, should be fixed with REINDEX.  (Once 
you are on 9.0 or later it is no longer necessary to REINDEX a 
table after using VACUUM FULL on it, although a regular VACUUM 
after the VACUUM FULL is still important.)  If you find bloat you 
should make autovacuum more aggressive or add VACUUM ANALYZE 
crontab jobs, to prevent a recurrence.
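
For a rough first look at which tables might need attention (this is not a
full bloat check; it just reads the standard statistics collector view,
which should be available in 8.3):

SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;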

You may want to look at increasing statistics targets, either
globally or on specific columns used for selection criteria.

After doing the above, or if you decide not to do any or all of it,
you should probably run VACUUM ANALYZE VERBOSE; (without
specifying a table name and without the FULL option) and review the
output.  The most important part of the output is the last few 
lines, which in 8.3 might include warnings that suggest 
configuration changes.

The above might help, but I think the biggest problem may be your
VM.  You show very low disk I/O numbers, but a pretty significant
fraction of the time waiting for I/O.  The previously-suggested
iostat output may help nail it down more specifically, but
basically you seem to have a big problem with bandwidth for storage
access.  It's pretty disturbing to see lines in vmstat output which 
show zero disk in or out, but over 10% of CPU time waiting for 
storage?!?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Upgrade to 9.3

2014-07-21 Thread Christian Jensen
unsubscribe


On Mon, Jul 21, 2014 at 1:22 PM, Alan Hodgson  wrote:

> On Monday, July 21, 2014 09:05:18 PM Karthik Iyer wrote:
> > Hello,
> >
> >  We are planning to upgrade Postgres from 9.0 to 9.3. Was wondering
> > if there are any serious changes that I have to look out for
> > (syntax/datatypes changes) so that my code does not break.
> >
>
> http://www.postgresql.org/docs/9.1/static/release-9-1.html
> http://www.postgresql.org/docs/9.3/static/release-9-2.html
> http://www.postgresql.org/docs/9.3/static/release-9-3.html
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 

*Christian Jensen*
2151 Burnside Road West
Unit #5
Victoria, BC V9B 0P5
+1 (778) 996-4283
christ...@jensenbox.com


Re: [GENERAL] Upgrade to 9.3

2014-07-21 Thread Alan Hodgson
On Monday, July 21, 2014 09:05:18 PM Karthik Iyer wrote:
> Hello,
> 
>  We are planning to upgrade Postgres from 9.0 to 9.3. Was wondering
> if there are any serious changes that I have to look out for
> (syntax/datatypes changes) so that my code does not break.
> 

http://www.postgresql.org/docs/9.1/static/release-9-1.html
http://www.postgresql.org/docs/9.3/static/release-9-2.html
http://www.postgresql.org/docs/9.3/static/release-9-3.html



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Upgrade to 9.3

2014-07-21 Thread Karthik Iyer

Hello,

We are planning to upgrade Postgres from 9.0 to 9.3. Was wondering 
if there are any serious changes that I have to look out for 
(syntax/datatypes changes) so that my code does not break.


Thanks in advance.

- Karthik


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why would I need to explicitly cast a string literal to text?

2014-07-21 Thread Kevin Grittner
Derek Poon  wrote:

> LEFT OUTER JOIN
> (SELECT 3 AS fizzstep, CAST('Fizz' AS CHAR(4)) AS fizz) AS 
>Fizz
> ON n % fizzstep = 0
> LEFT OUTER JOIN
> (SELECT 5 AS buzzstep, CAST('Buzz' AS CHAR(4)) AS buzz) AS 
>Buzz
> ON n % buzzstep = 0

> I'd like to know, why are the two explicit casts necessary?  Casting to
> VARCHAR or to TEXT also works.  However, if I omit the casts, I get…
>
> ERROR: failed to find conversion function from unknown to text: …
>
> I would expect that PostgreSQL should be able to infer that the fizz and buzz
> columns were some kind of text.  (What else could they be?)  It seems like a
> design flaw to require a literal string to be cast to text, right?

They may look like what the SQL standard calls a character string 
literal, but in PostgreSQL, due to the heavy use of custom types, 
we treat such a literal as being of type "unknown" for as long as 
we can and use the "input" routine for the type which it seems to 
be.  This helps people use custom types more as "first class 
types".  For example:

test=# create table area(id int primary key, rectangle box not null);
CREATE TABLE
test=# insert into area values
test-#   (1,'((1,10),(2,20))'),
test-#   (2,'((20,40),(60,80))');
INSERT 0 2
test=# select * from area where rectangle = '((1,10),(2,20))';
 id |   rectangle
----+---------------
  1 | (2,20),(1,10)
(1 row)

test=# select 10 + '-3';
 ?column?
----------
        7
(1 row)

In my example above, it is clear from the context what type the 
"unknown" literals should be.  In your example the planner is not 
able to make the determination in time to avoid an error.  Maybe 
that can be fixed for this particular case, but in general 
PostgreSQL needs occasional type casts in situations where other 
DBMSs don't, in order to be able to omit them in many cases that 
other products simply don't support.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] statement_timeout doesn't work

2014-07-21 Thread Sergey Konoplev
On Mon, Jul 21, 2014 at 10:16 AM, David G Johnston
 wrote:
>> So, if I separate the commands everything will work as expected,
>> correct?
>
> I would assume so.
>
> If you wait to send the DROP/ALTER index commands until the SET LOCAL
> command returns successfully then both of those commands will die if they
> exceed the timeout specified.

Thank you. I'll try it.

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (499) 346-7196, +7 (988) 888-1979
gray...@gmail.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why would I need to explicitly cast a string literal to text?

2014-07-21 Thread David G Johnston
Derek Poon-2 wrote
> As an exercise, I've written the following query to implement
> [FizzBuzz][1].
> 
> SELECT COALESCE(fizz || buzz, fizz, buzz, '' || n) AS fizzbuzz
> FROM (
> SELECT n0 + 3 * n3 + 9 * n9 + 27 * n27 + 81 * n81 AS n
> FROM
> (SELECT 0 AS n0  UNION ALL SELECT 1 UNION ALL
> SELECT 2 AS n0) AS N0,
> (SELECT 0 AS n3  UNION ALL SELECT 1 UNION ALL
> SELECT 2 AS n3) AS N3,
> (SELECT 0 AS n9  UNION ALL SELECT 1 UNION ALL
> SELECT 2 AS n9) AS N9,
> (SELECT 0 AS n27 UNION ALL SELECT 1 UNION ALL
> SELECT 2 AS n27) AS N27,
> (SELECT 0 AS n81 UNION ALL SELECT 1   
> AS n81) AS N81
> ) AS N
> LEFT OUTER JOIN
> (SELECT 3 AS fizzstep, CAST('Fizz' AS CHAR(4)) AS fizz) AS
> Fizz
> ON n % fizzstep = 0
> LEFT OUTER JOIN
> (SELECT 5 AS buzzstep, CAST('Buzz' AS CHAR(4)) AS buzz) AS
> Buzz
> ON n % buzzstep = 0
> WHERE n BETWEEN 1 AND 100
> ORDER BY n;
> 
> I realize that it could be vastly simplified using GENERATE_SERIES(), but
> I'm aiming for the solution to be portable to SQLite 2, SQLite 3, and
> MySQL as well.
> 
> I'd like to know, why are the two explicit casts necessary?  Casting to
> VARCHAR or to TEXT also works.  However, if I omit the casts, I get…
> 
> ERROR: failed to find conversion function from unknown to text: …
> 
> I would expect that PostgreSQL should be able to infer that the fizz and
> buzz columns were some kind of text.  (What else could they be?)  It seems
> like a design flaw to require a literal string to be cast to text, right?

Because 'Buzz' is an "unknown literal" and not a "literal string".  

The parser attempts to infer the actual type of the unknown (e.g. date,
text, boolean, etc...) but if it cannot it simply tags it as an unknown type
and because implicit type conversions are now avoided when possible that
unknown type becomes effectively useless for anything but display.

The parser cannot infer the types in this situation because the sub-query in
which the literal appears is evaluated without knowing that eventually the
fizz/buzz columns are going to be used in a string concatenation and aside
from that there is nothing else to provide it a hint of the final type the
unknown literal may take.

It has been argued before that this indeed is a design flaw but no one is
really willing to invest the effort to modify such a deeply entwined part of
the system.  There is probably lots more history here that others could
reference on why such a change would be problematic to effect.

For reference: a literal such as DATE '2014-07-21' is a valid literal having a type of "date".

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Why-would-I-need-to-explicitly-cast-a-string-literal-to-text-tp5811823p5812247.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] cursor return null

2014-07-21 Thread David G Johnston
rameshparnanditech wrote
> Hello,
>    In a Postgres function (id bigint), the following code does not return
> any value via artNums, but when I do the SELECT statement on its own it
> outputs the values without using the cursor.
> 
> Is this a cursor problem?
> 
> Please let me know what I should do to get values from the cursor; I don't
> know where I am going wrong.
> 
>  function (id bigint )
> 
> DECLARE
>   PartNums   varchar (1);
>  artNums CURSOR for
>   SELECT p.PART_NUM part_num
> FROM lineitem sol, part p
>WHERE sol.ORDER_ID = id AND p.PART_ID = sol.PART_ID;
> 
>BEGIN
>   FOR cPart IN  artNums LOOP
>BEGIN
> PartNums := PartNums || cPart.part_num || ', ';
> END;

You have two BEGIN statements and no "END LOOP" statement...you need to read
the documentation and learn the correct syntax for using CURSORS and FOR
loops.

You should also post the entire function and not just excerpts, to minimize
the risk of copy/paste problems and of not reporting what you actually have.
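
For illustration only, here is a minimal corrected sketch based on the
excerpt you posted (the function name, return type and initialization are
assumptions on my part):

CREATE OR REPLACE FUNCTION get_part_nums(id bigint) RETURNS varchar AS $$
DECLARE
    PartNums varchar := '';   -- initialize; otherwise NULL || part_num stays NULL
    artNums CURSOR FOR
        SELECT p.PART_NUM AS part_num
          FROM lineitem sol, part p
         WHERE sol.ORDER_ID = id AND p.PART_ID = sol.PART_ID;
BEGIN
    FOR cPart IN artNums LOOP
        PartNums := PartNums || cPart.part_num || ', ';
    END LOOP;                 -- the posted excerpt was missing END LOOP
    RETURN PartNums;          -- and a RETURN
END;
$$ LANGUAGE plpgsql;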

David J.





--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/cursor-return-null-tp5812219p5812246.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] statement_timeout doesn't work

2014-07-21 Thread David G Johnston
Sergey Konoplev-2 wrote
> On Fri, Jul 18, 2014 at 6:15 PM, David G Johnston
> <david.g.johnston@> wrote:
>>>   query   | BEGIN;
>>> SET LOCAL statement_timeout TO 1000;
>>> DROP INDEX public.idx1;
>>> ALTER INDEX public.idx2 RENAME TO idx1;
>>> END;
>>
>> If I read this correctly you sent the entire begin...end as a single
>> compound statement and so, depending on how you did this, the actual SET
>> LOCAL command never got executed since the entire command is waiting for
>> the
>> necessary locks before it can be executed.
> 
> Right, I send it as a single command.
> 
>> Your sample test doesn't appear to correctly exercise this behavior.  Are
>> you maybe using -c in the problem case?  Or a client library besides psql
>> that would behave in this manner?
> 
> In this case I use DBD::Pg, but I haven't found any notes about such
> kind of behavior.
> 
>> Note that the fact that "query" is a compound statement is why I claim
>> the
>> above...
> 
> So, if I separate the commands everything will work as expected,
> correct?

I would assume so.

If you wait to send the DROP/ALTER index commands until the SET LOCAL
command returns successfully then both of those commands will die if they
exceed the timeout specified.
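
In other words, something along these lines, with each statement sent to
the server as its own round trip rather than as one compound string (a
sketch):

BEGIN;
SET LOCAL statement_timeout TO 1000;     -- returns before any locks are needed
DROP INDEX public.idx1;                  -- now subject to the 1s timeout
ALTER INDEX public.idx2 RENAME TO idx1;
END;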

Dave




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/statement-timeout-doesn-t-work-tp5811704p5812243.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgres_fdw - push down conditionals for ENUMs

2014-07-21 Thread Tom Lane
Sergiy Zuban  writes:
> 1. Is there any plans to add "non-strict mode" (configurable via options on
> server/table/column level) to allow pushing down conditions for all data
> types?

No.  You might as well call it a "return random answers" mode.

> 2. There is an option that allows mapping a foreign table column to a column
> with another name. What about adding another option to specify the column type
> to be sent to the remote server?

Same problem.  We don't have any way of knowing whether type foo on the
remote end acts like foo locally.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgres_fdw - IS TRUE/FALSE conditions are not pushed down

2014-07-21 Thread Tom Lane
Sergiy Zuban  writes:
> DO I need to report this as bug?

It's not a bug, but an unimplemented feature.  A patch submission would be
more likely to get you somewhere.  Look into contrib/postgres_fdw/deparse.c,
and add support for BooleanTest (the NullTest support is a good model).

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] postgres_fdw - push down conditionals for ENUMs

2014-07-21 Thread Sergiy Zuban
Hi

It's well known that pushing down of WHERE conditions is supported for
built-in data types, operators and functions only.

So if your main table has a column declared with a custom domain (CREATE
DOMAIN ID_TYPE AS INT NOT NULL) and you just want to proxy all queries over
FDW, the foreign table column needs to be declared as INT. This approach works
fine for any domain based on a built-in type. But ENUM is a special case. When
I declare the foreign table with a TEXT column it accepts all queries like
SELECT * FROM proxy WHERE status = 'active', but 'active' is pushed down with
an explicit type cast, 'active'::text, and this creates a problem because the
origin server expects an ENUM value ('active' or 'active'::STATUS_TYPE) rather
than TEXT.

All attempts to cast text to the ENUM type were unsuccessful (probably because
PostgreSQL converts ENUM values to INTs at the query rewriting stage, but the
cast is applied later, when the data is accessed):

CREATE CAST (TEXT AS STATUS_TYPE) WITH FUNCTION to_status_type(text) AS
IMPLICIT;

Casting in the reverse direction works fine:

CREATE CAST (STATUS_TYPE AS TEXT) WITH INOUT AS IMPLICIT;

but this dirty trick forces PostgreSQL to convert ENUMs to TEXT (which is less
optimal than working internally with INTs) for absolutely all requests, not
only those forwarded over FDW.

Questions to developers:

1. Is there any plans to add "non-strict mode" (configurable via options on
server/table/column level) to allow pushing down conditions for all data
types?

2. There is an option that allows mapping a foreign table column to a column
with another name. What about adding another option to specify the column type
to be sent to the remote server?

Thanks.

Tested on 9.3.4
-- 
Sergiy Zuban


[GENERAL] postgres_fdw - IS TRUE/FALSE conditions are not pushed down

2014-07-21 Thread Sergiy Zuban
Hi

I've set up a foreign table and noticed an inconsistency in how conditionals
are handled for boolean columns.

explain verbose select * from temporary_testing.t_proxied where active is
true;
                                 QUERY PLAN
----------------------------------------------------------------------------
 Foreign Scan on temporary_testing.t_proxied  (cost=100.00..194.72
rows=1412 width=5)
   Output: id, active
   Filter: (t_proxied.active IS TRUE)
   Remote SQL: SELECT id, active FROM main_testing.t_url

But when = is used instead of IS, it works correctly. All the conditions below
(including IS NULL) are pushed down to the remote server:

explain verbose select * from temporary_testing.t_proxied where active;
explain verbose select * from temporary_testing.t_proxied where
NOT(active);
explain verbose select * from temporary_testing.t_proxied where active =
false;
explain verbose select * from temporary_testing.t_proxied where active =
true;


                                 QUERY PLAN
----------------------------------------------------------------------------
 Foreign Scan on temporary_testing.t_proxied  (cost=100.00..166.48
rows=1412 width=5)
   Output: id, active
   Remote SQL: SELECT id, active FROM main_testing.t_url WHERE (active)


explain verbose select * from temporary_testing.t_proxied where active is
null;
                                 QUERY PLAN
----------------------------------------------------------------------------
 Foreign Scan on temporary_testing.t_proxied  (cost=100.00..138.52 rows=14
width=5)
   Output: id, active
   Remote SQL: SELECT id, active FROM main_testing.t_url WHERE ((active IS
NULL))

Do I need to report this as a bug?

Tested on PostgreSQL 9.3.4
-- 
Sergiy Zuban


[GENERAL] Referencing serial col's sequence for insert

2014-07-21 Thread Anil Menon
Hi,

I have a question on the right/correct practice on using the serial col's
sequence for insert.

Best way of explanation is by an example:

create table id01 (col1 serial, col2 varchar(10));

insert into id01(col2) values ( 'data'||currval('id01_col1_seq')::varchar);

while I do get what I want:

select  * from id01;
 col1 | col2
------+-------
    1 | data1

Is this guaranteed to work? I am assuming that an insert triggers
id01_col1_seq's nextval first, hence using id01_col1_seq's currval
subsequently will give the "correct" / expected value (and not the previous
value from before the insert).

Is my assumption correct?

Thanks in advance,
AK


Re: [GENERAL] giving a user permission to kill their processes only

2014-07-21 Thread Jakub Jindra
I was looking for a solution like this. Actually I made a few changes to the
one proposed by Robert Treat:
* SESSION_USER instead of CURRENT_USER
* Added a filter on usename and procpid from pg_stat_activity

Alternatively you can create a similar function using pg_terminate_backend,
which terminates the backend connection for the given pid. I hope this could
help someone.

BEGIN;
CREATE FUNCTION my_cancel_backend(integer) RETURNS boolean AS 'SELECT
pg_cancel_backend((SELECT procpid FROM pg_stat_activity WHERE
usename=SESSION_USER AND procpid=$1));'
LANGUAGE SQL SECURITY DEFINER;
REVOKE EXECUTE ON FUNCTION my_cancel_backend(integer) FROM PUBLIC;
COMMIT;

GRANT EXECUTE ON FUNCTION my_cancel_backend(integer) TO myuser;
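
And the pg_terminate_backend variant mentioned above, following the same
pattern (a sketch; the function name is made up):

BEGIN;
CREATE FUNCTION my_terminate_backend(integer) RETURNS boolean AS 'SELECT
pg_terminate_backend((SELECT procpid FROM pg_stat_activity WHERE
usename=SESSION_USER AND procpid=$1));'
LANGUAGE SQL SECURITY DEFINER;
REVOKE EXECUTE ON FUNCTION my_terminate_backend(integer) FROM PUBLIC;
COMMIT;

GRANT EXECUTE ON FUNCTION my_terminate_backend(integer) TO myuser;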

-- 
Jakub Jindra



[GENERAL] Need r_constraint_name

2014-07-21 Thread Ramesh T
Hi,
   In Oracle I got constraint details using user_constraints.

But in Postgres, how do I get the r_constraint_name and constraint_name of a
particular table?

Mainly I need the r_constraint_name for a table. How do I get it? Please let
me know.


[GENERAL] cursor return null

2014-07-21 Thread Ramesh T
Hello,
   In a Postgres function (id bigint), the following code does not return
any value via artNums, but when I do the SELECT statement on its own it
outputs the values without using the cursor.

Is this a cursor problem?

Please let me know what I should do to get values from the cursor; I don't
know where I am going wrong.

 function (id bigint )

DECLARE
  PartNums   varchar (1);
 artNums CURSOR for
  SELECT p.PART_NUM part_num
FROM lineitem sol, part p
   WHERE sol.ORDER_ID = id AND p.PART_ID = sol.PART_ID;

   BEGIN
  FOR cPart IN  artNums LOOP
   BEGIN
PartNums := PartNums || cPart.part_num || ', ';
END;


Re: [GENERAL] statement_timeout doesn't work

2014-07-21 Thread Sergey Konoplev
On Fri, Jul 18, 2014 at 6:15 PM, David G Johnston
 wrote:
>>   query   | BEGIN;
>> SET LOCAL statement_timeout TO 1000;
>> DROP INDEX public.idx1;
>> ALTER INDEX public.idx2 RENAME TO idx1;
>> END;
>
> If I read this correctly you sent the entire begin...end as a single
> compound statement and so, depending on how you did this, the actual SET
> LOCAL command never got executed since the entire command is waiting for the
> necessary locks before it can be executed.

Right, I send it as a single command.

> Your sample test doesn't appear to correctly exercise this behavior.  Are
> you maybe using -c in the problem case?  Or a client library besides psql
> that would behave in this manner?

In this case I use DBD::Pg, but I haven't found any notes about such
kind of behavior.

> Note that the fact that "query" is a compound statement is why I claim the
> above...

So, if I separate the commands everything will work as expected, correct?

>
> David J.
>
>
>
>
>
>
> --
> View this message in context: 
> http://postgresql.1045698.n5.nabble.com/statement-timeout-doesn-t-work-tp5811704p5812037.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general



-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (499) 346-7196, +7 (988) 888-1979
gray...@gmail.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] check database integrity

2014-07-21 Thread Tom Lane
Torsten Förtsch  writes:
> Another question, just out of curiosity, for vm and main forks I use
> pg_relation_size to figure out the highest page number. That does not
> work for fsm. I have at least one fsm file that it 24 kb. Fetching page
> 0 works, page 1 and above gives an error:

> db=# select page_header(get_raw_page(2836::oid::regclass::text, 'fsm', 0));
>                   page_header
> -----------------------------------------------
>  (114/23485F78,19084,0,24,8192,8192,8192,4,0)
> (1 row)

> db=# select page_header(get_raw_page(2836::oid::regclass::text, 'fsm',
> 1));
> ERROR:  block number 1 is out of range for relation "pg_toast_1255"
> db=# select pg_relation_size(2836::oid::regclass, 'fsm');

>  pg_relation_size
> ------------------
>             24576

That's bizarre.  AFAICS, pg_relation_size() reduces to a stat() call,
while the other error looks like it's coming from rawpage.c's check on
RelationGetNumberOfBlocks() which depends on mdnblocks() which prefers
to look at the result of lseek(SEEK_END).  But both of those should
surely get the same answer, if the file's not changing.

Could you trace through it and see where the results diverge?  Also,
what's the actual size of the file on disk?

Alternatively, if you have a way to reproduce this from a standing
start, I'd be happy to poke into it here.

regards, tom lane

PS: you've not messed around with the value of BLCKSZ have you?
If different bits of code had different ideas of that constant's
value, it'd possibly explain this.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Track changes to function code

2014-07-21 Thread Rebecca Clarke
Thanks guys. I'll check that out.



On Mon, Jul 21, 2014 at 2:12 PM, Pavel Stehule 
wrote:

>
> Hi
>
> 2014-07-21 14:36 GMT+02:00 Jacob Bunk Nielsen :
>
> Rebecca Clarke  writes:
>>
>> > At present when a function is being edited we keep note of when and
>> > by who within comments in the function's code.
>>
>> That sounds cumbersome.
>>
>> > Is there, or can anyone recommend, any open source software that
>> > tracks function activity when it comes to edits (not executions)?
>>
>> How about keeping the code outside of the database in a VCS such as git,
>> Subversion or whatever you are used to using for other code projects?
>>
>> That will also make it possible to easily go back to previous versions if
>> you should need to some day.
>>
>> You simply put your functions in one or more .sql files that you
>> version control using e.g. git.
>>
>> Once you have written your functions you can put them in your database
>> using:
>>
>> psql -f myfile.sql
>>
>
> yes, it is a good way
>
> stored procedures are code like any other and the same rules apply. Use
> files, use your preferred editor, use makefiles, use GIT
>
> Regards
>
> Pavel Stehule
>
>
>>
>> --
>> Jacob
>>
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
>


Re: [GENERAL] Track changes to function code

2014-07-21 Thread Pavel Stehule
Hi

2014-07-21 14:36 GMT+02:00 Jacob Bunk Nielsen :

> Rebecca Clarke  writes:
>
> > At present when a function is being edited we keep note of when and
> > by who within comments in the function's code.
>
> That sounds cumbersome.
>
> > Is there, or can anyone recommend, any open source software that
> > tracks function activity when it comes to edits (not executions)?
>
> How about keeping the code outside of the database in a VCS such as git,
> Subversion or whatever you are used to using for other code projects?
>
> That will also make it possible to easily go back to previous versions if
> you should need to some day.
>
> You simply put your functions in one or more .sql files that you
> version control using e.g. git.
>
> Once you have written your functions you can put them in your database
> using:
>
> psql -f myfile.sql
>

yes, it is a good way

stored procedures are code like any other and the same rules apply. Use
files, use your preferred editor, use makefiles, use GIT

Regards

Pavel Stehule


>
> --
> Jacob
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Track changes to function code

2014-07-21 Thread Jacob Bunk Nielsen
Rebecca Clarke  writes:

> At present when a function is being edited we keep note of when and
> by who within comments in the function's code.

That sounds cumbersome.

> Is there, or can anyone recommend, any open source software that
> tracks function activity when it comes to edits (not executions)? 

How about keeping the code outside of the database in a VCS such as git,
Subversion or whatever you are used to using for other code projects?

That will also make it possible to easily go back to previous versions if
you should need to some day.

You simply put your functions in one or more .sql files that you
version control using e.g. git.

Once you have written your functions you can put them in your database
using:

psql -f myfile.sql

-- 
Jacob



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Track changes to function code

2014-07-21 Thread Rebecca Clarke
Hi all,

At present when a function is being edited we keep note of when and by who
within comments in the function's code.
Is there, or can anyone recommend, any open source software that tracks
function activity when it comes to edits (not executions)?

I tried searching on the web, but all I find concerns the tracking of data
changes.

Thanks in advance.