Re: [GENERAL] RETURNS TABLE function: ERROR: column reference "word" is ambiguous

2016-08-11 Thread Sridhar N Bamandlapally
You need to qualify the columns in RETURN QUERY with a table alias, because the
RETURNS TABLE output-parameter names conflict with the table's column names.

*example:*

create table emp (id integer, ename text);
insert into emp values(1, 'aaa');

create or replace function f_sample1() returns table (id integer, ename
text) as $$
declare
begin
return query select id, ename from emp;
end$$ language plpgsql;

select f_sample1();  -- this will throw: ERROR:  column reference "id" is
ambiguous  (LINE 1: select id, ename from emp)


create or replace function f_sample1() returns table (id integer, ename
text) as $$
declare
begin
return query select a.id, a.ename from emp a;
end$$ language plpgsql;

select f_sample1();  -- success





thanks
Sridhar




On Thu, Aug 11, 2016 at 1:56 AM, Adrian Klaver 
wrote:

> On 08/10/2016 01:14 PM, Alexander Farber wrote:
>
>> No, actually both variants work for me right now at 9.5.3 on Mac -
>>
>
> I thought the question you were asking was:
>
> "Where does RETURN NEXT EXPRESSION work, on 9.6?"
>
> In the examples below you are not doing that.
>
> Inline comments below.
>
>
>> On Wed, Aug 10, 2016 at 8:31 PM, Adrian Klaver
>> wrote:
>>
>>
>> Given what you are doing, RETURNS TABLE will not work there for
>> the same reason it does not work in 9.5:
>>
>> https://www.postgresql.org/docs/9.6/static/plpgsql-control-structures.html#PLPGSQL-STATEMENTS-RETURNING
>>
>> "If you declared the function with output parameters, write just
>> RETURN NEXT with no expression. On each execution, the current
>> values of the output parameter variable(s) will be saved for
>> eventual return as a row of the result. Note that you must declare
>> the function as returning SETOF record when there are multiple
>> output parameters, or SETOF sometype when there is just one output
>> parameter of type sometype, in order to create a set-returning
>> function with output parameters."
>>
>>
>> Either:
>>
>> CREATE OR REPLACE FUNCTION words_check_words(
>> IN in_uid integer,
>> IN in_gid integer,
>> IN in_tiles jsonb,
>> OUT out_word varchar,
>> OUT out_score integer
>> ) RETURNS SETOF RECORD AS
>> $func$
>>
>>
>> Or:
>>
>> CREATE OR REPLACE FUNCTION words_check_words(
>> IN in_uid integer,
>> IN in_gid integer,
>> IN in_tiles jsonb
>> ) RETURNS TABLE (out_word varchar, out_score integer) AS
>> $func$
>>
>>
>> And then I assign values to the variables and call RETURN NEXT:
>>
>>   out_word := ... ;
>>   out_score := ... ;
>>   RETURN NEXT;
>>
>
> RETURNS SETOF and RETURNS TABLE are comparable, as you found out:
>
> https://www.postgresql.org/docs/9.5/static/plpgsql-overview.html#PLPGSQL-ARGS-RESULTS
>
> "PL/pgSQL functions can also be declared with output parameters in place
> of an explicit specification of the return type. This does not add any
> fundamental capability to the language, but it is often convenient,
> especially for returning multiple values. The RETURNS TABLE notation can
> also be used in place of RETURNS SETOF."
>
>
>
>> Regards
>> Alex
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Sub-query having NULL row returning FALSE result

2016-07-01 Thread Sridhar N Bamandlapally
Hi

Just for info.

Actual query in Oracle (below)
--
MERGE INTO relDocumentTypeMetaDataName t
USING (SELECT ? as DocumentTypeID, ? as DocumentContextID, ? as
MetaDataNameID, ? as DocumentAbstractionIndexID FROM DUAL) s
ON (t.MetaDataNameID = s.MetaDataNameID AND t.DocumentTypeID =
s.DocumentTypeID)
WHEN NOT MATCHED THEN
INSERT (DocumentTypeID, DocumentContextID, MetaDataNameID, FlatColumnNo)
values
(s.DocumentTypeID, s.DocumentContextID, s.MetaDataNameID, (select
nvl(max(FlatColumnNo), 0) + 1 FROM relDocumentTypeMetaDataName WHERE
DocumentTypeID = ?));


Migrated query in PG (phase 1)
-
WITH s AS (SELECT ? as DocumentTypeID, ? as DocumentContextID, ? as
MetaDataNameID, ? as DocumentAbstractionIndexID)
INSERT INTO relDocumentTypeMetaDataName (DocumentTypeID, DocumentContextID,
MetaDataNameID, FlatColumnNo, DocumentAbstractionIndexID)
SELECT s.DocumentTypeID, s.DocumentContextID, s.MetaDataNameID,
(SELECT coalesce(MAX(FlatColumnNo), 0) + 1 FROM relDocumentTypeMetaDataName
WHERE DocumentTypeID = ?), s.DocumentAbstractionIndexID
FROM s WHERE s.DocumentTypeID NOT IN (SELECT DocumentTypeID FROM
relDocumentTypeMetaDataName);


Migrated query in PG (phase 2) - after Tom Lane reply

WITH s AS (SELECT ? as DocumentTypeID, ? as DocumentContextID, ? as
MetaDataNameID, ? as DocumentAbstractionIndexID)
INSERT INTO relDocumentTypeMetaDataName (DocumentTypeID, DocumentContextID,
MetaDataNameID, FlatColumnNo, DocumentAbstractionIndexID)
SELECT s.DocumentTypeID, s.DocumentContextID, s.MetaDataNameID,
(SELECT coalesce(MAX(FlatColumnNo), 0) + 1 FROM relDocumentTypeMetaDataName
WHERE DocumentTypeID = ?), s.DocumentAbstractionIndexID
FROM s WHERE s.DocumentTypeID NOT IN (SELECT DocumentTypeID FROM
relDocumentTypeMetaDataName WHERE DocumentTypeID IS NOT NULL);
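The same guard can also be written with NOT EXISTS, which is immune to NULLs in
the probed column, so the extra IS NOT NULL filter becomes unnecessary. A sketch
over the same query (the `?` placeholders are kept as in the original):

```sql
-- NOT EXISTS variant of the phase-2 migration (a sketch, not a tested rewrite):
WITH s AS (SELECT ? as DocumentTypeID, ? as DocumentContextID, ? as
           MetaDataNameID, ? as DocumentAbstractionIndexID)
INSERT INTO relDocumentTypeMetaDataName (DocumentTypeID, DocumentContextID,
            MetaDataNameID, FlatColumnNo, DocumentAbstractionIndexID)
SELECT s.DocumentTypeID, s.DocumentContextID, s.MetaDataNameID,
       (SELECT coalesce(MAX(FlatColumnNo), 0) + 1
          FROM relDocumentTypeMetaDataName WHERE DocumentTypeID = ?),
       s.DocumentAbstractionIndexID
FROM s
WHERE NOT EXISTS (SELECT 1 FROM relDocumentTypeMetaDataName t
                  WHERE t.DocumentTypeID = s.DocumentTypeID);
```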


Thanks
Sridhar
OpenText



On Wed, Jun 29, 2016 at 6:04 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wed, Jun 29, 2016 at 2:07 AM, Sridhar N Bamandlapally <
> sridhar@gmail.com> wrote:
>
>> Hi
>>
>> Please go through below case
>>
>> postgres=# CREATE TABLE emp (id INTEGER unique, ename VARCHAR);
>> CREATE TABLE
>> postgres=# INSERT INTO emp VALUES (null, 'aaa');
>> INSERT 0 1
>> postgres=# INSERT INTO emp VALUES (null, 'bbb');
>> INSERT 0 1
>> postgres=# INSERT INTO emp VALUES (3, 'ccc');
>> INSERT 0 1
>> postgres=# INSERT INTO emp VALUES (4, 'ddd');
>> INSERT 0 1
>> postgres=# SELECT * FROM emp ;
>>  id | ename
>> ----+-------
>>     | aaa
>>     | bbb
>>   3 | ccc
>>   4 | ddd
>> (4 rows)
>>
>> postgres=# SELECT * FROM (SELECT 5::integer id, 'eee'::varchar ename) nr;
>>  id | ename
>> ----+-------
>>   5 | eee
>> (1 row)
>>
>> postgres=# INSERT INTO emp SELECT * FROM (SELECT 5::integer id,
>> 'eee'::varchar ename) nr WHERE id NOT IN (SELECT id FROM emp);
>> INSERT 0 0
>> postgres=# SELECT * FROM emp ;
>>  id | ename
>> ----+-------
>>     | aaa
>>     | bbb
>>   3 | ccc
>>   4 | ddd
>> (4 rows)
>>
>> postgres=#
>>
>>
>> The application generates this SQL statement to avoid an exception while
>> inserting.
>>
>> The expected behavior is to INSERT the row if the NEW id does not already
>> exist in the table, but the WHERE clause is evaluating to FALSE.
>>
>> Please advise me if I am doing something wrong here, or suggest an alternative.
>>
>
> ​Subjectively, you are allowing an ID field to be NULL.  That, for me, is
> wrong.​
>
> ​Given this, as a follow-up to what Tom said, you need to decide what you
> wish to happen for your NULL IDs.  Until you explain that behavior it is
> not possible to provide valid alternatives.​
>
> ​Usually you want to use "EXISTS", not "IN"​
>
> ​
> https://www.postgresql.org/docs/9.6/static/functions-subquery.html#FUNCTIONS-SUBQUERY-EXISTS
>
> Oh, and try adding "WHERE id IS NOT NULL"​
>
> ​David J.​
>
>


Re: [GENERAL] table name size

2016-07-01 Thread Sridhar N Bamandlapally
Hi

Is there any specific reason that no release ships with "NAMEDATALEN 255"
in src/include/pg_config_manual.h?

It would really be nice to have a build with "NAMEDATALEN 255" in
src/include/pg_config_manual.h.

Please

Thanks
Sridhar
OpenText



On Mon, Jun 13, 2016 at 5:40 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Mon, Jun 13, 2016 at 7:21 AM, Sridhar N Bamandlapally <
> sridhar@gmail.com> wrote:
>
>> Hi
>>
>> How can we create a table whose name exceeds the 63-character limit we
>> are facing?
>>
>> these are dynamic tables created from application
>>
>> issue is:
>> we cannot suggest/tell client to change NAMEDATALEN constant in
>> src/include/pg_config_manual.h
>>
>> do we have any other option,
>>
>>
> ​Ensure that the application's algorithm for generating names doesn't
> generate names that exceed 63 characters.​
>
> ​If you're asking if there a runtime setting to control this the answer is
> no.​
>
> ​David J.​
>
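Following David's advice, one way to keep generated names under the limit is to
shorten them deterministically on the application side. A sketch (the long name
here is purely illustrative): keep a readable prefix and append an md5 suffix,
so names always fit in 63 bytes yet stay unique.

```sql
-- 30-char prefix + '_' + 32-char md5 digest = 63 characters exactly:
SELECT left('very_long_application_generated_table_name', 30)
       || '_' || md5('very_long_application_generated_table_name');
```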
>


Re: [GENERAL] Sub-query having NULL row returning FALSE result

2016-06-29 Thread Sridhar N Bamandlapally
Hi

The actual statement is a MERGE INTO ... WHEN NOT MATCHED, which in PG was
migrated to WITH + INSERT.

However, yes, the SQL statement in my previous mail does not work in other
databases either; I was wrong.

Thanks, thanks again
Sridhar
OpenText


On Wed, Jun 29, 2016 at 11:58 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Sridhar N Bamandlapally <sridhar@gmail.com> writes:
> > postgres=# CREATE TABLE emp (id INTEGER unique, ename VARCHAR);
> > postgres=# INSERT INTO emp VALUES (null, 'aaa');
> > ...
> > postgres=# INSERT INTO emp SELECT * FROM (SELECT 5::integer id,
> > 'eee'::varchar ename) nr WHERE id NOT IN (SELECT id FROM emp);
> > INSERT 0 0
>
> This is expected.  NOT IN can never succeed if there are any nulls
> returned by the sub-select, because the nulls represent "unknown",
> and so it's unknown whether there is a match to the outer "id"
> value, and WHERE takes a null (unknown) result as false not true.
> Certainly there are things to quibble with in that behavior, but
> it's what's been required by the SQL standard since 1992.
>
> > but this is working with other databases
>
> Really?  None that are compliant with the SQL standard, for sure.
>
> regards, tom lane
>
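The NULL semantics Tom describes can be reduced to a minimal, self-contained
demonstration, which also shows why NOT EXISTS sidesteps the problem:

```sql
SELECT 5 NOT IN (SELECT unnest(ARRAY[3, 4]));        -- true
SELECT 5 NOT IN (SELECT unnest(ARRAY[3, 4, NULL]));  -- NULL: a WHERE clause
                                                     -- treats this as false
SELECT NOT EXISTS (
    SELECT 1 FROM unnest(ARRAY[3, 4, NULL]) AS t(id)
    WHERE t.id = 5
);                                                   -- true: EXISTS ignores NULLs
```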


[GENERAL] Sub-query having NULL row returning FALSE result

2016-06-29 Thread Sridhar N Bamandlapally
Hi

Please go through below case

postgres=# CREATE TABLE emp (id INTEGER unique, ename VARCHAR);
CREATE TABLE
postgres=# INSERT INTO emp VALUES (null, 'aaa');
INSERT 0 1
postgres=# INSERT INTO emp VALUES (null, 'bbb');
INSERT 0 1
postgres=# INSERT INTO emp VALUES (3, 'ccc');
INSERT 0 1
postgres=# INSERT INTO emp VALUES (4, 'ddd');
INSERT 0 1
postgres=# SELECT * FROM emp ;
 id | ename
----+-------
    | aaa
    | bbb
  3 | ccc
  4 | ddd
(4 rows)

postgres=# SELECT * FROM (SELECT 5::integer id, 'eee'::varchar ename) nr;
 id | ename
----+-------
  5 | eee
(1 row)

postgres=# INSERT INTO emp SELECT * FROM (SELECT 5::integer id,
'eee'::varchar ename) nr WHERE id NOT IN (SELECT id FROM emp);
INSERT 0 0
postgres=# SELECT * FROM emp ;
 id | ename
----+-------
    | aaa
    | bbb
  3 | ccc
  4 | ddd
(4 rows)

postgres=#


The application generates this SQL statement to avoid an exception while
inserting.

The expected behavior is to INSERT the row if the NEW id does not already
exist in the table, but the WHERE clause is evaluating to FALSE.

Please advise me if I am doing something wrong here, or suggest an alternative.

But this works in other databases.


Thanks
Sridhar


[GENERAL] table name size

2016-06-13 Thread Sridhar N Bamandlapally
Hi

How can we create a table whose name exceeds the 63-character limit we are
facing?

These are dynamic tables created by the application.

The issue is: we cannot ask the client to change the NAMEDATALEN constant in
src/include/pg_config_manual.h.

Do we have any other option?

please

Thanks
Sridhar
OpenText


Re: [GENERAL] [HACKERS] Online DW

2016-06-10 Thread Sridhar N Bamandlapally
Ok, let me put it this way:

I need every transaction coming from the application to be synced to both the
production and the archive DB, but the transactions I run in the daily
maintenance window to clean old data (older than 7 days) on the production DB
should not be synced to the archive DB.

The archive DB needs to be read-only, used for maintaining integrity with
other business applications.

The issues here are:
1. ETL is a scheduler; it cannot run on every transaction, and even if it
could, it would be expensive.

2. A materialized view (refresh on commit) or Slony would also sync the
clean-up transactions.

3. Replication is not archiving, so it is definitely not an option.

I would say every online archive DB is a use case for this.

Thanks
Sridhar
Opentext


On 10 Jun 2016 22:36, "David G. Johnston" <david.g.johns...@gmail.com>
wrote:

> On Fri, Jun 10, 2016 at 4:11 AM, Sridhar N Bamandlapally <
> sridhar@gmail.com> wrote:
>
>> Hi
>>
>> Is there any feature in PostgreSQL where online DW (Dataware housing) is
>> possible ?
>>
>> am looking for scenario like
>>
>> 1. Production DB will have CURRENT + LAST 7 DAYS data only
>>
>> 2. Archive/DW DB will have CURRENT + COMPLETE HISTORY
>>
>> expecting something like streaming, but not ETL
>>
>>
> ​The entire DB couldn't operate this way since not every record has a
> concept of time and if you use any kind of physical time you are going to
> have issues as well.
>
> First impression is you want to horizontally partition your
> "time-impacted" tables so that each partition contains only data having the
> same ISO Week number in the same ISO Year.
>
> Remove older tables from the inheritance and stick them on a separate
> tablespace and/or stream them to another database.
>
> As has been mentioned there are various tools out there today that can
> likely be used to fulfill whatever fundamental need you have.  "Not ETL" is
> not a need though, its at best a "nice-to-have" unless you are willing to
> forgo any solution to your larger problem just because the implementation
> is not optimal.
>
> Unless you define your true goals and constraints its going to be hard to
> make recommendations.
>
> David J.
>
>


Re: [GENERAL] [HACKERS] Online DW

2016-06-10 Thread Sridhar N Bamandlapally
One thing we could do is restrict the "BEGIN NOARCHIVE" transaction block to
DELETE and SELECT only.

Sridhar
Opentext
On 10 Jun 2016 21:57, "Sridhar N Bamandlapally" <sridhar@gmail.com>
wrote:

> This is what I feel will give me solution to maintain production
> (current+7days) and archive(current+history) without any etl/scheduler
>
> But there is no feature available in any database
>
> Sridhar
> Opentext
> On 10 Jun 2016 19:03, "Craig Ringer" <cr...@2ndquadrant.com> wrote:
>
>> On 10 June 2016 at 18:56, John R Pierce <pie...@hogranch.com> wrote:
>>
>>> On 6/10/2016 2:18 AM, Sridhar N Bamandlapally wrote:
>>>
>>>> This/These will be performed in Production to clean-up archive which
>>>> will not be sync with Archive/DW DB only
>>>>
>>>> one heads-up is Archive/DW DB may need to build WITHOUT CONSTRAINTS
>>>>
>>>> May need to introduce ARCHIVE system/tag in pg_hba.conf
>>>>
>>>
>>> there's a whole lot of implied magic here unless you want to get way
>>> more specific what these features do, exactly, under all possible
>>> conditions.
>>
>>
>> I'd go further than that and say I can't see how something like this
>> could possibly work with physical (block based) replication. It's total
>> hand-waving.
>>
>> --
>>  Craig Ringer   http://www.2ndQuadrant.com/
>>  PostgreSQL Development, 24x7 Support, Training & Services
>>
>


Re: [GENERAL] [HACKERS] Online DW

2016-06-10 Thread Sridhar N Bamandlapally
This is what I feel will give me solution to maintain production
(current+7days) and archive(current+history) without any etl/scheduler

But there is no feature available in any database

Sridhar
Opentext
On 10 Jun 2016 19:03, "Craig Ringer" <cr...@2ndquadrant.com> wrote:

> On 10 June 2016 at 18:56, John R Pierce <pie...@hogranch.com> wrote:
>
>> On 6/10/2016 2:18 AM, Sridhar N Bamandlapally wrote:
>>
>>> This/These will be performed in Production to clean-up archive which
>>> will not be sync with Archive/DW DB only
>>>
>>> one heads-up is Archive/DW DB may need to build WITHOUT CONSTRAINTS
>>>
>>> May need to introduce ARCHIVE system/tag in pg_hba.conf
>>>
>>
>> there's a whole lot of implied magic here unless you want to get way more
>> specific what these features do, exactly, under all possible conditions.
>
>
> I'd go further than that and say I can't see how something like this could
> possibly work with physical (block based) replication. It's total
> hand-waving.
>
> --
>  Craig Ringer   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>


Re: [GENERAL] [HACKERS] Online DW

2016-06-10 Thread Sridhar N Bamandlapally
One thing looks possible (the feature is not available today); just an idea.

example/syntax:

BEGIN NOARCHIVE;

  --- transaction-1
  --- transaction-2
  .
  .
  --- transaction-N

END;

This/these would be run on production for clean-up, and would not be synced
to the Archive/DW DB.

One heads-up: the Archive/DW DB may need to be built WITHOUT CONSTRAINTS.

We may need to introduce an ARCHIVE system/tag in pg_hba.conf.

Thanks
Sridhar
OpenText

On Fri, Jun 10, 2016 at 2:22 PM, Craig Ringer <cr...@2ndquadrant.com> wrote:

> On 10 June 2016 at 16:11, Sridhar N Bamandlapally <sridhar@gmail.com>
> wrote:
>
>> Hi
>>
>> Is there any feature in PostgreSQL where online DW (Dataware housing) is
>> possible ?
>>
>> am looking for scenario like
>>
>> 1. Production DB will have CURRENT + LAST 7 DAYS data only
>>
>> 2. Archive/DW DB will have CURRENT + COMPLETE HISTORY
>>
>> expecting something like streaming, but not ETL
>>
>
> There's nothing built-in, but that's exactly the sort of thing pglogical
> is intended for. You can also build something along those lines with
> Londiste fairly easily.
>
> Hopefully this is the sort of thing we can move toward with built-in
> logical replication in coming releases.
>
> --
>  Craig Ringer   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>
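The pglogical route Craig mentions can come close to the "DELETEs stay local"
requirement via replication sets. A sketch, assuming pglogical is installed on
the provider; the set and table names are illustrative:

```sql
-- A replication set that does not replicate DELETE/TRUNCATE, so the daily
-- clean-up on production never reaches the archive/DW subscriber:
SELECT pglogical.create_replication_set(
    set_name           := 'archive_set',
    replicate_insert   := true,
    replicate_update   := true,
    replicate_delete   := false,   -- clean-up DELETEs stay local
    replicate_truncate := false
);
SELECT pglogical.replication_set_add_table('archive_set', 'orders');
```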


[GENERAL] Online DW

2016-06-10 Thread Sridhar N Bamandlapally
Hi

Is there any feature in PostgreSQL where online DW (Dataware housing) is
possible ?

am looking for scenario like

1. Production DB will have CURRENT + LAST 7 DAYS data only

2. Archive/DW DB will have CURRENT + COMPLETE HISTORY

expecting something like streaming, but not ETL

Thanks
Sridhar


Re: [GENERAL] [HACKERS] OUT parameter and RETURN table/setof

2016-06-06 Thread Sridhar N Bamandlapally
The existing application code calls an Oracle function that returns the
number of rows in an OUT parameter and the result set as a cursor.

This needs to be migrated to PostgreSQL; we need help here.

example: (actual function declaration only)
*Oracle:*
CREATE OR REPLACE PROCEDURE sc_getapppermissionlist (
v_role_ids IN VARCHAR2,
v_rowsfound OUT INTEGER,
result_cursor1 OUT SYS_REFCURSOR
) ...


*PostgreSQL:*
*method 1*:
CREATE OR REPLACE FUNCTION sc_getapppermissionlist (
v_role_ids IN VARCHAR,
v_rowsfound OUT INTEGER,
result_cursor1 OUT REFCURSOR
) ...

but the issue with this approach is that the caller must run it inside a
BEGIN - END transaction block and use FETCH ALL IN ""
  - here we need to think of a common calling approach for the database

*method 2:*
CREATE OR REPLACE FUNCTION sc_getapppermissionlist (
v_role_ids IN VARCHAR,
v_rowsfound OUT INTEGER)
RETURNS TABLE/SETOF
...

This approach does not work.
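One common workaround is to drop the OUT parameter entirely and carry the row
count in the result set itself via a window aggregate. A sketch (the table and
column names are illustrative, not from the original schema):

```sql
-- RETURNS TABLE with the count as a window-aggregate column:
CREATE OR REPLACE FUNCTION sc_getapppermissionlist(v_role_ids varchar)
RETURNS TABLE (v_rowsfound bigint, permission_name text) AS $$
BEGIN
    RETURN QUERY
    SELECT count(*) OVER (),            -- total rows, repeated on each row
           p.permission_name
    FROM app_permissions p              -- hypothetical table
    WHERE p.role_id::text = ANY (string_to_array(v_role_ids, ','));
END
$$ LANGUAGE plpgsql;
```

The caller reads the count from any returned row, so no cursor or transaction
block is required.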


Thanks
Sridhar
OpenText







On Mon, Jun 6, 2016 at 5:57 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Mon, Jun 6, 2016 at 7:17 AM, Sridhar N Bamandlapally <
> sridhar@gmail.com> wrote:
>
>> Hi
>>
>> Is there any option in PGPLSQL which can RETURNS table or SETOF rows
>> along with an OUT parameter?
>>
>>
> ​No, there would be no point given the internals of how functions work.
>
> ​What is it you are trying to do?
>
> David J.
> ​
>
>


[GENERAL] OUT parameter and RETURN table/setof

2016-06-06 Thread Sridhar N Bamandlapally
Hi

Is there any option in PGPLSQL which can RETURNS table or SETOF rows along
with an OUT parameter?

please

Thanks
Sridhar
OpenText


Re: [GENERAL] UUID datatype

2016-05-30 Thread Sridhar N Bamandlapally
This I got; what I need is some implicit way. For example, could this be
written in a RULE ON SELECT?

Thanks
Sridhar
OpenText

On Mon, May 30, 2016 at 1:05 PM, Michael Paquier <michael.paqu...@gmail.com>
wrote:

> On Mon, May 30, 2016 at 4:25 PM, Sridhar N Bamandlapally
> <sridhar@gmail.com> wrote:
> > Hi
> >
> > Is there a way to implicitly SELECT a UUID datatype in uppercase ?
>
> You could always cast an UUID back to text and use that with upper(),
> though you are not explaining what you are tying to achieve:
> =# select upper(gen_random_uuid()::text);
>
>  upper
> --------------------------------------
>  057A3BC2-0E62-4D68-B01A-C44D20F91450
> (1 row)
> --
> Michael
>
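Rather than a rule, the usual way to get an "implicit" uppercase presentation
is a view over the table. A sketch; the table and column names here are
hypothetical:

```sql
-- Queries against docs_upper see the UUID uppercased, with no rule involved:
CREATE VIEW docs_upper AS
    SELECT upper(d.id::text) AS id, d.title
    FROM docs d;   -- docs(id uuid, title text) is an illustrative table
```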


[GENERAL] UUID datatype

2016-05-30 Thread Sridhar N Bamandlapally
Hi

Is there a way to implicitly SELECT a UUID datatype in uppercase?

Please

Thanks
Sridhar


Re: [GENERAL] Connections - Postgres 9.2

2016-05-17 Thread Sridhar N Bamandlapally
Hi

I control this way

If "state_change" is long in the past and "state" is idle, then I use the
function:
  *pg_terminate_backend ( integer )*  ==> returns TRUE if the kill was
successful, else FALSE

example:
# select pg_terminate_backend ( pid ) from pg_stat_activity where
state='idle' and state_change < (current_timestamp - interval '1 hour');

may be helpful

NOTE: we came to the decision to kill sessions idle for more than "1 hour"
after a lot of discussion with the application / implementation / stakeholder
teams

*removed history, as it threw an error due to mail length


Thanks
Sridhar
OpenText


Re: [GENERAL] NULL concatenation

2016-05-12 Thread Sridhar N Bamandlapally
Hi Adam

We need simple concatenation of all the variables (whose values may be NULL
or valid values, depending on the functional process);

coalesce is different functionality.

Thanks
Sridhar
OpenText


On Thu, May 12, 2016 at 4:56 PM, Adam Pearson <
adam.pear...@realisticgames.co.uk> wrote:

> Hello Sridhar,
>
>   Have you tried the 'coalesce' function to handle the nulls?
>
>
> Kind Regards,
>
> Adam Pearson
> --
> *From:* pgsql-general-ow...@postgresql.org <
> pgsql-general-ow...@postgresql.org> on behalf of Sridhar N Bamandlapally <
> sridhar@gmail.com>
> *Sent:* 12 May 2016 09:47
> *To:* PG-General Mailing List; PostgreSQL-hackers
> *Subject:* [GENERAL] NULL concatenation
>
> Hi
>
> During migration, I am facing an issue with NULL concatenation in plpgsql:
> concatenating NULL at any position into a text / varchar string sets the
> whole result to NULL.
>
>
> *In Oracle:*
>
> declare
> txt1 VARCHAR2(100) := 'ABCD';
> txt2 VARCHAR2(100) := NULL;
> txt3 VARCHAR2(100) := 'EFGH';
> txt VARCHAR2(100) := NULL;
> begin
>   txt:= txt1 || txt2 || txt3;
>   dbms_output.put_line (txt);
> end;
> /
>
> ABCDEFGH   *===> return value*
>
>
>
> *In Postgres*
>
> do $$
> declare
> txt1 text := 'ABCD';
> txt2 text := NULL;
> txt3 text := 'EFGH';
> txt text := NULL;
> begin
> txt:= txt1 || txt2 || txt3;
> raise notice '%', txt;
> end$$ language plpgsql;
>
> NOTICE: <NULL>   *===> return value*
>
>
> SQL-Server also behaves the same as Oracle here.
>
> Is there any alternative way to get the same behavior in PostgreSQL?
>
> Please
>
> Thanks
> Sridhar
> OpenText
>
>


Re: [GENERAL] [HACKERS] NULL concatenation

2016-05-12 Thread Sridhar N Bamandlapally
Thanks Pavel

Great !!

I was thinking || and CONCAT did the same thing

Thanks again

-
Sridhar
OpenText


On Thu, May 12, 2016 at 2:22 PM, Pavel Stehule <pavel.steh...@gmail.com>
wrote:

> Hi
>
> 2016-05-12 10:47 GMT+02:00 Sridhar N Bamandlapally <sridhar@gmail.com>
> :
>
>> Hi
>>
>> During migration, I am facing an issue with NULL concatenation in plpgsql:
>> concatenating NULL at any position into a text / varchar string sets the
>> whole result to NULL.
>>
>>
>> *In Oracle:*
>>
>> declare
>> txt1 VARCHAR2(100) := 'ABCD';
>> txt2 VARCHAR2(100) := NULL;
>> txt3 VARCHAR2(100) := 'EFGH';
>> txt VARCHAR2(100) := NULL;
>> begin
>>   txt:= txt1 || txt2 || txt3;
>>   dbms_output.put_line (txt);
>> end;
>> /
>>
>> ABCDEFGH   *===> return value*
>>
>>
>>
>> *In Postgres*
>>
>> do $$
>> declare
>> txt1 text := 'ABCD';
>> txt2 text := NULL;
>> txt3 text := 'EFGH';
>> txt text := NULL;
>> begin
>> txt:= txt1 || txt2 || txt3;
>> raise notice '%', txt;
>> end$$ language plpgsql;
>>
>> NOTICE: <NULL>   *===> return value*
>>
>>
>> SQL-Server also behaves the same as Oracle here.
>>
>> Is there any alternative way to get the same behavior in PostgreSQL?
>>
>
> use function concat
> http://www.postgresql.org/docs/9.5/static/functions-string.html
>
>  postgres=# select concat('AHOJ', NULL,'XXX');
>  concat
> ---------
>  AHOJXXX
> (1 row)
>
> Regards
>
> Pavel
>
>
>> Please
>>
>> Thanks
>> Sridhar
>> OpenText
>>
>>
>


[GENERAL] NULL concatenation

2016-05-12 Thread Sridhar N Bamandlapally
Hi

During migration, I am facing an issue with NULL concatenation in plpgsql:
concatenating NULL at any position into a text / varchar string sets the
whole result to NULL.


*In Oracle:*

declare
txt1 VARCHAR2(100) := 'ABCD';
txt2 VARCHAR2(100) := NULL;
txt3 VARCHAR2(100) := 'EFGH';
txt VARCHAR2(100) := NULL;
begin
  txt:= txt1 || txt2 || txt3;
  dbms_output.put_line (txt);
end;
/

ABCDEFGH   *===> return value*



*In Postgres*

do $$
declare
txt1 text := 'ABCD';
txt2 text := NULL;
txt3 text := 'EFGH';
txt text := NULL;
begin
txt:= txt1 || txt2 || txt3;
raise notice '%', txt;
end$$ language plpgsql;

NOTICE: <NULL>   *===> return value*


SQL-Server also behaves the same as Oracle here.

Is there any alternative way to get the same behavior in PostgreSQL?

Please

Thanks
Sridhar
OpenText


Re: [GENERAL] pg_largeobject

2016-03-30 Thread Sridhar N Bamandlapally
Is there any way we can change the segment file size?

I am trying to look into the possibility of the segment file size vs. the
bytea size limitation.

PostgreSQL installation

step 1:  ./configure --enable-largefile --with-segsize ( throwing error
"configure: error: Large file support is not enabled. Segment size cannot
be larger than 1GB" )

Thanks
Sridhar



On Tue, Mar 29, 2016 at 9:01 PM, Daniel Verite <dan...@manitou-mail.org>
wrote:

> Sridhar N Bamandlapally wrote:
>
> > due to size limitation BYTEA was not considered
>
> You could adopt for a custom table the same kind of structure that
> pg_largeobject has, that is an ordered series of BYTEA chunks.
>
> # \d pg_largeobject
> Table "pg_catalog.pg_largeobject"
>  Column |  Type   | Modifiers
> --------+---------+-----------
>  loid   | oid     | not null
>  pageno | integer | not null
>  data   | bytea   |
>
> Say you create a table looking  like this:
> (
>   object_id int
>   pageno integer
>   data bytea
> )
> with a unique index on (object_id,pageno),
> and octet_length(data) never exceeding a reasonable max size,
> such as 1MB, so if you want a 5GB object, that's just 5*1024 rows in
> that table, one row per pageno.
>
> It's really a good plan if your client code cooperates by streaming
> contents ordered by pageno instead of handling the blob as
> a monolith.
>
> About the chunk size, by comparison, the large object facility limits
> pg_largeobject.data to a quarter of a page, or 2048 bytes per row
> (=LOBLKSIZE, see comments around
> http://doxygen.postgresql.org/large__object_8h_source.html#l00072 )
>
> Having your own table has several advantages:
> - it contains much less rows for the same contents, if the  choosen chunk
> size is
>  much larger than 2048 bytes.
> - TOAST storage is enabled so that the main relation is way smaller.
> - it can be partitioned.
> - it can have triggers (good for custom replication)
>
> The drawback being that your application has to provide the equivalent
> code to the lo_* client-side and server-side functions that it needs.
> But that's a relatively easy work for a programmer, especially if the blobs
> happen to be immutable, as is often the case.
>
> Best regards,
> --
> Daniel Vérité
> PostgreSQL-powered mailer: http://www.manitou-mail.org
> Twitter: @DanielVerite
>
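The chunked layout Daniel describes can be written down directly. A sketch, with
illustrative names; the unique index is realized here as the primary key:

```sql
-- One row per chunk; reassemble in order, streaming by pageno.
CREATE TABLE blob_chunks (
    object_id integer NOT NULL,
    pageno    integer NOT NULL,
    data      bytea,                 -- keep each chunk <= ~1MB
    PRIMARY KEY (object_id, pageno)
);

-- Ordered retrieval of one object, chunk by chunk:
-- SELECT data FROM blob_chunks WHERE object_id = $1 ORDER BY pageno;
```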


Re: [GENERAL] pg_largeobject

2016-03-29 Thread Sridhar N Bamandlapally
We are doing an application/database migration to PostgreSQL on the cloud;
DR/replication is also planned.

At present I feel the need for configurable multi-table storage instead of
pg_largeobject only.

Thanks
Sridhar


On Tue, Mar 29, 2016 at 6:08 PM, Alvaro Aguayo Garcia-Rada <
aagu...@opensysperu.com> wrote:

> Some time ago I had to setup a replicated file system between multiple
> linux servers. I tried everything I could based on postgres, including
> large objects, but everything was significantly slower than a regular
> filesystem.
>
> My conclussion: postgres is not suitable for storing large files
> efficiently.
>
> Do you need that for replication, or just for file storage?
>
> Alvaro Aguayo
> Jefe de Operaciones
> Open Comb Systems E.I.R.L.
>
> Oficina: (+51-1) 3377813 | RPM: #034252 / (+51) 995540103  | RPC:
> (+51) 954183248
> Website: www.ocs.pe
>
> Sent from my Sony Xperia™ smartphone
>
>
>  Sridhar N Bamandlapally wrote 
>
>
All media files are stored in the database, with sizes varying from 1MB to 5GB.

Based on media file type and user group we store them in different tables, but
PostgreSQL stores all OIDs/large objects in a single table (pg_largeobject);
90% of the database size is in pg_largeobject.

Due to its size limitation, BYTEA was not considered.
>
> Thanks
> Sridhar
>
>
>
> On Tue, Mar 29, 2016 at 3:05 PM, John R Pierce <pie...@hogranch.com>
> wrote:
>
>> On 3/29/2016 2:13 AM, Sridhar N Bamandlapally wrote:
>>
>>> Hi
>>>
>>> pg_largeobject is creating performance issues as it grow due to single
>>> point storage(for all tables)
>>>
>>> is there any alternate apart from bytea ?
>>>
>>> like configuration large-object-table at table-column level and oid
>>> PK(primary key) stored at pg_largeobject
>>>
>>>
>> I would as soon use a NFS file store for larger files like images, audio,
>> videos, or whatever.   use SQL for the relational metadata.
>>
>> just sayin'
>>
>>
>>
>> --
>> john r pierce, recycling bits in santa cruz
>>
>>
>>
>>
>
>


Re: [GENERAL] pg_largeobject

2016-03-29 Thread Sridhar N Bamandlapally
All media files are stored in the database, with sizes varying from 1MB to 5GB.

Based on media file type and user group we store them in different tables, but
PostgreSQL stores all OIDs/large objects in a single table (pg_largeobject);
90% of the database size is in pg_largeobject.

Due to its size limitation, BYTEA was not considered.

Thanks
Sridhar



On Tue, Mar 29, 2016 at 3:05 PM, John R Pierce <pie...@hogranch.com> wrote:

> On 3/29/2016 2:13 AM, Sridhar N Bamandlapally wrote:
>
>> Hi
>>
>> pg_largeobject is creating performance issues as it grow due to single
>> point storage(for all tables)
>>
>> is there any alternate apart from bytea ?
>>
>> like configuration large-object-table at table-column level and oid
>> PK(primary key) stored at pg_largeobject
>>
>>
> I would as soon use a NFS file store for larger files like images, audio,
> videos, or whatever.   use SQL for the relational metadata.
>
> just sayin'
>
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
>
>


[GENERAL] pg_largeobject

2016-03-29 Thread Sridhar N Bamandlapally
Hi

pg_largeobject is creating performance issues as it grows, due to
single-point storage (for all tables).

Is there any alternative apart from bytea?

For example, configuring a large-object table at the table-column level, with
the OID PK (primary key) stored in pg_largeobject.


Thanks
Sridhar


[GENERAL] Nested funtion

2016-03-27 Thread Sridhar N Bamandlapally
Hi

Is there any way to create a nested function?

The Oracle-to-PostgreSQL migration requires referencing the outer function's
variables inside the nested function, without passing them as nested-function
parameters.

Oracle sample:
---
create or replace function f1(n number) return number
is
vs number:=1;
function nf1(m number) return number is
begin
return vs + m + n;
end;
begin
return nf1(2);
end;
/

run:

SQL> select f1(9) from dual;

     F1(9)
----------
        12
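PL/pgSQL has no nested functions, so the straightforward port passes the outer
variables explicitly. A sketch mirroring the Oracle sample (this is one option,
not the only one):

```sql
-- The "inner" function takes what it would have captured as parameters:
CREATE OR REPLACE FUNCTION nf1(vs numeric, n numeric, m numeric)
RETURNS numeric AS $$
BEGIN
    RETURN vs + m + n;
END
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION f1(n numeric)
RETURNS numeric AS $$
DECLARE
    vs numeric := 1;
BEGIN
    RETURN nf1(vs, n, 2);
END
$$ LANGUAGE plpgsql;

-- SELECT f1(9);  -- 12, matching the Oracle run above
```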



Thanks
Sridhar BN


[GENERAL] Rules on View

2016-03-01 Thread Sridhar N Bamandlapally
Hi

Is there a way to avoid creating rule under creation of view ?

please let me know

Thanks
Sridhar


Re: [GENERAL] [JDBC] JDBC behaviour

2016-02-18 Thread Sridhar N Bamandlapally
Ok, let me put it this way:

in JDBC we have *setAutoCommit( false )*, and all DMLs are independent
transactions;

when any transaction fails, the session does not allow the next transactions.

In Java, when we do setAutoCommit( false ) it behaves as if all transactions
were in one BEGIN-END block; this is not the expected behavior.

I guess this is a bug.





On Thu, Feb 18, 2016 at 2:00 PM, Vladimir Sitnikov <
sitnikov.vladi...@gmail.com> wrote:

> > but in this case all these transactions are independent with autocommit
> off,
>
> At database level, there is no "autocommit=off".
> There's just "begin-end".
>
> It is database who forbids .commit, not the JDBC driver.
> Vladimir
>


Re: [GENERAL] [JDBC] JDBC behaviour

2016-02-18 Thread Sridhar N Bamandlapally
If we want all statements inside one "begin-end" transaction, then that is
fine,

but in this case all these statements should be independent transactions
with autocommit off,

and it should be the user's choice to continue with commit or rollback.

Thanks
Sridhar



On Thu, Feb 18, 2016 at 1:43 PM, Vladimir Sitnikov <
sitnikov.vladi...@gmail.com> wrote:

> >Is it a bug or do we have other any alternate way to handle this ?
>
> PostgreSQL is strongly against "partial commits to the database". If
> you think a bit about it, it is not that bad.
> You got an error, what is the business case to commit the partial
> transaction then?
>
> Exceptions should not be used for a "control flow", should they?
>
>
> If you want to shoot yourself in a foot for fun and profit, you can
> try https://github.com/pgjdbc/pgjdbc/pull/477.
> What it does, it creates savepoints before each statement, then it
> rollbacks to that savepoint in case of failure.
>
> Vladimir
>
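
At the SQL level, the savepoint-per-statement trick that pull request
implements corresponds to something like this (a sketch, assuming a table t
with a single integer column):

```sql
-- Sketch: savepoint-per-statement recovery inside one transaction.
BEGIN;
INSERT INTO t VALUES (1);
SAVEPOINT sp;
INSERT INTO t VALUES ('bad');   -- fails
ROLLBACK TO SAVEPOINT sp;       -- transaction is usable again
INSERT INTO t VALUES (2);
COMMIT;                         -- rows 1 and 2 are committed
```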


Re: [GENERAL] JDBC behaviour

2016-02-18 Thread Sridhar N Bamandlapally
Hi

We are facing an issue with PostgreSQL JDBC behaviour.

In Java, under autocommit-false mode:

1. If any statement in a transaction fails, then every following statement
throws an exception saying "current transaction is aborted, commands ignored
until end of transaction block".

2. Even if the exception is suppressed with try-catch, the next statement
still throws the same exception.

3. The same does not happen with Oracle or SQL Server; there it works
without any exception handling.

Is it a bug, or do we have any alternate way to handle this?
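
At the SQL level, points 1 and 2 come down to this (a sketch, assuming a
table t with a single integer column):

```sql
-- Once any statement fails, the whole transaction is aborted.
BEGIN;
INSERT INTO t VALUES (1);       -- ok
INSERT INTO t VALUES ('bad');   -- ERROR
INSERT INTO t VALUES (2);       -- ERROR: current transaction is aborted...
ROLLBACK;                       -- required before new statements succeed
```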

Please I need some help in this

Thanks
Sridhar


[GENERAL] JDBC behaviour

2016-02-17 Thread Sridhar N Bamandlapally
Hi

We are facing an issue with PostgreSQL JDBC behaviour.

In Java, under autocommit-false mode,

1.in between if any transaction


[GENERAL] How to drop stats on table

2015-11-20 Thread Sridhar N Bamandlapally
Hi

Is there any feature available in Postgres to drop the stats on a table?

thanks
Sridhar


Re: [GENERAL] [ADMIN] How to drop stats on table

2015-11-20 Thread Sridhar N Bamandlapally
The actual issue is, when:

1. a temp table is created with rows
2. stats/analyze runs on the table (tmp1)
3. the table is dropped (tmp1)

then bloat is created in the stats-related catalog tables.

In this scenario, thousands of temp tables are created per day, the bloat
keeps increasing, and the stats-related tables grow to tens of GB.

However, we schedule vacuum on the catalog tables to control their size.

The worry is that the catalog tables are also subject to MVCC.

I think that when a table is dropped, there should be an option to remove or
reuse the related bloat space in the catalog tables.

-Sridhar




On Fri, Nov 20, 2015 at 5:54 PM, Albe Laurenz <laurenz.a...@wien.gv.at>
wrote:

> Sridhar N Bamandlapally wrote:
> > is there any feature available in postgres to drop stats on table?
>
> What about
>
> DELETE FROM pg_catalog.pg_statistic WHERE starelid = 
>
> Yours,
> Laurenz Albe
>
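
For what it's worth, DROP TABLE already deletes the table's pg_statistic
rows; what accumulates afterwards is dead-tuple bloat, which vacuum can
reclaim. A sketch (note that VACUUM FULL rewrites the catalog under an
exclusive lock):

```sql
-- Reclaim dead-tuple space left behind by dropped temp tables' statistics.
VACUUM (VERBOSE) pg_catalog.pg_statistic;

-- To return the space to the OS (exclusive lock, use with care):
VACUUM FULL pg_catalog.pg_statistic;
```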


Re: [GENERAL] avoid lock conflict between SELECT and TRUNCATE

2015-09-11 Thread Sridhar N Bamandlapally
As there is no option for incremental update/insert on users, and renaming
will cause application query errors, I suggest:

1) creating a temporary table (say temp_users) from table users, with the
required data/column list and an index on column user_id
...this will be faster, as there will be no joins with other tables

2) also creating an index on table auths_with_trans, column user_id

3) replacing users with temp_users in the BEGIN block

This may reduce the impact/dependency on the regular crontab.


Thanks
Sridhar BN
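
A minimal sketch of those steps (table and column names taken from the
report query in the thread; an illustration only, not tested):

```sql
-- Step 1: snapshot users into temp_users with just the needed columns.
BEGIN;
DROP TABLE IF EXISTS temp_users;
CREATE TABLE temp_users AS
    SELECT user_id, user_created FROM users;
CREATE INDEX ON temp_users (user_id);
COMMIT;

-- Step 2: index the join column on the other side.
CREATE INDEX auths_with_trans_user_id_idx
    ON auths_with_trans (user_id);

-- Step 3: point the report's query at temp_users instead of users,
-- so TRUNCATE users no longer waits on the long-running report.
```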




On Fri, Sep 11, 2015 at 10:52 AM, Venkata Balaji N 
wrote:

>
> On Thu, Sep 10, 2015 at 3:54 AM, Florin Andrei 
> wrote:
>
>> Once in a while, I have a report running a complex query such as this:
>>
>> BEGIN;declare "SQL_CUR04919850" cursor with hold for SELECT
>> "auths_with_trans"."user_id" AS "user_id (auths_with_trans)",
>> MAX("auths_with_trans"."user_created") AS
>> "TEMP(attr:user_created:ok)(2099950671)(0)",
>> MIN("auths_with_trans"."user_created") AS
>> "TEMP(attr:user_created:ok)(99676510)(0)",
>> MIN("auths_with_trans"."trans_time") AS
>> "usr:Calculation_6930907163324031:ok",
>> MIN("auths_with_trans"."auth_created") AS
>> "usr:Calculation_9410907163052141:ok"
>> FROM "public"."users" "users"
>> LEFT JOIN "public"."auths_with_trans" "auths_with_trans" ON
>> ("users"."user_id" = "auths_with_trans"."user_id")
>> GROUP BY 1;fetch 100 in "SQL_CUR04919850"
>>
>> But it takes a long time to complete, and meanwhile a cron job tries to
>> rebuild the users table by first doing "TRUNCATE TABLE users" and then
>> repopulating it with data. Obviously, TRUNCATE is blocked until the long
>> SELECT finishes.
>>
>> I'm looking for ways to avoid the conflict. One way would be to do
>> incremental updates to the users table - that's not an option yet.
>>
>> What if I rename the users table to users_MMDD? Would that still be
>> blocked by SELECT? If it's not blocked, then I could rename users out of
>> the way, and then recreate it with fresh data as plain 'users'. Then I'd
>> have a cron job dropping old users tables when they get too old.
>>
>>
>
> Yes. Renaming the table would interrupt the ongoing SELECT. The best
> approach would be (if possible) to isolate the SELECT itself. You can
> consider renaming the "users" table before the SELECT starts (say renamed
> to users_orig) and then execute the SELECT on user_orig table and drop it
> (if the data is no more needed) after the SELECT finishes. Instead of
> TRUNCATE, you can consider re-creating the "users" table and populating the
> data. If you take this approach, you will need to be careful regarding
> privileges/grants and dependencies on the table.
>
> Or the second approach would be --
>
> Create a table called users_orig from the "users" table and execute SELECT
> on user_orig table and let the TRUNCATE/data-repopulation operation run on
> "users" table. This will be a problem if the data is huge. It might take up
> your hardware resources.
>
> Third and simple approach would be to -
>
> Execute SELECT and TRUNCATE at different times.
>
> All of the above approaches are without considering data-size and other
> critical aspects of environment, which you need to worry about.
>
> Regards,
> Venkata Balaji N
>
> Fujitsu Australia
>
>
>


[GENERAL] PostgreSQL customer list

2015-08-18 Thread Sridhar N Bamandlapally
Hi

I need a list of the top ~10 customers in the financial or healthcare domain
(irrespective of open-source or proprietary tooling).

We need to showcase it to our customer for building an analytical database.

Please do share, it will be really helpful

Thanks
Sridhar BN