Re: [GENERAL] How to stop autovacuum for daily partition old tables

2016-01-20 Thread Adrian Klaver

On 01/20/2016 04:54 PM, AI Rumman wrote:

But, will it not create transaction wraparound for those tables?


See below for complete details:

http://www.postgresql.org/docs/9.1/interactive/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND

but short version from above:


23.1.4. Preventing Transaction ID Wraparound Failures

1)

"PostgreSQL's MVCC transaction semantics depend on being able to compare 
transaction ID (XID) numbers: a row version with an insertion XID 
greater than the current transaction's XID is "in the future" and should 
not be visible to the current transaction. But since transaction IDs 
have limited size (32 bits) a cluster that runs for a long time (more 
than 4 billion transactions) would suffer transaction ID wraparound: the 
XID counter wraps around to zero, and all of a sudden transactions that 
were in the past appear to be in the future — which means their output 
become invisible. In short, catastrophic data loss. (Actually the data 
is still there, but that's cold comfort if you cannot get at it.) To 
avoid this, it is necessary to vacuum every table in every database at 
least once every two billion transactions."




2)

"The maximum time that a table can go unvacuumed is two billion 
transactions minus the vacuum_freeze_min_age value at the time VACUUM 
last scanned the whole table. If it were to go unvacuumed for longer 
than that, data loss could result. To ensure that this does not happen, 
autovacuum is invoked on any table that might contain XIDs older than 
the age specified by the configuration parameter 
autovacuum_freeze_max_age. (This will happen even if autovacuum is 
disabled.)"
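
As a concrete illustration (a sketch; object names will differ per database), the XID age of each table, and hence how close it is to the autovacuum_freeze_max_age threshold, can be checked with:

```sql
-- List the tables with the oldest unfrozen XIDs first.
-- age(relfrozenxid) approaching autovacuum_freeze_max_age
-- (200 million by default) is what triggers the forced
-- anti-wraparound autovacuum, even on "disabled" tables.
SELECT c.oid::regclass AS table_name,
       age(c.relfrozenxid) AS xid_age
FROM   pg_class c
WHERE  c.relkind = 'r'
ORDER  BY age(c.relfrozenxid) DESC
LIMIT  20;
```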




Thanks.

On Wed, Jan 20, 2016 at 4:44 PM, Melvin Davidson wrote:


ALTER TABLE your_schema.your_table SET (autovacuum_enabled = false,
toast.autovacuum_enabled = false);
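
If you go this route, one way to confirm the setting stuck is to look at the stored reloptions (a sketch; schema and table names are placeholders):

```sql
SELECT n.nspname, c.relname, c.reloptions
FROM   pg_class c
JOIN   pg_namespace n ON n.oid = c.relnamespace
WHERE  n.nspname = 'your_schema'
  AND  c.relname = 'your_table';
-- reloptions should include autovacuum_enabled=false
```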

On Wed, Jan 20, 2016 at 6:22 PM, AI Rumman wrote:

Hi,

I have a table with daily partition schema on Postgresql 9.1
where we are keeping 2 years of data.
Often I see that the autovacuum process is busy with old
tables where there is no change. How can I stop it?
Please advise.

Thanks.




--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] adding a bdr node using bcv backup

2016-01-20 Thread Craig Ringer
On 21 January 2016 at 08:29, (Daniel Stolf)  wrote:

> Hello there...
>
> I'm new to postgres and I'm trying out BDR replication...
>
> I know that when I issue the bdr.bdr_group_join command, it will copy the
> entire database from the host I specify on parameter 'join_using_dsn' and
> this may take a while depending on the network and the size of the
> database...
>
> What I wanted to know is if I can leverage a bcv backup... Is it possible?
>

BCV seems to be an EMC backup system. It looks like a snapshot. If the
snapshot taken is consistent and atomic, and if it includes both pg_xlog
and the rest of the datadir and all tablespaces in the SAME snapshot taken
at the SAME instant, then you can treat it much like a pg_basebackup. In
that case you can use bdr_init_copy to bring it up as a new BDR node. You
must either stop all writes to all other nodes or pre-create the
replication slots *before* taking the snapshot though, otherwise the new
node won't be able to catch up to writes done after the snapshot and before
it was started.

If this sounds too complex then stick to the documented methods that work.
Working from separately taken snapshots is hard to get right and could lead
to subtle data problems if you get it wrong.
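
For the pre-created-slots route, a sanity check before taking the snapshot is to confirm the slots actually exist on the node being copied (a sketch; this only verifies that slots are present, not that they were created with the parameters BDR expects):

```sql
SELECT slot_name, plugin, slot_type, active, restart_lsn
FROM   pg_replication_slots;
```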

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


[GENERAL] adding a bdr node using bcv backup

2016-01-20 Thread (Daniel Stolf)
Hello there...

I'm new to postgres and I'm trying out BDR replication...

I know that when I issue the bdr.bdr_group_join command, it will copy the
entire database from the host I specify on parameter 'join_using_dsn' and
this may take a while depending on the network and the size of the
database...

What I wanted to know is if I can leverage a bcv backup... Is it possible?

I have 2 nodes in my test environment and wanted to add a 3rd using this
approach... But when I restored the backup, the bdr information from node1
went along, so when I use 'bdr.bdr_group_join', it tells me:
ERROR:  This node is already a member of a BDR group
HINT:  Connect to the node you wish to add and run bdr_group_join from it
instead


Re: [GENERAL] long transfer time for binary data

2016-01-20 Thread Andy Colson

On 01/20/2016 03:29 PM, Johannes wrote:

I noticed transferring a large object or bytea data between client and
server takes a long time.
For example: An image with a real size of 11 MB could be read on server
side (explain analyze) in 81ms. Fine.

But on client side the result was completed after 6.7 seconds without
ssl compression and 4.5 seconds with ssl compression (both via 100MBit
ethernet).

SSL compression no longer seems to be a good idea, since it has
become a security risk. It's still possible with pgAdmin, but AFAIK not
with Java/JDBC.

Are there any other solutions available to display my images in my
client application more quickly? Or are there planned improvements to
postgresql (transferring the real binary data)?

Best regards
Johannes



Yep, that's slow.  The gain from ssl compression is very odd if the image is jpeg-ish and
already compressed.  If it's a bitmap or uncompressed tif then it's not so
surprising.

A few tests you could try:

1) copy the same 11 meg file from server to client via regular file copy and
time it.  At 100 Mbit/s it should take about a second.  If it takes 6 seconds
you have network problems, not PG problems.

2) try it via the psql command line (or at least something other than Java), to see
if it's Java that's the problem.

3) watch wireshark/tcpdump, maybe you'll see something glaring that'll point 
you in the right direction.
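
For test 2, psql's \timing makes the comparison easy (a sketch; table and column names are placeholders):

```sql
\timing on
-- Confirm the size without transferring the payload...
SELECT octet_length(image_data) FROM images WHERE id = 1;
-- ...then time the full transfer over the wire.
SELECT image_data FROM images WHERE id = 1;
```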

-Andy

PS: I've never heard that ssl compression was a security risk, got links/proof?




Re: [GENERAL] How to stop autovacuum for daily partition old tables

2016-01-20 Thread Scott Mead


> On Jan 20, 2016, at 19:54, AI Rumman  wrote:
> 
> But, will it not create transaction wraparound for those tables?
> 
> Thanks.
> 
>> On Wed, Jan 20, 2016 at 4:44 PM, Melvin Davidson  
>> wrote:
>> 
>> ALTER TABLE your_schema.your_table SET (autovacuum_enabled = false, 
>> toast.autovacuum_enabled = false);
>> 
>>> On Wed, Jan 20, 2016 at 6:22 PM, AI Rumman  wrote:
>>> Hi,
>>> 
>>> I have a table with daily partition schema on Postgresql 9.1 where we are 
>>> keeping 2 years of data.
>>> Often I see that the autovacuum process is busy with old tables where 
>>> there is no change. How can I stop it?
>>> Please advise.
>>> 
I typically run a vacuum freeze on old partitions that don't get any changes, as 
part of a maintenance script.  If the tables actually get no changes, autovac 
should ignore them unless wraparound becomes an issue at autovacuum_freeze_max_age... 
which it shouldn't, if you vacuum freeze and there are no changes.
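
A minimal sketch of that kind of maintenance pass (the partition name is a placeholder; note VACUUM cannot run inside a transaction block, so a script would issue one statement per old partition):

```sql
-- Freeze all rows in a closed daily partition so autovacuum
-- never needs to revisit it for anti-wraparound work.
VACUUM (FREEZE, ANALYZE) my_schema.events_20140119;
```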


>>> Thanks.
>> 
>> 
>> 
>> -- 
>> Melvin Davidson
>> I reserve the right to fantasize.  Whether or not you 
>> wish to share my fantasy is entirely up to you. 
> 


[GENERAL] Variable not found

2016-01-20 Thread Sachin Srivastava
Dear Folks,

I have a question about global variables in Oracle pl/sql package. Where
are these variables when package is converted to schema from Oracle to
Postgres through Ora2PG Tool?


For example, package 

Re: [GENERAL] long transfer time for binary data

2016-01-20 Thread George Neuner
On Wed, 20 Jan 2016 22:29:07 +0100, Johannes  wrote:

>I noticed transferring a large object or bytea data between client and
>server takes a long time.
>For example: An image with a real size of 11 MB could be read on server
>side (explain analyze) in 81ms. Fine.
>
>But on client side the result was completed after 6.7 seconds without
>ssl compression and 4.5 seconds with ssl compression (both via 100MBit
>ethernet).

I think at ~4 seconds you're actually running pretty close to the
limit of what is possible.

Remember that, even assuming the execution plan is accurate and also
is representative of an average request, your 81ms image fetch may be
arbitrarily delayed due to server load.

Even a quiet network has overhead: layers of packet headers, TCP
checksums (CRC) and ack packets, etc. ... it's quite hard to sustain
more than 95% of the theoretical bandwidth even on a full duplex
private subnet.  So figure 11MB of data will take ~1.2 seconds under
_optimal_ conditions.  Any competing traffic will just slow it down.

Also note that if the image data was stored already compressed,
additionally trying to use connection level compression may expand the
data and increase the transmission time, as well as adding processing
overhead at both ends.

And then the client has to convert the image from the storage format
into a display compatible bitmap and get it onto the screen.


>Are there any other solutions available to display my images in my
>client application more quickly? Or are there planned improvements to
>postgresql (transferring the real binary data)?

You don't say what is the client platform/software or what format are
the images.  11MB is (equivalent to) 1500+ pixels square depending on
pixel/color depth.  That's a relatively large image - even from a
local file, rendering that would take a couple of seconds.  Add a
couple more seconds for request turn-around and there is your time
gone.

BMP and GIF are the formats that are quickest to render.
If your stored images are in a different format, it might be worth
converting them to one of these.

GIF and _some_ BMP formats support direct compression of the pixel
data.  If you find you must store the pixel data uncompressed, you can
always gzip the resulting image file and store that.

Then don't use connection level compression.  With images stored
already compressed the transmitted size is minimized, and you will
only ever decompress (on the client) data in the critical path to the
display.


Hope this helps,
George





Re: [GENERAL] How to stop autovacuum for daily partition old tables

2016-01-20 Thread AI Rumman
But, will it not create transaction wraparound for those tables?

Thanks.

On Wed, Jan 20, 2016 at 4:44 PM, Melvin Davidson 
wrote:

>
> ALTER TABLE your_schema.your_table SET (autovacuum_enabled = false,
> toast.autovacuum_enabled = false);
>
> On Wed, Jan 20, 2016 at 6:22 PM, AI Rumman  wrote:
>
>> Hi,
>>
>> I have a table with daily partition schema on Postgresql 9.1 where we are
>> keeping 2 years of data.
>> Often I see that the autovacuum process is busy with old tables where
>> there is no change. How can I stop it?
>> Please advise.
>>
>> Thanks.
>>
>>
>
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>


Re: [GENERAL] How to stop autovacuum for daily partition old tables

2016-01-20 Thread Melvin Davidson
You can only get transaction wraparound if you actually do transactions
(insert/update/delete) on a table.
Since YOU claim you are not doing that, you should not get
transaction wraparound for that table.


On Wed, Jan 20, 2016 at 7:54 PM, AI Rumman  wrote:

> But, will it not create transaction wraparound for those tables?
>
> Thanks.
>
> On Wed, Jan 20, 2016 at 4:44 PM, Melvin Davidson 
> wrote:
>
>>
>> ALTER TABLE your_schema.your_table SET (autovacuum_enabled = false,
>> toast.autovacuum_enabled = false);
>>
>> On Wed, Jan 20, 2016 at 6:22 PM, AI Rumman  wrote:
>>
>>> Hi,
>>>
>>> I have a table with daily partition schema on Postgresql 9.1 where we
>>> are keeping 2 years of data.
>>> Often I see that the autovacuum process is busy with old tables where
>>> there is no change. How can I stop it?
>>> Please advise.
>>>
>>> Thanks.
>>>
>>>
>>
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Syntax error for Function

2016-01-20 Thread Adrian Klaver

On 01/20/2016 06:32 AM, Sachin Srivastava wrote:

Dear Thom,

Please find the complete code as below and suggest now.


I would suggest spending some time here:

http://www.postgresql.org/docs/9.4/interactive/plpgsql.html

in particular:

http://www.postgresql.org/docs/9.4/interactive/plpgsql-control-structures.html#PLPGSQL-CONTROL-STRUCTURES-LOOPS

and here:

http://www.postgresql.org/docs/9.4/interactive/plpgsql-porting.html

Comments in-line



--

-- Function: gen_budget_for_next_year(bigint, bigint, bigint)
-- DROP FUNCTION gen_budget_for_next_year(bigint, bigint, bigint);
CREATE OR REPLACE FUNCTION gen_budget_for_next_year(
 subid bigint,
 compid bigint,
 formonth bigint)
   RETURNS void AS
$BODY$
DECLARE
--Version:
--2015.01 PM T15422-10- Generate budget period for coming years. 05/01/2015
cur1 CURSOR FOR SELECT distinct(language_id) from
"languaget...@repos.birchstreet.net


Have you tried the above? I know quoting it got you past the syntax 
error, but I am pretty sure it is not going to do what it did in Oracle.



";
sFound bigint := 0;
recCount bigint :=0;
period varchar(100);
firstDate varchar(100);
lastDate varchar(100);
curMonth varchar(100);
langCursor RECORD; --cur1%rowtype;

BEGIN
   loop
   open cur1;
   IF NOT FOUND THEN EXIT; END IF; -- apply on recCount >= forMonth;
 select to_char(TO_DATE(LOCALTIMESTAMP) + recCount*'1
month'::interval,'MM') into period  ;
 select to_date(period||'01','MMDD') into firstDate  ;
 select TO_DATE(LOCALTIMESTAMP) + recCount*'1 month'::interval
into lastDate  ;
 select to_char(TO_DATE(LOCALTIMESTAMP) + recCount*'1
month'::interval, ' MM MONTH') into curMonth ;
 recCount :=recCount+1;
   loop
   fetch cur1 into langCursor;
   exit when cur1


From loop link above, this needs to be

exit when cur1;


 select Count(0) into sFound  from budget_period t where
t.subscriber_id =subID
 and t.period_number = period and
t.language_id=langCursor.Language_Id;
 if(sFound = 0)then
 insert into budget_period (subscriber_id, company_id,
period_number, period_name,
 period_length_code, first_day, last_day,creation_date,
creation_user, update_date, update_user, language_id)
 values(subID, compID, period,  curMonth,  'MONTH',
 firstDate, lastDate, LOCALTIMESTAMP,
'Admin',LOCALTIMESTAMP, 'Admin', langCursor.Language_Id);
 end if;
   end loop;
   close cur1;
   end loop;

commit;
END;
$BODY$
   LANGUAGE plpgsql VOLATILE SECURITY DEFINER
   COST 100;
ALTER FUNCTION gen_budget_for_next_year(bigint, bigint, bigint)
   OWNER TO postgres;
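
As an aside, the explicit OPEN/FETCH/EXIT pattern above is normally written in PL/pgSQL as a FOR loop, which opens, fetches from, and closes the implicit cursor itself; a minimal sketch (the table name stands in for whatever the Oracle db-link source becomes in Postgres):

```sql
DO $$
DECLARE
    lang RECORD;
BEGIN
    -- FOR ... IN query handles cursor open/fetch/close and
    -- terminates automatically when no rows remain.
    FOR lang IN SELECT DISTINCT language_id FROM language_types LOOP
        RAISE NOTICE 'processing language_id = %', lang.language_id;
    END LOOP;
END;
$$;
```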



On Wed, Jan 20, 2016 at 6:05 PM, Thom Brown wrote:

On 20 January 2016 at 12:15, Sachin Srivastava wrote:
 > I am unable to find out the syntax error in below code, please
suggest?
 >
 >
 >
 > ERROR:  syntax error at or near "select"
 > LINE 44: select Count(0) into sFound  from
budget_period ...
 >  ^
 > ** Error **
 > ERROR: syntax error at or near "select"
 > SQL state: 42601
 > Character: 1190
 >
 > Code as below:
 > -
 >
 > select Count(0) into sFound  from budget_period t where
t.subscriber_id
 > =subID
 > and t.period_number = period and
 > t.language_id=langCursor.Language_Id;
 > if(sFound = 0)then
 > insert into budget_period (subscriber_id, company_id,
 > period_number, period_name,
 > period_length_code, first_day,
last_day,creation_date,
 > creation_user, update_date, update_user, language_id)
 > values(subID, compID, period,  curMonth,  'MONTH',
 > firstDate, lastDate, LOCALTIMESTAMP,
'Admin',LOCALTIMESTAMP,
 > 'Admin', langCursor.Language_Id);
 > end if;
 >
 > 

Well, it says that the problem occurs on line 44, so what's on the
previous lines it's receiving?  Are you sending an unterminated query
prior to that?

Thom





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Postgres and timezones

2016-01-20 Thread Steve Crawford
Right, I was just mentioning the existence of that built-in data in case it
was of use to the OP. But I should have also mentioned some caveats in case
it is important to his use-case:

1. Full names are not as standardized as one might like so there are
multiple possible full names for a time zone, i.e.
"America/Los_Angeles", "posix/America/Los_Angeles",
"posix/US/Pacific", "PST8PDT", ...

2. Unlike full names, abbreviations do *not* distinctly identify a single
time zone. CST is the short name for US Central Standard Time, Cuba, ROC
and PRC among others.

3. pg_timezone_names is a *view* and the results for abbreviation and
offset change depending on time of year. Right now it's winter on the US
West Coast so the abbreviation for "posix/US/Pacific" is PST and I will get
results searching for abbreviations matching "PST" but none for "PDT". Come
spring, that will change.
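
Both caveats show up directly in the catalog views (a sketch; the exact rows returned depend on the server's tzdata and on the date the query runs):

```sql
-- Caveat 2: one abbreviation maps to many zones.
SELECT name, utc_offset, is_dst
FROM   pg_timezone_names
WHERE  abbrev = 'CST';

-- Caveat 3: the abbreviation reported for a zone follows the season.
SELECT name, abbrev, is_dst
FROM   pg_timezone_names
WHERE  name = 'America/Los_Angeles';
```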

Cheers,
Steve


On Wed, Jan 20, 2016 at 7:46 AM, Pavel Stehule 
wrote:

>
>
> 2016-01-20 16:38 GMT+01:00 Steve Crawford 
> :
>
>> Is this of any use?
>>
>> select * from pg_timezone_names where name = 'Europe/Lisbon';
>>      name      | abbrev | utc_offset | is_dst
>> ---------------+--------+------------+--------
>>  Europe/Lisbon | WET    | 00:00:00   | f
>>
>>
> This is a list of known timezones. So if you are searching for an "abbrev" then
> you can find it there.
>
>
> Pavel
>
>
>
>> -Steve
>>
>> On Wed, Jan 20, 2016 at 7:24 AM, Steve Rogerson <
>> steve...@yewtc.demon.co.uk> wrote:
>>
>>> On 20/01/16 13:27, Pavel Stehule wrote:
>>> >
>>> >
>>> >
>>> > Postgres doesn't store the original TZ. It recalculates to the local TZ.
>>> > If you need the original TZ, you have to store it separately.
>>> >
>>>
>>> I know and that's what I'm trying to deal with. Given I know the origin
>>> TZ  -
>>> as in Europe/Lisbon I'm trying to determine the short name so I can
>>> store it.
>>>
>>> I guess I'll have to use something other than pg to do it.
>>>
>>> Steve
>>>
>>>
>>>
>>>
>>>
>>
>>
>


Re: [GENERAL] Postgres and timezones

2016-01-20 Thread Steve Rogerson
On 20/01/16 15:38, Steve Crawford wrote:
> Is this of any use?
> 
> select * from pg_timezone_names where name = 'Europe/Lisbon';
>      name      | abbrev | utc_offset | is_dst
> ---------------+--------+------------+--------
>  Europe/Lisbon | WET    | 00:00:00   | f
> 

A bit - but what's missing is knowing whether daylight saving is in effect. It
could be WET or WEST, so we need a datetime and a TZ to work it out.
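
One possible way to work out the seasonal abbreviation for a known zone and instant is to render the timestamp with to_char under that zone, since to_char's TZ pattern reports the abbreviation for the session TimeZone (a sketch, not verified against older releases):

```sql
BEGIN;
SET LOCAL timezone = 'Europe/Lisbon';
SELECT to_char(timestamptz '2016-07-01 12:00+00', 'TZ');  -- expect WEST
SELECT to_char(timestamptz '2016-01-20 12:00+00', 'TZ');  -- expect WET
COMMIT;
```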


Steve









Re: [GENERAL] CoC [Final]

2016-01-20 Thread Simon Riggs
On 18 January 2016 at 18:02, Joshua D. Drake  wrote:


> O.k. so I let every thing sit with V7 for several days and we have
> received no further feedback. I believe we have reached a point where we
> can reasonably consider this Final or at least Final Draft.
>
> This final draft incorporates all reasonable feedback I have received as
> well as rewriting it in a more conversational tone from Kevin Grittner's
> efforts.
>
> == PostgreSQL Community Code of Conduct (CoC) ==
>
> This document provides community guidelines for a safe, respectful,
> productive, and collaborative place for any person who is willing to
> contribute to the PostgreSQL community. It applies to all "collaborative
> space", which is defined as community communications channels (such as
> mailing lists, IRC, submitted patches, commit comments, etc.).
>
> * We are tolerant of people’s right to have opposing views.
>
> * Participants must ensure that their language and actions are free
> of personal attacks and disparaging personal remarks.
>
> * When interpreting the words and actions of others, participants
> should always assume good intentions.
>
> * Participants who disrupt the collaborative space, or participate in a
> pattern of behaviour which could be considered harassment will not be
> tolerated.
>

I think this is well intentioned. All new laws should be subject to
scrutiny as to how they will be applied and who will apply them.

There are difficulties here and I'm of the opinion it will have the
opposite effect to its intention.

Person1: "I'd like you to stop doing that, it has bad effects"

(Let's assume that something bad has actually happened, enacted by Person 2)
Person2: "But everything I do is for the common good." - now anything that
is said further violates point 3, straying near point 2.

Any attempt by Person1 to carry on the discussion until a reasonable
outcome is achieved also violates point 4.

So even though Person2 has done something bad, Person1 is unable to discuss
this without being sanctioned.

My observation is this isn't just a set of rules for behaviour, it's a set
of rules that controls people's ability to object, which is dangerous and
would not be in the longer-term interests of the community.

I suggest we remove point 3 entirely. Point 2 is sufficient to limit what
is said.

Who will decide how this code is enacted? Rules imply rulers, so what is
the constitution of the governing body?

-- 
Simon Riggs   http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [GENERAL] BDR with postgres 9.5

2016-01-20 Thread Vik Fearing
On 01/20/2016 11:41 AM, Nikhil wrote:
> Hello All,
> 
> 
> What is the timeline for BDR with postgres 9.5 released version.

Currently there are no plans for BDR with 9.5.
https://github.com/2ndQuadrant/bdr/issues/157#issuecomment-172402366
-- 
Vik Fearing  +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [GENERAL] CoC [Final]

2016-01-20 Thread Brian Dunavant
> * Participants who disrupt the collaborative space, or participate in
> a pattern of behaviour which could be considered harassment will not
> be tolerated.

Perhaps changing the ", or participate" to " by engaging" would make
that statement more focused.

> "Disrupting the collaborative space" is very hard to define even when
> nobody has an agenda. When there are agendas, it almost certainly will
> lead to selective enforcement.

PHP is currently going through a CoC discussion as well.  Paul Jones
has a good blog post on the dangers of CoC's and their abuse.

http://paul-m-jones.com/archives/6214




Re: [GENERAL] Postgres and timezones

2016-01-20 Thread Pavel Stehule
Hi

2016-01-20 16:24 GMT+01:00 Steve Rogerson :

> On 20/01/16 13:27, Pavel Stehule wrote:
> >
> >
> >
> > Postgres doesn't store the original TZ. It recalculates to the local TZ.
> > If you need the original TZ, you have to store it separately.
> >
>
> I know and that's what I'm trying to deal with. Given I know the origin
> TZ  -
> as in Europe/Lisbon I'm trying to determine the short name so I can store
> it.
>
> I guess I'll have to use something other than pg to do it.
>

Probably it isn't possible - the transformation to the local zone happens
immediately after the input string is parsed.

Regards

Pavel



> Steve
>
>
>


Re: [GENERAL] Postgres and timezones

2016-01-20 Thread Pavel Stehule
2016-01-20 16:38 GMT+01:00 Steve Crawford :

> Is this of any use?
>
> select * from pg_timezone_names where name = 'Europe/Lisbon';
>      name      | abbrev | utc_offset | is_dst
> ---------------+--------+------------+--------
>  Europe/Lisbon | WET    | 00:00:00   | f
>
>
This is a list of known timezones. So if you are searching for an "abbrev" then
you can find it there.


Pavel



> -Steve
>
> On Wed, Jan 20, 2016 at 7:24 AM, Steve Rogerson <
> steve...@yewtc.demon.co.uk> wrote:
>
>> On 20/01/16 13:27, Pavel Stehule wrote:
>> >
>> >
>> >
>> > Postgres doesn't store the original TZ. It recalculates to the local TZ.
>> > If you need the original TZ, you have to store it separately.
>> >
>>
>> I know and that's what I'm trying to deal with. Given I know the origin
>> TZ  -
>> as in Europe/Lisbon I'm trying to determine the short name so I can store
>> it.
>>
>> I guess I'll have to use something other than pg to do it.
>>
>> Steve
>>
>>
>>
>>
>>
>
>


Re: [GENERAL] CoC [Final]

2016-01-20 Thread Geoff Winkless
On 20 January 2016 at 15:19, Brian Dunavant  wrote:
>> * Participants who disrupt the collaborative space, or participate in
>> a pattern of behaviour which could be considered harassment will not
>> be tolerated.
>
> Perhaps changing the ", or participate" to " by engaging" would make
> that statement more focused.

Well yes, it makes it more focussed, but also completely changes the meaning.

I could have got it wrong, but as I understood it the intention was
that disrupting the collaborative space by other means (say, by
posting multiple threads about something about which the majority have
no interest to a mailing list where it might reasonably be considered
offtopic, and telling anyone who complains that they can "just ignore
them"?) would also not be tolerated.

But maybe I'm just being facetious :)

Geoff




Re: [GENERAL] Postgres and timezones

2016-01-20 Thread Steve Crawford
Is this of any use?

select * from pg_timezone_names where name = 'Europe/Lisbon';
     name      | abbrev | utc_offset | is_dst
---------------+--------+------------+--------
 Europe/Lisbon | WET    | 00:00:00   | f

-Steve

On Wed, Jan 20, 2016 at 7:24 AM, Steve Rogerson 
wrote:

> On 20/01/16 13:27, Pavel Stehule wrote:
> >
> >
> >
> > Postgres doesn't store the original TZ. It recalculates to the local TZ.
> > If you need the original TZ, you have to store it separately.
> >
>
> I know and that's what I'm trying to deal with. Given I know the origin
> TZ  -
> as in Europe/Lisbon I'm trying to determine the short name so I can store
> it.
>
> I guess I'll have to use something other than pg to do it.
>
> Steve
>
>
>
>
>


Re: [GENERAL] Syntax error for Function

2016-01-20 Thread Adrian Klaver

On 01/20/2016 08:00 AM, Sachin Srivastava wrote:

Dear Adiran,

Thanks for your help !!

First I want to say it's not giving the error for
this ""languaget...@repos.birchstreet.net"", so is there any need to make
the change you suggested?


You are not seeing a syntax error, but I am pretty sure you will see a 
runtime error, as from what I gather languagetype@repos.. is an Oracle construct:


http://stackoverflow.com/questions/296263/what-is-the-meaning-of-symbol-in-oracle-sql

Pretty sure Postgres is going to fail on:

cur1 CURSOR FOR SELECT distinct(language_id) from 
"languaget...@repos.birchstreet.net "


when it tries to execute the query.



Second, you suggested " exit when cur1; " - *You are right, after putting
the semicolon my code is working fine.*

There is also one query: I have changed this line
"langCursor cur1%rowtype;" as below:

langCursor RECORD; --cur1%rowtype;


No ; at end of comment:

http://www.postgresql.org/docs/9.4/static/plpgsql-structure.html



Please read my code once again and advise whether I made these changes
correctly, because this was suggested by one of my friends and at first I was
getting the error for this line.

I am new to PL/pgSQL code; that's why I am seeking expert advice.


That is why I suggested taking a look at the plpgsql section of the 
manual. A quick run through will answer most of your questions.





Thanks
SS

On Wed, Jan 20, 2016 at 8:30 PM, Adrian Klaver wrote:

On 01/20/2016 06:32 AM, Sachin Srivastava wrote:

Dear Thom,

Please find the complete code as below and suggest now.


I would suggest spending some time here:

http://www.postgresql.org/docs/9.4/interactive/plpgsql.html

in particular:


http://www.postgresql.org/docs/9.4/interactive/plpgsql-control-structures.html#PLPGSQL-CONTROL-STRUCTURES-LOOPS

and here:

http://www.postgresql.org/docs/9.4/interactive/plpgsql-porting.html

Comments in-line


--

-- Function: gen_budget_for_next_year(bigint, bigint, bigint)
-- DROP FUNCTION gen_budget_for_next_year(bigint, bigint, bigint);
CREATE OR REPLACE FUNCTION gen_budget_for_next_year(
  subid bigint,
  compid bigint,
  formonth bigint)
RETURNS void AS
$BODY$
DECLARE
--Version:
--2015.01 PM T15422-10- Generate budget period for coming years.
05/01/2015
cur1 CURSOR FOR SELECT distinct(language_id) from
"languaget...@repos.birchstreet.net



Have you tried the above? I know quoting it got you past the syntax
error, but I am pretty sure it is not going to do what it did in Oracle.

>";
sFound bigint := 0;
recCount bigint :=0;
period varchar(100);
firstDate varchar(100);
lastDate varchar(100);
curMonth varchar(100);
langCursor RECORD; --cur1%rowtype;

BEGIN
loop
open cur1;
IF NOT FOUND THEN EXIT; END IF; -- apply on recCount >=
forMonth;
  select to_char(TO_DATE(LOCALTIMESTAMP) + recCount*'1
month'::interval,'MM') into period  ;
  select to_date(period||'01','MMDD') into firstDate  ;
  select TO_DATE(LOCALTIMESTAMP) + recCount*'1
month'::interval
into lastDate  ;
  select to_char(TO_DATE(LOCALTIMESTAMP) + recCount*'1
month'::interval, ' MM MONTH') into curMonth ;
  recCount :=recCount+1;
loop
fetch cur1 into langCursor;
exit when cur1


 From loop link above, this needs to be

exit when cur1;

  select Count(0) into sFound  from budget_period t
where
t.subscriber_id =subID
  and t.period_number = period and
t.language_id=langCursor.Language_Id;
  if(sFound = 0)then
  insert into budget_period (subscriber_id,
company_id,
period_number, period_name,
  period_length_code, first_day,
last_day,creation_date,
creation_user, update_date, update_user, language_id)
  values(subID, compID, period,  curMonth,  'MONTH',
  firstDate, lastDate, LOCALTIMESTAMP,
'Admin',LOCALTIMESTAMP, 'Admin', langCursor.Language_Id);
  end if;
end loop;
close cur1;
end loop;

commit;
END;
$BODY$
LANGUAGE plpgsql VOLATILE SECURITY DEFINER
COST 100;
ALTER FUNCTION gen_budget_for_next_year(bigint, bigint, bigint)
  

Re: [GENERAL] Syntax error for Function

2016-01-20 Thread Sachin Srivastava
Dear Adiran,

Thanks for your help !!

First I want to say it's not giving the error for this
""languaget...@repos.birchstreet.net"", so is there any need to make the change
you suggested?

Second, you suggested " exit when cur1; " - *You are right, after putting the
semicolon my code is working fine.*

There is also one query: I have changed this line "langCursor cur1%rowtype;"
as below:

langCursor RECORD; --cur1%rowtype;

Please read my code once again and advise whether I made these changes
correctly, because this was suggested by one of my friends and at first I was
getting the error for this line.

I am the new one for plsql code that's why I am taking the expert advice.


Thanks
SS

On Wed, Jan 20, 2016 at 8:30 PM, Adrian Klaver 
wrote:

> On 01/20/2016 06:32 AM, Sachin Srivastava wrote:
>
>> Dear Thom,
>>
>> Please find the complete code as below and suggest now.
>>
>
> I would suggest spending some time here:
>
> http://www.postgresql.org/docs/9.4/interactive/plpgsql.html
>
> in particular:
>
>
> http://www.postgresql.org/docs/9.4/interactive/plpgsql-control-structures.html#PLPGSQL-CONTROL-STRUCTURES-LOOPS
>
> and here:
>
> http://www.postgresql.org/docs/9.4/interactive/plpgsql-porting.html
>
> Comments in-line
>
>
>> --
>>
>> -- Function: gen_budget_for_next_year(bigint, bigint, bigint)
>> -- DROP FUNCTION gen_budget_for_next_year(bigint, bigint, bigint);
>> CREATE OR REPLACE FUNCTION gen_budget_for_next_year(
>>  subid bigint,
>>  compid bigint,
>>  formonth bigint)
>>RETURNS void AS
>> $BODY$
>> DECLARE
>> --Version:
>> --2015.01 PM T15422-10- Generate budget period for coming years.
>> 05/01/2015
>> cur1 CURSOR FOR SELECT distinct(language_id) from
>> "languaget...@repos.birchstreet.net
>>
>
> Have you tried the above? I know quoting it got you past the syntax error,
> but I am pretty sure it is not going to do what it did in Oracle.
>
> ";
>> sFound bigint := 0;
>> recCount bigint :=0;
>> period varchar(100);
>> firstDate varchar(100);
>> lastDate varchar(100);
>> curMonth varchar(100);
>> langCursor RECORD; --cur1%rowtype;
>>
>> BEGIN
>>loop
>>open cur1;
>>IF NOT FOUND THEN EXIT; END IF; -- apply on recCount >= forMonth;
>>  select to_char(TO_DATE(LOCALTIMESTAMP) + recCount*'1
>> month'::interval,'MM') into period  ;
>>  select to_date(period||'01','MMDD') into firstDate  ;
>>  select TO_DATE(LOCALTIMESTAMP) + recCount*'1 month'::interval
>> into lastDate  ;
>>  select to_char(TO_DATE(LOCALTIMESTAMP) + recCount*'1
>> month'::interval, ' MM MONTH') into curMonth ;
>>  recCount :=recCount+1;
>>loop
>>fetch cur1 into langCursor;
>>exit when cur1
>>
>
> From the loop link above, this needs to be
>
> exit when cur1;
>
>  select Count(0) into sFound  from budget_period t where
>> t.subscriber_id =subID
>>  and t.period_number = period and
>> t.language_id=langCursor.Language_Id;
>>  if(sFound = 0)then
>>  insert into budget_period (subscriber_id, company_id,
>> period_number, period_name,
>>  period_length_code, first_day, last_day,creation_date,
>> creation_user, update_date, update_user, language_id)
>>  values(subID, compID, period,  curMonth,  'MONTH',
>>  firstDate, lastDate, LOCALTIMESTAMP,
>> 'Admin',LOCALTIMESTAMP, 'Admin', langCursor.Language_Id);
>>  end if;
>>end loop;
>>close cur1;
>>end loop;
>>
>> commit;
>> END;
>> $BODY$
>>LANGUAGE plpgsql VOLATILE SECURITY DEFINER
>>COST 100;
>> ALTER FUNCTION gen_budget_for_next_year(bigint, bigint, bigint)
>>OWNER TO postgres;
>>
>> 
>>
>> On Wed, Jan 20, 2016 at 6:05 PM, Thom Brown wrote:
>>
>> On 20 January 2016 at 12:15, Sachin Srivastava wrote:
>>  > I am unable to find out the syntax error in below code, please
>> suggest?
>>  >
>>  >
>>  >
>>  > ERROR:  syntax error at or near "select"
>>  > LINE 44: select Count(0) into sFound  from
>> budget_period ...
>>  >  ^
>>  > ** Error **
>>  > ERROR: syntax error at or near "select"
>>  > SQL state: 42601
>>  > Character: 1190
>>  >
>>  > Code as below:
>>  > -
>>  >
>>  > select Count(0) into sFound  from budget_period t where
>> t.subscriber_id
>>  > =subID
>>  > and t.period_number = period and
>>  > t.language_id=langCursor.Language_Id;
>>  > if(sFound = 0)then
>>  > insert into budget_period (subscriber_id,
>> company_id,
>>  > period_number, period_name,
>>  > period_length_code, first_day,
>> 
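For what it's worth, the explicit OPEN / FETCH / EXIT pattern being debugged above can usually be replaced by a FOR-over-cursor loop, which performs the fetch and the exit test implicitly. A minimal sketch, reusing the cursor and record names declared in the function above:

```sql
-- Sketch: FOR over a bound cursor opens, fetches and closes automatically,
-- so no explicit FETCH or "exit when" test is needed.
FOR langCursor IN cur1 LOOP
    -- the count/insert logic from the function goes here,
    -- referring to langCursor.language_id as before
END LOOP;
```

This also removes the need for the separate CLOSE cur1, since the loop closes the cursor on exit.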

Re: [GENERAL] CoC [Final]

2016-01-20 Thread Simon Riggs
On 20 January 2016 at 19:05, Kevin Grittner  wrote:

> On Wed, Jan 20, 2016 at 12:47 PM, Simon Riggs 
> wrote:
> > On 18 January 2016 at 18:02, Joshua D. Drake 
> wrote:
>
> >> * We are tolerant of people’s right to have opposing views.
> >>
> >> * Participants must ensure that their language and actions are free
> >> of personal attacks and disparaging personal remarks.
> >>
> >> * When interpreting the words and actions of others, participants
> >> should always assume good intentions.
> >>
> >> * Participants who disrupt the collaborative space, or participate in a
> >> pattern of behaviour which could be considered harassment will not be
> >> tolerated.
>
> > I suggest we remove point 3 entirely. Point 2 is sufficient to limit
> what is
> > said.
>
> That came about because of the point made by someone for whom
> English is a second language, who attempted to compliment someone
> by saying the work was "gross" (meaning "a big thing"), when that
> was initially taken as an insult (thinking "disgusting" was meant).
> Perhaps it belongs more in the preamble or could be omitted, but
> it was an attempt to recognize that simple miscommunication due to
> language or cultural differences can turn into flame wars if people
> don't give each other some benefit of the doubt.
>

Which means that anyone who violates point 2 cannot be held to account,
because doing so would violate point 3.

I agree it is a great idea to assume the good intentions of others, but it's
a difficult principle to enforce.

-- 
Simon Riggs  http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [GENERAL] CoC [Final]

2016-01-20 Thread Kevin Grittner
On Wed, Jan 20, 2016 at 12:47 PM, Simon Riggs  wrote:
> On 18 January 2016 at 18:02, Joshua D. Drake  wrote:

>> * We are tolerant of people’s right to have opposing views.
>>
>> * Participants must ensure that their language and actions are free
>> of personal attacks and disparaging personal remarks.
>>
>> * When interpreting the words and actions of others, participants
>> should always assume good intentions.
>>
>> * Participants who disrupt the collaborative space, or participate in a
>> pattern of behaviour which could be considered harassment will not be
>> tolerated.

> I suggest we remove point 3 entirely. Point 2 is sufficient to limit what is
> said.

That came about because of the point made by someone for whom
English is a second language, who attempted to compliment someone
by saying the work was "gross" (meaning "a big thing"), when that
was initially taken as an insult (thinking "disgusting" was meant).
Perhaps it belongs more in the preamble or could be omitted, but
it was an attempt to recognize that simple miscommunication due to
language or cultural differences can turn into flame wars if people
don't give each other some benefit of the doubt.

> Who will decide how this code is enacted? Rules imply rulers, so what is the
> constitution of the governing body?

It has been stated several times on this thread by multiple people
that we should settle on the code to implement before talking about
enforcement processes.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] CoC [Final]

2016-01-20 Thread Karsten Hilbert
On Wed, Jan 20, 2016 at 01:05:15PM -0600, Kevin Grittner wrote:

>>> * When interpreting the words and actions of others, participants
>>> should always assume good intentions.
...
> That came about because of the point made by someone for whom
> English is a second language, who attempted to compliment someone
> by saying the work was "gross" (meaning "a big thing"), when that
> was initially taken as an insult (thinking "disgusting" was meant).
> Perhaps it belongs more in the preamble or could be omitted, but
> it was an attempt to recognize that simple miscommunication due to
> language or cultural differences can turn into flame wars if people
> don't give each other some benefit of the doubt.

* When interpreting the words and actions of others, participants
  should always consider the possibility of misunderstandings.

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] Postgres and timezones

2016-01-20 Thread Vik Fearing
On 01/20/2016 04:24 PM, Steve Rogerson wrote:
>> Postgres doesn't store original TZ. It does recalculation to local TZ. If you
>> need original TZ, you have to store it separately.
>>
> 
> I know and that's what I'm trying to deal with. Given I know the origin TZ  -
> as in Europe/Lisbon I'm trying to determine the short name so I can store it.

I would recommend against storing the abbreviation.  The abbreviations
are not globally unique and don't follow daylight saving time.  If you want
to store the original time zone, I would use the full name.

Something like this might be relevant to your interests:

INSERT INTO tbl (ts, tz)
VALUES ('2016-01-20 00:00', current_setting('TimeZone'));

This will do the right thing regardless of where the client is (unless
it's set to "localtime" and then it's useless).
-- 
Vik Fearing  +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [GENERAL] CoC [Final]

2016-01-20 Thread Alvaro Herrera
Joshua D. Drake wrote:

> == PostgreSQL Community Code of Conduct (CoC) ==
> 
> This document provides community guidelines for a safe, respectful,
> productive, and collaborative place for any person who is willing to
> contribute to the PostgreSQL community. It applies to all "collaborative
> space", which is defined as community communications channels (such as
> mailing lists, IRC, submitted patches, commit comments, etc.).

I think the words "collaborative space, which is defined as" can be
omitted completely without loss of meaning; and since it's already
agreed that this CoC only applies to online media; I'd also add the word
"online" there.  So

  "It applies to all online communication channels (such as ...)".

> * We are tolerant of people’s right to have opposing views.

Reading the fine print of this phrase, I think it doesn't really convey
what we want it to convey.  "We are tolerant of people that have
opposing views", perhaps, or "We recognize people's right to have
opposing views".  My point is that we are not tolerant of _the right_
-- that seems nonsensical to me.

(Merriam Webster defines "tolerant" as "inclined to tolerate", and "to
tolerate" as "2a. to allow to be or to be done without prohibition,
hindrance, or contradiction")

However the "we" also seems a bit wrong to me.  Who is "we"?  In
concordance with the other points, I think this should start
"Participants must be" or something along those lines.  If not, then
perhaps this point should be in the preamble instead of being a
bulleted point.

> * Participants must ensure that their language and actions are free
> of personal attacks and disparaging personal remarks.

There have been no comments to this point on this thread.
Congratulations :-)

> * When interpreting the words and actions of others, participants
> should always assume good intentions.

Karsten Hilbert proposed a different wording for this, +1 for that one.

> * Participants who disrupt the collaborative space, or participate in a
> pattern of behaviour which could be considered harassment will not be
> tolerated.

"which could be considered" is too open-ended.  Since this point is
the one and only that can cause enforcement to occur, it should be more
strict as to what it is that will not be tolerated.  I'd propose
something like "is widely regarded as harassment" or something like
that, so that it needs to be clear that there is a large group of people
that considers the behavior unwanted rather than some minority.

I also agree that what we don't tolerate is the behavior, not the person
engaging in the behavior.  Regarding mailing list misbehavior, for
instance, I would think that this means that that person's post would be
moderated (and each post would only be approved if it has no personal
attacks, etc) instead of the person being completely banned from a list.

-- 
Álvaro Herrera  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [GENERAL] CoC [Final]

2016-01-20 Thread Simon Riggs
On 20 January 2016 at 19:14, Karsten Hilbert 
wrote:

> On Wed, Jan 20, 2016 at 01:05:15PM -0600, Kevin Grittner wrote:
>
> >>> * When interpreting the words and actions of others, participants
> >>> should always assume good intentions.
> ...
> > That came about because of the point made by someone for whom
> > English is a second language, who attempted to compliment someone
> > by saying the work was "gross" (meaning "a big thing"), when that
> > was initially taken as an insult (thinking "disgusting" was meant).
> > Perhaps it belongs more in the preamble or could be omitted, but
> > it was an attempt to recognize that simple miscommunication due to
> > language or cultural differences can turn into flame wars if people
> > don't give each other some benefit of the doubt.
>
> * When interpreting the words and actions of others, participants
>   should always consider the possibility of misunderstandings.
>

+1

-- 
Simon Riggs  http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [GENERAL] Why PG uses nested-loop join when no indexes are available?

2016-01-20 Thread David Grelaud
Thank you both for your help.

We will test your patch but we need to understand a bit more the code in
order to follow your discussions.
Actually, your patch helps us to find where to start in the code ;).

> The planner is never going to get it right 100% of the time.

Yes, I agree.
In production environments, even if PostgreSQL chooses such a bad plan 1%
of the time, it is enough to make clients angry. My goal is to eradicate
this risk of choosing a nested loop in certain cases, which freezes
PostgreSQL for many minutes, whereas a hash join or something else takes
only 2 seconds to complete. The performance difference is huge.
I mean, even if the plan is not the best one 100% of the time, it should at
least choose a "risk-free" plan, without these "bad" nested-loops. It is
maybe easier said than done but we want to try.

Regards,

*David Grelaud*

2016-01-15 2:16 GMT+01:00 David Rowley :

> On 15 January 2016 at 04:00, Tom Lane  wrote:
>
>> David Rowley  writes:
>> > Perhaps separating out enable_nestloop so that it only disables
>> > non-parameterised nested loops, and add another GUC for parameterised
>> > nested loops would be a good thing to do. Likely setting
>> enable_nestloop to
>> > off in production would be a slightly easier decision to make, if that
>> was
>> > the case.
>> > It looks pretty simple to do this, so I hacked it up, and attached it
>> here.
>> > There's no doc changes and I'm not that interested in fighting for this
>> > change, it's more just an idea for consideration.
>>
>> I'm not terribly excited by this idea either.  If making such a change
>> actually makes things better for someone consistently, I'd argue that
>> the problem is a mistaken cost estimate elsewhere, and we'd be better off
>> to find and fix the real problem.  (There have already been discussions
>> of only believing single-row rowcount estimates when they're provably
>> true, which might help if we can figure out how to do it cheaply enough.)
>>
>
> Actually, it's not very hard to hit a bad underestimate at all. All you
> need is a join on two columns which are correlated. Since PostgreSQL
> multiplies the estimated selectivities the row count is going to come out
> too low. This also tricks the planner into thinking that this is a good
> join to perform early, since (it thinks that) it does not produce many rows
> at all. You only need 1 more join to occur after that to choose a nested
> loop join mistakenly to hit the issue.
>
> FWIW TPC-H Q9 has this exact trip hazard with the partsupp table, which is
> the exact reason why this patch was born:
> https://commitfest.postgresql.org/7/210/
>
> I also think that the attitude that we can *always* fix the costs and
> estimates is not the right one. The planner is never going to get it right
> 100% of the time. If we ever think we can build such a planner then someone
> needs to come along and direct us back into the real world.
>
> --
>  David Rowley   http://www.2ndQuadrant.com/
> 
>  PostgreSQL Development, 24x7 Support, Training & Services
>
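As a stopgap while the estimation issue stands, the nested-loop choice can be suppressed per-transaction rather than globally. A sketch (the table and join below are hypothetical stand-ins for a query that hits the bad plan):

```sql
-- Sketch: SET LOCAL confines the planner override to this transaction,
-- so other sessions and queries keep the default planner behaviour.
BEGIN;
SET LOCAL enable_nestloop = off;
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM orders o
JOIN order_lines l
  ON l.order_id = o.id
 AND l.region   = o.region;   -- correlated join columns, as described above
COMMIT;
```

This avoids having to set enable_nestloop = off in postgresql.conf, which would penalise the queries where a nested loop genuinely is the best plan.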


[GENERAL] long transfer time for binary data

2016-01-20 Thread Johannes
I noticed transferring a large object or bytea data between client and
server takes a long time.
For example: An image with a real size of 11 MB could be read on server
side (explain analyze) in 81ms. Fine.

But on client side the result was completed after 6.7 seconds without
ssl compression and 4.5 seconds with ssl compression (both via 100MBit
ethernet).

SSL compression no longer seems to be a good idea, since it has become a
security risk. It's still possible with pgAdmin, but AFAIK not with
Java/JDBC.

Are there any other solutions available to display my images in my
client application more quickly? Or are there planned improvements to
postgresql (transferring the real binary data)?

Best regards
Johannes





[GENERAL] How to stop autovacuum for daily partition old tables

2016-01-20 Thread AI Rumman
Hi,

I have a table with daily partition schema on Postgresql 9.1 where we are
keeping 2 years of data.
Often I experience that autovacuum process is busy with old tables where
there is no change. How can I stop it?
Please advise.

Thanks.


Re: [GENERAL] Postgres and timezones

2016-01-20 Thread Steve Rogerson
On 20/01/16 13:27, Pavel Stehule wrote:
> 
>
> 
> Postgres doesn't store original TZ. It does recalculation to local TZ. If you
> need original TZ, you have to store it separately.
> 

I know and that's what I'm trying to deal with. Given I know the origin TZ  -
as in Europe/Lisbon I'm trying to determine the short name so I can store it.

I guess I'll have to use something other than pg to do it.

Steve
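A catalog lookup may actually be enough here: the pg_timezone_names view maps full zone names to the abbreviation currently in effect. A sketch (note the caveat that the abbreviation reflects the DST state of the current date, not of an arbitrary stored timestamp):

```sql
-- Sketch: abbreviation in effect right now for a full zone name.
SELECT abbrev
FROM pg_timezone_names
WHERE name = 'Europe/Lisbon';
```

On a winter date this should report WET, and WEST while daylight saving time is in effect.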






[GENERAL] adding a bdr node using bcv backup

2016-01-20 Thread (Daniel Stolf)
Hello there...

I'm new to postgres and I'm trying out BDR replication...

I know that when I issue the bdr.bdr_group_join command, it will copy the
entire database from the host I specify on parameter 'join_using_dsn' and
this may take a while depending on the network and the size of the
database...

What I wanted to know is if I can leverage a bcv backup... Is it possible?

I have 2 nodes in my test environment and wanted to add a 3rd using this
approach... But when I restored the backup, the bdr information from node1
went along, so when I use 'bdr.bdr_group_join', it tells me:
ERROR:  This node is already a member of a BDR group
HINT:  Connect to the node you wish to add and run bdr_group_join from it
instead


Re: [GENERAL] How to stop autovacuum for daily partition old tables

2016-01-20 Thread Melvin Davidson
ALTER TABLE your_schema.your_table SET (autovacuum_enabled = false,
toast.autovacuum_enabled = false);
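Even with autovacuum disabled per table, anti-wraparound vacuums still run once a table's XID age reaches autovacuum_freeze_max_age. A sketch of a catalog query to see which tables are closest to that forced vacuum:

```sql
-- Sketch: tables with the highest relfrozenxid age are the next candidates
-- for a forced anti-wraparound vacuum; compare the age against the
-- autovacuum_freeze_max_age setting.
SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY xid_age DESC
LIMIT 10;
```

Vacuuming the old partitions manually once (so their rows are frozen) keeps the forced vacuums from recurring on tables that never change.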

On Wed, Jan 20, 2016 at 6:22 PM, AI Rumman  wrote:

> Hi,
>
> I have a table with daily partition schema on Postgresql 9.1 where we are
> keeping 2 years of data.
> Often I experience that autovacuum process is busy with old tables where
> there is no change. How can I stop it?
> Please advise.
>
> Thanks.
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] JSONB performance enhancement for 9.6

2016-01-20 Thread Oleg Bartunov
On Wed, Jan 20, 2016 at 4:51 AM, Bruce Momjian  wrote:

> On Mon, Jan 11, 2016 at 09:01:03PM -0500, Tom Smith wrote:
> > Hi,
> >
> > Congrats on the official release of 9.5
> >
> > And I'd like to bring up the issue again about whether 9.6 would address the
> jsonb
> > performance issue
> > with large number of top level keys.
> > It is true that it does not have to use JSON format. it is about
> serialization
> > and fast retrieval
> > of dynamic tree structure objects. (at top level, it might be called
> dynamic
> > columns)
> > So if postgresql can have its own way, that would work out too as long
> as it
> > can have intuitive query
> > (like what are implemented for json and jsonb) and fast retrieval of a
> tree
> > like object,
> > it can be called no-sql data type. After all, most motivations of using
> no-sql
> > dbs like MongoDB
> > is about working with dynamic tree object.
> >
> > If postgresql can have high performance on this, then many no-sql dbs
> would
> > become history.
>
> I can give you some backstory on this.  TOAST was designed in 2001 as a
> way to store, in a data-type-agnostic way, long strings compressed and
> any other long data type, e.g. long arrays.
>
> In all previous cases, _part_ of the value wasn't useful.  JSONB is a
> unique case because it is one of the few types that can be processed
> without reading the entire value, e.g. it has an index.
>
> We are going to be hesitant to do something data-type-specific for
> JSONB.  It would be good if we could develop a data-type-agnostic
> approach to how TOAST can be improved.  I know of no such work for 9.6,
> and it is unlikely it will be done in time for 9.6.
>

I'm looking at this from time to time.


>
> --
>   Bruce Momjian  http://momjian.us
>   EnterpriseDB http://enterprisedb.com
>
> + As you are, so once was I. As I am, so you will be. +
> + Roman grave inscription +
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] JSONB performance enhancement for 9.6

2016-01-20 Thread Dorian Hoxha
Is there any database that actually supports what the original poster
wanted?

The only thing that I know that's similar is bigtable/hbase/hypertable wide
column store.
The way it works is:
it breaks the lexicographically sorted rows into compressed blocks of XXKB, and
then keeps an index on the start_key+end_key of each block.

This way we can store the index (that links to several toast values) on the
row and, depending on which key you need, it will get+decompress the required
block.
You can interpret nested values by using a separator on the key like
"first_level:2ndlevel:3rd_level:value".
If the index is too big, you can store the index itself in a toast value.

Note: I have no idea how (or whether) this can actually be coded.

On Wed, Jan 20, 2016 at 9:32 AM, Oleg Bartunov  wrote:

>
>
> On Wed, Jan 20, 2016 at 4:51 AM, Bruce Momjian  wrote:
>
>> On Mon, Jan 11, 2016 at 09:01:03PM -0500, Tom Smith wrote:
>> > Hi,
>> >
>> > Congrats on the official release of 9.5
>> >
>> > And I'd like to bring up the issue again about whether 9.6 would address the
>> jsonb
>> > performance issue
>> > with large number of top level keys.
>> > It is true that it does not have to use JSON format. it is about
>> serialization
>> > and fast retrieval
>> > of dynamic tree structure objects. (at top level, it might be called
>> dynamic
>> > columns)
>> > So if postgresql can have its own way, that would work out too as long
>> as it
>> > can have intuitive query
>> > (like what are implemented for json and jsonb) and fast retrieval of a
>> tree
>> > like object,
>> > it can be called no-sql data type. After all, most motivations of using
>> no-sql
>> > dbs like MongoDB
>> > is about working with dynamic tree object.
>> >
>> > If postgresql can have high performance on this, then many no-sql dbs
>> would
>> > become history.
>>
>> I can give you some backstory on this.  TOAST was designed in 2001 as a
>> way to store, in a data-type-agnostic way, long strings compressed and
>> any other long data type, e.g. long arrays.
>>
>> In all previous cases, _part_ of the value wasn't useful.  JSONB is a
>> unique case because it is one of the few types that can be processed
>> without reading the entire value, e.g. it has an index.
>>
>> We are going to be hesitant to do something data-type-specific for
>> JSONB.  It would be good if we could develop a data-type-agnostic
>> approach to how TOAST can be improved.  I know of no such work for 9.6,
>> and it is unlikely it will be done in time for 9.6.
>>
>
> I'm looking at this from time to time.
>
>
>>
>> --
>>   Bruce Momjian  http://momjian.us
>>   EnterpriseDB http://enterprisedb.com
>>
>> + As you are, so once was I. As I am, so you will be. +
>> + Roman grave inscription +
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
>


[GENERAL] Error in Update and Set statement

2016-01-20 Thread Sachin Srivastava
Dear Folks,

How to handle the below error, please suggest. I have migrated my database
from Oracle to Postgres through Ora2Pg, and now I am getting this error for
a function.

If I write the code separately for every column within SET (column name1,
column name2, etc.) then it runs successfully, but when there are many
columns in the SET list how can I handle this? In the code below there are
7 columns in the SET statement, and we would have to write very long code
for this. Please suggest how to handle this situation.



ERROR:  syntax error at or near "SELECT"
LINE 2: ...LEAD_TIME,ITEM_NUMBER,ORDER_UOM_CODE,ORDER_QTY) =(SELECT LIN...
 ^
** Error **


ERROR: syntax error at or near "SELECT"
SQL state: 42601
Character: 123

Code as below:

---

update ppo_master_detail set (LINE_STATUS,LINE_TYPE,PROMISE_DATE,
  LEAD_TIME,ITEM_NUMBER,ORDER_UOM_CODE,ORDER_QTY) =(SELECT LINE_STATUS,
  LINE_TYPE,PROMISE_DATE,LEAD_TIME,ITEM_NUMBER,ORDER_UOM_CODE,ORDER_QTY
  from ppo_master_detail where subscriber_id = 65 and
row_number=supplierdetail.row_number
  and po_number=supplierdetail.po_number and
company_id=supplierdetail.company_id )
  where po_number =PONum.po_number and company_id=PONum.company_id and
subscriber_id = 65 and row_number=supplierdetail.row_number;
  --iRowCounter:=iRowCounter+1;




[GENERAL] Postgres and timezones

2016-01-20 Thread Steve Rogerson
Hi, this is wrong:

# select to_char('2016-01-20 00:00'::timestamp at time zone 'Europe/Lisbon',
'TZ');
 to_char
-
 GMT
(1 row)


It should be WET, "Western European Time". Is there something I'm doing wrong?


Steve
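One possible explanation: AT TIME ZONE applied to a plain timestamp yields a timestamptz, and to_char()'s TZ pattern then renders the session's TimeZone setting, not the zone named in the expression. A sketch (the output depends on the session configuration):

```sql
-- Sketch: to_char(..., 'TZ') reports the session zone, so set the session
-- to the zone of interest first; a January timestamp should then render
-- as WET rather than GMT.
SET TimeZone = 'Europe/Lisbon';
SELECT to_char('2016-01-20 00:00'::timestamp AT TIME ZONE 'Europe/Lisbon', 'TZ');
```

With the session left at GMT/UTC, the original query's GMT result is expected behaviour rather than a bug.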




[GENERAL] BDR with postgres 9.5

2016-01-20 Thread Nikhil
Hello All,


What is the timeline for BDR with postgres 9.5 released version.


Best Regards,
Nikhil


Re: [GENERAL] Error in Update and Set statement

2016-01-20 Thread Pavel Stehule
Hi

2016-01-20 11:36 GMT+01:00 Sachin Srivastava :

> Dear Folks,
>
> How to handle the below error, please suggest. I have migrated my database
> from oracle to postgres through Ora2PG then I am getting this error for
> function.
>
> If I am writing the code for every column which are within set (column
> name1, column name2, etc) then it's running successfully but if there are
> so many columns with set then how can I handle this because in below code
> there are 7 column with set statement and we have to write very big code
> for this. Please suggest how to handle this situation.
>
>
>
> ERROR:  syntax error at or near "SELECT"
> LINE 2: ...LEAD_TIME,ITEM_NUMBER,ORDER_UOM_CODE,ORDER_QTY) =(SELECT LIN...
>  ^
> ** Error **
>
>
> ERROR: syntax error at or near "SELECT"
> SQL state: 42601
> Character: 123
>
> Code as below:
>
> ---
>
> update ppo_master_detail set (LINE_STATUS,LINE_TYPE,PROMISE_DATE,
>   LEAD_TIME,ITEM_NUMBER,ORDER_UOM_CODE,ORDER_QTY) =(SELECT LINE_STATUS,
>
> LINE_TYPE,PROMISE_DATE,LEAD_TIME,ITEM_NUMBER,ORDER_UOM_CODE,ORDER_QTY
>   from ppo_master_detail where subscriber_id = 65 and
> row_number=supplierdetail.row_number
>   and po_number=supplierdetail.po_number and
> company_id=supplierdetail.company_id )
>   where po_number =PONum.po_number and company_id=PONum.company_id and
> subscriber_id = 65 and row_number=supplierdetail.row_number;
>   --iRowCounter:=iRowCounter+1;
>
>
This Oracle syntax is only supported from 9.5 onward.

Regards

Pavel


> 
>


Re: [GENERAL] Error in Update and Set statement

2016-01-20 Thread Pavel Stehule
Hi

2016-01-20 11:45 GMT+01:00 Sachin Srivastava :

> Boss !!
>
> I am using postgres 9.4, so how to handle this.
>
>
I don't know Ora2Pg - try to find an option that generates the old format -
or manually rewrite to the supported syntax:

UPDATE tab SET a = x.a, ... FROM x

Regards

Pavel
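Spelled out against the statement from the thread, the 9.4-compatible form looks roughly like this (a sketch only: supplierdetail and PONum are record variables from the surrounding plpgsql function and are assumed here, and the column list is copied from the failing statement):

```sql
-- Sketch: per-column SET ... FROM rewrite of the multi-column
-- UPDATE ... SET (...) = (SELECT ...) form that 9.4 rejects.
UPDATE ppo_master_detail AS d
SET line_status    = s.line_status,
    line_type      = s.line_type,
    promise_date   = s.promise_date,
    lead_time      = s.lead_time,
    item_number    = s.item_number,
    order_uom_code = s.order_uom_code,
    order_qty      = s.order_qty
FROM ppo_master_detail AS s
WHERE s.subscriber_id = 65
  AND s.row_number    = supplierdetail.row_number
  AND s.po_number     = supplierdetail.po_number
  AND s.company_id    = supplierdetail.company_id
  AND d.po_number     = PONum.po_number
  AND d.company_id    = PONum.company_id
  AND d.subscriber_id = 65
  AND d.row_number    = supplierdetail.row_number;
```

Verbose, but mechanical: each column in the original SET list becomes one `col = s.col` assignment, and the subquery's WHERE clause moves into the FROM/WHERE of the UPDATE.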


Re: [GENERAL] Error in Update and Set statement

2016-01-20 Thread Sachin Srivastava
Boss !!

I am using postgres 9.4, so how to handle this.

On Wed, Jan 20, 2016 at 4:11 PM, Pavel Stehule 
wrote:

> Hi
>
> 2016-01-20 11:36 GMT+01:00 Sachin Srivastava :
>
>> Dear Folks,
>>
>> How to handle the below error, please suggest. I have migrated my
>> database from oracle to postgres through Ora2PG then I am getting this
>> error for function.
>>
>> If I am writing the code for every column which are within set (column
>> name1, column name2, etc) then it's running successfully but if there are
>> so many columns with set then how can I handle this because in below code
>> there are 7 column with set statement and we have to write very big code
>> for this. Please suggest how to handle this situation.
>>
>>
>>
>> ERROR:  syntax error at or near "SELECT"
>> LINE 2: ...LEAD_TIME,ITEM_NUMBER,ORDER_UOM_CODE,ORDER_QTY) =(SELECT LIN...
>>  ^
>> ** Error **
>>
>>
>> ERROR: syntax error at or near "SELECT"
>> SQL state: 42601
>> Character: 123
>>
>> Code as below:
>>
>> ---
>>
>> update ppo_master_detail set (LINE_STATUS,LINE_TYPE,PROMISE_DATE,
>>   LEAD_TIME,ITEM_NUMBER,ORDER_UOM_CODE,ORDER_QTY) =(SELECT
>> LINE_STATUS,
>>
>> LINE_TYPE,PROMISE_DATE,LEAD_TIME,ITEM_NUMBER,ORDER_UOM_CODE,ORDER_QTY
>>   from ppo_master_detail where subscriber_id = 65 and
>> row_number=supplierdetail.row_number
>>   and po_number=supplierdetail.po_number and
>> company_id=supplierdetail.company_id )
>>   where po_number =PONum.po_number and company_id=PONum.company_id
>> and subscriber_id = 65 and row_number=supplierdetail.row_number;
>>   --iRowCounter:=iRowCounter+1;
>>
>>
> This Oracle syntax is only supported from 9.5 onward.
>
> Regards
>
> Pavel
>
>
>> 
>>
>
>


[GENERAL] ERROR for '@' for function

2016-01-20 Thread Sachin Srivastava
How to handle the situation below? I am getting an error for a function; my
Postgres version is 9.4.

ERROR:  syntax error at or near "@"
LINE 67:   autonumbersett...@repos.birchstreet.net
^
** Error **
ERROR: syntax error at or near "@"
SQL state: 42601
Character: 3274




BEGIN
  select NEXT_NUMBER into STRICT  id from
  autonumbersett...@repos.birchstreet.net
  --pbeach_repos8.AUTONUMBERSETTING
  where subscriber_id=subscriber and AUTO_NUMBER_ID=348 and company_id=
buyer;
   exception
  when no_data_found then
   insert into
smerror_log(error_log_id,subscriber_id,company_id,error_message,system_message,
   method_name,creation_date,creation_user,update_date,update_user)
 values (nextval('smerror_log_sequence'),subscriber,buyer,'Auto Number
not found',null,
   'PunchoutProfile'  ,LOCALTIMESTAMP,'Admin',LOCALTIMESTAMP,'Admin');
   commit;
  return;
  end;

---


Re: [GENERAL] ERROR for '@' for function

2016-01-20 Thread Thomas Kellerer
Sachin Srivastava schrieb am 20.01.2016 um 12:42:
> How to handle this below situation, I am getting error for function, my  
> Postgres version is 9.4
> 
> ERROR:  syntax error at or near "@"
> LINE 67:   autonumbersett...@repos.birchstreet.net 
> 
> ^
> ** Error **
> ERROR: syntax error at or near "@"
> SQL state: 42601
> Character: 3274
> 

Well, autonumbersett...@repos.birchstreet.net is an invalid identifier. 
A "@" is not allowed as part of a table name. 

You need to enclose that in double quotes:

SELECT ...
FROM "autonumbersett...@repos.birchstreet.net"
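
For completeness, a sketch of how the quoting works, with a hypothetical table
name (quoted identifiers preserve special characters and case exactly, so
every reference must repeat the quotes):

-- Hypothetical name: the "@" is legal only inside double quotes.
CREATE TABLE "autonumbersetting@remote" (next_number bigint);

-- Without the quotes, the parser stops at "@" with the same 42601 error.
SELECT next_number FROM "autonumbersetting@remote";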




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Syntax error for Function

2016-01-20 Thread Sachin Srivastava
I am unable to find the syntax error in the code below; please suggest?



ERROR:  syntax error at or near "select"
LINE 44: select Count(0) into sFound  from budget_period ...
 ^
** Error **
ERROR: syntax error at or near "select"
SQL state: 42601
Character: 1190

Code as below:
-

select Count(0) into sFound  from budget_period t where t.subscriber_id
=subID
and t.period_number = period and
t.language_id=langCursor.Language_Id;
if(sFound = 0)then
insert into budget_period (subscriber_id, company_id,
period_number, period_name,
period_length_code, first_day, last_day,creation_date,
creation_user, update_date, update_user, language_id)
values(subID, compID, period,  curMonth,  'MONTH',
firstDate, lastDate, LOCALTIMESTAMP,
'Admin',LOCALTIMESTAMP, 'Admin', langCursor.Language_Id);
end if;




Re: [GENERAL] JSONB performance enhancement for 9.6

2016-01-20 Thread Bill Moran
On Tue, 19 Jan 2016 23:53:19 -0300
Alvaro Herrera  wrote:

> Bill Moran wrote:
> 
> > As far as a current solution: my solution would be to decompose the
> > JSON into an optimized table. I.e.:
> > 
> > CREATE TABLE store1 (
> >  id SERIAL PRIMARY KEY,
> >  data JSONB
> > );
> > 
> > CREATE TABLE store2 (
> >  id INT NOT NULL REFERENCES store1(id),
> >  top_level_key VARCHAR(1024),
> >  data JSONB,
> >  PRIMARY KEY(top_level_key, id)
> > );
> 
> Isn't this what ToroDB already does?
> https://www.8kdata.com/torodb/

Looks like. I wasn't aware of ToroDB, thanks for the link.
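
One way to keep such a decomposed table populated automatically is a trigger;
a sketch, assuming the store1/store2 definitions quoted above (jsonb_each()
fans the document out into one row per top-level key):

CREATE OR REPLACE FUNCTION decompose_store1() RETURNS trigger AS $$
BEGIN
    -- Drop any previously decomposed rows for this id, then re-insert
    -- one row per top-level key of the new document.
    DELETE FROM store2 WHERE id = NEW.id;
    INSERT INTO store2 (id, top_level_key, data)
    SELECT NEW.id, key, value
      FROM jsonb_each(NEW.data);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER store1_decompose
AFTER INSERT OR UPDATE ON store1
FOR EACH ROW EXECUTE PROCEDURE decompose_store1();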

-- 
Bill Moran




Re: [GENERAL] ERROR for '@' for function

2016-01-20 Thread Sachin Srivastava
Thanks Thomas !!!

Now, it's working fine.

On Wed, Jan 20, 2016 at 5:18 PM, Thomas Kellerer  wrote:

> Sachin Srivastava schrieb am 20.01.2016 um 12:42:
> > How to handle this below situation, I am getting error for function, my
> Postgres version is 9.4
> >
> > ERROR:  syntax error at or near "@"
> > LINE 67:   autonumbersett...@repos.birchstreet.net
> > ^
> > ** Error **
> > ERROR: syntax error at or near "@"
> > SQL state: 42601
> > Character: 3274
> >
>
> Well, autonumbersett...@repos.birchstreet.net is an invalid identifier.
> A "@" is not allowed as part of a table name.
>
> You need to enclose that in double quotes:
>
> SELECT ...
> FROM "autonumbersett...@repos.birchstreet.net"
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Syntax error for Function

2016-01-20 Thread Thom Brown
On 20 January 2016 at 12:15, Sachin Srivastava  wrote:
> I am unable to find out the syntax error in below code, please suggest?
>
>
>
> ERROR:  syntax error at or near "select"
> LINE 44: select Count(0) into sFound  from budget_period ...
>  ^
> ** Error **
> ERROR: syntax error at or near "select"
> SQL state: 42601
> Character: 1190
>
> Code as below:
> -
>
> select Count(0) into sFound  from budget_period t where t.subscriber_id
> =subID
> and t.period_number = period and
> t.language_id=langCursor.Language_Id;
> if(sFound = 0)then
> insert into budget_period (subscriber_id, company_id,
> period_number, period_name,
> period_length_code, first_day, last_day,creation_date,
> creation_user, update_date, update_user, language_id)
> values(subID, compID, period,  curMonth,  'MONTH',
> firstDate, lastDate, LOCALTIMESTAMP, 'Admin',LOCALTIMESTAMP,
> 'Admin', langCursor.Language_Id);
> end if;
>
> 

Well, it says that the problem occurs on line 44, so what is on the lines
before that?  Are you sending an unterminated query earlier in the script?

Thom




Re: [GENERAL] Postgres and timezones

2016-01-20 Thread Albe Laurenz
Steve Rogerson wrote:
> Hi, this is wrong:
> 
> # select to_char('2016-01-20 00:00'::timestamp at time zone 'Europe/Lisbon',
> 'TZ');
>  to_char
> -
>  GMT
> (1 row)
> 
> 
> It should be WET, "Western European Time". Is there something I'm doing wrong?

That query will always give you your local timezone.

Here in Austria I get:

test=> select to_char('2016-01-20 00:00'::timestamp at time zone 
'Asia/Yerevan', 'TZ');
┌─┐
│ to_char │
├─┤
│ CET │
└─┘
(1 row)

Yours,
Laurenz Albe



Re: [GENERAL] Postgres and timezones

2016-01-20 Thread Steve Rogerson
On 20/01/16 12:53, Albe Laurenz wrote:
> Steve Rogerson wrote:
>> Hi, this is wrong:
>>
>> # select to_char('2016-01-20 00:00'::timestamp at time zone 'Europe/Lisbon',
>> 'TZ');
>>  to_char
>> -
>>  GMT
>> (1 row)
>>
>>
>> It should be WET, "Western European Time". Is there something I'm doing 
>> wrong?
> 
> That query will always give you your local timezone.
> 
> Here in Austria I get:
> test=> select to_char('2016-01-20 00:00'::timestamp at time zone 
> 'Asia/Yerevan', 'TZ');
> ┌─┐
> │ to_char │
> ├─┤
> │ CET │
> └─┘
> (1 row)
> 
> Yours,
> Laurenz Albe
> 
That seems odd, but never mind. I'll ask the direct question then: given the
above, is it possible to determine the short TZ, say WET in my example?

Thinking about it, probably not, as I suspect that pg only stores the offset in
seconds(?) from UTC. So once it has parsed "2016- 'Europe/Lisbon'" it has
lost track of the origin TZ, and in that case what else could "to_char(...,
'TZ')" mean other than the current client TZ?

Steve




Re: [GENERAL] Postgres and timezones

2016-01-20 Thread Pavel Stehule
2016-01-20 14:24 GMT+01:00 Steve Rogerson :

> On 20/01/16 12:53, Albe Laurenz wrote:
> > Steve Rogerson wrote:
> >> Hi, this is wrong:
> >>
> >> # select to_char('2016-01-20 00:00'::timestamp at time zone
> 'Europe/Lisbon',
> >> 'TZ');
> >>  to_char
> >> -
> >>  GMT
> >> (1 row)
> >>
> >>
> >> It should be WET, "Western European Time". Is there something I'm doing
> wrong?
> >
> > That query will always give you your local timezone.
> >
> > Here in Austria I get:
> > test=> select to_char('2016-01-20 00:00'::timestamp at time zone
> 'Asia/Yerevan', 'TZ');
> > ┌─┐
> > │ to_char │
> > ├─┤
> > │ CET │
> > └─┘
> > (1 row)
> >
> > Yours,
> > Laurenz Albe
> >
> That seems odd, but never mind. I'll ask the direct qn then given the
> above is
> it possible to determine the short TZ, say WET in my example.
>
> Thinking about it, probably not as I suspect that pg only stores the
> offset in
> seconds(?) from UTC, so once it has parsed "2016- 'Europe/Lisbon'" it
> has
> lost track of the origin TZ and in that case what else could "to_char( ...,
> 'TZ') mean then other than the current client TZ.
>

Postgres doesn't store the original TZ; it recalculates to the local TZ. If
you need the original TZ, you have to store it separately.
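
A sketch of that workaround, with a hypothetical table: keep the zone name in
its own column and re-apply it on output.

CREATE TABLE event (
    happened_at timestamptz NOT NULL,  -- normalized to UTC internally
    origin_tz   text NOT NULL          -- e.g. 'Europe/Lisbon'
);

-- Re-express the instant in its original zone on the way out.
SELECT happened_at AT TIME ZONE origin_tz AS local_time, origin_tz
  FROM event;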

Regards

Pavel


>
> Steve
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Error in Update and Set statement

2016-01-20 Thread Melvin Davidson
I believe the following will do what you want.

WITH poupd AS
(SELECT ponum.po_number,
        ponum.company_id,
        ponum.row_number,
        ponum.LINE_STATUS,
        ponum.LINE_TYPE,
        ponum.PROMISE_DATE,
        ponum.LEAD_TIME,
        ponum.ITEM_NUMBER,
        ponum.ORDER_UOM_CODE,
        ponum.ORDER_QTY
   FROM ppo_master_detail ponum,
        supplierdetail sd
  WHERE ponum.subscriber_id = 65
    AND ponum.row_number = sd.row_number
    AND ponum.po_number = sd.po_number
    AND ponum.company_id = sd.company_id
)
UPDATE ppo_master_detail pmd
   SET LINE_STATUS    = poupd.LINE_STATUS,
       LINE_TYPE      = poupd.LINE_TYPE,
       PROMISE_DATE   = poupd.PROMISE_DATE,
       LEAD_TIME      = poupd.LEAD_TIME,
       ITEM_NUMBER    = poupd.ITEM_NUMBER,
       ORDER_UOM_CODE = poupd.ORDER_UOM_CODE,
       ORDER_QTY      = poupd.ORDER_QTY
  FROM poupd
 WHERE pmd.po_number = poupd.po_number
   AND pmd.company_id = poupd.company_id
   AND pmd.subscriber_id = 65
   AND pmd.row_number = poupd.row_number;

NOTE: Please avoid uppercase and camelCase object names, as PostgreSQL will
fold them to lowercase unless you quote them.
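
The folding rule in one sketch (hypothetical names):

CREATE TABLE MyTable (Id int);        -- stored as mytable / id
CREATE TABLE "MyTable" ("Id" int);    -- stored exactly as written

SELECT id FROM mytable;               -- finds the first table
SELECT "Id" FROM "MyTable";           -- quotes required for the second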

On Wed, Jan 20, 2016 at 5:47 AM, Pavel Stehule 
wrote:

> Hi
>
> 2016-01-20 11:45 GMT+01:00 Sachin Srivastava :
>
>> Boss !!
>>
>> I am using postgres 9.4, so how to handle this.
>>
>>
> I don't know Ora2Pg - try to find some option to generate in old format -
> or manually rewrite to supported syntax
>
> UPDATE tab SET a = x.a, ... FROM x
>
> Regards
>
> Pavel
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Syntax error for Function

2016-01-20 Thread Sachin Srivastava
Dear Thom,

Please find the complete code as below and suggest now.

--

-- Function: gen_budget_for_next_year(bigint, bigint, bigint)
-- DROP FUNCTION gen_budget_for_next_year(bigint, bigint, bigint);
CREATE OR REPLACE FUNCTION gen_budget_for_next_year(
subid bigint,
compid bigint,
formonth bigint)
  RETURNS void AS
$BODY$
DECLARE
--Version:
--2015.01 PM T15422-10- Generate budget period for coming years. 05/01/2015
cur1 CURSOR FOR SELECT distinct(language_id) from "languaget...@repos.birchstreet.net";
sFound bigint := 0;
recCount bigint :=0;
period varchar(100);
firstDate varchar(100);
lastDate varchar(100);
curMonth varchar(100);
langCursor RECORD; --cur1%rowtype;

BEGIN
  loop
  open cur1;
  IF NOT FOUND THEN EXIT; END IF; -- apply on recCount >= forMonth;

select to_char(TO_DATE(LOCALTIMESTAMP) + recCount*'1
month'::interval,'MM') into period  ;
select to_date(period||'01','MMDD') into firstDate  ;
select TO_DATE(LOCALTIMESTAMP) + recCount*'1 month'::interval into
lastDate  ;
select to_char(TO_DATE(LOCALTIMESTAMP) + recCount*'1
month'::interval, ' MM MONTH') into curMonth ;
recCount :=recCount+1;
  loop
  fetch cur1 into langCursor;
  exit when cur1
select Count(0) into sFound  from budget_period t where
t.subscriber_id =subID
and t.period_number = period and
t.language_id=langCursor.Language_Id;
if(sFound = 0)then
insert into budget_period (subscriber_id, company_id,
period_number, period_name,
period_length_code, first_day, last_day,creation_date,
creation_user, update_date, update_user, language_id)
values(subID, compID, period,  curMonth,  'MONTH',
firstDate, lastDate, LOCALTIMESTAMP,
'Admin',LOCALTIMESTAMP, 'Admin', langCursor.Language_Id);
end if;
  end loop;
  close cur1;
  end loop;

commit;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE SECURITY DEFINER
  COST 100;
ALTER FUNCTION gen_budget_for_next_year(bigint, bigint, bigint)
  OWNER TO postgres;
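
For reference, the parser error reported earlier comes from the unfinished
"exit when cur1" line: the select Count(0) that follows it is where the parser
trips. A sketch of how the inner loop might look in plpgsql, assuming the
cursor logic is otherwise as intended (the commit also has to go, since a
plpgsql function in 9.4 runs inside the caller's transaction):

open cur1;
loop
    fetch cur1 into langCursor;
    exit when not found;          -- was: "exit when cur1" (incomplete)

    select count(0) into sFound
      from budget_period t
     where t.subscriber_id = subID
       and t.period_number = period
       and t.language_id = langCursor.Language_Id;
    -- ... insert into budget_period as before when sFound = 0 ...
end loop;
close cur1;
-- note: no COMMIT here; plpgsql functions cannot commit.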



On Wed, Jan 20, 2016 at 6:05 PM, Thom Brown  wrote:

> On 20 January 2016 at 12:15, Sachin Srivastava 
> wrote:
> > I am unable to find out the syntax error in below code, please suggest?
> >
> >
> >
> > ERROR:  syntax error at or near "select"
> > LINE 44: select Count(0) into sFound  from budget_period ...
> >  ^
> > ** Error **
> > ERROR: syntax error at or near "select"
> > SQL state: 42601
> > Character: 1190
> >
> > Code as below:
> > -
> >
> > select Count(0) into sFound  from budget_period t where t.subscriber_id
> > =subID
> > and t.period_number = period and
> > t.language_id=langCursor.Language_Id;
> > if(sFound = 0)then
> > insert into budget_period (subscriber_id, company_id,
> > period_number, period_name,
> > period_length_code, first_day, last_day,creation_date,
> > creation_user, update_date, update_user, language_id)
> > values(subID, compID, period,  curMonth,  'MONTH',
> > firstDate, lastDate, LOCALTIMESTAMP,
> 'Admin',LOCALTIMESTAMP,
> > 'Admin', langCursor.Language_Id);
> > end if;
> >
> > 
>
> Well, it says that the problem occurs on line 44, so what's on the
> previous lines it's receiving?  Are you sending an unterminated query
> prior to that?
>
> Thom
>


Re: [GENERAL] Postgres and timezones

2016-01-20 Thread Eelke Klein
2016-01-20 12:10 GMT+01:00 Steve Rogerson :

> Hi, this is wrong:
>
> # select to_char('2016-01-20 00:00'::timestamp at time zone
> 'Europe/Lisbon',
> 'TZ');
>  to_char
> -
>  GMT
> (1 row)
>
>
> It should be WET, "Western European Time". Is there something I'm doing
> wrong?
>
>
Actually, your input is now interpreted as being in timezone Europe/Lisbon, but
to_char still converts it to a string for the timezone currently set on your
connection. When I do the following (my connection is set to
Europe/Amsterdam)

select to_char('2016-01-20 00:00'::timestamp at time zone 'Europe/Lisbon',
'HH:MI TZ');

I get 01:00 CET because it has converted it from Europe/Lisbon to CET.

You can adjust the timezone which is used for display by to_char with

set timezone='Europe/Lisbon';


Regards,
Eelke