Re: [GENERAL] Help related to Postgresql for RHEL 6.5

2014-08-29 Thread David G Johnston
Yogesh. Sharma wrote
> Dear David,
> 
>> Are you currently using PostgreSQL?
> Currently we are using PostgreSQL 8.1.18 on RHEL 5.8.
> Now we plan to upgrade to PostgreSQL 9.0 with RHEL 6.5, as version 9.0
> appears to have the fewest compatibility issues.
> 
> So, please guide me.
> 
> Regards,

Guidance is why we write documentation. If you have specific questions or
concerns after reading the documentation, you can ask here.

David J.







Re: [GENERAL] Help related to Postgresql for RHEL 6.5

2014-08-28 Thread Alan Hodgson
On Friday, August 29, 2014 04:14:35 AM Yogesh. Sharma wrote:
> Dear David,
> 
> > Are you currently using PostgreSQL?
> 
> Currently we are using PostgreSQL 8.1.18 on RHEL 5.8.
> Now we plan to upgrade to PostgreSQL 9.0 with RHEL 6.5, as version 9.0
> appears to have the fewest compatibility issues.
> 

Any of the currently maintained PostgreSQL versions will run fine on RHEL 6.5 - 
that would be the latest release of any version from 9.0 up. 

Only you can test and find out if your application(s) will need changes to work 
with those versions.





Re: [GENERAL] Help related to Postgresql for RHEL 6.5

2014-08-28 Thread Yogesh. Sharma
Dear David,

> Are you currently using PostgreSQL?
Currently we are using PostgreSQL 8.1.18 on RHEL 5.8.
Now we plan to upgrade to PostgreSQL 9.0 with RHEL 6.5, as version 9.0
appears to have the fewest compatibility issues.

So, please guide me.

Regards,
Yogesh

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of David G Johnston
Sent: Friday, August 29, 2014 9:22 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Help related to Postgresql for RHEL 6.5

Yogesh. Sharma wrote
> 9.3 has more compatibility issues.
> So, please guide us on which version is suitable.

Compatibility as in the multixact issues or does your software not work with
9.3 changes?

What is unclear about "any supported version" and the provided link?

There is no way for us to evaluate suitability for your specific needs unless
you provide a lot more info.  If 9.3 scares you off, then use 9.2.

Are you currently using PostgreSQL?

If you are referring to distro-supported versions (which Debian uses in my
case) you should make that requirement specific.  The PostgreSQL community 
feels all their officially supported releases are stable.

David J.









Re: [GENERAL] Help related to Postgresql for RHEL 6.5

2014-08-28 Thread David G Johnston
Yogesh. Sharma wrote
> 9.3 has more compatibility issues.
> So, please guide us on which version is suitable.

Compatibility as in the multixact issues or does your software not work with
9.3 changes?

What is unclear about "any supported version" and the provided link?

There is no way for us to evaluate suitability for your specific needs unless
you provide a lot more info.  If 9.3 scares you off, then use 9.2.

Are you currently using PostgreSQL?

If you are referring to distro-supported versions (which Debian uses in my
case) you should make that requirement specific.  The PostgreSQL community
feels all their officially supported releases are stable.

David J.






Re: [GENERAL] Help related to Postgresql for RHEL 6.5

2014-08-28 Thread Yogesh. Sharma
Dear All,

9.3 has more compatibility issues.
So, please guide us on which version is suitable.

Regards,
Yogesh  

-Original Message-
From: Devrim Gündüz [mailto:dev...@gunduz.org] 
Sent: Thursday, August 28, 2014 7:34 PM
To: Yogesh. Sharma
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Help related to Postgresql for RHEL 6.5


Hi,

On Thu, 2014-08-28 at 03:22 +, Yogesh. Sharma wrote:

> I want to upgrade RHEL 5.4 to RHEL 6.5.
> So, could you please let me know, which postgresql version is stable 
> for RHEL 6.5?

Any supported PostgreSQL version is available in the yum repository: 

http://yum.postgresql.org 

You can use 9.3, for example.

Regards,

--
Devrim GÜNDÜZ
Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com 
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR






Re: [GENERAL] Help related to Postgresql for RHEL 6.5

2014-08-28 Thread Devrim Gündüz

Hi,

On Thu, 2014-08-28 at 03:22 +, Yogesh. Sharma wrote:

> I want to upgrade RHEL 5.4 to RHEL 6.5.
> So, could you please let me know, which postgresql version is stable
> for RHEL 6.5?

Any supported PostgreSQL version is available in the yum repository: 

http://yum.postgresql.org 

You can use 9.3, for example.

Regards,

-- 
Devrim GÜNDÜZ
Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR





Re: [GENERAL] Help needed with postgres stats and math

2014-08-05 Thread Tim Smith
Hi Serge,

A million apologies for the delayed acknowledgement of your email.   The Yahoo 
webmail is doing weird things with conversations (your email was hiding in my 
sent box instead of inbox, tagged onto the end of my original email !).

But I digress.  I will take a look at your suggestions and get back to you over 
the next day or so.

Thank you

Tim


On Monday, 4 August 2014, 18:46, Serge Fonville  
wrote:
 


Or...

Do you mean to use windowing functions?
http://www.postgresql.org/docs/9.3/static/tutorial-window.html

Or both of course...



Kind regards/met vriendelijke groet,

Serge Fonville
http://www.sergefonville.nl


2014-08-04 19:43 GMT+02:00 Serge Fonville :

Hi,
>
>Perhaps a CTE would help?
>
>WITH NormCTE AS (
>    SELECT
>        (delta - avg(delta))/stddev(delta) AS deltaNorm
>      , (echo - avg(echo))/stddev(echo) AS echoNorm
>      , (foxtrot - avg(foxtrot))/stddev(foxtrot) AS foxtrotNorm
>    FROM t_subs
>)
>SELECT
>    deltaNorm + echoNorm + foxtrotNorm AS normSum
>FROM NormCTE
>ORDER BY normSum DESC
>
>
>HTH
>
>
>
>Kind regards/met vriendelijke groet,
>
>
>Serge Fonville
>
>http://www.sergefonville.nl
>
>
>2014-08-03 13:20 GMT+02:00 Tim Smith :
>
>
>Hi,
>>
>>I'm on Postgres 9.3.5, however I think my knowledge of Postgres is not deep 
>>enough to help me with this challenge, so here I am reaching out to the 
>>community !
>>
>>Let's say I have a table as follows :
>>
>>create table t_subs (alpha text,bravo text,charlie numeric,delta numeric,echo 
>>numeric,foxtrot numeric);
>>
>>And let's say I have a view that does some basic filtering on that table
>>
>>create view v_subs as select alpha,delta,echo,foxtrot from t_subs where 
>>charlie>=5 and bravo not in ('this','that');
>>
>>What I need to do is order the output of the view based on normalised output 
>>of delta,echo and foxtrot.
>>
>>So, what I need to do is :
>>
>>1/ Calculate normalised values for each column and row
>>
>>deltaNorm = (delta - avg(delta))/stddev(delta)
>>echoNorm = (echo - avg(echo))/stddev(echo)
>>
>>foxtrotNorm = (foxtrot - avg(foxtrot))/stddev(foxtrot)
>>normSum = deltaNorm + echoNorm + foxtrotNorm
>>
>>2/ order desc on normSum
>>
>>The problem is I cannot seem to find a way to do this in one query.
>>
>>Thanks in advance for your help !
>>
>>Tim
>>
>>
>>
>

Re: [GENERAL] Help needed with postgres stats and math

2014-08-04 Thread Serge Fonville
Or...

Do you mean to use windowing functions?
http://www.postgresql.org/docs/9.3/static/tutorial-window.html

Or both of course...

Kind regards/met vriendelijke groet,

Serge Fonville

http://www.sergefonville.nl


2014-08-04 19:43 GMT+02:00 Serge Fonville :

> Hi,
>
> Perhaps a CTE would help?
>
> WITH NormCTE AS (
> SELECT
> (delta - avg(delta))/stddev(delta) AS deltaNorm
>   , (echo - avg(echo))/stddev(echo) AS echoNorm
>   , (foxtrot - avg(foxtrot))/stddev(foxtrot) AS foxtrotNorm
> FROM t_subs
> )
> SELECT
> deltaNorm + echoNorm + foxtrotNorm AS normSum
> FROM NormCTE
> ORDER BY normSum DESC
>
> HTH
>
> Kind regards/met vriendelijke groet,
>
> Serge Fonville
>
> http://www.sergefonville.nl
>
>
> 2014-08-03 13:20 GMT+02:00 Tim Smith :
>
>  Hi,
>>
>> I'm on Postgres 9.3.5, however I think my knowledge of Postgres is not
>> deep enough to help me with this challenge, so here I am reaching out to
>> the community !
>>
>> Let's say I have a table as follows :
>>
>> create table t_subs (alpha text,bravo text,charlie numeric,delta
>> numeric,echo numeric,foxtrot numeric);
>>
>> And let's say I have a view that does some basic filtering on that table
>>
>> create view v_subs as select alpha,delta,echo,foxtrot from t_subs where
>> charlie>=5 and bravo not in ('this','that');
>>
>> What I need to do is order the output of the view based on normalised
>> output of delta,echo and foxtrot.
>>
>> So, what I need to do is :
>>
>> 1/ Calculate normalised values for each column and row
>>
>> deltaNorm = (delta - avg(delta))/stddev(delta)
>> echoNorm = (echo - avg(echo))/stddev(echo)
>>
>> foxtrotNorm = (foxtrot - avg(foxtrot))/stddev(foxtrot)
>> normSum = deltaNorm + echoNorm + foxtrotNorm
>>
>> 2/ order desc on normSum
>>
>> The problem is I cannot seem to find a way to do this in one query.
>>
>> Thanks in advance for your help !
>>
>> Tim
>>
>>
>>
>
>


Re: [GENERAL] Help needed with postgres stats and math

2014-08-04 Thread Serge Fonville
Hi,

Perhaps a CTE would help?

WITH NormCTE AS (
    SELECT
        (delta - avg(delta))/stddev(delta) AS deltaNorm
      , (echo - avg(echo))/stddev(echo) AS echoNorm
      , (foxtrot - avg(foxtrot))/stddev(foxtrot) AS foxtrotNorm
    FROM t_subs
)
SELECT
    deltaNorm + echoNorm + foxtrotNorm AS normSum
FROM NormCTE
ORDER BY normSum DESC

HTH

Kind regards/met vriendelijke groet,

Serge Fonville

http://www.sergefonville.nl


2014-08-03 13:20 GMT+02:00 Tim Smith :

> Hi,
>
> I'm on Postgres 9.3.5, however I think my knowledge of Postgres is not
> deep enough to help me with this challenge, so here I am reaching out to
> the community !
>
> Let's say I have a table as follows :
>
> create table t_subs (alpha text,bravo text,charlie numeric,delta
> numeric,echo numeric,foxtrot numeric);
>
> And let's say I have a view that does some basic filtering on that table
>
> create view v_subs as select alpha,delta,echo,foxtrot from t_subs where
> charlie>=5 and bravo not in ('this','that');
>
> What I need to do is order the output of the view based on normalised
> output of delta,echo and foxtrot.
>
> So, what I need to do is :
>
> 1/ Calculate normalised values for each column and row
>
> deltaNorm = (delta - avg(delta))/stddev(delta)
> echoNorm = (echo - avg(echo))/stddev(echo)
>
> foxtrotNorm = (foxtrot - avg(foxtrot))/stddev(foxtrot)
> normSum = deltaNorm + echoNorm + foxtrotNorm
>
> 2/ order desc on normSum
>
> The problem is I cannot seem to find a way to do this in one query.
>
> Thanks in advance for your help !
>
> Tim
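
As a follow-up to the two suggestions above: the CTE alone would fail, because
avg() and stddev() are plain aggregates and every non-aggregated column would
then need a GROUP BY. Using them as window aggregates over the whole filtered
set gives one query that orders by the normalised sum. A rough sketch only
(division by zero when a stddev is 0 is not handled):

WITH norm AS (
    SELECT alpha,
           (delta   - avg(delta)   OVER ()) / stddev(delta)   OVER () AS delta_norm,
           (echo    - avg(echo)    OVER ()) / stddev(echo)    OVER () AS echo_norm,
           (foxtrot - avg(foxtrot) OVER ()) / stddev(foxtrot) OVER () AS foxtrot_norm
    FROM v_subs  -- the filtering view from the original question
)
SELECT alpha, delta_norm + echo_norm + foxtrot_norm AS norm_sum
FROM norm
ORDER BY norm_sum DESC;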


Re: [GENERAL] help with a procedure

2014-06-05 Thread Adrian Klaver

On 06/05/2014 09:45 AM, Carlos Carcamo wrote:

Just a little, I will read it again, thanks for your help.


In particular:

http://www.postgresql.org/docs/9.3/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] help with a procedure

2014-06-05 Thread David G Johnston
Carlos Carcamo wrote
> SELECT myProcedure(product_id, p_description, price, qty, store_id,
> store_description );
> 
> waiting for a response from procedure, maybe true or false.

Note that forcing the procedure to return false instead of simply throwing
an error is going to degrade performance.  If you can live with
silence=success; error=failure it will be much easier to program and will
achieve maximum performance.

David J.
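
Following that advice, a minimal sketch of such a function (the column lists
are trimmed to the ones shown in the thread; the real tables have more
columns). It deliberately lets any error abort the call, so either both
inserts commit or neither does:

CREATE OR REPLACE FUNCTION myProcedure(
    p_product_id        text,
    p_description       text,
    p_price             numeric,
    p_qty               integer,
    p_store_id          text,
    p_store_description text
) RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
    -- Both statements run in the caller's transaction: if the second INSERT
    -- fails, the whole transaction is aborted and the first one is rolled
    -- back as well.  No explicit error handling, per the note above.
    INSERT INTO products (product_id, description, price, qty)
    VALUES (p_product_id, p_description, p_price, p_qty);

    INSERT INTO store (store_id, description, product_id, price, qty)
    VALUES (p_store_id, p_store_description, p_product_id, p_price, p_qty);
END;
$$;

-- Example call:
-- SELECT myProcedure('01', 'some description', 10.15, 5, '02', 'some description');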





Re: [GENERAL] help with a procedure

2014-06-05 Thread Carlos Carcamo
Just a little, I will read it again, thanks for your help.


2014-06-05 10:39 GMT-06:00 David G Johnston :

>
> Have you read this chapter of the documentation?
>
> http://www.postgresql.org/docs/9.3/interactive/plpgsql.html
>
>
> Carlos Carcamo wrote
> > What I need is some help with the procedure in postgres, I have searched
> > in
> > google, but I dont know how to do it, the goal is perform the first
> insert
> > and then the second insert if the first one succeed in one procedure
> > instead of using two separate inserts.
> >
> > Do you have some code like that?
> >
> > 2014-06-05 10:14 GMT-06:00 Seref Arikan <
>
> > serefarikan@
>
> > >:
> >
> >> Sorry, I meant: "calling a stored procedure you'll write in postgres
> from
> >> php"
> >>
> >>
> >> On Thu, Jun 5, 2014 at 5:13 PM, Seref Arikan <
>
> > serefarikan@
>
> > >
> >> wrote:
> >>
> >>> Hi Carlos,
> >>> When you say procedures, do you mean calling a stored procedure you'll
> >>> write from php? Or executing the individual INSERT from php
> >>> sequentially?
> >>> For the first scenario, you'd need to write a postgresql stored
> >>> procedure
> >>> (I suggest you google: PL/pgSQL tutorial) and call the from php. For
> the
> >>> second, well, it is exactly what I said before: you'll need to open a
> >>> connection to postgres, execute your statements under a transaction and
> >>> commit. I'd suggest you either search for php and postgres or ask this
> >>> to a
> >>> php mail group. Your goal here appears to be understanding how to call
> >>> postgres from php.
> >>>
> >>> Regards
> >>> Seref
> >>>
> >>>
> >>>
> >>>
> >>> On Thu, Jun 5, 2014 at 5:03 PM, Carlos Carcamo <
>
> > eazyduiz@
>
> > >
> >>> wrote:
> >>>
>  Thanks for answering...
>  I'm using php with postgresql 9.1, I have never used procedures with
>  php, I'm new with postgresql
> 
> 
>  2014-06-05 9:45 GMT-06:00 Seref Arikan <
>
> > serefarikan@
>
> > >:
> 
>  Hi Carlos,
> > Unless I'm missing something here, your queries are probably being
> > called from a programming language (java/c# etc) and your database
> > access
> > api should support transactions. If you perform both operations under
> > the
> > same db transaction and commit your transaction things should be
> fine.
> > If
> > there is a problem with the first INSERT, your api should throw an
> > exception and you won't be able to commit the transaction (you
> may/may
> > not
> > need to call rollback in your catch block), so it'll either be both
> > calls
> > executed or none.
> >
> > You may want to read about how db transactions are handled in your
> > programming environment.
> >
> > Regards
> > Seref
> >
> >
> >
> > On Thu, Jun 5, 2014 at 4:36 PM, Carlos Carcamo <
>
> > eazyduiz@
>
> > >
> > wrote:
> >
> >>
> >> 2014-06-05 9:32 GMT-06:00 Carlos Carcamo <
>
> > eazyduiz@
>
> > >:
> >>
> >> Hi everyone, I wonder if you could help me with a procedure that I
> >>> would like to perform in postgresql.
> >>>
> >>> I have an insert query like this:
> >>>
> >>> INSERT INTO products (product_id, description, price, qty, ...)
> >>> values ('01', 'some description', 10.15, 5, ...)
> >>>
> >>> then if there is no problem, perform another query like:
> >>>
> >>> INSERT INTO store(store_id, description, price, qty, ...) values
> >>> ('02', 'some description', 10.15, 5, ...)
> >>>
> >>> So the second query depends of the first query, if the first one
> >>> succeed the second will perform the second insert
> >>>
> >>> I would like to do something like:
> >>>
> >>> SELECT myProcedure(product_id, p_description, price, qty, store_id,
> >>> store_description );
> >>>
> >>> waiting for a response from procedure, maybe true or false.
> >>>
> >>> Thanks in advance...
> >>>
> >>>
> >>>
> >>> --
> >>> "El desarrollo no es material es un estado de conciencia metal"
> >>>
> >>
> >> Sorry,  the second query looks like:
> >> INSERT INTO store(store_id, description, product_id, price, qty,
> ...)
> >> values ('02', 'some description', '01', 10.15, 5, ...)
> >>
> >> --
> >> "El desarrollo no es material es un estado de conciencia metal"
> >>
> >
> >
> 
> 
>  --
>  "El desarrollo no es material es un estado de conciencia metal"
> 
> >>>
> >>>
> >>
> >
> >
> > --
> > "El desarrollo no es material es un estado de conciencia metal"
>
>
>
>
>
>



-- 
"El desarrollo no es material es un estado de conciencia metal"

Re: [GENERAL] help with a procedure

2014-06-05 Thread David G Johnston

Have you read this chapter of the documentation?

http://www.postgresql.org/docs/9.3/interactive/plpgsql.html


Carlos Carcamo wrote
> What I need is some help with the procedure in postgres, I have searched
> in
> google, but I dont know how to do it, the goal is perform the first insert
> and then the second insert if the first one succeed in one procedure
> instead of using two separate inserts.
> 
> Do you have some code like that?
> 
> 2014-06-05 10:14 GMT-06:00 Seref Arikan <

> serefarikan@

> >:
> 
>> Sorry, I meant: "calling a stored procedure you'll write in postgres from
>> php"
>>
>>
>> On Thu, Jun 5, 2014 at 5:13 PM, Seref Arikan <

> serefarikan@

> >
>> wrote:
>>
>>> Hi Carlos,
>>> When you say procedures, do you mean calling a stored procedure you'll
>>> write from php? Or executing the individual INSERT from php
>>> sequentially?
>>> For the first scenario, you'd need to write a postgresql stored
>>> procedure
>>> (I suggest you google: PL/pgSQL tutorial) and call the from php. For the
>>> second, well, it is exactly what I said before: you'll need to open a
>>> connection to postgres, execute your statements under a transaction and
>>> commit. I'd suggest you either search for php and postgres or ask this
>>> to a
>>> php mail group. Your goal here appears to be understanding how to call
>>> postgres from php.
>>>
>>> Regards
>>> Seref
>>>
>>>
>>>
>>>
>>> On Thu, Jun 5, 2014 at 5:03 PM, Carlos Carcamo <

> eazyduiz@

> >
>>> wrote:
>>>
 Thanks for answering...
 I'm using php with postgresql 9.1, I have never used procedures with
 php, I'm new with postgresql


 2014-06-05 9:45 GMT-06:00 Seref Arikan <

> serefarikan@

> >:

 Hi Carlos,
> Unless I'm missing something here, your queries are probably being
> called from a programming language (java/c# etc) and your database
> access
> api should support transactions. If you perform both operations under
> the
> same db transaction and commit your transaction things should be fine.
> If
> there is a problem with the first INSERT, your api should throw an
> exception and you won't be able to commit the transaction (you may/may
> not
> need to call rollback in your catch block), so it'll either be both
> calls
> executed or none.
>
> You may want to read about how db transactions are handled in your
> programming environment.
>
> Regards
> Seref
>
>
>
> On Thu, Jun 5, 2014 at 4:36 PM, Carlos Carcamo <

> eazyduiz@

> >
> wrote:
>
>>
>> 2014-06-05 9:32 GMT-06:00 Carlos Carcamo <

> eazyduiz@

> >:
>>
>> Hi everyone, I wonder if you could help me with a procedure that I
>>> would like to perform in postgresql.
>>>
>>> I have an insert query like this:
>>>
>>> INSERT INTO products (product_id, description, price, qty, ...)
>>> values ('01', 'some description', 10.15, 5, ...)
>>>
>>> then if there is no problem, perform another query like:
>>>
>>> INSERT INTO store(store_id, description, price, qty, ...) values
>>> ('02', 'some description', 10.15, 5, ...)
>>>
>>> So the second query depends of the first query, if the first one
>>> succeed the second will perform the second insert
>>>
>>> I would like to do something like:
>>>
>>> SELECT myProcedure(product_id, p_description, price, qty, store_id,
>>> store_description );
>>>
>>> waiting for a response from procedure, maybe true or false.
>>>
>>> Thanks in advance...
>>>
>>>
>>>
>>> --
>>> "El desarrollo no es material es un estado de conciencia metal"
>>>
>>
>> Sorry,  the second query looks like:
>> INSERT INTO store(store_id, description, product_id, price, qty, ...)
>> values ('02', 'some description', '01', 10.15, 5, ...)
>>
>> --
>> "El desarrollo no es material es un estado de conciencia metal"
>>
>
>


 --
 "El desarrollo no es material es un estado de conciencia metal"

>>>
>>>
>>
> 
> 
> -- 
> "El desarrollo no es material es un estado de conciencia metal"







Re: [GENERAL] help with a procedure

2014-06-05 Thread Carlos Carcamo
What I need is some help with the procedure in Postgres. I have searched on
Google, but I don't know how to do it. The goal is to perform the first insert
and then the second insert, if the first one succeeds, in one procedure
instead of using two separate inserts.

Do you have some code like that?

2014-06-05 10:14 GMT-06:00 Seref Arikan :

> Sorry, I meant: "calling a stored procedure you'll write in postgres from
> php"
>
>
> On Thu, Jun 5, 2014 at 5:13 PM, Seref Arikan 
> wrote:
>
>> Hi Carlos,
>> When you say procedures, do you mean calling a stored procedure you'll
>> write from php? Or executing the individual INSERT from php sequentially?
>> For the first scenario, you'd need to write a postgresql stored procedure
>> (I suggest you google: PL/pgSQL tutorial) and call the from php. For the
>> second, well, it is exactly what I said before: you'll need to open a
>> connection to postgres, execute your statements under a transaction and
>> commit. I'd suggest you either search for php and postgres or ask this to a
>> php mail group. Your goal here appears to be understanding how to call
>> postgres from php.
>>
>> Regards
>> Seref
>>
>>
>>
>>
>> On Thu, Jun 5, 2014 at 5:03 PM, Carlos Carcamo 
>> wrote:
>>
>>> Thanks for answering...
>>> I'm using php with postgresql 9.1, I have never used procedures with
>>> php, I'm new with postgresql
>>>
>>>
>>> 2014-06-05 9:45 GMT-06:00 Seref Arikan :
>>>
>>> Hi Carlos,
 Unless I'm missing something here, your queries are probably being
 called from a programming language (java/c# etc) and your database access
 api should support transactions. If you perform both operations under the
 same db transaction and commit your transaction things should be fine. If
 there is a problem with the first INSERT, your api should throw an
 exception and you won't be able to commit the transaction (you may/may not
 need to call rollback in your catch block), so it'll either be both calls
 executed or none.

 You may want to read about how db transactions are handled in your
 programming environment.

 Regards
 Seref



 On Thu, Jun 5, 2014 at 4:36 PM, Carlos Carcamo 
 wrote:

>
> 2014-06-05 9:32 GMT-06:00 Carlos Carcamo :
>
> Hi everyone, I wonder if you could help me with a procedure that I
>> would like to perform in postgresql.
>>
>> I have an insert query like this:
>>
>> INSERT INTO products (product_id, description, price, qty, ...)
>> values ('01', 'some description', 10.15, 5, ...)
>>
>> then if there is no problem, perform another query like:
>>
>> INSERT INTO store(store_id, description, price, qty, ...) values
>> ('02', 'some description', 10.15, 5, ...)
>>
>> So the second query depends of the first query, if the first one
>> succeed the second will perform the second insert
>>
>> I would like to do something like:
>>
>> SELECT myProcedure(product_id, p_description, price, qty, store_id,
>> store_description );
>>
>> waiting for a response from procedure, maybe true or false.
>>
>> Thanks in advance...
>>
>>
>>
>> --
>> "El desarrollo no es material es un estado de conciencia metal"
>>
>
> Sorry,  the second query looks like:
> INSERT INTO store(store_id, description, product_id, price, qty, ...)
> values ('02', 'some description', '01', 10.15, 5, ...)
>
> --
> "El desarrollo no es material es un estado de conciencia metal"
>


>>>
>>>
>>> --
>>> "El desarrollo no es material es un estado de conciencia metal"
>>>
>>
>>
>


-- 
"El desarrollo no es material es un estado de conciencia metal"


Re: [GENERAL] help with a procedure

2014-06-05 Thread Seref Arikan
Sorry, I meant: "calling a stored procedure you'll write in postgres from
php"


On Thu, Jun 5, 2014 at 5:13 PM, Seref Arikan  wrote:

> Hi Carlos,
> When you say procedures, do you mean calling a stored procedure you'll
> write from php? Or executing the individual INSERT from php sequentially?
> For the first scenario, you'd need to write a postgresql stored procedure
> (I suggest you google: PL/pgSQL tutorial) and call the from php. For the
> second, well, it is exactly what I said before: you'll need to open a
> connection to postgres, execute your statements under a transaction and
> commit. I'd suggest you either search for php and postgres or ask this to a
> php mail group. Your goal here appears to be understanding how to call
> postgres from php.
>
> Regards
> Seref
>
>
>
>
> On Thu, Jun 5, 2014 at 5:03 PM, Carlos Carcamo  wrote:
>
>> Thanks for answering...
>> I'm using php with postgresql 9.1, I have never used procedures with php,
>> I'm new with postgresql
>>
>>
>> 2014-06-05 9:45 GMT-06:00 Seref Arikan :
>>
>> Hi Carlos,
>>> Unless I'm missing something here, your queries are probably being
>>> called from a programming language (java/c# etc) and your database access
>>> api should support transactions. If you perform both operations under the
>>> same db transaction and commit your transaction things should be fine. If
>>> there is a problem with the first INSERT, your api should throw an
>>> exception and you won't be able to commit the transaction (you may/may not
>>> need to call rollback in your catch block), so it'll either be both calls
>>> executed or none.
>>>
>>> You may want to read about how db transactions are handled in your
>>> programming environment.
>>>
>>> Regards
>>> Seref
>>>
>>>
>>>
>>> On Thu, Jun 5, 2014 at 4:36 PM, Carlos Carcamo 
>>> wrote:
>>>

 2014-06-05 9:32 GMT-06:00 Carlos Carcamo :

 Hi everyone, I wonder if you could help me with a procedure that I
> would like to perform in postgresql.
>
> I have an insert query like this:
>
> INSERT INTO products (product_id, description, price, qty, ...) values
> ('01', 'some description', 10.15, 5, ...)
>
> then if there is no problem, perform another query like:
>
> INSERT INTO store(store_id, description, price, qty, ...) values
> ('02', 'some description', 10.15, 5, ...)
>
> So the second query depends of the first query, if the first one
> succeed the second will perform the second insert
>
> I would like to do something like:
>
> SELECT myProcedure(product_id, p_description, price, qty, store_id,
> store_description );
>
> waiting for a response from procedure, maybe true or false.
>
> Thanks in advance...
>
>
>
> --
> "El desarrollo no es material es un estado de conciencia metal"
>

 Sorry,  the second query looks like:
 INSERT INTO store(store_id, description, product_id, price, qty, ...)
 values ('02', 'some description', '01', 10.15, 5, ...)

 --
 "El desarrollo no es material es un estado de conciencia metal"

>>>
>>>
>>
>>
>> --
>> "El desarrollo no es material es un estado de conciencia metal"
>>
>
>


Re: [GENERAL] help with a procedure

2014-06-05 Thread Seref Arikan
Hi Carlos,
When you say procedures, do you mean calling a stored procedure you'll
write from php? Or executing the individual INSERT from php sequentially?
For the first scenario, you'd need to write a postgresql stored procedure
(I suggest you google: PL/pgSQL tutorial) and call that from php. For the
second, well, it is exactly what I said before: you'll need to open a
connection to postgres, execute your statements under a transaction and
commit. I'd suggest you either search for php and postgres or ask this to a
php mail group. Your goal here appears to be understanding how to call
postgres from php.

Regards
Seref




On Thu, Jun 5, 2014 at 5:03 PM, Carlos Carcamo  wrote:

> Thanks for answering...
> I'm using php with postgresql 9.1, I have never used procedures with php,
> I'm new with postgresql
>
>
> 2014-06-05 9:45 GMT-06:00 Seref Arikan :
>
> Hi Carlos,
>> Unless I'm missing something here, your queries are probably being called
>> from a programming language (java/c# etc) and your database access api
>> should support transactions. If you perform both operations under the same
>> db transaction and commit your transaction things should be fine. If there
>> is a problem with the first INSERT, your api should throw an exception and
>> you won't be able to commit the transaction (you may/may not need to call
>> rollback in your catch block), so it'll either be both calls executed or
>> none.
>>
>> You may want to read about how db transactions are handled in your
>> programming environment.
>>
>> Regards
>> Seref
>>
>>
>>
>> On Thu, Jun 5, 2014 at 4:36 PM, Carlos Carcamo 
>> wrote:
>>
>>>
>>> 2014-06-05 9:32 GMT-06:00 Carlos Carcamo :
>>>
>>> Hi everyone, I wonder if you could help me with a procedure that I would
 like to perform in postgresql.

 I have an insert query like this:

 INSERT INTO products (product_id, description, price, qty, ...) values
 ('01', 'some description', 10.15, 5, ...)

 then if there is no problem, perform another query like:

 INSERT INTO store(store_id, description, price, qty, ...) values ('02',
 'some description', 10.15, 5, ...)

 So the second query depends of the first query, if the first one
 succeed the second will perform the second insert

 I would like to do something like:

 SELECT myProcedure(product_id, p_description, price, qty, store_id,
 store_description );

 waiting for a response from procedure, maybe true or false.

 Thanks in advance...



 --
 "El desarrollo no es material es un estado de conciencia metal"

>>>
>>> Sorry,  the second query looks like:
>>> INSERT INTO store(store_id, description, product_id, price, qty, ...)
>>> values ('02', 'some description', '01', 10.15, 5, ...)
>>>
>>> --
>>> "El desarrollo no es material es un estado de conciencia metal"
>>>
>>
>>
>
>
> --
> "El desarrollo no es material es un estado de conciencia metal"
>


Re: [GENERAL] help with a procedure

2014-06-05 Thread Seref Arikan
Hi Carlos,
Unless I'm missing something here, your queries are probably being called
from a programming language (java/c# etc) and your database access api
should support transactions. If you perform both operations under the same
db transaction and commit your transaction things should be fine. If there
is a problem with the first INSERT, your api should throw an exception and
you won't be able to commit the transaction (you may/may not need to call
rollback in your catch block), so it'll either be both calls executed or
none.

You may want to read about how db transactions are handled in your
programming environment.

Regards
Seref



On Thu, Jun 5, 2014 at 4:36 PM, Carlos Carcamo  wrote:

>
> 2014-06-05 9:32 GMT-06:00 Carlos Carcamo :
>
> Hi everyone, I wonder if you could help me with a procedure that I would
>> like to perform in postgresql.
>>
>> I have an insert query like this:
>>
>> INSERT INTO products (product_id, description, price, qty, ...) values
>> ('01', 'some description', 10.15, 5, ...)
>>
>> then if there is no problem, perform another query like:
>>
>> INSERT INTO store(store_id, description, price, qty, ...) values ('02',
>> 'some description', 10.15, 5, ...)
>>
>> So the second query depends of the first query, if the first one succeed
>> the second will perform the second insert
>>
>> I would like to do something like:
>>
>> SELECT myProcedure(product_id, p_description, price, qty, store_id,
>> store_description );
>>
>> waiting for a response from procedure, maybe true or false.
>>
>> Thanks in advance...
>>
>>
>>
>> --
>> "El desarrollo no es material es un estado de conciencia metal"
>>
>
> Sorry,  the second query looks like:
> INSERT INTO store(store_id, description, product_id, price, qty, ...)
> values ('02', 'some description', '01', 10.15, 5, ...)
>
> --
> "El desarrollo no es material es un estado de conciencia metal"
>


Re: [GENERAL] help with a procedure

2014-06-05 Thread Carlos Carcamo
2014-06-05 9:32 GMT-06:00 Carlos Carcamo :

> Hi everyone, I wonder if you could help me with a procedure that I would
> like to perform in postgresql.
>
> I have an insert query like this:
>
> INSERT INTO products (product_id, description, price, qty, ...) values
> ('01', 'some description', 10.15, 5, ...)
>
> then if there is no problem, perform another query like:
>
> INSERT INTO store(store_id, description, price, qty, ...) values ('02',
> 'some description', 10.15, 5, ...)
>
> So the second query depends of the first query, if the first one succeed
> the second will perform the second insert
>
> I would like to do something like:
>
> SELECT myProcedure(product_id, p_description, price, qty, store_id,
> store_description );
>
> waiting for a response from procedure, maybe true or false.
>
> Thanks in advance...
>
>
>
> --
> "El desarrollo no es material es un estado de conciencia metal"
>

Sorry,  the second query looks like:
INSERT INTO store(store_id, description, product_id, price, qty, ...)
values ('02', 'some description', '01', 10.15, 5, ...)

-- 
"El desarrollo no es material es un estado de conciencia metal"


Re: [GENERAL] Help with exclusion constraint

2014-03-28 Thread Igor Neyman


From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Moshe Jacobson
Sent: Friday, March 28, 2014 10:31 AM
To: pgsql-general
Subject: [GENERAL] Help with exclusion constraint

Take the following table:
CREATE TABLE exclusion_example
(
 pk_col  integer primary key,
 fk_col integer not null references other_table,
 bool_col boolean not null
);
I want to ensure that, for any given value of fk_col, there is a maximum of
one row with bool_col = true. I wanted to write an exclusion constraint such as
this:
alter table exclusion_example add exclude using btree ( fk_col with = , 
bool_col with and );
..
..
..
Moshe Jacobson
Manager of Systems Engineering, Nead Werx Inc.
2323 Cumberland Parkway * Suite 201 * Atlanta, GA 30339
"Quality is not an act, it is a habit." - Aristotle 

For this: 

"any given value of fk_col that there is a maximum of one row with bool_col = 
true."

why don't you (instead) create a partial unique index:

CREATE UNIQUE INDEX on exclusion_example(fk_col, bool_col) WHERE bool_col IS 
TRUE;

Regards,
Igor Neyman




Re: [GENERAL] Help with exclusion constraint

2014-03-28 Thread Moshe Jacobson
On Fri, Mar 28, 2014 at 12:21 PM, Igor Neyman wrote:

> For this:
> "any given value of fk_col that there is a maximum of one row with
> bool_col = true."
>
> why don't you (instead) create partial unique index:
> CREATE UNIQUE INDEX on exclusion_example(fk_col, bool_col) WHERE bool_col
> IS TRUE;
>

Ahh yes, why didn't I think of that? Thank you.


Moshe Jacobson
Manager of Systems Engineering, Nead Werx Inc. 
2323 Cumberland Parkway · Suite 201 · Atlanta, GA 30339

"Quality is not an act, it is a habit." -- Aristotle


Re: [GENERAL] Help with exclusion constraint

2014-03-28 Thread Moshe Jacobson
On Fri, Mar 28, 2014 at 12:45 PM,  wrote:

> > I want to ensure that for any given value of fk_col that there is a
> maximum
> > of one row with bool_col = true.
>
> This should be what you want:
>
> ALTER TABLE exclusion_example
> ADD CONSTRAINT ex
> EXCLUDE (fk_col WITH =) WHERE (bool_col);


Yes, that would do the trick!
I think I'll just set up a partial unique index as per Igor's suggestion,
however.

Thank you!

Moshe Jacobson
Manager of Systems Engineering, Nead Werx Inc. 
2323 Cumberland Parkway · Suite 201 · Atlanta, GA 30339

"Quality is not an act, it is a habit." -- Aristotle


Re: [GENERAL] Help with exclusion constraint

2014-03-28 Thread hari . fuchs
Moshe Jacobson  writes:

> Take the following table:
>
> CREATE TABLE exclusion_example AS
> (
>  pk_col  integer primary key,
>  fk_col integer not null references other_table,
>  bool_col boolean not null
> );
>
> I want to ensure that for any given value of fk_col that there is a maximum
> of one row with bool_col = true.

This should be what you want:

ALTER TABLE exclusion_example
ADD CONSTRAINT ex
EXCLUDE (fk_col WITH =) WHERE (bool_col);
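
Both suggestions in this thread enforce the same rule. A self-contained sketch
for reference (other_table and the sample rows are made up for illustration;
in practice only one of the two mechanisms is needed):

CREATE TABLE other_table (id integer PRIMARY KEY);
INSERT INTO other_table VALUES (1);

CREATE TABLE exclusion_example
(
    pk_col   integer PRIMARY KEY,
    fk_col   integer NOT NULL REFERENCES other_table,
    bool_col boolean NOT NULL
);

-- Option 1: partial unique index -- only rows with bool_col = true are indexed,
-- so at most one "true" row can exist per fk_col value.
CREATE UNIQUE INDEX ON exclusion_example (fk_col) WHERE bool_col IS TRUE;

-- Option 2: the equivalent partial exclusion constraint shown above.
-- ALTER TABLE exclusion_example
--     ADD CONSTRAINT ex EXCLUDE (fk_col WITH =) WHERE (bool_col);

INSERT INTO exclusion_example VALUES (1, 1, true);   -- ok
INSERT INTO exclusion_example VALUES (2, 1, false);  -- ok, bool_col is false
INSERT INTO exclusion_example VALUES (3, 1, true);   -- fails: second "true" row for fk_col = 1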





Re: [GENERAL] Help with connection issue - started today

2014-02-06 Thread Edson Richter

Em 06/02/2014 14:25, Bret Stern escreveu:

You checked pg_hba_conf for host 10.68.73.6?
Is there a matching log entry for the rejection?


Thanks for your effort, Bret!
The problem is already solved (I posted the solution to the list a few days ago).
The problem was a "context.xml" file with a wrong database server reference; why
Tomcat started picking up that file is a mystery to me, but after removing the
file, everything went back to normal.


Regards,

Edson




On Wed, 2014-02-05 at 09:21 -0200, Edson Richter wrote:

Dear all, I need your advice. I have found a tricky situation.

Without any changes in the configuration files, a **local** connection
to a local VPN IP address could not be established with the following error:

2014-02-05 09:12:21.111 - x - org.postgresql.util.PSQLException:
ERROR: could not establish connection
Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user
"master", database "master", SSL off

org.postgresql.util.PSQLException: ERROR: could not establish connection
Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user
"master", database "master", SSL off

  at
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
  at
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886)
  at
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
  at
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:555)
  at
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
  at
org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:331)
  at
org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
  at
org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
  at
org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
  at br.com...

If I run from the Local Server:
[root@mylocalserver logs]# psql -h 10.68.73.6 -d master -U master
Senha para usuário master:
psql (9.2.6)
Digite "help" para ajuda.

master=> select count(*) from pessoa;
   count
---
   9
(1 registro)


If I run from a Remote Server:

[root@myremoteserver ssh]# psql -h 10.68.73.6 -d master -U master
Senha para usuário master:
psql (9.2.6)
Digite "help" para ajuda.

master=> select count(*) from pessoa;
   count
---
   9
(1 registro)


So, it seems that only JDBC is having trouble connecting to a local IP
address.
Current interfaces (and yes, the tunneling is running - otherwise the
remote server could not connect):

[root@mylocalserver logs]# ifconfig
eth0  Link encap:Ethernet  Endereço de HW XX:XX:XX:XX:XX:XX
inet end.: xxx.xx.xxx.85  Bcast:xxx.xx.xxx.87
Masc:255.255.255.248
endereço inet6: ::xxx:::/64 Escopo:Link
UP BROADCASTRUNNING MULTICAST  MTU:1500  Métrica:1
RX packets:172557 errors:0 dropped:0 overruns:0 frame:0
TX packets:293439 errors:0 dropped:0 overruns:0 carrier:0
colisões:0 txqueuelen:1000
RX bytes:24537681 (23.4 MiB)  TX bytes:227413210 (216.8 MiB)
IRQ:28 Memória:fb00-fb7f

loLink encap:Loopback Local
inet end.: 127.0.0.1  Masc:255.0.0.0
endereço inet6: ::1/128 Escopo:Máquina
UP LOOPBACKRUNNING  MTU:16436  Métrica:1
RX packets:138156 errors:0 dropped:0 overruns:0 frame:0
TX packets:138156 errors:0 dropped:0 overruns:0 carrier:0
colisões:0 txqueuelen:0
RX bytes:31993170 (30.5 MiB)  TX bytes:31993170 (30.5 MiB)

tun0  Link encap:Não Especificado  Endereço de HW
00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet end.: 10.68.73.6  P-a-P:10.68.73.5 Masc:255.255.255.255
UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500 Métrica:1
RX packets:149106 errors:0 dropped:0 overruns:0 frame:0
TX packets:274534 errors:0 dropped:199 overruns:0 carrier:0
colisões:0 txqueuelen:100
RX bytes:6291572 (6.0 MiB)  TX bytes:381732404 (364.0 MiB)
















Re: [GENERAL] Help with connection issue - started today

2014-02-06 Thread Bret Stern
You checked pg_hba_conf for host 10.68.73.6?
Is there a matching log entry for the rejection?


On Wed, 2014-02-05 at 09:21 -0200, Edson Richter wrote:
> Dear all, I need your advise. Found a tricky situation.
> 
> Without any changes in the configuration files, a **local** connection 
> to a local VPN IP address could not be established with the following error:
> 
> 2014-02-05 09:12:21.111 - x - org.postgresql.util.PSQLException: 
> ERROR: could not establish connection
>Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user 
> "master", database "master", SSL off
> 
> org.postgresql.util.PSQLException: ERROR: could not establish connection
>Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user 
> "master", database "master", SSL off
> 
>  at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
>  at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886)
>  at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
>  at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:555)
>  at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
>  at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:331)
>  at 
> org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
>  at 
> org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
>  at 
> org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
>  at br.com...
> 
> If I run from the Local Server:
> [root@mylocalserver logs]# psql -h 10.68.73.6 -d master -U master
> Senha para usuário master:
> psql (9.2.6)
> Digite "help" para ajuda.
> 
> master=> select count(*) from pessoa;
>   count
> ---
>   9
> (1 registro)
> 
> 
> If I run from a Remote Server:
> 
> [root@myremoteserver ssh]# psql -h 10.68.73.6 -d master -U master
> Senha para usuário master:
> psql (9.2.6)
> Digite "help" para ajuda.
> 
> master=> select count(*) from pessoa;
>   count
> ---
>   9
> (1 registro)
> 
> 
> So, seems that only JDBC is getting in trouble to connect to a local IP 
> address.
> Current interfaces (and yes, the tunneling is running - otherwise the 
> remote server could not connect):
> 
> [root@mylocalserver logs]# ifconfig
> eth0  Link encap:Ethernet  Endereço de HW XX:XX:XX:XX:XX:XX
>inet end.: xxx.xx.xxx.85  Bcast:xxx.xx.xxx.87 
> Masc:255.255.255.248
>endereço inet6: ::xxx:::/64 Escopo:Link
>UP BROADCASTRUNNING MULTICAST  MTU:1500  Métrica:1
>RX packets:172557 errors:0 dropped:0 overruns:0 frame:0
>TX packets:293439 errors:0 dropped:0 overruns:0 carrier:0
>colisões:0 txqueuelen:1000
>RX bytes:24537681 (23.4 MiB)  TX bytes:227413210 (216.8 MiB)
>IRQ:28 Memória:fb00-fb7f
> 
> loLink encap:Loopback Local
>inet end.: 127.0.0.1  Masc:255.0.0.0
>endereço inet6: ::1/128 Escopo:Máquina
>UP LOOPBACKRUNNING  MTU:16436  Métrica:1
>RX packets:138156 errors:0 dropped:0 overruns:0 frame:0
>TX packets:138156 errors:0 dropped:0 overruns:0 carrier:0
>colisões:0 txqueuelen:0
>RX bytes:31993170 (30.5 MiB)  TX bytes:31993170 (30.5 MiB)
> 
> tun0  Link encap:Não Especificado  Endereço de HW 
> 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
>inet end.: 10.68.73.6  P-a-P:10.68.73.5 Masc:255.255.255.255
>UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500 Métrica:1
>RX packets:149106 errors:0 dropped:0 overruns:0 frame:0
>TX packets:274534 errors:0 dropped:199 overruns:0 carrier:0
>colisões:0 txqueuelen:100
>RX bytes:6291572 (6.0 MiB)  TX bytes:381732404 (364.0 MiB)
> 
> 
> 
> 
> 
> 






Re: [GENERAL] Help with connection issue - started today

2014-02-06 Thread Adrian Klaver

On 02/06/2014 07:29 AM, Merlin Moncure wrote:


The log is telling you exactly what's happening.  "pg_hba.conf" is a
database firewall of sorts that manages whom is allowed to connect to
the database and from where.  Fixing this is a matter of adding a rule
to that file.  The file is internally well documented but you should
also read this:
http://www.postgresql.org/docs/9.3/static/auth-pg-hba-conf.html

after changing the file, you have to "reload" or restart the database.


Actually I think this is the first try at what became this thread:

http://www.postgresql.org/message-id/blu0-smtp4003f13d23f226d5cdee60acf...@phx.gbl

It turned out to be a Tomcat issue.



merlin





--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Help with connection issue - started today

2014-02-06 Thread Merlin Moncure
On Wed, Feb 5, 2014 at 5:21 AM, Edson Richter  wrote:
> Dear all, I need your advise. Found a tricky situation.
>
> Without any changes in the configuration files, a **local** connection to a
> local VPN IP address could not be established with the following error:
>
> 2014-02-05 09:12:21.111 - x - org.postgresql.util.PSQLException: ERROR:
> could not establish connection
>   Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user "master",
> database "master", SSL off
>
> org.postgresql.util.PSQLException: ERROR: could not establish connection
>   Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user "master",
> database "master", SSL off
>
> at
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
> at
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886)
> at
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
> at
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:555)
> at
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
> at
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:331)
> at
> org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
> at
> org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
> at
> org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
> at br.com...
>
> If I run from the Local Server:
> [root@mylocalserver logs]# psql -h 10.68.73.6 -d master -U master
> Senha para usuário master:
> psql (9.2.6)
> Digite "help" para ajuda.
>
> master=> select count(*) from pessoa;
>  count
> ---
>  9
> (1 registro)
>
>
> If I run from a Remote Server:
>
> [root@myremoteserver ssh]# psql -h 10.68.73.6 -d master -U master
> Senha para usuário master:
> psql (9.2.6)
> Digite "help" para ajuda.
>
> master=> select count(*) from pessoa;
>  count
> ---
>  9
> (1 registro)
>
>
> So, seems that only JDBC is getting in trouble to connect to a local IP
> address.
> Current interfaces (and yes, the tunneling is running - otherwise the remote
> server could not connect):
>
> [root@mylocalserver logs]# ifconfig
> eth0  Link encap:Ethernet  Endereço de HW XX:XX:XX:XX:XX:XX
>   inet end.: xxx.xx.xxx.85  Bcast:xxx.xx.xxx.87 Masc:255.255.255.248
>   endereço inet6: ::xxx:::/64 Escopo:Link
>   UP BROADCASTRUNNING MULTICAST  MTU:1500  Métrica:1
>   RX packets:172557 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:293439 errors:0 dropped:0 overruns:0 carrier:0
>   colisões:0 txqueuelen:1000
>   RX bytes:24537681 (23.4 MiB)  TX bytes:227413210 (216.8 MiB)
>   IRQ:28 Memória:fb00-fb7f
>
> loLink encap:Loopback Local
>   inet end.: 127.0.0.1  Masc:255.0.0.0
>   endereço inet6: ::1/128 Escopo:Máquina
>   UP LOOPBACKRUNNING  MTU:16436  Métrica:1
>   RX packets:138156 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:138156 errors:0 dropped:0 overruns:0 carrier:0
>   colisões:0 txqueuelen:0
>   RX bytes:31993170 (30.5 MiB)  TX bytes:31993170 (30.5 MiB)
>
> tun0  Link encap:Não Especificado  Endereço de HW
> 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
>   inet end.: 10.68.73.6  P-a-P:10.68.73.5 Masc:255.255.255.255
>   UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500 Métrica:1
>   RX packets:149106 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:274534 errors:0 dropped:199 overruns:0 carrier:0
>   colisões:0 txqueuelen:100
>   RX bytes:6291572 (6.0 MiB)  TX bytes:381732404 (364.0 MiB)

The log is telling you exactly what's happening.  "pg_hba.conf" is a
database firewall of sorts that manages who is allowed to connect to
the database and from where.  Fixing this is a matter of adding a rule
to that file.  The file is internally well documented but you should
also read this:
http://www.postgresql.org/docs/9.3/static/auth-pg-hba-conf.html

after changing the file, you have to "reload" or restart the database.

merlin
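
For the specific error quoted above, the missing rule would look something like
the following (the /32 mask matches only the address from the error message;
the md5 auth method is an assumption to adapt to local policy):

    # TYPE  DATABASE  USER    ADDRESS        METHOD
    host    master    master  10.68.73.6/32  md5

After editing pg_hba.conf, reload the server, e.g.:

    SELECT pg_reload_conf();            -- as a superuser
    -- or: pg_ctl reload -D /path/to/data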




Re: [GENERAL] Help with details of what happens when I create a constraint NOT VALID

2014-01-24 Thread Marti Raudsepp
On Thu, Jan 23, 2014 at 3:46 PM, Bill Moran  wrote:
> It was suggested that
> leaving the constraints as NOT VALID might affect the planner, causing
> it to use less optimal plans because it doesn't think it can trust
> the constraint.  Is this true?

AFAICT the planner doesn't currently rely on FOREIGN KEY constraints
for anything, so there's no downside to leaving those NOT VALID.

UNIQUE constraints affect the planner the most, but they must always
be valid anyway.

If you use table inheritance (partitioning), then valid CHECK
constraints are necessary to use that effectively.

> It has also been suggested that manually changing the status to valid
> in the catalog without going through the validation process could cause
> problems

It's unsupported: if you break something when manually messing with
the system catalog, you get to keep the pieces and people will just
tell you "we told you so". But if you know what you're doing, it's OK.
Just make sure you double-check the source code that you're not
missing something critical that it does.

Regards,
Marti




Re: [GENERAL] Help with details of what happens when I create a constraint NOT VALID

2014-01-23 Thread Torsten Förtsch
On 23/01/14 14:46, Bill Moran wrote:
> 
> Some quickie background: I'm on a project to migrate a fairly large
> database from MySQL to PostgreSQL (~2T).  As a result of a number of
> factors, I have to do it in one shot and I have a limited time window
> in which things can be down while I switch it over.
> 
> As one of many, many things I'm considering to make this work, I'm
> looking at adding constraints after the data move using NOT VALID to
> allow them to be applied quickly.  This seems pretty straight forward,
> but I'm trying to understand if there are any troublesome side-effects
> to leaving the constraints unvalidated.
> 
> Because of the uptime requirements, there are some very large tables
> with may foreign keys that I will never be allowed to take a lock on
> long enough to validate all the constraints.  It was suggested that
> leaving the constraints as NOT VALID might affect the planner, causing
> it to use less optimal plans because it doesn't think it can trust
> the constraint.  Is this true?
> 
> It has also been suggested that manually changing the status to valid
> in the catalog without going through the validation process could cause
> problems, although I haven't found an explanation of what those
> problems might be.
> 
> I understand that the best way is to go through and do all the steps,
> but that may simply be impossible for me because of the lock it
> requires and the time involved.  Is there any negative effect to
> leaving the constraint unvalidated?  Is there any actual danger in
> manually flipping the value in the catalog (The constraint can be
> consider safe because it was previously enforced on the source
> database system)


I had a similar problem some time ago. The way I solved it is as
follows. First, add the constraint as NOT VALID. That prevents further
changes from violating it. Then make sure the constraint is met. Then update
pg_constraint.

UPDATE pg_constraint
   SET convalidated = true
 WHERE conrelid = 'schema.table'::regclass::oid
   AND conname = 'constraintname';

Not sure if that way can be recommended but it worked for me. In my case
it was a check constraint ensuring an interdependence between the
columns in a row.
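
For completeness, the supported route -- if you can eventually afford the
validation scan -- is ALTER TABLE ... VALIDATE CONSTRAINT; a sketch with
invented table and constraint names:

ALTER TABLE orders
  ADD CONSTRAINT orders_customer_fk
  FOREIGN KEY (customer_id) REFERENCES customers (id) NOT VALID;

-- later, during a quieter window:
ALTER TABLE orders VALIDATE CONSTRAINT orders_customer_fk;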

Torsten


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help interpreting pg_stat_user_index view values

2014-01-05 Thread Erik Darling
You could also look into a partial ("filtered") index that only covers dates
earlier than a certain point in time, so that regular queries wouldn't be
hindered. But Gavin is absolutely right otherwise.
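
A sketch of what that could look like (table, column, and cutoff date are
made up):

CREATE INDEX orders_old_dates_idx
    ON orders (order_date)
    WHERE order_date < DATE '2013-01-01';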

On Jan 5, 2014 5:22 PM, "Sergey Konoplev"  wrote:
>
> On Sun, Jan 5, 2014 at 2:19 PM, Gavin Flower
>  wrote:
> > On 06/01/14 11:08, Sergey Konoplev wrote:
> > [...]
> >
> >> An index might be considered as useless when there were no idx scans for
> >> the significantly long period. However it might be non-trivial to define
> >> this period. Eg. one have a query building an annual report that uses this
> >> index and the period here is one year.
> >
> > [...]
> >
> > An index only used by an annual report, should possibly be only created
> > prior to the report run & dropped immediately afterwards - why carry its
> > overhead for the bulk of the year?
>
> I fully agree. This is the matter of implementation.
>
> --
> Kind regards,
> Sergey Konoplev
> PostgreSQL Consultant and DBA
>
> http://www.linkedin.com/in/grayhemp
> +1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
> gray...@gmail.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help interpreting pg_stat_user_index view values

2014-01-05 Thread Sergey Konoplev
On Sun, Jan 5, 2014 at 2:19 PM, Gavin Flower
 wrote:
> On 06/01/14 11:08, Sergey Konoplev wrote:
> [...]
>
>> An index might be considered as useless when there were no idx scans for
>> the significantly long period. However it might be non-trivial to define
>> this period. Eg. one have a query building an annual report that uses this
>> index and the period here is one year.
>
> [...]
>
> An index only used by an annual report, should possibly be only created
> prior to the report run & dropped immediately afterwards - why carry its
> overhead for the bulk of the year?

I fully agree. This is the matter of implementation.

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help interpreting pg_stat_user_index view values

2014-01-05 Thread Gavin Flower

On 06/01/14 11:08, Sergey Konoplev wrote:
[...]
An index might be considered as useless when there were no idx scans 
for the significantly long period. However it might be non-trivial to 
define this period. Eg. one have a query building an annual report 
that uses this index and the period here is one year.

[...]

An index only used by an annual report, should possibly be only created 
prior to the report run & dropped immediately afterwards - why carry its 
overhead for the bulk of the year?
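
Something along these lines, say (index and table names invented;
CONCURRENTLY avoids blocking writers but cannot run inside a transaction
block):

CREATE INDEX CONCURRENTLY annual_report_idx ON sales (sold_on);
-- ... run the annual report ...
DROP INDEX annual_report_idx;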


Cheers,
Gavin


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help interpreting pg_stat_user_index view values

2014-01-05 Thread Sergey Konoplev
On Fri, Jan 3, 2014 at 10:53 PM,   wrote:
> Index name   idx_scan   idx_tup_read   idx_tup_fetch
> idx1         100        0              0
> idx2         100        200            0
> idx3         100        200            50
> idx4         100        0              200
>
> Is idx1 a "useless" index?  Is it being scanned but never returning useful
> tuples because it doesn't point to any useful rows in the table? Or maybe
> the query planner looked at the index but decided to use a table scan
> instead?

No, it just tells us that no tuples matched the index conditions during
the statistics collection period for the queries where the planner chose
this index. There may well be such tuples in the future.

An index might be considered useless when there have been no idx scans
for a significantly long period. However, it might be non-trivial to
define this period. E.g. one might have a query building an annual report
that uses this index, and the period there is one year.

> Is idx2 a "useless" index?  Is this index being scanned but never returning
> useful tuples because it doesn't point to any useful rows in the table?

No, it is not. It tells us that there are probably other clauses in those
queries that prevent the rows that were read from being fetched.

> For idx3 do it's values mean it's column specificity is not specific enough
> to be a relatively useful index?

No. The reason is the same as in the previous question. E.g. OFFSET 150 LIMIT 50.

> I am assuming an index with values like idx4 could never exist, it is an
> impossible result.  Is that a correct assumption?

Yes, that is a correct assumption.
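
If it helps, a query along these lines lists indexes that have not been
scanned at all since the statistics were last reset (the WHERE condition is
the only "policy" here, so adjust it to your own definition of unused):

SELECT schemaname, relname, indexrelname, idx_scan
  FROM pg_stat_user_indexes
 WHERE idx_scan = 0
 ORDER BY schemaname, relname;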

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help replacing expresion in plpgsql

2013-12-18 Thread Albe Laurenz
Juan Pablo L wrote:
> Hi, i have a function that receives a parameter which represents days:
> 
> FUNCTION aaa_recharge_account(expdays integer)
> 
> i want to add those days to the CURRENT_DATE, but i do not know how to do it, 
> i have tried several
> ways to replace that in an expresion like:
> 
> newexpdate := CURRENT_TIMESTAMP + interval '$1 days' using expdays;
> (newexpdate is declared as timestamp)
> 
> and many more but none work, can someone please help me to find out how can i 
> replace that parameter
> into an expression that i can add to CURRENT_TIMESTAMP or any other way that 
> i can accomplish what i
> need which is to add that parameter to the current timestamp. thanks!!!

There are many ways.
Two I can think of right away:

newexpdate := CURRENT_TIMESTAMP + CAST(expdays || ' days' AS interval);

newexpdate := CURRENT_TIMESTAMP + expdays * INTERVAL '1 days';
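
Either form drops straight into the function body; a minimal sketch (the
RETURNS type and everything outside the assignment are my assumptions, not
your original code):

CREATE OR REPLACE FUNCTION aaa_recharge_account(expdays integer)
RETURNS timestamp with time zone AS $$
DECLARE
    newexpdate timestamp with time zone;
BEGIN
    newexpdate := CURRENT_TIMESTAMP + expdays * INTERVAL '1 day';
    RETURN newexpdate;
END;
$$ LANGUAGE plpgsql;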

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help ! FAILOVER with Slony-I.

2013-12-02 Thread John R Pierce

On 12/2/2013 4:11 AM, Tobadao wrote:

I'm using Slony-I and Streaming Replication.


those are two completely different and incompatible replication 
methods.   a streaming replication MASTER could participate in slony 
table replication with a 3rd server, but afaik a streaming SLAVE 
can't/shouldn't run slony at all.


Slony probably has its own mailing lists which likely would be more 
suitable for answering your other questions.




--
john r pierce  37N 122W
somewhere on the middle of the left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help interpreting "explain analyze" output

2013-11-27 Thread hubert depesz lubaczewski
On Tue, Nov 26, 2013 at 12:24:08PM -0500, David Rysdam wrote:
> I'm not really looking for information on how to speed this query
> up. I'm just trying to interpret the output enough to tell me which step
> is slow:

You might want to read this:
http://www.depesz.com/tag/unexplainable/

Best regards,

depesz



signature.asc
Description: Digital signature


Re: [GENERAL] help interpreting "explain analyze" output

2013-11-27 Thread Vik Fearing
On 11/26/2013 06:24 PM, David Rysdam wrote:
> I'm not really looking for information on how to speed this query
> up. I'm just trying to interpret the output enough to tell me which step
> is slow:
>
>Seq Scan on mags  (cost=0.00..187700750.56  rows=47476  width=4) (actual 
> time=3004851.889..3004851.889  rows=0  loops=1)
> Filter:  ((signum IS NOT NULL) AND (NOT (SubPlan 1)))
> SubPlan 1
>   -> Materialize  (cost=0.00..3713.93  rows=95862  width=4) 
> (actual time=0.011..16.145  rows=48139  loops=94951)
>  -> Seq Scan on sigs (cost=0.00..2906.62 rows=95862 
> width=4) (actual time=0.010..674.201  rows=95862  loops=1)
>Total runtime: 3004852.005 ms
>
> It looks like the inner seq scan takes 674ms, then the materialize takes
> an additional 16ms? Or is that 16ms * 94951? Or 674 * 94951?
>
> And the outer seq scan takes 3004851-3004851 = 0ms?

The actual time shown is not a range.  The first number is how long it
took to produce the first row, and the second is how long it took
overall.  The Seq Scan on mags took 3004851.889ms to produce the first
(non-existent) row and then it was finished so that's also the overall time.

SubPlan 1 took a total of 674.201ms + 94951 * 16.145ms = 1533658.096ms
which means the Filter line actually took 1471193.793ms to determine
there were no matching rows.

That is, if I understood it correctly myself.

-- 
Vik



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help : Sum 2 tables based on key from other table

2013-11-18 Thread Elliot

On 2013-11-18 04:37, Ken Tanzer wrote:
If the tables aren't huge, you're not concerned about optimization, 
and you just want to get your numbers, I think something like this 
would do the trick.  I haven't actually tried it 'cause I didn't have 
easy access to your tables:


SELECT
  a.product_id,
  a.product_name,
  b.initial_stock_sum,
  c.in_out_sum,
  c.in_sum,
  c.out_sum
FROM
  a
LEFT JOIN
  (SELECT
product_id,
SUM(initial_stock) AS initial_stock_sum
  FROM b
  GROUP BY product_id
  ) b USING (product_id)
LEFT JOIN
  (SELECT
product_id,
sum(CASE WHEN date < 'BEGINNING DATE' THEN in-out ELSE 0 END) AS 
in_out_sum,
sum(CASE WHEN date BETWEEN 'BEGINNING DATE' AND 'ENDING DATE' THEN 
in ELSE 0 END) AS in_sum,
sum(CASE WHEN date BETWEEN 'BEGINNING DATE' AND 'ENDING DATE' THEN 
out ELSE 0 END) AS out_sum

   FROM c
   GROUP BY product_id
   ) c USING (product_id)
WHERE a.supplier_id='XXX';

Cheers,
Ken


I'm a big fan of using LATERAL joins (9.3+) for this use case.
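
For what it's worth, a rough sketch of the LATERAL shape (same made-up table
names as above; note that the in/out columns would need double quotes, since
IN is a reserved word):

SELECT a.product_id, a.product_name, b.initial_stock_sum, c.in_sum, c.out_sum
FROM a
LEFT JOIN LATERAL (
    SELECT SUM(initial_stock) AS initial_stock_sum
    FROM b WHERE b.product_id = a.product_id
) b ON true
LEFT JOIN LATERAL (
    SELECT SUM(CASE WHEN date BETWEEN 'BEGINNING DATE' AND 'ENDING DATE'
                    THEN "in" ELSE 0 END) AS in_sum,
           SUM(CASE WHEN date BETWEEN 'BEGINNING DATE' AND 'ENDING DATE'
                    THEN "out" ELSE 0 END) AS out_sum
    FROM c WHERE c.product_id = a.product_id
) c ON true
WHERE a.supplier_id = 'XXX';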


Re: [GENERAL] Help : Sum 2 tables based on key from other table

2013-11-18 Thread Robin St . Clair
In general, when I have to handle Ledger type data (which this problem 
is), I tend to hold data in 3 tables


1. Master Ledger ( Product ID, Name, etc)
2. Master Ledger Balances(Product ID, Fiscal_Year, Opening Balance,
   Net_Transaction_P1, Net_Transaction_P2, ...  etc)
3. Master Ledger Transactions(Product_ID, (Fiscal_Year), Date,
   Amount..)

I use Triggers and Stored Procedures to maintain consistency. This 
allows you to quickly navigate across the population of your data and 
drill down to the detailed transaction when required.
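
As a very rough sketch of that layout (names and column types are
illustrative only, not a recommendation for your exact schema):

CREATE TABLE master_ledger (
    product_id   integer PRIMARY KEY,
    product_name text
);

CREATE TABLE master_ledger_balances (
    product_id      integer REFERENCES master_ledger,
    fiscal_year     integer,
    opening_balance numeric,
    net_txn_p1      numeric,
    net_txn_p2      numeric,
    PRIMARY KEY (product_id, fiscal_year)
);

CREATE TABLE master_ledger_transactions (
    product_id  integer REFERENCES master_ledger,
    fiscal_year integer,
    tx_date     date,
    amount      numeric
);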


Careful manipulation of the Master Ledger Balances table lets you 
retrieve multiple different kinds of information at a single pass, ie 
This Year To Date Actual, Last Year To Date Actual and Budget This Year 
To Date. I usually create functions/SPs to do this even more rapidly.


If you have many bulk updates, it can be better to drop all indices 
prior to updating and then rebuilding them.


Robin St.Clair



On 18/11/2013 10:04, Hengky Liwandouw wrote:

Thanks a lot Ken,

I will try it soon.

But when the table becomes huge (how big 'huge'  in postgres ?), how 
to optimize such command ?


I have index on all important field like date, productid, supplierid, 
customerid and so on


Optimization is really an important thing as i plan to keep all 
transaction data as long as possible.



On Nov 18, 2013, at 5:37 PM, Ken Tanzer wrote:

If the tables aren't huge, you're not concerned about optimization, 
and you just want to get your numbers, I think something like this 
would do the trick.  I haven't actually tried it 'cause I didn't have 
easy access to your tables:


SELECT
  a.product_id,
a.product_name,
b.initial_stock_sum,
  c.in_out_sum,
  c.in_sum,
  c.out_sum
FROM
  a
LEFT JOIN
  (SELECT
product_id,
SUM(initial_stock) AS initial_stock_sum
  FROM b
  GROUP BY product_id
  ) b USING (product_id)
LEFT JOIN
  (SELECT
product_id,
sum(CASE WHEN date < 'BEGINNING DATE' THEN in-out ELSE 0 END) AS 
in_out_sum,
sum(CASE WHEN date BETWEEN 'BEGINNING DATE' AND 'ENDING DATE' 
THEN in ELSE 0 END) AS in_sum,
sum(CASE WHEN date BETWEEN 'BEGINNING DATE' AND 'ENDING DATE' 
THEN out ELSE 0 END) AS out_sum

   FROM c
   GROUP BY product_id
   ) c USING (product_id)
WHERE a.supplier_id='XXX';

Cheers,
Ken





Re: [GENERAL] Help : Sum 2 tables based on key from other table

2013-11-18 Thread Hengky Liwandouw
Thanks a lot Ken,

I will try it soon. 

But when the table becomes huge (how big 'huge'  in postgres ?), how to 
optimize such command ?

I have index on all important field like date, productid, supplierid, 
customerid and so on

Optimization is really an important thing as i plan to keep all transaction 
data as long as possible. 


On Nov 18, 2013, at 5:37 PM, Ken Tanzer wrote:

> If the tables aren't huge, you're not concerned about optimization, and you 
> just want to get your numbers, I think something like this would do the 
> trick.  I haven't actually tried it 'cause I didn't have easy access to your 
> tables:
> 
> SELECT 
>   a.product_id,
>   a.product_name,
>   b.initial_stock_sum,
>   c.in_out_sum,
>   c.in_sum,
>   c.out_sum
> FROM
>   a
> LEFT JOIN
>   (SELECT
> product_id,
> SUM(initial_stock) AS initial_stock_sum
>   FROM b
>   GROUP BY product_id
>   ) b USING (product_id)
> LEFT JOIN
>   (SELECT
> product_id,
> sum(CASE WHEN date < 'BEGINNING DATE' THEN in-out ELSE 0 END) AS 
> in_out_sum,
> sum(CASE WHEN date BETWEEN 'BEGINNING DATE' AND 'ENDING DATE' THEN in 
> ELSE 0 END) AS in_sum,
> sum(CASE WHEN date BETWEEN 'BEGINNING DATE' AND 'ENDING DATE' THEN out 
> ELSE 0 END) AS out_sum
>FROM c
>GROUP BY product_id
>) c USING (product_id)
> WHERE a.supplier_id='XXX';
> 
> Cheers,
> Ken
> 
> 
> On Mon, Nov 18, 2013 at 12:47 AM, Raymond O'Donnell  wrote:
> On 18/11/2013 02:16, Hengky Liwandouw wrote:
> > Dear Friends,
> >
> > Please help for the select command, as i had tried many times and
> > always can not display the result as what i want.
> >
> > I am looking for the solution on google but still can not found the
> > right answer to solve the problem.
> >
> > I have 3 tables :
> >
> > Table A ProductID ProductName SupplierID
> >
> > Table B ProductID InitialStock
> >
> > Table C ProductID Date In Out
> >
> > 1. I want to select all productID from Table A where
> > supplierID='XXX'.
> >
> > 2. Based on list from Step.1 : sum the initialstock from  Table B
> >
> > 3. Based on list from Step 1 : Sum (in-out) from Table C where date
> > <'BEGINNING DATE'
> >
> > 4. Based on list from Step 1 : Sum (in) and sum(out) from Table C
> > where date between 'BEGINNING DATE' and 'ENDING DATE'
> >
> > So the result will look like this :
> >
> > ProductID  ProductName  SumofIntialStock  sum(in-Out)  SumofIN  SumofOut
> > xxxx       xxxxxxxx     99                99           99       99
> > xxxx       xxxxxxxx     99                99           99       99
> > xxxx       xxxxxxxx     99                99           99       99
> 
> You could try using common table expressions, which let you build up to
> your final result in steps. Some reading:
> 
> http://www.postgresql.org/docs/9.3/static/queries-with.html
> 
> http://www.chesnok.com/daily/2013/11/12/how-i-write-queries-using-psql-ctes/
> 
> 
> Ray.
> 
> 
> --
> Raymond O'Donnell :: Galway :: Ireland
> r...@iol.ie
> 
> 
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
> 
> 
> 
> -- 
> 
> AGENCY Software  
> A data system that puts you in control
> 100% Free Software
> http://agency-software.org/
> ken.tan...@agency-software.org
> (253) 245-3801
> 
> Subscribe to the mailing list to
> learn more about AGENCY or
> follow the discussion.



Re: [GENERAL] Help : Sum 2 tables based on key from other table

2013-11-18 Thread Hengky Liwandouw
Thanks all for your concern and help.

I have tried David's suggestion and it works. As you all said, there are so
many important features in PostgreSQL. I really have to spend time studying it.

Last time I used Windev to develop the front-end application, with HyperfileSQL
as the database server. That gave me headaches :D

Postgres is really fast, has excellent features, clear documentation, a great
community, and so many other plus points.

Thanks you all.


On Nov 18, 2013, at 4:47 PM, Raymond O'Donnell wrote:

> On 18/11/2013 02:16, Hengky Liwandouw wrote:
>> Dear Friends,
>> 
>> Please help for the select command, as i had tried many times and
>> always can not display the result as what i want.
>> 
>> I am looking for the solution on google but still can not found the
>> right answer to solve the problem.
>> 
>> I have 3 tables :
>> 
>> Table A ProductID ProductName SupplierID
>> 
>> Table B ProductID InitialStock
>> 
>> Table C ProductID Date In Out
>> 
>> 1. I want to select all productID from Table A where
>> supplierID='XXX'.
>> 
>> 2. Based on list from Step.1 : sum the initialstock from  Table B
>> 
>> 3. Based on list from Step 1 : Sum (in-out) from Table C where date
>> <'BEGINNING DATE'
>> 
>> 4. Based on list from Step 1 : Sum (in) and sum(out) from Table C
>> where date between 'BEGINNING DATE' and 'ENDING DATE'
>> 
>> So the result will look like this :
>> 
>> ProductID  ProductName  SumofIntialStock  sum(in-Out)  SumofIN  SumofOut
>> xxxx       xxxxxxxx     99                99           99       99
>> xxxx       xxxxxxxx     99                99           99       99
>> xxxx       xxxxxxxx     99                99           99       99
> 
> You could try using common table expressions, which let you build up to
> your final result in steps. Some reading:
> 
> http://www.postgresql.org/docs/9.3/static/queries-with.html
> 
> http://www.chesnok.com/daily/2013/11/12/how-i-write-queries-using-psql-ctes/
> 
> 
> Ray.
> 
> 
> -- 
> Raymond O'Donnell :: Galway :: Ireland
> r...@iol.ie



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help : Sum 2 tables based on key from other table

2013-11-18 Thread Ken Tanzer
If the tables aren't huge, you're not concerned about optimization, and you
just want to get your numbers, I think something like this would do the
trick.  I haven't actually tried it 'cause I didn't have easy access to
your tables:

SELECT
  a.product_id,
  a.product_name,
  b.initial_stock_sum,
  c.in_out_sum,
  c.in_sum,
  c.out_sum
FROM
  a
LEFT JOIN
  (SELECT
product_id,
SUM(initial_stock) AS initial_stock_sum
  FROM b
  GROUP BY product_id
  ) b USING (product_id)
LEFT JOIN
  (SELECT
product_id,
sum(CASE WHEN date < 'BEGINNING DATE' THEN in-out ELSE 0 END) AS
in_out_sum,
sum(CASE WHEN date BETWEEN 'BEGINNING DATE' AND 'ENDING DATE' THEN in
ELSE 0 END) AS in_sum,
sum(CASE WHEN date BETWEEN 'BEGINNING DATE' AND 'ENDING DATE' THEN out
ELSE 0 END) AS out_sum
   FROM c
   GROUP BY product_id
   ) c USING (product_id)
WHERE a.supplier_id='XXX';

Cheers,
Ken


On Mon, Nov 18, 2013 at 12:47 AM, Raymond O'Donnell  wrote:

> On 18/11/2013 02:16, Hengky Liwandouw wrote:
> > Dear Friends,
> >
> > Please help for the select command, as i had tried many times and
> > always can not display the result as what i want.
> >
> > I am looking for the solution on google but still can not found the
> > right answer to solve the problem.
> >
> > I have 3 tables :
> >
> > Table A ProductID ProductName SupplierID
> >
> > Table B ProductID InitialStock
> >
> > Table C ProductID Date In Out
> >
> > 1. I want to select all productID from Table A where
> > supplierID='XXX'.
> >
> > 2. Based on list from Step.1 : sum the initialstock from  Table B
> >
> > 3. Based on list from Step 1 : Sum (in-out) from Table C where date
> > <'BEGINNING DATE'
> >
> > 4. Based on list from Step 1 : Sum (in) and sum(out) from Table C
> > where date between 'BEGINNING DATE' and 'ENDING DATE'
> >
> > So the result will look like this :
> >
> > ProductID  ProductName  SumofIntialStock  sum(in-Out)  SumofIN  SumofOut
> > xxxx       xxxxxxxx     99                99           99       99
> > xxxx       xxxxxxxx     99                99           99       99
> > xxxx       xxxxxxxx     99                99           99       99
>
> You could try using common table expressions, which let you build up to
> your final result in steps. Some reading:
>
> http://www.postgresql.org/docs/9.3/static/queries-with.html
>
>
> http://www.chesnok.com/daily/2013/11/12/how-i-write-queries-using-psql-ctes/
>
>
> Ray.
>
>
> --
> Raymond O'Donnell :: Galway :: Ireland
> r...@iol.ie
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/ *
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
list
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Help : Sum 2 tables based on key from other table

2013-11-18 Thread Raymond O'Donnell
On 18/11/2013 02:16, Hengky Liwandouw wrote:
> Dear Friends,
> 
> Please help for the select command, as i had tried many times and
> always can not display the result as what i want.
> 
> I am looking for the solution on google but still can not found the
> right answer to solve the problem.
> 
> I have 3 tables :
> 
> Table A ProductID ProductName SupplierID
> 
> Table B ProductID InitialStock
> 
> Table C ProductID Date In Out
> 
> 1. I want to select all productID from Table A where
> supplierID='XXX'.
> 
> 2. Based on list from Step.1 : sum the initialstock from  Table B
> 
> 3. Based on list from Step 1 : Sum (in-out) from Table C where date
> <'BEGINNING DATE'
> 
> 4. Based on list from Step 1 : Sum (in) and sum(out) from Table C
> where date between 'BEGINNING DATE' and 'ENDING DATE'
> 
> So the result will look like this :
> 
> ProductID  ProductName  SumofIntialStock  sum(in-Out)  SumofIN  SumofOut
> xxxx       xxxxxxxx     99                99           99       99
> xxxx       xxxxxxxx     99                99           99       99
> xxxx       xxxxxxxx     99                99           99       99

You could try using common table expressions, which let you build up to
your final result in steps. Some reading:

http://www.postgresql.org/docs/9.3/static/queries-with.html

http://www.chesnok.com/daily/2013/11/12/how-i-write-queries-using-psql-ctes/
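
Just to give a flavour of the shape (table_a and table_b below stand in for
your tables A and B, and only the first two steps are sketched):

WITH products AS (
    SELECT productid, productname
    FROM table_a
    WHERE supplierid = 'XXX'
), initial AS (
    SELECT productid, SUM(initialstock) AS sum_of_initial_stock
    FROM table_b
    GROUP BY productid
)
SELECT p.productid, p.productname, i.sum_of_initial_stock
FROM products p
LEFT JOIN initial i USING (productid);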


Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help : Sum 2 tables based on key from other table

2013-11-18 Thread Robin St . Clair

Hi

For decades, this type of problem has been the meat and vegetables of 
discussions about SQL programming and design.


One writer on this subject has stood out, thanks to his mental clarity 
and ability to set out complicated concepts in a readily comprehensible 
manner.


His name is Joe Celko. He has published several books, including SQL For
Smarties and SQL Puzzles & Answers; you may even find them in PDF format
online.


Read some of what Joe has written and you will find answers to this sort 
of problem.


Incidentally, I can't remember a problem like this since we stored data 
on open reel tape systems.  We used to set multiway masterfile update 
problems for first year Information System students, before they had 
widespread access to ISAM and database hosted exercises. What you have
is a 'batch' system; you might find it worthwhile rolling (virtually) A
& B together and then grouping C. The trick will be in the way you
handle grouping, and Joe Celko has an entire book on this subject -
Thinking in Sets: Auxiliary, Temporal & Virtual Tables in SQL.


This blog (based on his 
solutions) is worth a look.


Robin St.Clair




On 18/11/2013 02:16, Hengky Liwandouw wrote:

Dear Friends,

Please help with this SELECT command; I have tried many times and still
cannot get the result I want.

I have been looking for a solution on Google but still cannot find the
right answer to the problem.

I have 3 tables :

Table A
ProductID
ProductName
SupplierID

Table B
ProductID
InitialStock

Table C
ProductID
Date
In
Out

1. I want to select all productID from Table A where supplierID='XXX'.

2. Based on list from Step.1 : sum the initialstock from  Table B

3. Based on list from Step 1 : Sum (in-out) from Table C where date <'BEGINNING 
DATE'

4. Based on list from Step 1 : Sum (in) and sum(out) from Table C where date 
between 'BEGINNING DATE' and 'ENDING DATE'

So the result will look like this :

ProductID  ProductName  SumofIntialStock  sum(in-Out)  SumofIN  SumofOut
xxxx       xxxxxxxx     99                99           99       99
xxxx       xxxxxxxx     99                99           99       99



Re: [GENERAL] Help function to sort string

2013-10-18 Thread Vincent Veyron
Le mardi 15 octobre 2013 à 08:52 -0700, ginkgo36 a écrit :

> 1. I want to sort string follow anphabet and I used this query:
> select string_agg(x, ';') from (select
> trim(unnest(regexp_split_to_array('ECD FORM; BODY; PREDILUTED; CHROMO-GENIC;
> AUTO;RABBIT; FORMAT',';'))) x order by x) a;
> 
> -- result: AUTO; BODY; CHROMOGENIC; ECD FORM;  FORMAT; PREDILUTED;  RABBIT
> -->I expected this rusult
> 
> In my database I have a column with alot of rows data. I want that query
> become a function to more easy to using. But I can not write a function :(.
> please hepl me.
> For example, I have column "data_text" with data like this:
> Row 1: AUTO; BODY; PREDILUTED; ECD FORM; RABBIT; FORMAT; CHROMOGENIC
> Row 2: ECD FORM; BODY; PREDILUTED; CHROMO-GENIC; AUTO; RABBIT; FORMAT
> Row 3: FORMAT; ECD FORM; AUTO
> Row 3: ANHYDROUS; DENATURED; PREDILUTED; CHROMOGENIC
> 
> When I run funtion, the result:
> Row 1: AUTO; BODY; CHROMOGENIC; ECD.FORM; FORMAT; PREDILUTED; RABBIT
> Row 2: AUTO; BODY; CHROMO-GENIC; ECD FORM; FORMAT; PREDILUTED; RABBIT
> Row 3: AUTO; ECD FORM; FORMAT
> Row 4: ANHYDROUS; CHROMOGENIC; DENATURED; PREDILUTED

This works :

CREATE TABLE foo (id serial, data_text text);

INSERT INTO foo (data_text) values ('AUTO; BODY; PREDILUTED; ECD FORM;
RABBIT; FORMAT; CHROMOGENIC'), ('ECD FORM; BODY; PREDILUTED;
CHROMO-GENIC; AUTO; RABBIT; FORMAT'), ('FORMAT; ECD FORM; AUTO');

WITH t1 AS (SELECT id, unnest(string_to_array(data_text, '; ')) FROM foo
ORDER BY 1, 2),
t2 AS (SELECT id, array_agg(unnest) over (PARTITION BY id) AS reordered
FROM t1) 
SELECT t2.id, array_to_string(t2.reordered, '; ') FROM t2 GROUP BY id,
reordered;


Result :

 id |                        array_to_string
----+----------------------------------------------------------------
  1 | AUTO; BODY; CHROMOGENIC; ECD FORM; FORMAT; PREDILUTED; RABBIT
  2 | AUTO; BODY; CHROMO-GENIC; ECD FORM; FORMAT; PREDILUTED; RABBIT
  3 | AUTO; ECD FORM; FORMAT
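
If it has to be callable as a function, one way to wrap the same idea (the
function name and signature are made up; named parameter references in SQL
functions need 9.2+, otherwise use $1):

CREATE OR REPLACE FUNCTION sort_terms(p_text text)
RETURNS text AS $$
    SELECT string_agg(x, '; ' ORDER BY x)
    FROM (SELECT trim(u) AS x
          FROM unnest(string_to_array(p_text, ';')) AS u) s;
$$ LANGUAGE sql IMMUTABLE;

-- usage, e.g.: UPDATE foo SET data_text = sort_terms(data_text);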


-- 
Salutations, Vincent Veyron

http://marica.fr/site/demonstration
Gestion des contentieux juridiques, des contrats et des sinistres d'assurance



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: [GENERAL] Re: [GENERAL] Help on ṕerformance

2013-10-04 Thread Rémi Cura
Hey, a short trick:
to avoid repeating the schema name multiple times (more readable and easier
to reuse).

You can use:
SET search_path TO gpstracking_device_tracks, public;

(see manual here :
http://www.postgresql.org/docs/current/static/sql-set.html)
Cheers,

Rémi-C


2013/10/2 Carlos Eduardo Sotelo Pinto 

> Thanks to all
>
> I have fix that refactoring the function
>
> BEGIN
> arr := regexp_split_to_array(_imeis, E'\\s+');
>  RETURN QUERY
> SELECT
> gpstracking_device_tracks.imei,
>  gpstracking_device_tracks.device_id,
> gpstracking_device_tracks.date_time_process,
> gpstracking_device_tracks.latitude,
>  gpstracking_device_tracks.longitude,
> gpstracking_device_tracks.course,
> gpstracking_device_tracks.speed,
>  gpstracking_device_tracks.mileage,
> gpstracking_device_tracks.gps_signal,
> gpstracking_device_tracks.gsm_signal,
>  gpstracking_device_tracks.alarm_status,
> gpstracking_device_tracks.gps_status,
> gpstracking_device_tracks.vehicle_status,
>  gpstracking_device_tracks.alarm_over_speed,
> gpstracking_device_tracks.other,
> gpstracking_device_tracks.address
>  FROM (
> SELECT
> gpstracking_device_tracks.imei,
>  gpstracking_device_tracks.device_id,
> gpstracking_device_tracks.date_time_process,
> gpstracking_device_tracks.latitude,
>  gpstracking_device_tracks.longitude,
> gpstracking_device_tracks.course,
> gpstracking_device_tracks.speed,
>  gpstracking_device_tracks.mileage,
> gpstracking_device_tracks.gps_signal,
> gpstracking_device_tracks.gsm_signal,
>  gpstracking_device_tracks.alarm_status,
> gpstracking_device_tracks.gps_status,
> gpstracking_device_tracks.vehicle_status,
>  gpstracking_device_tracks.alarm_over_speed,
> gpstracking_device_tracks.other,
> gpstracking_device_tracks.address,
>  ROW_NUMBER() OVER(PARTITION BY gpstracking_device_tracks.imei ORDER BY
> gpstracking_device_tracks.date_time_process DESC) as rnumber
> FROM gpstracking_device_tracks
>  WHERE gpstracking_device_tracks.imei = ANY(arr)
> AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
> now())
>  AND gpstracking_device_tracks.date_time_process <= NOW()
> ) AS gpstracking_device_tracks
> WHERE gpstracking_device_tracks.rnumber = 1;
> END;
>
>
> 2013/10/2 Merlin Moncure 
>
>> On Mon, Sep 30, 2013 at 10:03 AM, Carlos Eduardo Sotelo Pinto
>>  wrote:
>> >
>> > I need a help on postgresql performance
>> >
>> > I have configurate my postgresql files for tunning my server, however
>> it is
>> > slow and cpu resources are highter than 120%
>> >
>> > I have no idea on how to solve this issue, I was trying to search more
>> infor
>> > on google but is not enough, I also have try autovacum sentences and
>> reindex
>> > db, but it continues beeing slow
>> >
>> > My app is a gps listener that insert more than 6000 records per minutes
>> > using a tcp server developed on python twisted, where there is no
>> problems,
>> > the problem is when I try to follow the gps devices on a map on a
>> relatime,
>> > I am doing queries each 6 seconds to my database from my django app, for
>> > request last position using a stored procedure, but the query get slow
>> on
>> > more than 50 devices and cpu start to using more than 120% of its
>> resources
>> >
>> > Django App connect the postgres database directly, and tcp listener
>> server
>> > for teh devices connect database on threaded way using pgbouncer, I
>> have not
>> > using my django web app on pgbouncer caause I dont want to crash gps
>> devices
>> > connection on the pgbouncer
>> >
>> > I hoe you could help on get a better performance
>> >
>> > I am attaching my store procedure, my conf files and my cpu, memory
>> > information
>> >
>> > **Stored procedure**
>> >
>> > CREATE OR REPLACE FUNCTION gps_get_live_location (
>> > _imeis varchar(8)
>> > )
>> > RETURNS TABLE (
>> > imei varchar,
>> > device_id integer,
>> > date_time_process timestamp with time zone,
>> > latitude double precision,
>> > longitude double precision,
>> > course smallint,
>> > speed smallint,
>> > mileage integer,
>> > gps_signal smallint,
>> > gsm_signal smallint,
>> > alarm_status boolean,
>> > gsm_status boolean,
>> > vehicle_status boolean,
>> > alarm_over_speed boolean,
>> > other text,
>> > address varchar
>> > ) AS $func$
>> > DECLARE
>> > arr varchar[];
>> > BEGIN
>> > arr := regexp_split_to_array(_imeis, E'\\s+');
>> > FOR i IN 1..array_length(arr, 1) LOOP
>> > RETURN QUERY
>> > SELECT
>> > gpstracking_device_tracks.imei,
>> > gpstracking_device_tracks.device_id,
>> > gpstracking_device_tracks.date_time_process,
>> > gpstracking_device_tracks.latitude,
>> > gpstracking_device_tracks.longitude,
>> > gpstracking_device_tracks.course,
>> > gpstracking_device_tracks.speed,
>> > gpstracking_device_tracks.mileage,
>> > gpstracking_device_tracks.gps_signal,
>> > gpstracking_device_tracks.gsm_signal,
>> > gpstracking_device_t

[GENERAL] Re: [GENERAL] Help on ṕerformance

2013-10-02 Thread Carlos Eduardo Sotelo Pinto
Thanks to all

I have fix that refactoring the function

BEGIN
arr := regexp_split_to_array(_imeis, E'\\s+');
 RETURN QUERY
SELECT
gpstracking_device_tracks.imei,
gpstracking_device_tracks.device_id,
gpstracking_device_tracks.date_time_process,
gpstracking_device_tracks.latitude,
gpstracking_device_tracks.longitude,
gpstracking_device_tracks.course,
gpstracking_device_tracks.speed,
gpstracking_device_tracks.mileage,
gpstracking_device_tracks.gps_signal,
gpstracking_device_tracks.gsm_signal,
gpstracking_device_tracks.alarm_status,
gpstracking_device_tracks.gps_status,
gpstracking_device_tracks.vehicle_status,
gpstracking_device_tracks.alarm_over_speed,
gpstracking_device_tracks.other,
gpstracking_device_tracks.address
FROM (
SELECT
gpstracking_device_tracks.imei,
gpstracking_device_tracks.device_id,
gpstracking_device_tracks.date_time_process,
gpstracking_device_tracks.latitude,
gpstracking_device_tracks.longitude,
gpstracking_device_tracks.course,
gpstracking_device_tracks.speed,
gpstracking_device_tracks.mileage,
gpstracking_device_tracks.gps_signal,
gpstracking_device_tracks.gsm_signal,
gpstracking_device_tracks.alarm_status,
gpstracking_device_tracks.gps_status,
gpstracking_device_tracks.vehicle_status,
gpstracking_device_tracks.alarm_over_speed,
gpstracking_device_tracks.other,
gpstracking_device_tracks.address,
ROW_NUMBER() OVER(PARTITION BY gpstracking_device_tracks.imei ORDER BY
gpstracking_device_tracks.date_time_process DESC) as rnumber
FROM gpstracking_device_tracks
WHERE gpstracking_device_tracks.imei = ANY(arr)
AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
now())
AND gpstracking_device_tracks.date_time_process <= NOW()
) AS gpstracking_device_tracks
WHERE gpstracking_device_tracks.rnumber = 1;
END;


2013/10/2 Merlin Moncure 

> On Mon, Sep 30, 2013 at 10:03 AM, Carlos Eduardo Sotelo Pinto
>  wrote:
> >
> > I need a help on postgresql performance
> >
> > I have configurate my postgresql files for tunning my server, however it
> is
> > slow and cpu resources are highter than 120%
> >
> > I have no idea on how to solve this issue, I was trying to search more
> infor
> > on google but is not enough, I also have try autovacum sentences and
> reindex
> > db, but it continues beeing slow
> >
> > My app is a gps listener that insert more than 6000 records per minutes
> > using a tcp server developed on python twisted, where there is no
> problems,
> > the problem is when I try to follow the gps devices on a map on a
> relatime,
> > I am doing queries each 6 seconds to my database from my django app, for
> > request last position using a stored procedure, but the query get slow on
> > more than 50 devices and cpu start to using more than 120% of its
> resources
> >
> > Django App connect the postgres database directly, and tcp listener
> server
> > for teh devices connect database on threaded way using pgbouncer, I have
> not
> > using my django web app on pgbouncer caause I dont want to crash gps
> devices
> > connection on the pgbouncer
> >
> > I hoe you could help on get a better performance
> >
> > I am attaching my store procedure, my conf files and my cpu, memory
> > information
> >
> > **Stored procedure**
> >
> > CREATE OR REPLACE FUNCTION gps_get_live_location (
> > _imeis varchar(8)
> > )
> > RETURNS TABLE (
> > imei varchar,
> > device_id integer,
> > date_time_process timestamp with time zone,
> > latitude double precision,
> > longitude double precision,
> > course smallint,
> > speed smallint,
> > mileage integer,
> > gps_signal smallint,
> > gsm_signal smallint,
> > alarm_status boolean,
> > gsm_status boolean,
> > vehicle_status boolean,
> > alarm_over_speed boolean,
> > other text,
> > address varchar
> > ) AS $func$
> > DECLARE
> > arr varchar[];
> > BEGIN
> > arr := regexp_split_to_array(_imeis, E'\\s+');
> > FOR i IN 1..array_length(arr, 1) LOOP
> > RETURN QUERY
> > SELECT
> > gpstracking_device_tracks.imei,
> > gpstracking_device_tracks.device_id,
> > gpstracking_device_tracks.date_time_process,
> > gpstracking_device_tracks.latitude,
> > gpstracking_device_tracks.longitude,
> > gpstracking_device_tracks.course,
> > gpstracking_device_tracks.speed,
> > gpstracking_device_tracks.mileage,
> > gpstracking_device_tracks.gps_signal,
> > gpstracking_device_tracks.gsm_signal,
> > gpstracking_device_tracks.alarm_status,
> > gpstracking_device_tracks.gps_status,
> > gpstracking_device_tracks.vehicle_status,
> > gpstracking_device_tracks.alarm_over_speed,
> > gpstracking_device_tracks.other,
> > gpstracking_device_tracks.address
> > FROM gpstracking_device_tracks
> > WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR
> > AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
> > now())
> > AND gpstracking_device_tracks.date_time_process <= NOW()
> >

[GENERAL] Re: [GENERAL] Help on ṕerformance

2013-10-02 Thread Merlin Moncure
On Mon, Sep 30, 2013 at 10:03 AM, Carlos Eduardo Sotelo Pinto
 wrote:
>
> I need a help on postgresql performance
>
> I have configurate my postgresql files for tunning my server, however it is
> slow and cpu resources are highter than 120%
>
> I have no idea on how to solve this issue, I was trying to search more infor
> on google but is not enough, I also have try autovacum sentences and reindex
> db, but it continues beeing slow
>
> My app is a gps listener that insert more than 6000 records per minutes
> using a tcp server developed on python twisted, where there is no problems,
> the problem is when I try to follow the gps devices on a map on a relatime,
> I am doing queries each 6 seconds to my database from my django app, for
> request last position using a stored procedure, but the query get slow on
> more than 50 devices and cpu start to using more than 120% of its resources
>
> Django App connect the postgres database directly, and tcp listener server
> for teh devices connect database on threaded way using pgbouncer, I have not
> using my django web app on pgbouncer caause I dont want to crash gps devices
> connection on the pgbouncer
>
> I hoe you could help on get a better performance
>
> I am attaching my store procedure, my conf files and my cpu, memory
> information
>
> **Stored procedure**
>
> CREATE OR REPLACE FUNCTION gps_get_live_location (
> _imeis varchar(8)
> )
> RETURNS TABLE (
> imei varchar,
> device_id integer,
> date_time_process timestamp with time zone,
> latitude double precision,
> longitude double precision,
> course smallint,
> speed smallint,
> mileage integer,
> gps_signal smallint,
> gsm_signal smallint,
> alarm_status boolean,
> gsm_status boolean,
> vehicle_status boolean,
> alarm_over_speed boolean,
> other text,
> address varchar
> ) AS $func$
> DECLARE
> arr varchar[];
> BEGIN
> arr := regexp_split_to_array(_imeis, E'\\s+');
> FOR i IN 1..array_length(arr, 1) LOOP
> RETURN QUERY
> SELECT
> gpstracking_device_tracks.imei,
> gpstracking_device_tracks.device_id,
> gpstracking_device_tracks.date_time_process,
> gpstracking_device_tracks.latitude,
> gpstracking_device_tracks.longitude,
> gpstracking_device_tracks.course,
> gpstracking_device_tracks.speed,
> gpstracking_device_tracks.mileage,
> gpstracking_device_tracks.gps_signal,
> gpstracking_device_tracks.gsm_signal,
> gpstracking_device_tracks.alarm_status,
> gpstracking_device_tracks.gps_status,
> gpstracking_device_tracks.vehicle_status,
> gpstracking_device_tracks.alarm_over_speed,
> gpstracking_device_tracks.other,
> gpstracking_device_tracks.address
> FROM gpstracking_device_tracks
> WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR
> AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
> now())
> AND gpstracking_device_tracks.date_time_process <= NOW()
> ORDER BY gpstracking_device_tracks.date_time_process DESC
> LIMIT 1;
> END LOOP;
> RETURN;
> END;
> $func$
> LANGUAGE plpgsql VOLATILE SECURITY DEFINER;


Why are you doing this in a loop?  What's the point of the LIMIT 1?
You can almost certainly refactor this procedure into a vanilla query.
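
One way to collapse it into a single statement (a sketch only, keeping the
newest row per imei and reusing the arr array already built in the function)
is DISTINCT ON:

SELECT DISTINCT ON (imei) *
FROM   gpstracking_device_tracks
WHERE  imei = ANY (arr)
AND    date_time_process >= date_trunc('hour', now())
AND    date_time_process <= now()
ORDER  BY imei, date_time_process DESC;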

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help getting a backtrace from 9.2 on Ubuntu 13.04?

2013-09-15 Thread Chris Curvey
On Sun, Sep 15, 2013 at 7:49 PM, Jeff Janes  wrote:

> On Tue, Sep 10, 2013 at 10:37 AM, Chris Curvey  wrote:
>
>>
>>
>>
>>
>> Great thought.  Looking through the logs, it appears that all my failures
>> are on a CREATE INDEX.  Usually on my biggest table, but often on another
>> table.
>>
>>
>>
>> 2013-09-10 10:09:46 EDT ERROR:  canceling autovacuum task
>>
>> 2013-09-10 10:09:46 EDT CONTEXT:  automatic analyze of table
>> "certified_mail_ccc2.public.cm_status_history"
>>
>> 2013-09-10 10:15:13 EDT LOG:  server process (PID 14386) was terminated
>> by signal 11: Segmentation fault
>>
>> 2013-09-10 10:15:13 EDT DETAIL:  Failed process was running: CREATE INDEX
>> cm_envelope_tracking_number ON cm_envelope USING btree (tracking_number);
>>
>>
>>
>>
>>
>>
>>
>> 2013-09-10 10:15:13 EDT LOG:  terminating any other active server
>> processes
>>
>> 2013-09-10 10:15:13 EDT WARNING:  terminating connection because of crash
>> of another server process
>>
>> 2013-09-10 10:15:13 EDT DETAIL:  The postmaster has commanded this server
>> process to roll back the current transaction and exit, because another
>> server process exited abnormally and possibly corrupted shared memory.
>>
>>
>>
>> I cannot square this with the fact that when I echo the commands, the
>> last echoed command is about setting privileges.
>>
>
> A backend is crashing, and taking down the entire PostgreSQL system.  The
> commands you see being echoed are from a different process from the one
> that triggered the crash, so it is just an innocent bystander which has no
> useful information.  Are you using parallel restore?  (If not, why is there
> someone indexing your biggest table during the restore?)
>

I'm the only person doing anything, and the only thing going on is the
restore.  And I'm not using parallel restore (I thought that might be part
of the issue.)


>
> You will want to get the backtrace of the coredump generated by the
> crashed backend, not of the running process. Have you tried taking a bt
> with gdb?  You said you couldn't find the symbols, but have you tried it
> anyway?  On CentOS and openSuse I often get warnings about some symbols not
> being found, but all the symbols I actually need to interpret the backtrace
> end up being there.
>

I can try, but the instructions said that installing the -dev package by
itself was not sufficient, so I stopped at that point.  But as they say in
the lottery ads, "Hey, you never know!"


> Cheers,
>
> Jeff
>

Many thanks!


Re: [GENERAL] help getting a backtrace from 9.2 on Ubuntu 13.04?

2013-09-15 Thread Jeff Janes
On Tue, Sep 10, 2013 at 10:37 AM, Chris Curvey
wrote:

>
>
>
>
> Great thought.  Looking through the logs, it appears that all my failures
> are on a CREATE INDEX.  Usually on my biggest table, but often on another
> table.
>
>
>
> 2013-09-10 10:09:46 EDT ERROR:  canceling autovacuum task
>
> 2013-09-10 10:09:46 EDT CONTEXT:  automatic analyze of table
> "certified_mail_ccc2.public.cm_status_history"
>
> 2013-09-10 10:15:13 EDT LOG:  server process (PID 14386) was terminated by
> signal 11: Segmentation fault
>
> 2013-09-10 10:15:13 EDT DETAIL:  Failed process was running: CREATE INDEX
> cm_envelope_tracking_number ON cm_envelope USING btree (tracking_number);
>
>
>
>
>
>
>
> 2013-09-10 10:15:13 EDT LOG:  terminating any other active server processes
>
> 2013-09-10 10:15:13 EDT WARNING:  terminating connection because of crash
> of another server process
>
> 2013-09-10 10:15:13 EDT DETAIL:  The postmaster has commanded this server
> process to roll back the current transaction and exit, because another
> server process exited abnormally and possibly corrupted shared memory.
>
>
>
> I cannot square this with the fact that when I echo the commands, the last
> echoed command is about setting privileges.
>

A backend is crashing, and taking down the entire PostgreSQL system.  The
commands you see being echoed are from a different process from the one
that triggered the crash, so it is just an innocent bystander which has no
useful information.  Are you using parallel restore?  (If not, why is there
someone indexing your biggest table during the restore?)

You will want to get the backtrace of the coredump generated by the crashed
backend, not of the running process. Have you tried taking a bt with gdb?
 You said you couldn't find the symbols, but have you tried it anyway?  On
CentOS and openSuse I often get warnings about some symbols not being
found, but all the symbols I actually need to interpret the backtrace end
up being there.
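
In case it helps, the usual recipe looks roughly like this (paths are
guesses for the Ubuntu 9.2 packages -- adjust to your install; the debug
symbols live in the postgresql-9.2-dbg package if it is available):

# allow core files before reproducing the crash (in the environment that
# starts the postmaster)
ulimit -c unlimited

# after the crash, the core file normally lands in the data directory
gdb /usr/lib/postgresql/9.2/bin/postgres /var/lib/postgresql/9.2/main/core
(gdb) bt full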

Cheers,

Jeff


Re: [GENERAL] help getting a backtrace from 9.2 on Ubuntu 13.04?

2013-09-10 Thread Adrian Klaver

On 09/10/2013 10:37 AM, Chris Curvey wrote:



Another development (possibly unrelated):  I tried **dumping** with
--no-privileges --no-tablespaces --no-owner, and the restore went fine.



Probably has to do with whether you are dumping plain text or custom format:

http://www.postgresql.org/docs/9.2/interactive/app-pgdump.html

-O
--no-owner
Do not output commands to set ownership of objects to match the original 
database. By default, pg_dump issues ALTER OWNER or SET SESSION 
AUTHORIZATION statements to set ownership of created database objects. 
These statements will fail when the script is run unless it is started 
by a superuser (or the same user that owns all of the objects in the 
script). To make a script that can be restored by any user, but will 
give that user ownership of all the objects, specify -O.


This option is only meaningful for the plain-text format. For the 
archive formats, you can specify the option when you call pg_restore.




--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help getting a backtrace from 9.2 on Ubuntu 13.04?

2013-09-10 Thread Jeff Janes
On Tue, Sep 10, 2013 at 6:57 AM, Chris Curvey 
wrote:
> From: Marcin Mańk [mailto:marcin.m...@gmail.com]
>
>
> > Is it crashing on a specific database object? pg_restore -v will tell
you
> > how far it went. Then try to restore only that object. Is it perhaps
> > crashing on a specific row?
> >
>
>
>
> Good advice.  I turned on –verbose, and got a ton of output, ending with:
>
>
>
> pg_restore: setting owner and privileges for FK CONSTRAINT
> user_id_refs_id_7ceef80f
>
> pg_restore: setting owner and privileges for FK CONSTRAINT
> user_id_refs_id_dfbab7d
>
> pg_restore: [archiver (db)] could not execute query: no connection to the
> server
>
> Command was: -- Completed on 2013-09-09 11:35:16 EDT
>

What does the server log say?  It should tell you why the server is
restarting.

Cheers,

Jeff


Re: [GENERAL] help getting a backtrace from 9.2 on Ubuntu 13.04?

2013-09-10 Thread Chris Curvey


From: Jeff Janes [mailto:jeff.ja...@gmail.com]
Sent: Tuesday, September 10, 2013 1:26 PM
To: Chris Curvey
Cc: Marcin Mańk; pgsql-general@postgresql.org
Subject: Re: [GENERAL] help getting a backtrace from 9.2 on Ubuntu 13.04?

On Tue, Sep 10, 2013 at 6:57 AM, Chris Curvey  wrote:
> From: Marcin Mańk [mailto:marcin.m...@gmail.com]
>
>
> > Is it crashing on a specific database object? pg_restore -v will tell you
> > how far it went. Then try to restore only that object. Is it perhaps
> > crashing on a specific row?
> >
>
>
>
> Good advice.  I turned on –verbose, and got a ton of output, ending with:
>
>
>
> pg_restore: setting owner and privileges for FK CONSTRAINT
> user_id_refs_id_7ceef80f
>
> pg_restore: setting owner and privileges for FK CONSTRAINT
> user_id_refs_id_dfbab7d
>
> pg_restore: [archiver (db)] could not execute query: no connection to the
> server
>
> Command was: -- Completed on 2013-09-09 11:35:16 EDT
>
What does the server log say?  It should tell you why the server is restarting.

Cheers,

Jeff

Great thought.  Looking through the logs, it appears that all my failures are 
on a CREATE INDEX.  Usually on my biggest table, but often on another table.

2013-09-10 10:09:46 EDT ERROR:  canceling autovacuum task
2013-09-10 10:09:46 EDT CONTEXT:  automatic analyze of table 
"certified_mail_ccc2.public.cm_status_history"
2013-09-10 10:15:13 EDT LOG:  server process (PID 14386) was terminated by 
signal 11: Segmentation fault
2013-09-10 10:15:13 EDT DETAIL:  Failed process was running: CREATE INDEX 
cm_envelope_tracking_number ON cm_envelope USING btree (tracking_number);



2013-09-10 10:15:13 EDT LOG:  terminating any other active server processes
2013-09-10 10:15:13 EDT WARNING:  terminating connection because of crash of 
another server process
2013-09-10 10:15:13 EDT DETAIL:  The postmaster has commanded this server 
process to roll back the current transaction and exit, because another server 
process exited abnormally and possibly corrupted shared memory.

I cannot square this with the fact that when I echo the commands, the last 
echoed command is about setting privileges.

Another development (possibly unrelated):  I tried *dumping* with
--no-privileges --no-tablespaces --no-owner, and the restore went fine.


Disclaimer

THIS IS A CONFIDENTIAL COMMUNICATION. The information contained in this e-mail 
is attorney-client privileged and confidential, intended only for the use of 
the individual or entity named above. If the reader of this message is not the 
intended recipient, or the employee or agent responsible to deliver it to the 
intended recipient, you are here by notified that any dissemination, 
distribution or copying of this communication is strictly prohibited. If you 
have received this communication in error, please notify us immediately by 
reply e-mail or by telephone (908) 233-8500.Thank you..


Re: [GENERAL] help getting a backtrace from 9.2 on Ubuntu 13.04?

2013-09-10 Thread Chris Curvey
> >
> > Good advice.  I turned on –verbose, and got a ton of output, ending with:
> >
> > pg_restore: setting owner and privileges for FK CONSTRAINT
> > user_id_refs_id_7ceef80f
> >
> > pg_restore: setting owner and privileges for FK CONSTRAINT
> > user_id_refs_id_dfbab7d
> >
> > pg_restore: [archiver (db)] could not execute query: no connection to
> > the server
> >
> >  Command was: -- Completed on 2013-09-09 11:35:16 EDT
> >
> > pg_restore: [archiver (db)] could not execute query: no connection to
> > the server

This was a red herring.  That last constraint that was listed is the last thing 
in the dump/restore (at least according to --list).  Perhaps at the end of 
pg_restore, it's trying to send the comment to the database ("completed on...") 
and that's confusing things?   It does appear that the entire database has been 
restored, but the cluster restart is disconcerting.

>
>
> At this point I would be more worried about the above, 'no connection to
> server'.

Yep, that would be the point where the cluster is restarting.

> --no-owner does not mean that ownership is not set, just that the ownership
> from the source database is not carried over.
>
> http://www.postgresql.org/docs/9.2/interactive/app-pgrestore.html
>
> -O
> --no-owner
> Do not output commands to set ownership of objects to match the original
> database. By default, pg_restore issues ALTER OWNER or SET SESSION
> AUTHORIZATION statements to set ownership of created schema elements.
> These statements will fail unless the initial connection to the database is
> made by a superuser (or the same user that owns all of the objects in the
> script). With -O, any user name can be used for the initial connection, and
> this user will own all the created objects.

Does this mean that the --no-owner is a command flag for pg_dump, but is 
ignored by pg_restore?   Let me run a test and see...

Disclaimer

THIS IS A CONFIDENTIAL COMMUNICATION. The information contained in this e-mail 
is attorney-client privileged and confidential, intended only for the use of 
the individual or entity named above. If the reader of this message is not the 
intended recipient, or the employee or agent responsible to deliver it to the 
intended recipient, you are here by notified that any dissemination, 
distribution or copying of this communication is strictly prohibited. If you 
have received this communication in error, please notify us immediately by 
reply e-mail or by telephone (908) 233-8500.Thank you..

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] help getting a backtrace from 9.2 on Ubuntu 13.04?

2013-09-10 Thread Adrian Klaver

On 09/10/2013 06:57 AM, Chris Curvey wrote:

*From:*Marcin Mańk [mailto:marcin.m...@gmail.com]
*Sent:* Monday, September 09, 2013 8:30 PM
*To:* Chris Curvey
*Cc:* pgsql-general@postgresql.org
*Subject:* Re: [GENERAL] help getting a backtrace from 9.2 on Ubuntu 13.04?

On Mon, Sep 9, 2013 at 4:00 PM, Chris Curvey  wrote:

But I'm having troubles with the 9.2 server crashing when I'm
restoring the dump.  I'm using the 9.2 version of pg_dump.  I've
tried restoring a custom-format dump with pg_restore, and I've tried
restoring  a text-format dump with pqsl, and both of them are
crashing on me.

The data is too sensitive for me to submit a database dump to the
community, but I'd like to submit a stack trace, in the hopes that
someone might be able to figure out what's going on.  But I'm having
some trouble getting this done.

Is it crashing on a specific database object? pg_restore -v will tell
you how far it went. Then try to restore only that object. Is it perhaps
crashing on a specific row?

Try producing a self contained test case (like only the culprit table,
anonymized).

Regards

Marcin Mańk

Good advice.  I turned on –verbose, and got a ton of output, ending with:

pg_restore: setting owner and privileges for FK CONSTRAINT
user_id_refs_id_7ceef80f

pg_restore: setting owner and privileges for FK CONSTRAINT
user_id_refs_id_dfbab7d

pg_restore: [archiver (db)] could not execute query: no connection to
the server

 Command was: -- Completed on 2013-09-09 11:35:16 EDT

pg_restore: [archiver (db)] could not execute query: no connection to
the server



At this point I would be more worried about the above, 'no connection to 
server'.




 Command was: --

-- PostgreSQL database dump complete

--

Which I find really odd, because I specified --no-owner --no-privileges
--no-tablespaces


--no-owner does not mean that ownership is not set, just that the 
ownership from the source database is not carried over.


http://www.postgresql.org/docs/9.2/interactive/app-pgrestore.html

-O
--no-owner
Do not output commands to set ownership of objects to match the original 
database. By default, pg_restore issues ALTER OWNER or SET SESSION 
AUTHORIZATION statements to set ownership of created schema elements. 
These statements will fail unless the initial connection to the database 
is made by a superuser (or the same user that owns all of the objects in 
the script). With -O, any user name can be used for the initial 
connection, and this user will own all the created objects.




chris@mu:/sdb$ pg_restore --dbname=certified_mail_ccc2 --format=c
--verbose --clean --no-owner --no-privileges --no-tablespaces -h mu -p
5434 cm_Mon.backup

So now I’m up to three questions.  (Why the crash?  How to get
backtrace?  Why are we applying permissions when I said not to?)  I
guess that’s the nature of the universe. Let me see if I can figure out
which table that is and try to create a test case.





--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] help getting a backtrace from 9.2 on Ubuntu 13.04?

2013-09-10 Thread Chris Curvey
From: Marcin Mańk [mailto:marcin.m...@gmail.com]
Sent: Monday, September 09, 2013 8:30 PM
To: Chris Curvey
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] help getting a backtrace from 9.2 on Ubuntu 13.04?

On Mon, Sep 9, 2013 at 4:00 PM, Chris Curvey wrote:
But I'm having troubles with the 9.2 server crashing when I'm restoring the 
dump.  I'm using the 9.2 version of pg_dump.  I've tried restoring a 
custom-format dump with pg_restore, and I've tried restoring  a text-format 
dump with psql, and both of them are crashing on me.

The data is too sensitive for me to submit a database dump to the community, 
but I'd like to submit a stack trace, in the hopes that someone might be able 
to figure out what's going on.  But I'm having some trouble getting this done.

Is it crashing on a specific database object? pg_restore -v will tell you how 
far it went. Then try to restore only that object. Is it perhaps crashing on a 
specific row?

Try producing a self contained test case (like only the culprit table, 
anonymized).

Regards
Marcin Mańk

Good advice.  I turned on --verbose, and got a ton of output, ending with:

pg_restore: setting owner and privileges for FK CONSTRAINT 
user_id_refs_id_7ceef80f
pg_restore: setting owner and privileges for FK CONSTRAINT 
user_id_refs_id_dfbab7d
pg_restore: [archiver (db)] could not execute query: no connection to the server
Command was: -- Completed on 2013-09-09 11:35:16 EDT


pg_restore: [archiver (db)] could not execute query: no connection to the server
Command was: --
-- PostgreSQL database dump complete
--

Which I find really odd, because I specified --no-owner --no-privileges
--no-tablespaces
chris@mu:/sdb$ pg_restore --dbname=certified_mail_ccc2 --format=c --verbose 
--clean --no-owner --no-privileges --no-tablespaces -h mu -p 5434 cm_Mon.backup

So now I’m up to three questions.  (Why the crash?  How to get backtrace?  Why 
are we applying permissions when I said not to?)  I guess that’s the nature of 
the universe. Let me see if I can figure out which table that is and try to 
create a test case.



Re: [GENERAL] help getting a backtrace from 9.2 on Ubuntu 13.04?

2013-09-09 Thread Marcin Mańk
On Mon, Sep 9, 2013 at 4:00 PM, Chris Curvey wrote:

> But I'm having troubles with the 9.2 server crashing when I'm restoring
> the dump.  I'm using the 9.2 version of pg_dump.  I've tried restoring a
> custom-format dump with pg_restore, and I've tried restoring  a text-format
> dump with pqsl, and both of them are crashing on me.
>
> The data is too sensitive for me to submit a database dump to the
> community, but I'd like to submit a stack trace, in the hopes that someone
> might be able to figure out what's going on.  But I'm having some trouble
> getting this done.
>

Is it crashing on a specific database object? pg_restore -v will tell you
how far it went. Then try to restore only that object. Is it perhaps
crashing on a specific row?

Try producing a self contained test case (like only the culprit table,
anonymized).

Regards
Marcin Mańk


Re: [GENERAL] help getting a backtrace from 9.2 on Ubuntu 13.04?

2013-09-09 Thread John R Pierce

On 9/9/2013 4:53 PM, John R Pierce wrote:

On 9/9/2013 7:00 AM, Chris Curvey wrote:

any idea where I go from here?


don't develop on a newer version of the database than you are 
deploying on.


*maybe* you can use the pg_dump from 8.4 to connect to and dump the 9.2
database, but the 9.2 dump is NOT guaranteed to generate 8.4
compatible SQL; in fact it's highly likely it won't, as you can already
attest to.


oops, I re-read your request; you're going the other way.

use the pg_dump from 9.2 to dump the 8.4 database, via either a direct 
network connection, or an ssh tunnel, or whatever works best.   this is 
guaranteed to work




--
john r pierce  37N 122W
somewhere on the middle of the left coast



Re: [GENERAL] help getting a backtrace from 9.2 on Ubuntu 13.04?

2013-09-09 Thread John R Pierce

On 9/9/2013 7:00 AM, Chris Curvey wrote:

any idea where I go from here?


don't develop on a newer version of the database than you are deploying on.

*maybe* you can use the pg_dump from 8.4 to connect to and dump the 9.2
database, but the 9.2 dump is NOT guaranteed to generate 8.4 compatible
SQL; in fact it's highly likely it won't, as you can already attest to.





--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Help, How to start the server??

2013-05-29 Thread YuChi
wooo, thanks for all of your advice

it seems to be ok now!







Re: [GENERAL] Help, How to start the server??

2013-05-24 Thread Raymond O'Donnell
On 24/05/2013 08:35, YuChi wrote:
> i use postgresql-9.2.4 install on ubuntu11.04(natty) 
> and i use the command: 
> 
> *postgres@ubuntu:/$ postgres -D /usr/local/pgsql/data*
> 
> or use
> 
> *postgres@ubuntu:/$ /usr/local/pgsql/bin/postmaster -D
> /usr/local/pgsql/data*
> 
> to start the server
> but it's stopping at the message:
> 
> *LOG:  database system was shut down at 2013-05-24 15:07:34 CST
> LOG:  database system is ready to accept connections
> LOG:  autovacuum launcher started*
> 
> for a long time
> it seems never continue again
> and i don't know how to do next...
> can anyone tell me is there anything wrong? or how to do next?

It's just waiting for connections. Try opening a different console
window, and connect to it from there:

  psql -U  

However, you probably want it to run as a daemon; I don't know about
Ubuntu, but on debian it's set up for you already, if you install using
apt-get:

  /etc/init.d/postgresql start

In fact, it's probably already running after being installed:

  ps ax | grep postgres

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie




Re: [GENERAL] Help, How to start the server??

2013-05-24 Thread Adrian Klaver

On 05/24/2013 12:35 AM, YuChi wrote:

i use postgresql-9.2.4 install on ubuntu11.04(natty)
and i use the command:

*postgres@ubuntu:/$ postgres -D /usr/local/pgsql/data*

or use

*postgres@ubuntu:/$ /usr/local/pgsql/bin/postmaster -D
/usr/local/pgsql/data*

to start the server
but it's stopping at the message:

*LOG:  database system was shut down at 2013-05-24 15:07:34 CST


The above is the previous shut down.


LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started*


The database is now up.



for a long time
it seems never continue again
and i don't know how to do next...
can anyone tell me is there anything wrong? or how to do next?


Start a client connection to test.
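
Once connected with psql from another terminal, any trivial query confirms that
the server is accepting connections; a minimal illustration:

    SELECT version();   -- returns the server version string if the connection works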




thank you








--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Help, How to start the server??

2013-05-24 Thread Ian Lawrence Barwick
2013/5/24 YuChi :
> i use postgresql-9.2.4 install on ubuntu11.04(natty)
> and i use the command:
>
> *postgres@ubuntu:/$ postgres -D /usr/local/pgsql/data*
>
> or use
>
> *postgres@ubuntu:/$ /usr/local/pgsql/bin/postmaster -D
> /usr/local/pgsql/data*
>
> to start the server
> but it's stopping at the message:
>
> *LOG:  database system was shut down at 2013-05-24 15:07:34 CST
> LOG:  database system is ready to accept connections
> LOG:  autovacuum launcher started*
>
> for a long time
> it seems never continue again
> and i don't know how to do next...
> can anyone tell me is there anything wrong? or how to do next?

That's perfectly normal log output after a server start. Have you tried
connecting with psql?

Ian Barwick




Re: [GENERAL] help with log entries during restart

2013-05-07 Thread Takashi Ohnishi
Hi, Davis.

I guess that the database was restarted with [pg_ctl restart -m fast] while
there were still continuous requests coming from the remote servers.

> (1) 
> Does this prove or strongly indicate that somebody did a [pg_ctl_restart]
?
> 

Yes. It seems that someone did [pg_ctl restart -m fast].
When the [-m fast] option is used, the postmaster sends SIGTERM to all the
backend processes, so any active transactions are aborted.

> (2) 
> Can you explain/speculate the meaning of the ~100 "PID 26897 in cancel
request did not match any process" entries, 
> logged between
> "the database system is shutting down"
> and
> "database system is shut down" ?
> 

It means that a cancel request arrived for the backend process with PID
26897, but no backend with that PID existed at the time.
I guess that backend process had already been closed from the client side.

> (3) 
> Can you explain/speculate the meaning of the ~30 "PID 26897 in cancel
request did not match any process" entries, 
> logged between
> "database system is shut down"
> and
> "database system was shut down at 17:19:05 EDT" ?
> 

Hmm.
> "database system is shut down"
is the message logged by the shutdown checkpoint, and
> "database system was shut down at 17:19:05 EDT" ?
is the message logged at startup, reporting when that shutdown checkpoint
completed.

Perhaps there were still requests arriving from the remote servers.

> (4) 
> Can you explain/speculate the meaning of the 3 "FATAL:  the database
system is starting up" entries, 
> logged between
> "database system was shut down at 17:19:05 EDT"
> and
> "database system is ready to accept connections" ?

It means that the requests from user2 arrived while postgres was still
initializing and not yet ready to accept connections.


Regards,


Takashi Ohnishi 


From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Sahagian, David
Sent: Wednesday, May 08, 2013 6:03 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] help with log entries during restart 


log_line_prefix = '%m %a %u %c %v %x '
  # %m  Time stamp with milliseconds
  # %a  Application name
  # %u  User name
  # %c  emits a quasi-unique Session ID, 
  #   consisting of two 4-byte hexadecimal numbers (without leading
zeros) separated by a dot. 
  #   The numbers are the "Process start time" and the "Process ID".
  # %v  Virtual transaction ID (backendID/localXID)
  # %x  Transaction ID (0 if none is assigned)

This was logged by a 9.1.8 Postgres server in a testing-environment:

17:19:01.825 EDT   517a97eb.5081  0 LOG:  received fast shutdown request
17:19:01.825 EDT   517a97eb.5081  0 LOG:  aborting any active transactions
. . . . . . . . . . . . . 
17:19:03.137 EDT   517a97f1.5084  0 LOG:  shutting down
17:19:03.839 EDT [unknown] user1 51881e47.6e6f  0 FATAL:  the database
system is shutting down
17:19:05.187 EDT [unknown] [unknown] 51881e49.6e70  0 LOG:  PID 26897 in
cancel request did not match any process
17:19:05.193 EDT [unknown] [unknown] 51881e49.6e71  0 LOG:  PID 26897 in
cancel request did not match any process
17:19:05.198 EDT [unknown] [unknown] 51881e49.6e72  0 LOG:  PID 26897 in
cancel request did not match any process
17:19:05.202 EDT [unknown] [unknown] 51881e49.6e73  0 LOG:  PID 26897 in
cancel request did not match any process
17:19:05.207 EDT [unknown] [unknown] 51881e49.6e74  0 LOG:  PID 26897 in
cancel request did not match any process
. . . about 85 more . . .
17:19:05.952 EDT [unknown] [unknown] 51881e49.6ecb  0 LOG:  PID 26897 in
cancel request did not match any process
17:19:05.959 EDT [unknown] [unknown] 51881e49.6ecc  0 LOG:  PID 26897 in
cancel request did not match any process
17:19:05.964 EDT [unknown] [unknown] 51881e49.6ecd  0 LOG:  PID 26897 in
cancel request did not match any process
17:19:05.970 EDT [unknown] [unknown] 51881e49.6ece  0 LOG:  PID 26897 in
cancel request did not match any process
17:19:05.975 EDT [unknown] [unknown] 51881e49.6ecf  0 LOG:  PID 26897 in
cancel request did not match any process
17:19:05.977 EDT   517a97f1.5084  0 LOG:  database system is shut down
17:19:05.980 EDT [unknown] [unknown] 51881e49.6ed0  0 LOG:  PID 26897 in
cancel request did not match any process
17:19:05.985 EDT [unknown] [unknown] 51881e49.6ed1  0 LOG:  PID 26897 in
cancel request did not match any process
17:19:05.994 EDT [unknown] [unknown] 51881e49.6ed2  0 LOG:  PID 26897 in
cancel request did not match any process
17:19:06.000 EDT [unknown] [unknown] 51881e4a.6ed3  0 LOG:  PID 26897 in
cancel request did not match any process
17:19:06.023 EDT [unknown] [unknown] 51881e4a.6ed4  0 LOG:  PID 26897 in
cancel request did not match any process
. . . about 20 more . . .
17:19:06.225 EDT [unknown] [unknown] 51881e4a.6eea  0 LOG:  PID 26897 in
cancel request did not match any process
17:19:06.232 EDT [unknown] [unknown] 51881e4a.6eeb  0 LOG:  PID 26897 in
cancel request did not match any process
17:19:06.240 EDT [unknown] [unknown] 51881e4a.6eec  0 LOG:  PID 26897 in
cancel request did not match any process
17:19:06.245 EDT [unknown] [unknown]

Re: [GENERAL] help me to clear postgres problem

2013-03-26 Thread dinesh kumar
IIRC, a service failure creates event log entries in Windows. You can also
check the BitRock installer log files in the %TEMP% location.

Regards,
Dinesh
manojadinesh.blogspot.com

On Mon, Mar 25, 2013 at 5:05 PM, jayaram s <123jaya...@gmail.com> wrote:

> Hello
> I have installed PostgreSQL 8.4.1 in my PC. For the requirement of data
> migration I again want to install "PostgreSQL enterprise DB  9.2".
> I couldn't install it because
> I have select option "postgresql compatible" on "configuration mode". So
> prompt wants me to enter "password". I have enter my existing postgres
> password "postgres'. But I couldn't install. An error message displayed as
> "service user account 'postgres' could not be created". Please help me
> to clear the problem.
>
> --
> *With Regards
>
> Jayaram
>
> *
>
>


Re: [GENERAL] help me to clear postgres problem

2013-03-25 Thread Guy Rouillier

On 3/25/2013 7:35 AM, jayaram s wrote:

Hello
I have installed PostgreSQL 8.4.1 in my PC. For the requirement of data
migration I again want to install "PostgreSQL enterprise DB  9.2".
I couldn't install it because
I have select option "postgresql compatible" on "configuration mode". So
prompt wants me to enter "password". I have enter my existing postgres
password "postgres'. But I couldn't install. An error message displayed
as "service user account 'postgres' could not be created". Please help me
to clear the problem.


Are you intentionally trying to install PostgresPlus Advanced Server? 
If you are working just on your PC, you should be able to use the 
PostgreSQL installer: 
http://www.enterprisedb.com/products-services-training/pgdownload#windows


The password the PPAS installer is asking you for is the password to 
your EnterpriseDB account, not a local Windows account.  You need to 
register an EnterpriseDB account before you can install PPAS.


--
Guy Rouillier




Re: [GENERAL] help with upgrade from 9.1 to 9.2

2012-11-07 Thread Bruce Momjian
On Thu, Oct 25, 2012 at 02:39:09AM -0200, Aníbal Pacheco wrote:
> I could after some work, what I want to ask now is this:
> In the middle of the pg_restore process I had to stop it (^Z) and remove one
> problematic and not needed database from the generated pg_upgrade_dump_db.sql
> file and then continue the process with fg, of course it failed BUT: I started
> the new server and checked for my only needed database and it seems to be ok,
> can I be sure that this database was restored correctly? I think that is very
> probably that the answer is yes because the pg_restore process probably
> restores the databases in sequence like transactions, I'm right? thanks!

Pg_upgrade is quite complex.  I would not trust this as a valid upgrade.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [GENERAL] help with upgrade from 9.1 to 9.2

2012-10-24 Thread Aníbal Pacheco
I could, after some work. What I want to ask now is this:
In the middle of the pg_restore process I had to stop it (^Z) and remove
one problematic and unneeded database from the generated
pg_upgrade_dump_db.sql file, and then continue the process with fg.
Of course it failed, BUT: I started the new server and checked my only
needed database and it seems to be OK. Can I be sure that this database
was restored correctly? I think the answer is very probably yes, because
pg_restore probably restores the databases in sequence, like transactions.
Am I right? Thanks!


On Thu, Oct 25, 2012 at 12:18 AM, Aníbal Pacheco wrote:

> Hi, I'm trying to upgrade from 9.1 to 9.2 but lc_collate issue is very
> difficult to solve, How I know the old lc_collate? I've only could grepped
> old settings lc_numeric, lc_time, lc_messages and lc_monetary all them =
> 'C' but I don't know what lc_collate was or how to find it. please help.
> Thanks
>
> (sorry if this message gets dupplicated, I've sent a copy just a sec
> before confirm the list membership)
>


Re: [GENERAL] Help estimating database and WAL size

2012-10-19 Thread Frank Lanitz

On 2012-10-15 23:13, John R Pierce wrote:

On 10/15/12 2:03 PM, Daniel Serodio (lists) wrote:

John R Pierce wrote:

On 10/08/12 1:39 PM, Daniel Serodio (lists) wrote:

3) Estimate the size of the transaction log
** We've got no idea how to estimate this, need advice **


postgres doesn't have a 'transaction log', it has the WAL 
(Write-Ahead Logs).  These are typically 16MB each.  on databases 
with a really heavy write load, I might bump the checkpoint_segments 
as high as 60, which seems to result in about 120 of them being 
created, 2GB total.  these files get reused, unless you are archiving 
them to implement a continuous realtime backup system (which enables 
"PITR", Point in Time Recovery)
Thanks, I was using the term "transaction log" as a synonym for WAL. 
We're planning on enabling PITR; how can we calculate the WAL size and 
the WAL archive size in this case?



it's based on how much data you're writing to the database.   When
you write tuples (rows) to the database, they are stored in 8K
pages/blocks which are written to the current WAL file as they are
committed; when that WAL file fills up (or archive_timeout forces an
early segment switch), the WAL file is written to the archive.

To be able to utilize PITR, you need a complete base backup of the
file system, and /all/ the archived WAL files since that base backup
was taken.


In many cases you will also write these files to some kind of network
storage, e.g. via CIFS or NFS, so that your warm-standby machines have
access to them. My point is: this takes some storage, but it can be
considered largely independently of the database itself.


Cheers,
Frank







Re: [GENERAL] Help estimating database and WAL size

2012-10-19 Thread Jasen Betts
On 2012-10-15, Daniel Serodio (lists)  wrote:
>>
>> OID is optional, IIRC PGXID is not
> I hadn't heard of PGXID, I've just searched Google but found no 
> reference to this term except for this e-mail thread and some source 
> code. What is PGXID? Where can I learn more about hit?

That was the wrong name; there is some sort of ID associated with the row
(the transaction ID information in the tuple header). It is part of the
fixed per-row overhead, and it may vary between different versions of the
row.

-- 
⚂⚃ 100% natural





Re: [GENERAL] help for this situation

2012-10-17 Thread Albe Laurenz
GMAIL wrote:
> i describe the situation:
> i have two pc with postgressql server:
> - a main pc, with ip 192.168.5.1 turned on
> - a "backup" pc, with ip 192.168.5.1 turned off
> 
> i want that the main pc saves the database in local hard drive and on
a
> nas real-time. when the main pc has a failure, i turn on, manually,
the
> secondary pc, that reads the database from the nas
> 
> how i can do that?

What do you want to guard against? Hardware outage?
Software bugs?

I don't think that the scenario you describe is possible.

There are other, maybe better ways:

1) Have the database on NAS and use storage mirroring.
   NAS in that case means NFS (hard mount!), *not CIFS*.
   That will help against hardware outage, but not against
   filesystem or database corruption.

2) Use a standby database with streaming replication.
   That will also help with filesystem and database
   corruption.

Yours,
Laurenz Albe




Re: [GENERAL] Help estimating database and WAL size

2012-10-15 Thread John R Pierce

On 10/15/12 2:03 PM, Daniel Serodio (lists) wrote:

John R Pierce wrote:

On 10/08/12 1:39 PM, Daniel Serodio (lists) wrote:

3) Estimate the size of the transaction log
** We've got no idea how to estimate this, need advice ** 


postgres doesn't have a 'transaction log', it has the WAL 
(Write-Ahead Logs).  These are typically 16MB each.  on databases 
with a really heavy write load, I might bump the checkpoint_segments 
as high as 60, which seems to result in about 120 of them being 
created, 2GB total.  these files get reused, unless you are archiving 
them to implement a continuous realtime backup system (which enables 
"PITR", Point in Time Recovery)
Thanks, I was using the term "transaction log" as a synonym for WAL. 
We're planning on enabling PITR; how can we calculate the WAL size and 
the WAL archive size in this case? 



it's based on how much data you're writing to the database.   When you
write tuples (rows) to the database, they are stored in 8K pages/blocks
which are written to the current WAL file as they are committed; when
that WAL file fills up (or archive_timeout forces an early segment
switch), the WAL file is written to the archive.


To be able to utilize PITR, you need a complete base backup of the file 
system, and /all/ the archived WAL files since that base backup was taken.
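
Since the archive grows with the amount of WAL the workload writes, one way to
estimate it is to compare WAL positions before and after a representative
interval. A minimal sketch, assuming PostgreSQL 9.2 or later for
pg_xlog_location_diff(); the starting location shown is only a placeholder:

    SELECT pg_current_xlog_location();    -- note the value, e.g. '0/16B3D80'
    -- ...run a representative workload for a while, then:
    SELECT pg_xlog_location_diff(pg_current_xlog_location(), '0/16B3D80')
           AS wal_bytes_generated;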



--
john r pierceN 37, W 122
santa cruz ca mid-left coast





Re: [GENERAL] Help estimating database and WAL size

2012-10-15 Thread Daniel Serodio (lists)

John R Pierce wrote:

On 10/08/12 1:39 PM, Daniel Serodio (lists) wrote:

3) Estimate the size of the transaction log
** We've got no idea how to estimate this, need advice ** 


postgres doesn't have a 'transaction log', it has the WAL (Write-Ahead 
Logs).  These are typically 16MB each.  on databases with a really 
heavy write load, I might bump the checkpoint_segments as high as 60, 
which seems to result in about 120 of them being created, 2GB total.  
these files get reused, unless you are archiving them to implement a 
continuous realtime backup system (which enables "PITR", Point in Time 
Recovery)
Thanks, I was using the term "transaction log" as a synonym for WAL. 
We're planning on enabling PITR; how can we calculate the WAL size and 
the WAL archive size in this case?


Regards,
Daniel Serodio




Re: [GENERAL] Help estimating database and WAL size

2012-10-15 Thread Daniel Serodio (lists)

Jasen Betts wrote:

On 2012-10-08, Daniel Serodio (lists)  wrote:

We are preparing a PostgreSQL database for production usage and we need
to estimate the storage size for this database. We're a team of
developers with low expertise on database administration, so we are
doing research, reading manuals and using our general IT knowledge to
achieve this.

We have actual data to migrate to this database and some rough
estimations of growth. For the sake of the example, let's say we have a
estimation of growth of 50% per year.

The point is: what's the general proper technique for doing a good size
estimation?

We are estimating the storage usage by the following rules. Topics where
we need advice are marked with ** asterisks **. Feedback on the whole
process is more than welcome.

1) Estimate the size of each table
  1.1) Discover the actual size of each row.
  - For fields with a fixed size (like bigint, char, etc) we used
the sizes described in the documentation
  - For fields with a dynamic size (like text) we estimated the
string length and used the function select pg_column_size('expected text
here'::text)


long text is subject to compression, pg_column_size doesn't seem to
test compression, compression is some sort of LZ..
Interesting, I didn't know about automatic compression. I've just read 
the section on TOAST and haven't been able to answer this either: Is 
there any way to check for the compressed size?

  - We added 4 more bytes for the OID that PostgreSQL uses internally


OID is optional, IIRC PGXID is not
I hadn't heard of PGXID, I've just searched Google but found no 
reference to this term except for this e-mail thread and some source 
code. What is PGXID? Where can I learn more about it?

  1.2) Multiply the size of each row by the number of estimated rows
  ** Do I need to consider any overhead here, like row or table
metadata? **


page size 8K
column overhead 1 byte per not-NULL column, NULLs are free,


2) Estimate the size of each table index
  ** Don't know how to estimate this, need advice here **


IIRC
( data being indexed + 8 bytes ) / fill factor


3) Estimate the size of the transaction log
  ** We've got no idea how to estimate this, need advice **


how big are your transactions?

Very short, a couple of statements each.

4) Estimate the size of the backups (full and incremental)
  ** Don't know how to estimate this, need advice here **


depends on the format you use, backups tend to compress well.


5) Sum all the estimates for the actual minimum size


no, you get estimated size.

Thanks a lot for the response.

Regards,
Daniel Serodio

6) Apply a factor of 1.5x (the 50% growth) to the sum of the estimates
1, 2 and 4 for the minimum size after 1 year

7) Apply an overall factor of 1.2 ~ 1.4 (20% to 40% more) to estimates 5
and 6 for a good safety margin

I know the rules got pretty extensive, please let me know if you need
more data or examples for a better understanding.

We've also posted this question to
http://dba.stackexchange.com/q/25617/10166
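
Regarding the compression question above: pg_column_size() on a value stored in
a table column reports the on-disk (possibly compressed) size of the datum,
which can be compared against octet_length(). A throwaway sketch, using a
hypothetical temporary table:

    CREATE TEMP TABLE toast_test (payload text);
    INSERT INTO toast_test VALUES (repeat('some highly compressible text ', 10000));
    SELECT octet_length(payload)   AS raw_bytes,     -- uncompressed length
           pg_column_size(payload) AS stored_bytes   -- on-disk, possibly compressed
    FROM toast_test;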


Re: [GENERAL] Help estimating database and WAL size

2012-10-12 Thread John R Pierce

On 10/08/12 1:39 PM, Daniel Serodio (lists) wrote:

3) Estimate the size of the transaction log
** We've got no idea how to estimate this, need advice ** 


postgres doesn't have a 'transaction log', it has the WAL (Write-Ahead 
Logs).  These are typically 16MB each.  on databases with a really heavy 
write load, I might bump the checkpoint_segments as high as 60, which 
seems to result in about 120 of them being created, 2GB total.  these 
files get reused, unless you are archiving them to implement a 
continuous realtime backup system (which enables "PITR", Point in Time 
Recovery)




--
john r pierceN 37, W 122
santa cruz ca mid-left coast





Re: [GENERAL] Help estimating database and WAL size

2012-10-12 Thread Jasen Betts
On 2012-10-08, Daniel Serodio (lists)  wrote:
> We are preparing a PostgreSQL database for production usage and we need 
> to estimate the storage size for this database. We're a team of 
> developers with low expertise on database administration, so we are 
> doing research, reading manuals and using our general IT knowledge to 
> achieve this.
>
> We have actual data to migrate to this database and some rough 
> estimations of growth. For the sake of the example, let's say we have a 
> estimation of growth of 50% per year.
>
> The point is: what's the general proper technique for doing a good size 
> estimation?
>
> We are estimating the storage usage by the following rules. Topics where 
> we need advice are marked with ** asterisks **. Feedback on the whole 
> process is more than welcome.
>
> 1) Estimate the size of each table
>  1.1) Discover the actual size of each row.
>  - For fields with a fixed size (like bigint, char, etc) we used 
> the sizes described in the documentation
>  - For fields with a dynamic size (like text) we estimated the 
> string length and used the function select pg_column_size('expected text 
> here'::text)

long text is subject to compression, pg_column_size doesn't seem to 
test compression, compression is some sort of LZ..

>  - We added 4 more bytes for the OID that PostgreSQL uses internally

OID is optional, IIRC PGXID is not 

>  1.2) Multiply the size of each row by the number of estimated rows
>  ** Do I need to consider any overhead here, like row or table 
> metadata? **

page size 8K
column overhead 1 byte per not-NULL column, NULLs are free,

> 2) Estimate the size of each table index
>  ** Don't know how to estimate this, need advice here **

IIRC
( data being indexed + 8 bytes ) / fill factor
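
(For instance, applying that rule of thumb to an index on a single int4 column
with the default btree fillfactor of 90 gives very roughly (4 + 8) / 0.9 ≈ 13
bytes per indexed row, before page-level overhead.)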


> 3) Estimate the size of the transaction log
>  ** We've got no idea how to estimate this, need advice **

how big are your transactions?

> 4) Estimate the size of the backups (full and incremental)
>  ** Don't know how to estimate this, need advice here **

depends on the format you use, backups tend to compress well.

> 5) Sum all the estimates for the actual minimum size

no, you get estimated size.

> 6) Apply a factor of 1.5x (the 50% growth) to the sum of the estimates 
> 1, 2 and 4 for the minimum size after 1 year
>
> 7) Apply an overall factor of 1.2 ~ 1.4 (20% to 40% more) to estimates 5 
> and 6 for a good safety margin
>
> I know the rules got pretty extensive, please let me know if you need 
> more data or examples for a better understanding.
>
> We've also posted this question to 
> http://dba.stackexchange.com/q/25617/10166
>
> Thanks in advance,
> Daniel Serodio
>
>


-- 
⚂⚃ 100% natural





Re: [GENERAL] Help estimating database and WAL size

2012-10-08 Thread Gavin Flower

On 09/10/12 09:39, Daniel Serodio (lists) wrote:
We are preparing a PostgreSQL database for production usage and we 
need to estimate the storage size for this database. We're a team of 
developers with low expertise on database administration, so we are 
doing research, reading manuals and using our general IT knowledge to 
achieve this.


We have actual data to migrate to this database and some rough 
estimations of growth. For the sake of the example, let's say we have 
a estimation of growth of 50% per year.


The point is: what's the general proper technique for doing a good 
size estimation?


We are estimating the storage usage by the following rules. Topics 
where we need advice are marked with ** asterisks **. Feedback on the 
whole process is more than welcome.


1) Estimate the size of each table
1.1) Discover the actual size of each row.
- For fields with a fixed size (like bigint, char, etc) we 
used the sizes described in the documentation
- For fields with a dynamic size (like text) we estimated the 
string length and used the function select pg_column_size('expected 
text here'::text)
- We added 4 more bytes for the OID that PostgreSQL uses 
internally

1.2) Multiply the size of each row by the number of estimated rows
** Do I need to consider any overhead here, like row or table 
metadata? **


2) Estimate the size of each table index
** Don't know how to estimate this, need advice here **

3) Estimate the size of the transaction log
** We've got no idea how to estimate this, need advice **

4) Estimate the size of the backups (full and incremental)
** Don't know how to estimate this, need advice here **

5) Sum all the estimates for the actual minimum size

6) Apply a factor of 1.5x (the 50% growth) to the sum of the estimates 
1, 2 and 4 for the minimum size after 1 year


7) Apply an overall factor of 1.2 ~ 1.4 (20% to 40% more) to estimates 
5 and 6 for a good safety margin


I know the rules got pretty extensive, please let me know if you need 
more data or examples for a better understanding.


We've also posted this question to 
http://dba.stackexchange.com/q/25617/10166


Thanks in advance,
Daniel Serodio



You also have to allow for table & index bloat.

When a record is DELETEd or UPDATEd, the space used on the disk is not 
automatically reclaimed. So in a very volatile database, the size of the 
data files could be several times bigger than the actual data storage 
requires. There are automatic and manual procedures (look up VACUUM) 
that can keep this under control. However, you will still need to 
account for bloat. The extent of bloat depends very much on your usage 
patterns.
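
As a sanity check on the estimates, once some representative data is loaded the
actual on-disk sizes (including indexes and any bloat) can be read directly; a
minimal sketch for 9.0 or later, with 'mytable' as a placeholder name:

    SELECT pg_size_pretty(pg_relation_size('mytable'))       AS heap_size,
           pg_size_pretty(pg_indexes_size('mytable'))        AS index_size,
           pg_size_pretty(pg_total_relation_size('mytable')) AS total_size;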



Cheers,
Gavin



Re: [GENERAL] Help with query timeout

2012-10-07 Thread Edson Richter

On 07/10/2012 02:46, Andreas Kretschmer wrote:

Edson Richter  wrote:


Dear friends,

I'm using Java 1.6 with Jdbc 4 driver with PostgreSQL 9.1.5 on Windows
64 and Linux 64.
Trying to use setQueryTimeout(int), I get the following stack trace:
Internal Exception: org.postgresql.util.PSQLException: Method
org.postgresql.jdbc4.Jdbc4PreparedStatement.setQueryTimeout(int) not yet
implemented.
(free translation of the equivalent Portuguese message).

My question is: if setQueryTimeout is not implemented, how can I set a
query timeout parameter (per query or per session) using PostgreSQL? Is
there any "set ..." or connection parameter I can use?

Yeah:

statement_timeout



Andreas

Yes, I see.

But there is a little problem then: I'm using JPA. And I can only obtain 
a java.sql.Connection from inside a transaction (JPA 2 limitation).


I managed to make it work, and I would like to share it with the community:

I've a BrokerUtil class with several JPA utility methods. I've one 
method to get EntityManager, and one to close EntityManager.

Then I've created the method setTimeOut and resetTimeout as:


  public static void setTimeout(EntityManager em, int segundos) throws SQLException {

    EntityTransaction et = em.getTransaction();

    if (!et.isActive()) {
      et.begin();
    }

    // unwrap the underlying JDBC connection and set the timeout in milliseconds
    java.sql.Connection cn = em.unwrap(java.sql.Connection.class);
    cn.createStatement().execute("SET statement_timeout TO " + (segundos * 1000));
  }

  public static void resetTimeout(EntityManager em) {
    try {
      java.sql.Connection cn = em.unwrap(java.sql.Connection.class);
      cn.createStatement().execute("RESET statement_timeout");
    } catch (Exception e) {
      Util.debug(e);
    }
  }


In my application, before issuing the query, I just call

BrokerUtil.setTimeout(em, 60); // set query timeout to 60 seconds

and before closing the EntityManager (or better, inside method that 
closes EntityManager), just call:


BrokerUtil.resetTimeout(em); // reset to default configuration before 
closing



I hope this helps others in the future. This could go on the PostgreSQL
wiki, since it fills in the missing part of the JDBC query timeout support.
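
A possible variation, not from the thread: since the timeout is set inside a
JPA transaction anyway, SET LOCAL scopes the setting to that transaction and
removes the need for an explicit reset; for example:

    BEGIN;
    SET LOCAL statement_timeout = '60s';  -- reverts automatically at COMMIT or ROLLBACK
    -- run the query here
    COMMIT;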


Regards,

Edson




Re: [GENERAL] Help with query timeout

2012-10-06 Thread Andreas Kretschmer
Edson Richter  wrote:

> Dear friends,
>
> I'm using Java 1.6 with Jdbc 4 driver with PostgreSQL 9.1.5 on Windows  
> 64 and Linux 64.
> Trying to use setQueryTimeout(int), I get the following stack trace:
> Internal Exception: org.postgresql.util.PSQLException: Method  
> org.postgresql.jdbc4.Jdbc4PreparedStatement.setQueryTimeout(int) not yet  
> implemented.
> (free translation of the equivalent Portuguese message).
>
> My question is: if setQueryTimeout is not implemented, how can I set a  
> query timeout parameter (per query or per session) using PostgreSQL? Is  
> there any "set ..." or connection parameter I can use?

Yeah:

statement_timeout
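
For illustration (the 60-second value here is arbitrary), the session-level
form of that setting is:

    SET statement_timeout = '60s';   -- applies to this session until changed
    RESET statement_timeout;         -- back to the server default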



Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°




Re: [GENERAL] Help me follow...

2012-07-14 Thread Perry Smith

On Jul 14, 2012, at 2:19 PM, Peter Geoghegan wrote:

> On 14 July 2012 18:56, Tom Lane  wrote:
>> I can no longer reproduce a failure when building out-of-tree from a
>> tarball on my Fedora 16 box.  That's still make 3.82, but Fedora is
>> carrying half a dozen patches to the upstream tarball:
>> http://pkgs.fedoraproject.org/gitweb/?p=make.git;a=tree;h=refs/heads/f16;hb=f16
>> It's not immediately obvious which of those might be related, but
>> I think this was a bug in gmake and the fix is in there somewhere.
> 
> Agreed. Since Fedora 14 is now EOL, I'm not inclined to worry about it.

I'm on AIX 6.1.  Using gmake 3.82.  PostgreSQL 9.0.4

It's definitely not a show-stopper, but from the email traffic it looked like
someone was going to make the Makefiles "less cutting edge" :-)

Thank you for your time,
Perry




Re: [GENERAL] Help me follow...

2012-07-14 Thread Peter Geoghegan
On 14 July 2012 18:56, Tom Lane  wrote:
> I can no longer reproduce a failure when building out-of-tree from a
> tarball on my Fedora 16 box.  That's still make 3.82, but Fedora is
> carrying half a dozen patches to the upstream tarball:
> http://pkgs.fedoraproject.org/gitweb/?p=make.git;a=tree;h=refs/heads/f16;hb=f16
> It's not immediately obvious which of those might be related, but
> I think this was a bug in gmake and the fix is in there somewhere.

Agreed. Since Fedora 14 is now EOL, I'm not inclined to worry about it.

-- 
Peter Geoghegan   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



Re: [GENERAL] Help me follow...

2012-07-14 Thread Tom Lane
Peter Geoghegan  writes:
> On 14 July 2012 16:57, Perry Smith  wrote:
>> I hit "bug 5665".  I see threads like this:
>> 
>> http://archives.postgresql.org/pgsql-bugs/2011-01/msg00241.php
>> 
>> Where / how can I find the diff that went in to fix this?  Mostly I'm 
>> curious.  I can work around the problem but wanted to create a patch file 
>> for myself to make deploying to other servers easier.

> You can search through commits here:
> http://git.postgresql.org/gitweb/?p=postgresql.git;a=summary
> Sometimes a reference to the bug number may appear within the commit
> message, but this won't always happen. If you can't find the commit
> based on that, try the original reporter's name. We ordinarily credit
> bug reporters in commit messages that fix the bug.

I'm not sure we ever did fix it; I see no references to that bug in the
commit history, nor to the similar bug #6059, nor does it look like we
changed anything in the relevant make rules.

I can no longer reproduce a failure when building out-of-tree from a
tarball on my Fedora 16 box.  That's still make 3.82, but Fedora is
carrying half a dozen patches to the upstream tarball:
http://pkgs.fedoraproject.org/gitweb/?p=make.git;a=tree;h=refs/heads/f16;hb=f16
It's not immediately obvious which of those might be related, but
I think this was a bug in gmake and the fix is in there somewhere.

regards, tom lane



Re: [GENERAL] Help me follow...

2012-07-14 Thread Peter Geoghegan
On 14 July 2012 16:57, Perry Smith  wrote:
> I hit "bug 5665".  I see threads like this:
>
> http://archives.postgresql.org/pgsql-bugs/2011-01/msg00241.php
>
> Where / how can I find the diff that went in to fix this?  Mostly I'm 
> curious.  I can work around the problem but wanted to create a patch file for 
> myself to make deploying to other servers easier.

You can search through commits here:

http://git.postgresql.org/gitweb/?p=postgresql.git;a=summary

Sometimes a reference to the bug number may appear within the commit
message, but this won't always happen. If you can't find the commit
based on that, try the original reporter's name. We ordinarily credit
bug reporters in commit messages that fix the bug.

-- 
Peter Geoghegan   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



Re: [GENERAL] Help with sql

2012-07-07 Thread Chris Angelico
On Sat, Jul 7, 2012 at 11:48 PM, Perry Smith  wrote:
> The database is mostly static.  I run through a very lengthy process to 
> populate the database maybe once a month and then it is 99% read-only.

Do you run an ANALYZE on the table after populating it? Postgres needs
up-to-date statistics for best results. I'd recommend doing an
explicit 'VACUUM ANALYZE' once your data's loaded, and then try your
queries after that - it might not do much, but it also might give a
massive improvement.
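
For the tables in this thread, that post-load step would look roughly like the
following (a bare VACUUM ANALYZE with no table name covers the whole database):

    VACUUM ANALYZE filesets;
    VACUUM ANALYZE service_pack_fileset_maps;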

ChrisA



Re: [GENERAL] Help with sql

2012-07-07 Thread Perry Smith

On Jul 6, 2012, at 7:56 PM, Steve Crawford wrote:

> On 07/06/2012 02:34 PM, Perry Smith wrote:
>> Hi Guys,
>> 
>> This isn't a PostgreSQL specific question but just a SQL question.  If this 
>> is not an appropriate question for this list, please let me know.
>> 
>> It is also, perhaps, a really silly question.
>> 
>> This query (without the 'explain' keyword) , when executed takes forever and 
>> a day:
>> 
>>> condor_development=> explain select id from filesets where id not in ( 
>>> select fileset_id from service_pack_fileset_maps );
>>>   QUERY PLAN
>>> --
>>>  Seq Scan on filesets  (cost=0.00..71937742.00 rows=26088 width=4)
>>>Filter: (NOT (SubPlan 1))
>>>SubPlan 1
>>>  ->  Materialize  (cost=0.00..2517.78 rows=95852 width=4)
>>>->  Seq Scan on service_pack_fileset_maps  (cost=0.00..1663.52 
>>> rows=95852 width=4)
>>> (5 rows)
>> This query returns within a second:
>> 
>>> condor_development=> explain select id from filesets where id not in ( 
>>> select distinct fileset_id from service_pack_fileset_maps );
>>>   QUERY PLAN
>>> --
>>>  Seq Scan on filesets  (cost=2102.31..3153.53 rows=26088 width=4)
>>>Filter: (NOT (hashed SubPlan 1))
>>>SubPlan 1
>>>  ->  HashAggregate  (cost=1903.15..2062.48 rows=15933 width=4)
>>>->  Seq Scan on service_pack_fileset_maps  (cost=0.00..1663.52 
>>> rows=95852 width=4)
>>> (5 rows)
>> The difference is the "distinct" keyword in the inner select.
>> 
>> What I'm confused about is why isn't the "distinct" implicit?  I thought the 
>> construct "blah in ( select ... )" was using "sets" and an item (I thought) 
>> can not be in a set more than once.
>> 
>> Perhaps my question is the opposite really?  Why would you not always use 
>> "distinct" in the inner select when the operator is "in" or "not in" ?
>> 
>> And if I can throw in another question on top: is there a different method 
>> other than "not in" that would work better?
> Actually it is *very* PostgreSQL specific. In fact, it may even be PostgreSQL 
> *version* specific as you are delving into how the planner decides how to 
> handle a query.
> 
> It appears that the planner is assuming, based on collected stats and 
> available indexes, that there will be roughly 1/6 the records returned by the 
> "distinct" query and thus chose a different method to join the records. One 
> useful piece of information would be the indexes on the two tables.
> 
> As to other methods, you can use:
> ... where not exists (select 1 from service_pack_fileset_maps where 
> fileset_id = filesets.id)...
> (Note: as alluded to above, ...not in... works better in some releases and 
> ...not exists... better in others due to improvements over time.)
> 
> Still another method:
> select id from filesets except select fileset_id from 
> service_pack_fileset_maps;

Thanks guys.

Small side note: I thought I saw "set difference" in the documentation but I 
couldn't find it.  It appears "EXCEPT" is set difference.  Thank you for that 
tidbit.

The database is mostly static.  I run through a very lengthy process to 
populate the database maybe once a month and then it is 99% read-only.  By far, 
most of the accesses are via a view that I have that is rather long and ugly so 
I won't paste it in.  I've tried to make this particular view as fast as 
possible so the indexes, etc are what I think will help that out.

The version is psql (PostgreSQL) 9.0.4

Each table has a key of "id" -- this database was created by / used by Ruby on 
Rails and that is how it likes to do things.

> condor_development=> \d service_pack_fileset_maps
> Table "public.service_pack_fileset_maps"
>      Column      |            Type             |                               Modifiers
> -----------------+-----------------------------+-------------------------------------------------------------------------
>  id              | integer                     | not null default nextval('service_pack_fileset_maps_id_seq'::regclass)
>  service_pack_id | integer                     | not null
>  fileset_id      | integer                     | not null
>  created_at      | timestamp without time zone |
>  updated_at      | timestamp without time zone |
> Indexes:
> "service_pack_fileset_maps_pkey" PRIMARY KEY, btree (id)
> "service_pack_fileset_maps_service_pack_id_key" UNIQUE, btree 
> (service_pack_id, fileset_id)
> "index_service_pack_fileset_maps_on_fileset_id" btree (fileset_id)
> Foreign-key constraints:
> "service_pack_fileset_maps_fileset_id_fkey" FOREIGN KEY (fileset_id) 
> REFERENCES filesets(id) ON DELETE CASCADE DEFERRABLE
> "service_pack_fileset_maps_servi

Re: [GENERAL] Help with sql

2012-07-06 Thread Steve Crawford

On 07/06/2012 02:34 PM, Perry Smith wrote:

Hi Guys,

This isn't a PostgreSQL specific question but just a SQL question.  If this is 
not an appropriate question for this list, please let me know.

It is also, perhaps, a really silly question.

This query (without the 'explain' keyword) , when executed takes forever and a 
day:


condor_development=> explain select id from filesets where id not in ( select 
fileset_id from service_pack_fileset_maps );
   QUERY PLAN
--
  Seq Scan on filesets  (cost=0.00..71937742.00 rows=26088 width=4)
Filter: (NOT (SubPlan 1))
SubPlan 1
  ->  Materialize  (cost=0.00..2517.78 rows=95852 width=4)
->  Seq Scan on service_pack_fileset_maps  (cost=0.00..1663.52 
rows=95852 width=4)
(5 rows)

This query returns within a second:


condor_development=> explain select id from filesets where id not in ( select 
distinct fileset_id from service_pack_fileset_maps );
   QUERY PLAN
--
  Seq Scan on filesets  (cost=2102.31..3153.53 rows=26088 width=4)
Filter: (NOT (hashed SubPlan 1))
SubPlan 1
  ->  HashAggregate  (cost=1903.15..2062.48 rows=15933 width=4)
->  Seq Scan on service_pack_fileset_maps  (cost=0.00..1663.52 
rows=95852 width=4)
(5 rows)

The difference is the "distinct" keyword in the inner select.

What I'm confused about is why isn't the "distinct" implicit?  I thought the construct "blah 
in ( select ... )" was using "sets" and an item (I thought) can not be in a set more than once.

Perhaps my question is the opposite really?  Why would you not always use "distinct" in the inner 
select when the operator is "in" or "not in" ?

And if I can throw in another question on top: is there a different method other than 
"not in" that would work better?
Actually it is *very* PostgreSQL specific. In fact, it may even be 
PostgreSQL *version* specific as you are delving into how the planner 
decides how to handle a query.


It appears that the planner is assuming, based on collected stats and 
available indexes, that there will be roughly 1/6 the records returned 
by the "distinct" query and thus chose a different method to join the 
records. One useful piece of information would be the indexes on the two 
tables.


As to other methods, you can use:
... where not exists (select 1 from service_pack_fileset_maps where 
fileset_id = filesets.id)...
(Note: as alluded to above, ...not in... works better in some releases 
and ...not exists... better in others due to improvements over time.)


Still another method:
select id from filesets except select fileset_id from 
service_pack_fileset_maps;
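
Spelled out for the query in this thread, the NOT EXISTS form suggested above
would look roughly like:

    SELECT f.id
    FROM filesets f
    WHERE NOT EXISTS (
        SELECT 1
        FROM service_pack_fileset_maps m
        WHERE m.fileset_id = f.id
    );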


Cheers,
Steve




Re: [GENERAL] Help with sql

2012-07-06 Thread Rob Sargent

On 07/06/2012 03:34 PM, Perry Smith wrote:

Hi Guys,

This isn't a PostgreSQL specific question but just a SQL question.  If this is 
not an appropriate question for this list, please let me know.

It is also, perhaps, a really silly question.

This query (without the 'explain' keyword) , when executed takes forever and a 
day:


condor_development=> explain select id from filesets where id not in ( select 
fileset_id from service_pack_fileset_maps );
   QUERY PLAN
--
  Seq Scan on filesets  (cost=0.00..71937742.00 rows=26088 width=4)
Filter: (NOT (SubPlan 1))
SubPlan 1
  ->  Materialize  (cost=0.00..2517.78 rows=95852 width=4)
->  Seq Scan on service_pack_fileset_maps  (cost=0.00..1663.52 
rows=95852 width=4)
(5 rows)


This query returns within a second:


condor_development=> explain select id from filesets where id not in ( select 
distinct fileset_id from service_pack_fileset_maps );
   QUERY PLAN
--
  Seq Scan on filesets  (cost=2102.31..3153.53 rows=26088 width=4)
Filter: (NOT (hashed SubPlan 1))
SubPlan 1
  ->  HashAggregate  (cost=1903.15..2062.48 rows=15933 width=4)
->  Seq Scan on service_pack_fileset_maps  (cost=0.00..1663.52 
rows=95852 width=4)
(5 rows)


The difference is the "distinct" keyword in the inner select.

What I'm confused about is why isn't the "distinct" implicit?  I thought the construct "blah 
in ( select ... )" was using "sets" and an item (I thought) can not be in a set more than once.

Perhaps my question is the opposite really?  Why would you not always use "distinct" in the inner 
select when the operator is "in" or "not in" ?

And if I can throw in another question on top: is there a different method other than 
"not in" that would work better?

Thank you guys for the help and a really awesome database.
pedz
(this is *supposed* to be sent as plain text... I hope my mailer does what it is
told)





Well, they are distinct records; they just may have the same values.  And
I'm not trying to be flippant.   We don't see the structure of those
tables: are all the id fields involved primary keys, or covered by unique
indexes? Does "not exists (select 1 from service_pack_fileset_maps where
fileset_id = filesets.id)" change the behaviour?








Re: [GENERAL] Help, server doesn't start

2012-06-25 Thread Craig Ringer

On 06/25/2012 09:27 PM, Alban Hertroys wrote:

Yes, pg_dump output from 8.3 should restore fine to 9.1.

Pardon my ignorance if this changed in recent versions, but shouldn't that read:

Yes, pg_dump 9.1 output from 8.3 should restore fine to 9.1?
Very good point and an important catch. Thank-you. Always dump with 
pg_dump from the newer database.


--
Craig Ringer



Re: [GENERAL] Help, server doesn't start

2012-06-25 Thread Alban Hertroys
> Yes, pg_dump output from 8.3 should restore fine to 9.1.

Pardon my ignorance if this changed in recent versions, but shouldn't that read:

Yes, pg_dump 9.1 output from 8.3 should restore fine to 9.1?

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.



Re: [GENERAL] Help, server doesn't start

2012-06-25 Thread Craig Ringer

On 06/25/2012 08:02 PM, Anibal David Acosta wrote:

  We plan to upgrade to version 9.1, do you know if file result of the 
pg_dump in 8.3 can be restored in 9.1?



Yes, pg_dump output from 8.3 should restore fine to 9.1. PLEASE READ THE 
RELEASE NOTES for the intermediate major versions (8.4 and 9.0) for 
upgrade advice and caveats.


http://www.postgresql.org/docs/current/static/release-8-4.html

http://www.postgresql.org/docs/current/static/release-9-0.html

http://www.postgresql.org/docs/current/static/release-9-1.html



Do not just try to upgrade your production database. Do a test with a 
test copy first, and test your application against it; don't just assume 
it'll all work.


--
Craig Ringer


Re: [GENERAL] Help, server doesn't start

2012-06-25 Thread Anibal David Acosta
A possible scenario for my last problem:

1. Everything was working fine.

2. The server ran out of space on the disk holding the Postgres installation.

3. Somebody tried to stop and restart Postgres because of problems with
related services (ignoring the out-of-space problem).

4. The Postgres server didn't respond to the Windows service STOP command,
and this person killed the postgres processes.

5. That somebody tried to start the postgres service again; Postgres may have
tried to recover from the abrupt shutdown but had no space to do anything.

6. Steps 4 & 5 were perhaps repeated many times.

We plan to upgrade to version 9.1; do you know if the file produced by pg_dump
on 8.3 can be restored into 9.1?

 

 

Thanks!

 

From: Craig Ringer [mailto:ring...@ringerc.id.au]
Sent: Monday, June 25, 2012 12:54 AM
To: Anibal David Acosta
CC: 'Raghavendra'; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Help, server doesn't start

 

On 06/25/2012 12:40 PM, Anibal David Acosta wrote:

Yes, we must upgrade. 

The value of the shared_preload_libraries is 

  

shared_preload_libraries = '$libdir/plugins/plugin_debugger.dll'
# (change requires restart) 

Change that to:

shared_preload_libraries = ''

(two single quotes, not a double quote)

It's unlikely that the PL/PgSQL debugger plugin is the issue, but since it's
repeated in your logs, it's worth a go.



 When I login into the server the disk used by Postgres installation was
without space (0Bytes available). 

  

OK, so you probably do just have xlogs that're cut short. So long as nobody
tried to "fix" the problem by deleting things out of the PostgreSQL data
directory I expect you'll be OK.

AFAIK Pg is supposed to recover gracefully from out-of-disk situations, but
this _is_ quite an old version. 

I wonder if there are any out-of-disk tests in the Pg unit tests? It'd be
somewhat tricky to automate testing for, but really good to do if it's
practical. Something for my bored-weekend TODO I guess, even if it's just a
standalone test set.




Right now I am making a file-level copy of the entire postgres folder in
order to run some corruption recover method 

Great.

--
Craig Ringer



Re: [GENERAL] Help, server doesn't start

2012-06-25 Thread Anibal David Acosta
shared_preload_libraries changed to ''; no more errors about that, but the
server still doesn't start.

 

So, reading some information, I found that pg_resetxlog may solve the
problem, though some data may be lost.

 

I executed pg_resetxlog and now everything is working; only god knows what I
lost :), but given the importance of the service we decided on this instead
of a long outage.

 

Thanks!

 

 

 

From: Craig Ringer [mailto:ring...@ringerc.id.au] 
Sent: Monday, June 25, 2012 12:54 AM
To: Anibal David Acosta
CC: 'Raghavendra'; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Help, server doesn't start

 

On 06/25/2012 12:40 PM, Anibal David Acosta wrote:

Yes, we must upgrade. 

The value of the shared_preload_libraries is 

  

shared_preload_libraries = '$libdir/plugins/plugin_debugger.dll'
# (change requires restart) 

Change that to:

shared_preload_libraries = ''

(two single quotes, not a double quote)

It's unlikely that the PL/PgSQL debugger plugin is the issue, but since it's
repeated in your logs, it's worth a go.



 When I login into the server the disk used by Postgres installation was
without space (0Bytes available). 

  

OK, so you probably do just have xlogs that're cut short. So long as nobody
tried to "fix" the problem by deleting things out of the PostgreSQL data
directory I expect you'll be OK.

AFAIK Pg is supposed to recover gracefully from out-of-disk situations, but
this _is_ quite an old version. 

I wonder if there are any out-of-disk tests in the Pg unit tests? It'd be
somewhat tricky to automate testing for, but really good to do if it's
practical. Something for my bored-weekend TODO I guess, even if it's just a
standalone test set.




Right now I am making a file-level copy of the entire postgres folder in
order to run some corruption recover method 

Great.

--
Craig Ringer



Re: [GENERAL] Help, server doesn't start

2012-06-24 Thread Craig Ringer

On 06/25/2012 12:40 PM, Anibal David Acosta wrote:


Yes, we must upgrade.

The value of the shared_preload_libraries is

shared_preload_libraries = '$libdir/plugins/plugin_debugger.dll' # 
(change requires restart)



Change that to:

shared_preload_libraries = ''

(two single quotes, not a double quote)

It's unlikely that the PL/PgSQL debugger plugin is the issue, but since 
it's repeated in your logs, it's worth a go.


 When I login into the server the disk used by Postgres installation 
was without space (0Bytes available).



OK, so you probably do just have xlogs that're cut short. So long as 
nobody tried to "fix" the problem by deleting things out of the 
PostgreSQL data directory I expect you'll be OK.
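
As an aside, a hedged, Unix-style sketch of confirming that the disk really is
full and seeing where the space went without touching anything inside the data
directory (paths are placeholders; on Windows the drive's Properties dialog
and Explorer serve the same purpose):

  df -h /var/lib/pgsql          # free space on the volume holding the cluster
  du -sh /var/lib/pgsql/data/*  # which subdirectories (e.g. pg_xlog) are largest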


AFAIK Pg is supposed to recover gracefully from out-of-disk situations, 
but this _is_ quite an old version.


I wonder if there are any out-of-disk tests in the Pg unit tests? It'd 
be somewhat tricky to automate testing for, but really good to do if 
it's practical. Something for my bored-weekend TODO I guess, even if 
it's just a standalone test set.


Right now I am making a file-level copy of the entire postgres folder 
in order to run some corruption recover method

Great.

--
Craig Ringer


Re: [GENERAL] Help, server doesn't start

2012-06-24 Thread Raghavendra
On Mon, Jun 25, 2012 at 10:10 AM, Anibal David Acosta wrote:

> Yes, we must upgrade.
>
> The value of the shared_preload_libraries is
>
>
> shared_preload_libraries =
> '$libdir/plugins/plugin_debugger.dll'   #
> (change requires restart)
>
>

First and most important: take a complete backup of $PGDATA before trying
anything.
Can you put a blank value in and try restarting?
Eg:-
shared_preload_libraries=''
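
If it helps, a quick way to confirm the setting actually took effect after the
restart (a hedged sketch; it assumes you can connect with psql as a superuser):

  # Should print an empty value once the plugin_debugger entry is gone.
  psql -U postgres -c "SHOW shared_preload_libraries;"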


>
> An additional information….
>
> When I login into the server the disk used by Postgres installation was
> without space (0Bytes available).
>
>

That's strange... Seems like some disk issue.

>
> Also is possible that someone (I am checking now) restart the Postgres
> service in the middle of this situation
>

Not sure...


>
> Right now I am making a file-level copy of the entire postgres folder in
> order to run some corruption recover method
>
>
Sounds good.

--Raghav


Re: [GENERAL] Help, server doesn't start

2012-06-24 Thread Anibal David Acosta
Yes, we must upgrade.

The value of the shared_preload_libraries is

 

shared_preload_libraries = '$libdir/plugins/plugin_debugger.dll'
# (change requires restart)

 

Some additional information:

When I logged into the server, the disk used by the Postgres installation was
out of space (0 bytes available).

 

Also, it is possible that someone (I am checking now) restarted the Postgres
service in the middle of this situation

 

Right now I am making a file-level copy of the entire postgres folder in
order to run some corruption recovery method

 

Thanks!

 

 

From: Raghavendra [mailto:raghavendra@enterprisedb.com] 
Sent: Monday, June 25, 2012 12:31 AM
To: Anibal David Acosta
CC: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Help, server doesn't start

 

 

On Mon, Jun 25, 2012 at 8:23 AM, Anibal David Acosta wrote:

We have an old postgres installation (8.3) running on windows 2003 

The server run perfect for many years but today it crash :(

 

We have a very old backup so I'll try desperate to repair.

If I can't repair I want to skip the error and start server with only
healthy data, no matter if I lose some table page, entire table or even
entire database because the server has many databases inside.

 

As a side note, you should also consider upgrading to the latest PG version.

 

 

Currently the database is not starting; when I try to start it I get the
following in the log:

 

2012-06-25 02:22:33 GMT LOG:  database system was interrupted while in
recovery at 2012-06-25 02:22:26 GMT

2012-06-25 02:22:33 GMT HINT:  This probably means that some data is
corrupted and you will have to use the last backup for recovery.

2012-06-25 02:22:33 GMT LOG:  database system was not properly shut down;
automatic recovery in progress

2012-06-25 02:22:33 GMT LOG:  redo starts at 17C/E1C42040

2012-06-25 02:22:33 GMT LOG:  loaded library
"$libdir/plugins/plugin_debugger.dll"

2012-06-25 02:22:33 GMT FATAL:  the database system is starting up

 

[LAST TWO LINES REPEAT MANY TIMES]

 

2012-06-25 02:22:36 GMT LOG:  startup process (PID 1120) was terminated by
exception 0xC00D

2012-06-25 02:22:36 GMT HINT:  See C include file "ntstatus.h" for a
description of the hexadecimal value.

2012-06-25 02:22:36 GMT LOG:  aborting startup due to startup process
failure

2012-06-25 02:22:36 GMT FATAL:  the database system is starting up

2012-06-25 02:22:36 GMT LOG:  loaded library
"$libdir/plugins/plugin_debugger.dll"

2012-06-25 02:22:36 GMT FATAL:  the database system is starting up

 

 

Can you also paste what's in the shared_preload_libraries setting in
$PGDATA/postgresql.conf?

 

---

Regards,

Raghavendra

EnterpriseDB Corporation

Blog: http://raghavt.blogspot.com/



Re: [GENERAL] Help, server doesn't start

2012-06-24 Thread René Romero Benavides

On 06/24/2012 11:30 p.m., René Romero Benavides wrote:

On 06/24/2012 09:53 p.m., Anibal David Acosta wrote:


We have an old postgres installation (8.3) running on windows 2003

The server run perfect for many years but today it crash :(

We have a very old backup so I'll try desperate to repair.

If I can't repair I want to skip the error and start server with only 
healthy data, no matter if I lose some table page, entire table or 
even entire database because the server has many databases inside.


Currently the database is not starting; when I try to start it I get the 
following in the log:


2012-06-25 02:22:33 GMT LOG:  database system was interrupted while 
in recovery at 2012-06-25 02:22:26 GMT


2012-06-25 02:22:33 GMT HINT:  This probably means that some data is 
corrupted and you will have to use the last backup for recovery.


2012-06-25 02:22:33 GMT LOG:  database system was not properly shut 
down; automatic recovery in progress


2012-06-25 02:22:33 GMT LOG:  redo starts at 17C/E1C42040

2012-06-25 02:22:33 GMT LOG:  loaded library 
"$libdir/plugins/plugin_debugger.dll"


2012-06-25 02:22:33 GMT FATAL:  the database system is starting up

[LAST TWO LINES REPEAT MANY TIMES]

2012-06-25 02:22:36 GMT LOG:  startup process (PID 1120) was 
terminated by exception 0xC00D


2012-06-25 02:22:36 GMT HINT:  See C include file "ntstatus.h" for a 
description of the hexadecimal value.


2012-06-25 02:22:36 GMT LOG:  aborting startup due to startup process 
failure


2012-06-25 02:22:36 GMT FATAL:  the database system is starting up

2012-06-25 02:22:36 GMT LOG:  loaded library 
"$libdir/plugins/plugin_debugger.dll"


2012-06-25 02:22:36 GMT FATAL:  the database system is starting up



The first advice here is:

  * backup your data files before any attempt to fix the issue.

It might be a hardware failure that corrupted your data (specifically 
your transaction logs).
It might be a bug on that specific version, what is your minor release 
number?


  * As a last resort you could use pg_resetxlog, it's a binary that
resets the information in the control file and transaction logs.

http://www.postgresql.org/docs/8.3/static/app-pgresetxlog.html
but please backup your data directory first.
After that, if the server starts up successfully, perform a pg_dump / 
pg_dumpall A.S.A.P and reload your data on a new data cluster.


I'm on twitter @iCodeiExist

And please, check and double-check that the server process is not active. If
it's not, and pg_resetxlog complains that it is, check again and then delete
the postmaster.pid file.
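
For what it's worth, a hedged, Unix-style sketch of that check ($PGDATA is a
placeholder for the data directory; on Windows use the Services console or
Task Manager instead):

  # The first line of postmaster.pid is the postmaster's process id.
  head -1 "$PGDATA/postmaster.pid"

  # Confirm no process with that id is still running before touching the file.
  ps -p "$(head -1 "$PGDATA/postmaster.pid")"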




Re: [GENERAL] Help, server doesn't start

2012-06-24 Thread Raghavendra
On Mon, Jun 25, 2012 at 8:23 AM, Anibal David Acosta wrote:

> We have an old postgres installation (8.3) running on windows 2003 
>
> The server run perfect for many years but today it crash :(
>
>
> We have a very old backup so I’ll try desperate to repair.
>
> If I can’t repair I want to skip the error and start server with only
> healthy data, no matter if I lose some table page, entire table or even
> entire database because the server has many databases inside.
>
>
As a side note, you should also consider upgrading to the latest PG version.


>
>
> Currently the database is not starting; when I try to start it I get the
> following in the log:
>
>
> 2012-06-25 02:22:33 GMT LOG:  database system was interrupted while in
> recovery at 2012-06-25 02:22:26 GMT
>
> 2012-06-25 02:22:33 GMT HINT:  This probably means that some data is
> corrupted and you will have to use the last backup for recovery.
>
> 2012-06-25 02:22:33 GMT LOG:  database system was not properly shut down;
> automatic recovery in progress
>
> 2012-06-25 02:22:33 GMT LOG:  redo starts at 17C/E1C42040
>
> 2012-06-25 02:22:33 GMT LOG:  loaded library
> "$libdir/plugins/plugin_debugger.dll"
>
> 2012-06-25 02:22:33 GMT FATAL:  the database system is starting up
>
>
> [LAST TWO LINES REPEAT MANY TIMES]
>
>
> 2012-06-25 02:22:36 GMT LOG:  startup process (PID 1120) was terminated by
> exception 0xC00D
>
> 2012-06-25 02:22:36 GMT HINT:  See C include file "ntstatus.h" for a
> description of the hexadecimal value.
>
> 2012-06-25 02:22:36 GMT LOG:  aborting startup due to startup process
> failure
>
> 2012-06-25 02:22:36 GMT FATAL:  the database system is starting up
>
> 2012-06-25 02:22:36 GMT LOG:  loaded library
> "$libdir/plugins/plugin_debugger.dll"
>
> 2012-06-25 02:22:36 GMT FATAL:  the database system is starting up
>
>
>
Can you also paste what's in the shared_preload_libraries setting in
$PGDATA/postgresql.conf?

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/


Re: [GENERAL] Help, server doesn't start

2012-06-24 Thread René Romero Benavides

On 06/24/2012 09:53 p.m., Anibal David Acosta wrote:


We have an old postgres installation (8.3) running on windows 2003

The server run perfect for many years but today it crash :(

We have a very old backup so I'll try desperate to repair.

If I can't repair I want to skip the error and start server with only 
healthy data, no matter if I lose some table page, entire table or 
even entire database because the server has many databases inside.


Currently the database is not starting; when I try to start it I get the 
following in the log:


2012-06-25 02:22:33 GMT LOG:  database system was interrupted while in 
recovery at 2012-06-25 02:22:26 GMT


2012-06-25 02:22:33 GMT HINT:  This probably means that some data is 
corrupted and you will have to use the last backup for recovery.


2012-06-25 02:22:33 GMT LOG:  database system was not properly shut 
down; automatic recovery in progress


2012-06-25 02:22:33 GMT LOG:  redo starts at 17C/E1C42040

2012-06-25 02:22:33 GMT LOG:  loaded library 
"$libdir/plugins/plugin_debugger.dll"


2012-06-25 02:22:33 GMT FATAL:  the database system is starting up

[LAST TWO LINES REPEAT MANY TIMES]

2012-06-25 02:22:36 GMT LOG:  startup process (PID 1120) was 
terminated by exception 0xC00D


2012-06-25 02:22:36 GMT HINT:  See C include file "ntstatus.h" for a 
description of the hexadecimal value.


2012-06-25 02:22:36 GMT LOG:  aborting startup due to startup process 
failure


2012-06-25 02:22:36 GMT FATAL:  the database system is starting up

2012-06-25 02:22:36 GMT LOG:  loaded library 
"$libdir/plugins/plugin_debugger.dll"


2012-06-25 02:22:36 GMT FATAL:  the database system is starting up



The first advice here is:

 * backup your data files before any attempt to fix the issue.

It might be a hardware failure that corrupted your data (specifically 
your transaction logs).
It might be a bug on that specific version, what is your minor release 
number?


 * As a last resort you could use pg_resetxlog, it's a binary that
   resets the information in the control file and transaction logs.

http://www.postgresql.org/docs/8.3/static/app-pgresetxlog.html
but please backup your data directory first.
After that, if the server starts up successfully, perform a pg_dump / 
pg_dumpall A.S.A.P and reload your data on a new data cluster.
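
Roughly, that last-resort sequence looks like this (a Unix-style sketch only,
with placeholder paths; on Windows the same steps apply with the equivalent
tools and the service control manager):

  # 1. Stop the server, then copy the whole data directory somewhere safe.
  pg_ctl -D /path/to/data stop -m fast
  cp -a /path/to/data /path/to/data.backup

  # 2. Last resort only: reset the control file and WAL; recent transactions
  #    may be lost and some inconsistency is possible.
  pg_resetxlog /path/to/data

  # 3. If the server then starts, dump everything immediately and reload it
  #    into a freshly initialized cluster.
  pg_dumpall -p 5432 > all_databases.sql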


I'm on twitter @iCodeiExist



Re: [GENERAL] help understanding the bitmap heap scan costs

2012-05-21 Thread Cédric Villemain
Le lundi 21 mai 2012 15:35:55, Luca Ferrari a écrit :
> Hi all,
> I don't fully understand how is the cost of a bitmap heap scan
> computed. For instance when the explain output node is similar to the
> following:
> 
>  Bitmap Heap Scan on test  (cost=17376.49..48595.93 rows=566707 width=6)
>Recheck Cond: ((text1 = 'A'::text) OR (text1 = 'C'::text))
>Filter: (num1 > 1)
> 
> how is the cost of the node (48595.93 - 17376.49) computed? I think it
> should be something like:
> (reltuples * ( index_filtering_factor_A + index_filtering_factor_B) )
> * (cpu_tuple_cost + cpu_operator_cost)
> + (reltuples * ( index_filtering_factor_A + index_filtering_factor_B)
> ) / tuples_per_pages
> 
> but this does not equal the optimizer cost, so I guess I'm doing
> something wrong. Suggestions?

the per-page cost is interpolated between random_page_cost and seq_page_cost;
it depends on the number of rows fetched and the table's number of pages
(relpages).

Read ./src/backend/optimizer/path/costsize.c

/*
 * For small numbers of pages we should charge spc_random_page_cost
 * apiece, while if nearly all the table's pages are being read, it's more
 * appropriate to charge spc_seq_page_cost apiece.  The effect is
 * nonlinear, too. For lack of a better idea, interpolate like this to
 * determine the cost per page.
 */
if (pages_fetched >= 2.0)
    cost_per_page = spc_random_page_cost -
        (spc_random_page_cost - spc_seq_page_cost)
        * sqrt(pages_fetched / T);
else
    cost_per_page = spc_random_page_cost;
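
To make the interpolation concrete (assuming the default values
spc_random_page_cost = 4.0 and spc_seq_page_cost = 1.0): if the scan fetches
every page (pages_fetched = T), cost_per_page = 4.0 - 3.0 * sqrt(1) = 1.0,
i.e. pure sequential cost; if it fetches a quarter of the pages,
cost_per_page = 4.0 - 3.0 * sqrt(0.25) = 2.5. The heap part of the node's cost
is then roughly cost_per_page * pages_fetched, with a per-tuple CPU charge
(cpu_tuple_cost plus the cost of the recheck/filter quals) added on top.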


-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation




Re: [GENERAL] Help needed to mount a dmp file

2012-04-09 Thread John R Pierce

On 04/09/12 12:40 PM, François Beausoleil wrote:

It depends on whether you have a SQL dump file or a custom archive type. I 
believe pg_restore can restore both kinds.


no, pg_restore can only restore from custom archives (made with pg_dump 
-Fc). Straight SQL dumps are fed to psql.
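
A quick, hedged way to tell which kind of file you have (the file and database
names are placeholders; custom-format archives written by pg_dump -Fc start
with the magic bytes "PGDMP", while plain dumps are readable SQL text):

  # Peek at the first few bytes of the dump.
  head -c 5 mydump.dmp && echo

  # If it prints PGDMP, it is a custom archive:
  pg_restore -d targetdb mydump.dmp

  # Otherwise it is plain SQL; feed it to psql:
  psql -d targetdb -f mydump.dmp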



--
john r pierce                          N 37, W 122
santa cruz ca mid-left coast





Re: [GENERAL] Help needed to mount a dmp file

2012-04-09 Thread François Beausoleil


Le lundi 9 avril 2012 à 12:47, Lux a écrit :

> Hi,
> I am new to Postgresql and have no clue. The first task given to me was to 
> try to mount this dmp file on Postgresql which has been downloaded from Jive. 
> I am not sure how to go about and where to start. Can someone please help me.
> Thanks & Regards,
> Lux.  

You'll need either http://www.postgresql.org/docs/9.1/static/app-psql.html or 
http://www.postgresql.org/docs/9.1/static/app-pgrestore.html  

It depends on whether you have a SQL dump file or a custom archive type. I 
believe pg_restore can restore both kinds.

You should also determine what version of PostgreSQL you're using, since that 
will help you with the manual links. I linked to the 9.1 versions above.

Welcome to PostgreSQL. Hope that helps!
François



