Re: [GENERAL] Can't start postgresql server

2014-06-24 Thread Michael Paquier
On Wed, Jun 25, 2014 at 5:37 AM, ChoonSoo Park  wrote:

> 30022 & 30023 are symbolic links under data/pg_tblspc folder. Their target
> directories are lost somehow. Does it cause this problem? I tried to delete
> those 2 symbolic links but postgresql still can't start up.
>
> I just wonder what is causing PostgreSQL to throw this error:
>
> >> HINT:  If this has occurred more than once some data might be corrupted
> >> and you might need to choose an earlier recovery target.
>
If those tablespaces got *lost*, you have a problem with your infrastructure
and should try to recover them. In your case, recovery is trying to apply WAL
changes to relations in those tablespaces and cannot find them, hence the
failure you are seeing here. The best thing you could do is to restore your
cluster from a fresh backup, one that includes those tablespaces as well.

> Where can I specify a recovery target?
>
Here are more details about recovery targets via recovery.conf:
http://www.postgresql.org/docs/devel/static/recovery-target-settings.html
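For example, something like this in recovery.conf (only a sketch; the
restore_command path and the timestamp are placeholders to adapt, picking a
point in time from before the corruption):

# recovery.conf -- placeholder values, adjust to your archive layout
restore_command = 'cp /path/to/archive/%f %p'
recovery_target_time = '2014-06-06 22:00:00 UTC'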
-- 
Michael


Re: [GENERAL] Getting "cache lookup failed for aggregate" error

2014-06-24 Thread Patrick Krecker
On Tue, Jun 24, 2014 at 4:25 PM, Tom Lane  wrote:

> Patrick Krecker  writes:
> > Hello everyone -- We received a strange error today on our production
> > write master. During a routine maintenance script, we got the following
> > error:
>
> > "ERROR:  cache lookup failed for aggregate 5953992"
>
> > I think I found the culprit. We have a cron script that (among other
> > things) recreates the aggregate function array_cat_aggregate() once every
> > minute.
>
> Um.  Why's it do that?
>

Well, it's admittedly a hack. The cron script uses the function just after
creating it; it was an easy way of guaranteeing that the database would have
the function when needed. However, I suppose that assumption is wrong, since
you have confirmed that aggregate functions are not part of the snapshot
taken at the beginning of a transaction.
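For the archives: the plan now is to stop dropping it, and instead create the
aggregate only when it is missing, so there is no window where it does not
exist. An untested sketch:

DO $$
BEGIN
    -- create the aggregate only if it is not already there
    IF NOT EXISTS (SELECT 1 FROM pg_proc
                   WHERE proname = 'array_cat_aggregate' AND proisagg) THEN
        CREATE AGGREGATE array_cat_aggregate(anyarray) (
            SFUNC = array_cat,
            STYPE = anyarray,
            INITCOND = '{}'
        );
    END IF;
END
$$;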


>
> > My guess is that a) transactions do not guarantee a snapshot of
> > custom functions and b) we got unlucky and the aggregate was deleted
> > during the execution of the UPDATE.
>
> Yeah, there's no guarantee that dropping the aggregate wouldn't leave a
> window for this type of failure.
>
> 9.4 might be a little better about this because it doesn't use SnapshotNow
> for catalog fetches anymore, but I think you'd still be at some risk.
>
> > Here is the query that produced the error:
>
> It'd be more interesting to see what the cron script was doing to the
> aggregate definition.
>

FWIW the SQL is

DROP AGGREGATE IF EXISTS array_cat_aggregate(anyarray);
CREATE AGGREGATE array_cat_aggregate(anyarray) (
    SFUNC = array_cat,
    STYPE = anyarray,
    INITCOND = '{}'
);

Followed by the other statement given in my previous email. But I think
you've thoroughly answered my question. Thanks!


>
> regards, tom lane
>


Re: [GENERAL] Re: Error When Trying to Use Npgsql to COPY into a PostgreSQL Database

2014-06-24 Thread Adrian Klaver

On 06/24/2014 01:37 PM, Altec103 wrote:

Adrian, I actually figured it out. It was a really silly mistake - basically
I was testing something earlier and edited my ODBC connection string. I was
actually connecting to the wrong database and it was an empty database, so
ODBCDataReader was trying to pull data from an empty table. Anyways, this is
fixed. However, I have a new problem.

I occasionally get an ERROR: 22021: invalid byte sequence for encoding
"UTF8": 0x92 when using the COPY query and this stops the whole process. It
occurs here in my code...

var raw = Encoding.UTF8.GetBytes(string.Concat(dataEntry, "\n"));
copy.CopyStream.Write(raw, 0, raw.Length);

dataEntry is a string. I have tried a few things to remove any non-UTF8
characters from the string, but none of them has worked so far. Any ideas?



A more on-point example:

http://npgsql.projects.pgfoundry.org/docs/manual/UserManual.html

Fast bulk data copy into a table

The second example shows how to deal with different client/server encodings.

--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Getting "cache lookup failed for aggregate" error

2014-06-24 Thread Tom Lane
Patrick Krecker  writes:
> Hello everyone -- We received a strange error today on our production write
> master. During a routine maintenance script, we got the following error:

> "ERROR:  cache lookup failed for aggregate 5953992"

> I think I found the culprit. We have a cron script that (among other
> things) recreates the aggregate function array_cat_aggregate() once every
> minute.

Um.  Why's it do that?

> My guess is that a) transactions do not guarantee a snapshot of
> custom functions and b) we got unlucky and the aggregate was deleted during
> the execution of the UPDATE.

Yeah, there's no guarantee that dropping the aggregate wouldn't leave a
window for this type of failure.

9.4 might be a little better about this because it doesn't use SnapshotNow
for catalog fetches anymore, but I think you'd still be at some risk.

> Here is the query that produced the error:

It'd be more interesting to see what the cron script was doing to the
aggregate definition.

regards, tom lane




Re: [GENERAL] Re: Error When Trying to Use Npgsql to COPY into a PostgreSQL Database

2014-06-24 Thread Adrian Klaver

On 06/24/2014 01:37 PM, Altec103 wrote:

Adrian, I actually figured it out. It was a really silly mistake - basically
I was testing something earlier and edited my ODBC connection string. I was
actually connecting to the wrong database and it was an empty database, so
ODBCDataReader was trying to pull data from an empty table. Anyways, this is
fixed. However, I have a new problem.

I occasionally get an ERROR: 22021: invalid byte sequence for encoding
"UTF8": 0x92 when using the COPY query and this stops the whole process. It
occurs here in my code...

var raw = Encoding.UTF8.GetBytes(string.Concat(dataEntry, "\n"));
copy.CopyStream.Write(raw, 0, raw.Length);

dataEntry is a string. I have tried a few things to remove any non-UTF8
characters from the string, but none of them has worked so far. Any ideas?





Some further digging found this:

https://github.com/npgsql/Npgsql/issues/124

Seems to address the same issue.


--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Getting "cache lookup failed for aggregate" error

2014-06-24 Thread Patrick Krecker
Hello everyone -- We received a strange error today on our production write
master. During a routine maintenance script, we got the following error:

"ERROR:  cache lookup failed for aggregate 5953992"

I think I found the culprit. We have a cron script that (among other
things) recreates the aggregate function array_cat_aggregate() once every
minute. My guess is that a) transactions do not guarantee a snapshot of
custom functions and b) we got unlucky and the aggregate was deleted during
the execution of the UPDATE.

I was just wondering if my analysis is correct, so I can make the necessary
changes to production to prevent this from happening again.

Here is the query that produced the error:

UPDATE marbury_case SET
  components_vector = (
SELECT array_cat_aggregate(component) FROM (
  SELECT ARRAY[
id,
type_id,
"offset",
length,
internal_id,
parent_id,
right_sibling_id] AS component
  FROM marbury_component WHERE case_id = marbury_case.id ORDER BY id)
AS foo),
  attributes_json = (
SELECT array_to_json(array_agg(attributes || hstore('_ind',
ind::text))) FROM (
  SELECT (rank() OVER (ORDER BY id)) - 1 AS ind, attributes
  FROM marbury_component
  WHERE case_id = marbury_case.id ORDER BY id)
AS foo WHERE attributes IS NOT NULL
 AND array_length(akeys(attributes), 1) > 0),
  vectors_updated = timeofday()::timestamp
WHERE id = 71865


Re: [GENERAL] JSON Indexes

2014-06-24 Thread Vik Fearing
On 06/24/2014 10:15 PM, CS_DBA wrote:
> I added a PK constraint on the id column and created this json index:
> 
> create index mytest_json_col_idx on mytest ((task->'name'));
> 
> However the planner never uses the index...
> 
> 
> EXPLAIN SELECT (mytest.task->>'name') AS name,
>        COUNT((mytest.task->>'name')) AS task_count
> FROM mytest
> GROUP BY (mytest.task->>'name')
> ORDER BY 2 DESC;
> 
> 
> Am I missing something?

Yes, you're querying task->>'name' but the index is on task->'name'.
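If the index is built on the same expression the query uses, the planner can
at least consider it; a sketch against your table:

create index mytest_json_col_idx on mytest ((task->>'name'));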
-- 
Vik




Re: [GENERAL] JSON Indexes

2014-06-24 Thread Merlin Moncure
On Tue, Jun 24, 2014 at 3:15 PM, CS_DBA  wrote:
> Hi All;
>
> We're struggling to get the planner to use a json index.
>
> I have this table :
>
>         Table "public.mytest"
>  Column |  Type  |                      Modifiers
> --------+--------+------------------------------------------------------
>  id     | bigint | not null default nextval('events_id_seq'::regclass)
>  task   | json   |
>
>
> I added a PK constraint on the id column and created this json index:
>
> create index mytest_json_col_idx on mytest ((task->'name'));
>
> However the planner never uses the index...
>
>
> EXPLAIN SELECT (mytest.task->>'name') AS name,
>        COUNT((mytest.task->>'name')) AS task_count
> FROM mytest
> GROUP BY (mytest.task->>'name')
> ORDER BY 2 DESC;
>
>
>                                 QUERY PLAN
> ---------------------------------------------------------------------------
>  Sort  (cost=155097.84..155098.34 rows=200 width=32)
>    Sort Key: (count(((task ->> 'name'::text))))
>    ->  HashAggregate  (cost=155087.70..155090.20 rows=200 width=32)
>          ->  Seq Scan on mytab  (cost=0.00..149796.94 rows=705435 width=32)
>
>
> Am I missing something?

yes.

First of all, your CREATE INDEX doesn't work for me; I get:

postgres=# create index mytest_json_col_idx on mytest ((task->'name'));
ERROR:  data type json has no default operator class for access method "btree"

Now, if you change it to:
create index mytest_json_col_idx on mytest ((task->>'name'));

it works.  Next, try disabling enable_seqscan if you want to force an index
scan.  It is not a given that a full-table count/group-by is better done via
an index.
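Something like this (session-local, for testing only; not something to leave
on in production):

set enable_seqscan = off;
explain select (mytest.task->>'name') as name,
        count((mytest.task->>'name')) as task_count
from mytest
group by (mytest.task->>'name')
order by 2 desc;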

merlin




Re: [GENERAL] Re: Error When Trying to Use Npgsql to COPY into a PostgreSQL Database

2014-06-24 Thread Adrian Klaver

On 06/24/2014 01:37 PM, Altec103 wrote:

Adrian, I actually figured it out. It was a really silly mistake - basically
I was testing something earlier and edited my ODBC connection string. I was
actually connecting to the wrong database and it was an empty database, so
ODBCDataReader was trying to pull data from an empty table. Anyways, this is
fixed. However, I have a new problem.

I occasionally get an ERROR: 22021: invalid byte sequence for encoding
"UTF8": 0x92 when using the COPY query and this stops the whole process. It
occurs here in my code...

var raw = Encoding.UTF8.GetBytes(string.Concat(dataEntry, "\n"));
copy.CopyStream.Write(raw, 0, raw.Length);

dataEntry is a string. I have tried a few things to remove any non-UTF8
characters from the string, but none of them has worked so far. Any ideas?


Look at this page:

http://www.postgresql.org/docs/9.3/interactive/multibyte.html

Postgres recognizes and can convert between quite a few character sets. If
you know which character set your source data is in, you can use SET
CLIENT_ENCODING; see the bottom of the above page for examples.
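For instance, 0x92 is the right single quotation mark in Windows-1252, so if
that is where your source data comes from (an assumption you would have to
verify), telling the server so before the COPY might be all you need, sending
the raw bytes instead of re-encoding them as UTF8:

SET client_encoding = 'WIN1252';  -- COPY input is then converted to UTF8
COPY "2009info.adt" FROM STDIN;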


--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Re: Error When Trying to Use Npgsql to COPY into a PostgreSQL Database

2014-06-24 Thread Altec103
Adrian, I actually figured it out. It was a really silly mistake - basically
I was testing something earlier and edited my ODBC connection string. I was
actually connecting to the wrong database and it was an empty database, so
ODBCDataReader was trying to pull data from an empty table. Anyways, this is
fixed. However, I have a new problem.

I occasionally get an ERROR: 22021: invalid byte sequence for encoding
"UTF8": 0x92 when using the COPY query and this stops the whole process. It
occurs here in my code...

var raw = Encoding.UTF8.GetBytes(string.Concat(dataEntry, "\n"));
copy.CopyStream.Write(raw, 0, raw.Length);

dataEntry is a string. I have tried a few things to remove any non-UTF8
characters from the string, but none of them has worked so far. Any ideas?







[GENERAL] Can't start postgresql server

2014-06-24 Thread ChoonSoo Park
Hi,

From the pg_log folder, I can locate the following errors:

LOG:  database system was interrupted while in recovery at log time
2014-06-06 22:35:38 UTC
HINT:  If this has occurred more than once some data might be corrupted and
you might need to choose an earlier recovery target.
LOG:  could not open tablespace directory
"pg_tblspc/30022/PG_9.1_201105231": No such file or directory
LOG:  could not open tablespace directory
"pg_tblspc/30023/PG_9.1_201105231": No such file or directory
LOG:  entering standby mode
LOG:  redo starts at 3/AD20
FATAL:  online backup was canceled, recovery cannot continue
CONTEXT:  xlog redo checkpoint: redo 3/AD044D38; tli 26; xid 0/156687; oid
249448; multi 9; offset 17; oldest xid 1791 in DB 1; oldest running xid 0;
shutdown
LOG:  startup process (PID 22536) exited with exit code 1
LOG:  terminating any other active server processes
LOG:  database system was interrupted while in recovery at log time
2014-06-06 22:35:38 UTC
HINT:  If this has occurred more than once some data might be corrupted and
you might need to choose an earlier recovery target.
LOG:  could not open tablespace directory
"pg_tblspc/30022/PG_9.1_201105231": No such file or directory
LOG:  could not open tablespace directory
"pg_tblspc/30023/PG_9.1_201105231": No such file or directory
LOG:  entering standby mode
LOG:  redo starts at 3/AD20
FATAL:  online backup was canceled, recovery cannot continue

Same errors repeat.

30022 & 30023 are symbolic links under the data/pg_tblspc folder. Their
target directories were somehow lost. Could that be causing this problem? I
tried to delete those 2 symbolic links, but PostgreSQL still can't start up.

I just wonder what is causing PostgreSQL to throw this error:

>> HINT:  If this has occurred more than once some data might be corrupted
and you might need to choose an earlier recovery target.

Where can I specify a recovery target?

Thank you,
Choon Park


Re: [GENERAL] Error When Trying to Use Npgsql to COPY into a PostgreSQL Database

2014-06-24 Thread Adrian Klaver

On 06/24/2014 10:47 AM, Altec103 wrote:

I am currently in the process of using an ODBC Connection and a Npgsql
Connection to transfer a database from a Sybase/Advantage platform to a
PostgreSQL platform using C# and the .NET framework.

Below is the code I have come up with to transfer the data itself...

NpgsqlCommand copyCommand = new NpgsqlCommand("COPY \"2009info.adt\" FROM STDIN",
                                              connectionTest);

string query = "SELECT * FROM \"2009info.adt\"";
OdbcCommand test = new OdbcCommand(query, myConnection);
string dataEntry = "";

NpgsqlCopyIn copy = new NpgsqlCopyIn(copyCommand, connectionTest);

copy.Start();

OdbcDataReader reader = test.ExecuteReader();

int rowCount = reader.FieldCount;

while (reader.Read())
{
    for (int i = 0; i < rowCount; i++)
    {
        dataEntry = dataEntry + reader[i].ToString() + "|";
    }

    dataEntry = dataEntry.Trim().Substring(0, dataEntry.Length - 1);
    dataEntry = dataEntry.Replace("\r", string.Empty).Replace("\n", string.Empty);

    var raw = Encoding.UTF8.GetBytes(dataEntry);
    copy.CopyStream.Write(raw, 0, raw.Length);

    dataEntry = "";
}

copy.End();

However, nothing happens when this code runs, and when I look at the log
files I get the following errors.

2014-06-24 13:22:58 EDT CONTEXT:  COPY 2009info.adt, line 1
2014-06-24 13:22:58 EDT STATEMENT:  COPY "2009info.adt" FROM STDIN
2014-06-24 13:22:58 EDT ERROR:  unexpected EOF on client connection with an
open transaction
2014-06-24 13:22:58 EDT CONTEXT:  COPY 2009info.adt, line 1
2014-06-24 13:22:58 EDT STATEMENT:  COPY "2009info.adt" FROM STDIN
2014-06-24 13:22:58 EDT LOG:  could not send data to client: No connection
could be made because the target machine actively refused it.

Anyone have any ideas why this is happening?


AFAICT this:

COPY \"2009info.adt\" FROM STDIN

Is 2009info.adt supposed to be a table name or a file?
It is in the table position in the command.

If it is a file, you cannot COPY from both a file and STDIN at the same
time.

Otherwise, where is the data for STDIN coming from?

For more information see here:

http://www.postgresql.org/docs/9.3/interactive/sql-copy.html
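A side note on the code itself: the loop joins fields with '|', but plain
COPY ... FROM STDIN expects tab-delimited rows, each terminated by a newline.
If 2009info.adt really is meant as a table, the command would at least need a
matching delimiter, e.g. (a sketch):

COPY "2009info.adt" FROM STDIN WITH (FORMAT text, DELIMITER '|');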


--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] JSON Indexes

2014-06-24 Thread CS_DBA

Hi All;

We're struggling to get the planner to use a json index.

I have this table :

        Table "public.mytest"
 Column |  Type  |                      Modifiers
--------+--------+------------------------------------------------------
 id     | bigint | not null default nextval('events_id_seq'::regclass)
 task   | json   |


I added a PK constraint on the id column and created this json index:

create index mytest_json_col_idx on mytest ((task->'name'));

However the planner never uses the index...


EXPLAIN SELECT (mytest.task->>'name') AS name,
       COUNT((mytest.task->>'name')) AS task_count
FROM mytest
GROUP BY (mytest.task->>'name')
ORDER BY 2 DESC;

                                QUERY PLAN
---------------------------------------------------------------------------
 Sort  (cost=155097.84..155098.34 rows=200 width=32)
   Sort Key: (count(((task ->> 'name'::text))))
   ->  HashAggregate  (cost=155087.70..155090.20 rows=200 width=32)
         ->  Seq Scan on mytab  (cost=0.00..149796.94 rows=705435 width=32)


Am I missing something?


Thanks in advance...



[GENERAL] Error When Trying to Use Npgsql to COPY into a PostgreSQL Database

2014-06-24 Thread Altec103
I am currently in the process of using an ODBC Connection and a Npgsql
Connection to transfer a database from a Sybase/Advantage platform to a
PostgreSQL platform using C# and the .NET framework.

Below is the code I have come up with to transfer the data itself...

NpgsqlCommand copyCommand = new NpgsqlCommand("COPY \"2009info.adt\" FROM STDIN",
                                              connectionTest);

string query = "SELECT * FROM \"2009info.adt\"";
OdbcCommand test = new OdbcCommand(query, myConnection);
string dataEntry = "";

NpgsqlCopyIn copy = new NpgsqlCopyIn(copyCommand, connectionTest);

copy.Start();

OdbcDataReader reader = test.ExecuteReader();

int rowCount = reader.FieldCount;

while (reader.Read())
{
    for (int i = 0; i < rowCount; i++)
    {
        dataEntry = dataEntry + reader[i].ToString() + "|";
    }

    dataEntry = dataEntry.Trim().Substring(0, dataEntry.Length - 1);
    dataEntry = dataEntry.Replace("\r", string.Empty).Replace("\n", string.Empty);

    var raw = Encoding.UTF8.GetBytes(dataEntry);
    copy.CopyStream.Write(raw, 0, raw.Length);

    dataEntry = "";
}

copy.End();

However, nothing happens when this code runs, and when I look at the log
files I get the following errors.

2014-06-24 13:22:58 EDT CONTEXT:  COPY 2009info.adt, line 1
2014-06-24 13:22:58 EDT STATEMENT:  COPY "2009info.adt" FROM STDIN
2014-06-24 13:22:58 EDT ERROR:  unexpected EOF on client connection with an
open transaction
2014-06-24 13:22:58 EDT CONTEXT:  COPY 2009info.adt, line 1
2014-06-24 13:22:58 EDT STATEMENT:  COPY "2009info.adt" FROM STDIN
2014-06-24 13:22:58 EDT LOG:  could not send data to client: No connection
could be made because the target machine actively refused it.

Anyone have any ideas why this is happening?





Re: [GENERAL] Weird error when setting up streaming replication

2014-06-24 Thread pgdude
I get the same "weird" errors ("WAL file is from different database system")
too, with Ubuntu and PostgreSQL 9.3, when setting up a slave using rsync.

1. I installed postgresql on the slave (which automatically does the
initdb):
   sudo apt-get install postgresql-9.3

2. Modified my postgresql.conf file
(/etc/postgresql/9.3/main/postgresql.conf) to make it a slave.  Did the same
thing for pg_hba.conf adding my replication user in there.

3. Stopped both master and slave.

4. Did the rsync from the master to the slave excluding pg_xlog (thereby
leaving the existing pg_xlog contents on the slave intact).

Then I get the same errors (WAL file is from different database system).

Now if I delete everything from the data directory on the slave, including
the pg_xlog directory, and then do the rsync excluding the pg_xlog
directory, the cluster won't start because the pg_xlog directory is not
there.

But if I rsync with the pg_xlog directory, then I do not get any more
messages in the log file, whether I had the installation data directory in
place, or I deleted everything from the data directory before the rsync.


So it seems that with PostgreSQL 9.3 on Ubuntu, you should NOT exclude
pg_xlog when rsyncing the data directory over.
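
For reference, the rsync that worked for me, run while both clusters were
stopped (the paths are the Ubuntu package defaults; adjust them for your own
layout):

rsync -a --delete master:/var/lib/postgresql/9.3/main/ \
    /var/lib/postgresql/9.3/main/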





Re: [GENERAL] if row has property X, find all rows that has property X

2014-06-24 Thread Merlin Moncure
On Tue, Jun 24, 2014 at 8:40 AM, David G Johnston
 wrote:
> Дмитрий Голубь wrote
>> For example, I have a table of addresses, and usually I want two things:
>> 1. Find the ids of bad addresses.
>> 2. Check if this address is good or bad.
>>
>> For this I write two plpgsql functions
>> 1. find_all_bad_addresses
>> 2. is_bad_address(id)
>>
>> These functions will duplicate each other's logic. How can I avoid
>> repeating myself?
>
> You can call other functions while inside a function..
>
> CREATE FUNCTION do_a() ... $$ do_something; $$
> CREATE FUNCTION do_b() ... $$ do_a(); $$

I'd consider making 'bad address' a view:

CREATE VIEW bad_address AS
  SELECT * FROM Property
  WHERE ...

Then just reference that in relevant code.
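
Both of the original tasks then fall out of the same view, e.g. (a sketch;
42 stands in for a real id):

SELECT id FROM bad_address;                               -- 1. all bad ids
SELECT EXISTS (SELECT 1 FROM bad_address WHERE id = 42);  -- 2. is it bad?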

merlin




Re: [GENERAL] if row has property X, find all rows that has property X

2014-06-24 Thread David G Johnston
Дмитрий Голубь wrote
> For example, I have a table of addresses, and usually I want two things:
> 1. Find the ids of bad addresses.
> 2. Check if this address is good or bad.
> 
> For this I write two plpgsql functions
> 1. find_all_bad_addresses
> 2. is_bad_address(id)
> 
> These functions will duplicate each other's logic. How can I avoid
> repeating myself?

You can call other functions while inside a function..

CREATE FUNCTION do_a() ... $$ do_something; $$
CREATE FUNCTION do_b() ... $$ do_a(); $$
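
A more concrete sketch for your case, assuming find_all_bad_addresses()
returns the ids as setof integer:

CREATE FUNCTION is_bad_address(integer) RETURNS boolean
LANGUAGE sql STABLE AS $$
    SELECT $1 IN (SELECT * FROM find_all_bad_addresses());
$$;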

David J.





[GENERAL] if row has property X, find all rows that has property X

2014-06-24 Thread Дмитрий Голубь
For example, I have a table of addresses, and usually I want two things:
1. Find the ids of bad addresses.
2. Check if this address is good or bad.

For this I write two plpgsql functions
1. find_all_bad_addresses
2. is_bad_address(id)

These functions will duplicate each other's logic. How can I avoid repeating
myself?
Thanks

