[GENERAL] How to check whether a data type can be cast to another

2009-10-23 Thread Kareem Sedki
Hello all,

I am trying to write a function that takes one source and one target
data type identifier, each as a 'text' argument, and returns true if
the cast can be made, false otherwise.

I consulted the pg_cast and pg_type tables; however, I found no direct
cast from 'int4' to 'text', even though SELECT CAST(4::int4 AS text)
works. The possibility of indirect casts that use multiple intermediate
casts seems slim to me. Is an 'int4' cast to 'text' by means of multiple
intermediate casts, from 'int4' to 'char' and eventually to 'text'?

I would prefer to have a function written in PL/pgSQL for now, as we are
prototyping the system.  Could someone help me write such a function,
please?

Thank you in advance.


Re: [GENERAL] postgres doesn't start after crash

2009-10-23 Thread Craig Ringer
Sam Jas wrote:
> 
> Did you find what the reason of crash was?
> 
> Log seems that the data is corrupted as the system was crashed.

The "permission denied" error would suggest that if anything was
corrupted, it was file-system metadata. That shouldn't happen on NTFS,
though.

It's more likely that for some reason (as previously suggested: quite
possibly AV software) the WAL file creation hit disk, but the result of
the system call setting its permissions never did.

If you open "Properties" on the PostgreSQL data directory, use the
Security tab to grant "Full control" to the postgresql user account, and
apply that change recursively, you may well be able to start PostgreSQL.
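
On Windows versions that ship icacls, the same recursive grant can be done
from an elevated command prompt. This is only a sketch: the data directory
path and the account name "postgres" are illustrative and must match your
installation.

```
:: Grant full control to the service account, recursively (/T);
:: (OI)(CI) makes the grant inherit to files and subdirectories
icacls "C:\Program Files\PostgreSQL\8.3\data" /grant postgres:(OI)(CI)F /T
```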

--
Craig Ringer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] FATAL error "could not find function "alt_to_mic" in file cyrillic_and_mic.so

2009-10-23 Thread Jayaraman, Rajaram (STSD)
Thanks Tom,

I replaced the file conversion_create.sql and the db got created.

In the meantime I have run into a different problem; I guess this is also a
settings-related problem.

When I try to connect to the DB and perform insert/update/delete/select 
operations from the command line all seems to be working fine, but when I try 
to use my application to connect to the DB I get the following message in the 
log file and the connection aborts.

LOG:  getsockname() failed: Invalid argument
LOG:  incomplete startup packet

Can you please help me fix this problem?

Thanks once again for your help.

Regards

Rajaram J
We all make mistakes, but it's our responses to said mistakes that separate 
true leaders from the rest of the pack.

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Thursday, October 22, 2009 8:13 PM
To: Jayaraman, Rajaram (STSD)
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] FATAL error "could not find function "alt_to_mic" in 
file cyrillic_and_mic.so 

"Jayaraman, Rajaram (STSD)"  writes:
> I have compiled PostgreSQL version 8.4.1 on HPUX 11iV3 (HPUX 11.31) on a ia64 
> hp server BL870c. When I try to start the database it creates the data 
> directory and while initializing it gets till creating conversions and then 
> throws a FATAL error "could not find function "alt_to_mic" in file 
> "/opt/psb/db/pgsql/lib/cyrillic_and_mic.so" ".

Well, the error is perfectly legitimate, because there is no such function
in that file, nor anywhere else in Postgres 8.4.  It did exist back in
8.0 or so, which makes me think initdb is reading some ancient copy of
conversion_create.sql.  You need to straighten out the apparent mixup
of files from different releases.

regards, tom lane



Re: [GENERAL] FATAL error "could not find function "alt_to_mic" in file cyrillic_and_mic.so

2009-10-23 Thread Jayaraman, Rajaram (STSD)
Hi,

Just wanted to add that we are using PQconnectdb to connect to the
database.

The contents in pg_hba.conf, just in case it helps.

# TYPE  DATABASE  USER  IP-ADDRESS   IP-MASK                                  METHOD

local   all       all                                                         md5
# IPv4-style local connections:
host    all       all   127.0.0.1    255.255.255.255                          md5
# IPv6-style local connections:
host    all       all   ::1          ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff  md5

Regards

Rajaram J




Re: [GENERAL] Incompatible library : Missing Magic Block

2009-10-23 Thread Craig Ringer
Massa, Harald Armin wrote:
> Ambarsih,
> 
>> // add.cpp : Defines the entry point for the DLL application.
> 
> are you sure that you are using the C compiler and not the C++ compiler?
> As far as I know about the defaults, that will be a C++-compiled
> function, which is something totally different than a C-compiled
> function. (And, to my knowledge, a C++ function is not easily callable
> by PostgreSQL.)

So long as the function uses C linkage (extern "C") it'll be callable
directly from C using dlopen(), LoadLibrary, etc.

A C++ function is callable from C via function pointer even if it has
C++ linkage, though the C calling convention declared must handle stack
management, parameter passing and return value popping in the same way
the C++ calling convention on that compiler does. That means that it's
unsafe to call a __thiscall C++ member function from C, but otherwise
it's generally fine so long as the C and C++ compilers default to the
same calling convention or you explicitly specify it on both ends.
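
To make the extern "C" point concrete, here is a minimal sketch (not from
the thread) of a backend function compiled with a C++ compiler but exported
with C linkage; the function name "add_one" and the build details are
illustrative.

```cpp
// Wrapping the PostgreSQL headers and the function in extern "C" gives
// everything C linkage, so the server's dlopen()-based loader can find it.
extern "C" {
#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;    // the "magic block" the original error complains about

PG_FUNCTION_INFO_V1(add_one);

Datum
add_one(PG_FUNCTION_ARGS)
{
    int32 arg = PG_GETARG_INT32(0);
    PG_RETURN_INT32(arg + 1);
}
}  // extern "C"
```

Built into a shared library and declared with CREATE FUNCTION ... LANGUAGE C,
this behaves exactly like a plain C extension function.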

--
Craig Ringer



Re: [GENERAL] FATAL error could not find function "alt_to_mic" in file cyrillic_and_mic.so

2009-10-23 Thread Craig Ringer
Jayaraman, Rajaram (STSD) wrote:

> I have compiled PostgreSQL version 8.4.1 on HPUX 11iV3 (HPUX 11.31) on a
> ia64 hp server BL870c. When I try to start the database it creates the
> data directory and while initializing it gets till creating conversions
> and then throws a FATAL error “could not find function "alt_to_mic" in
> file "/opt/psb/db/pgsql/lib/cyrillic_and_mic.so" “.

Is there any chance you had a prior version of PostgreSQL installed in
/opt/psb/db/pgsql/ ?

If so, try building PostgreSQL with a new prefix (say
/opt/psb/db/pgsql84/ ).

> *creating conversions ... FATAL:  could not find function "alt_to_mic"
> in file "/opt/psb/db/pgsql/lib/cyrillic_and_mic.so"*

You might also want to examine cyrillic_and_mic.so by listing its
symbols. HP-UX uses the "nm" command for this, so:

nm /opt/psb/db/pgsql/lib/cyrillic_and_mic.so

might be informative.

--
Craig Ringer



Re: [GENERAL] auto-filling a field on insert

2009-10-23 Thread Craig Ringer
semi-ambivalent wrote:

> At first blush that looks good but I need an index on that concatted
> value and I don't seem to be able to index a field in a view. I'll
> check the docs on views to see if I'm missing something.

As others have noted, a multi-field index or a functional index is
probably the best option.

If you decide for some reason that you really, really need the
concatenated fields stored in the table itself, you will need to use a
BEFORE INSERT ... FOR EACH ROW trigger to populate field `D'.
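
Both options can be sketched as follows; the table and column names here
are invented for illustration:

```sql
-- Option 1: a functional index on the concatenation, so no extra
-- column is stored at all:
CREATE INDEX t_ab_concat_idx ON t ((a || '-' || b));

-- Option 2: if column d really must hold the concatenation, populate
-- it from a BEFORE trigger:
CREATE OR REPLACE FUNCTION fill_d() RETURNS trigger AS $$
BEGIN
   NEW.d := NEW.a || '-' || NEW.b;
   RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_fill_d BEFORE INSERT OR UPDATE ON t
   FOR EACH ROW EXECUTE PROCEDURE fill_d();
```

With the functional index, queries must use the same expression
(`WHERE a || '-' || b = ...`) for the planner to consider the index.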

--
Craig Ringer



Re: [GENERAL] leak in libpq, getpwuid

2009-10-23 Thread Craig Ringer
Michael Nacos wrote:
> I have just run some tests, the number of lost bytes is always 292, no
> matter how many connections are opened and closed.
> I guess it's ok, then.

Search the archives for a detailed explanation of this issue. The
earlier discussion was about a supposed leak in ecpg.

See:
 Message-ID: <022e01ca06e8$898255c0$aa1c1...@rkc.local>
 Message-Id: <1247858675.9349.240.ca...@ayaki>
on the -general list.

In brief: while technically a leak, it doesn't matter. Freeing that
memory would only ever be done immediately before a program exits.
Trying to free it introduces finalization ordering issues (what if
someone calls getpwnam(), getpwuid() etc after the cache is freed?) and
wastes CPU cycles. There's no point freeing memory when the whole
program is about to exit and its memory will be more efficiently
released by the OS.

The right answer to this is an addition to the default valgrind
suppressions file, not any change to glibc.
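
A suppression along these lines (a sketch: the exact call stack varies by
glibc version, so the frames and the name are illustrative, and the `...`
frame wildcard needs valgrind 3.5 or newer) would silence the report:

```
{
   glibc_nss_passwd_cache
   Memcheck:Leak
   fun:malloc
   ...
   fun:getpwuid_r
}
```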

--
Craig Ringer



Re: [GENERAL] leak in libpq, getpwuid

2009-10-23 Thread Michael Nacos
thanks... I guess if it really mattered it would have come up by now
(since so many interfaces are based on libpq)

toying with the idea of yet another one :-)


[GENERAL] Interpreting content of wal

2009-10-23 Thread Charles-Antoine Guillat-Guignard
Hi list,

I need to recover the content of WAL files, in order to see what
happened to a database, so I am trying to translate it from binary to
human-readable log. My goal is to be able to read the modifications that
were made to my database (the same way mysqlbinlog does for MySQL).

I tried using xlogdump (from
http://xlogviewer.projects.postgresql.org/index.html ), but I get the
following error for every WAL file:

__
# /usr/local/postgres/bin/xlogdump 0001001800FE 

0001001800FE:

Unexpected page info flags 0003 at offset 0
Skipping unexpected continuation record at offset 0
invalid record length at 18/FE20


I tried compiling xlogdump against PgSQL 8.1 and 8.2 (the WAL files are
from an 8.1 version), which led to the same error with all the WAL
files I have. I suspect an unexpected file structure (version
mismatch?), though the versions seem to match (except the architecture:
i386 for my build and amd64 for the DB server).

Can anyone point me to a tool or a method that allows reading or
decoding the content of WAL files to text?

Thanks in advance

Charles-Antoine




Re: [GENERAL] Right Join Question

2009-10-23 Thread Little, Douglas
Andrew, 
I think you want a full outer join.   If I understood correctly, you want all 
real data, plus all codes without data. 
Doug


-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Scott Marlowe
Sent: Thursday, October 22, 2009 6:07 PM
To: Bierbryer, Andrew
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Right Join Question

On Thu, Oct 22, 2009 at 12:43 PM, Bierbryer, Andrew
 wrote:
> I am trying to solve the following problem. I have a certain column,
> to_state, that can take a finite set of values. I want to make sure that my
> query always returns those sets of finite values, even if not all of the
> values are represented in the results. In essence, I am looking to expand
> the result returned by the query by using a right join.
>
> When I do the following simple query, this seems to work.
>
> create table hat (the_hat integer);
> insert into hat values (1),(2),(3);
> create table cat (the_hat integer);
> insert into cat values (2),(3),(4),(5);
>
> select the_hat from hat
> right join cat c using(the_hat)
>
> The right join changes the query results from 1,2,3 to 2,3,4,5.

A right join will return all the rows on the right that aren't removed
by the where clause, plus the matching rows on the left, with NULLs
returned where there is no match.

Perhaps you want a left join here?
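
With the hat/cat tables from the quoted message, the same result can be
written with the lookup table on the driving side (a sketch):

```sql
-- Every the_hat from cat, NULL-extended where hat has no match;
-- swap in FULL OUTER JOIN to also keep hat rows absent from cat
SELECT c.the_hat, h.the_hat AS matched
FROM cat c
LEFT JOIN hat h USING (the_hat);
```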



Re: [GENERAL] FATAL error could not find function "alt_to_mic" in file cyrillic_and_mic.so

2009-10-23 Thread Craig Ringer
Please reply to the list, not just to me. My reply follows inline.

Jayaraman, Rajaram (STSD) wrote:

> I replaced the file conversion_create.sql and the db got created.

Yes, but you never found out why it happened in the first place. I
strongly suspect your install is busted, probably due to an incomplete
install over an older version. You haven't given enough information to
be sure of that, but you clearly have an old version kicking around
somewhere that's interfering with your new one, and the most common
reason is installing on the same prefix (incompletely, or without
cleaning the old install out first).

Try installing your new version to a new, clean prefix and see if you
still have problems.

> When I try to connect to the DB and perform insert/update/delete/select 
> operations from the command line all seems to be working fine, but when I try 
> to use my application to connect to the DB I get the following message in the 
> log file and the connection aborts.
> 
> LOG:  getsockname() failed: Invalid argument
> LOG:  incomplete startup packet

Is your app using the same version of libpq as `psql' ? Use whatever
linker trace tool exists on HPUX (or search for "ldd hpux" which will
probably tell you).

Are you connecting with the same host argument in both cases? In
particular, might you be connecting over a unix socket (the default)
with `psql', and tcp/ip with your application? That might be a clue.

Does the system have IPv6 enabled?

--
Craig Ringer



Re: [GENERAL] Interpreting content of wal

2009-10-23 Thread Craig Ringer
Charles-Antoine Guillat-Guignard wrote:
> Hi list,
> 
> I need to recover the content of WAL files, in order to see what
> happened to a database, so I am trying to translate it from binary to
> human-readable log. My goal is to be able to read the modifications that
> were made to my database (the same way mysqlbinlog does for MySQL).

The WAL doesn't contain a record, in a binary form or otherwise, of
anything as high-level as SQL statements. You should be able to get
information on tuple changes, but I wouldn't expect much more.

> I tried compiling xlogdump against PgSQL 8.1 and 8.2 (the WAL files are
> from a 8.1 version), which lead to the same error, with all the WAL
> files I have. I suspect an unattended file structure (version
> mismatch?), though the versions seem to match (except the architecture,
> i386 for my compiling and amd64 for the DB server).

The pgxlogviewer page suggests that 8.1 support isn't present. The code
looks mostly abandoned (last updated in 2006) so you might have some
work ahead of you :S

--
Craig Ringer



Re: [GENERAL] How to check whether a data type can be cast to another

2009-10-23 Thread Merlin Moncure
On Fri, Oct 23, 2009 at 4:38 AM, Kareem Sedki  wrote:
> Hello all,
>
> I am trying to write a function that takes one source and one target data
> type identifiers, each as a 'text' variable, as arguments and return true if
> the cast can be made, false otherwise.
>
> I consulted the pg_cast and pg_type tables; however, I found no direct casts
> from 'int4' to 'text', even though SELECT CAST(4::int4 AS text) works. The
> possibility of indirect casts that use multiple intermediate casts seems
> slim to me. Is an 'int4' cast to 'text' by means of multiple intermediate
> casts from 'int4' to 'char' and eventually to 'text'?
>
> I would prefer to have a function written in PL/pgSQL for now, as we are
> prototyping the system.  Could someone help me write such a function,
> please?

Well, there is always the 'parking in New York' method.  See, in the
Big Apple, while parallel parking, people back up until they hit the
car behind them in order to determine when to move forward.  So, you
could implement a similar method by attempting a cast and using an
exception handler to catch the cases that don't work :).  Since casts
are pretty stable, it should be quite possible to rig a cache around
this mechanism.

That is, unless someone else can suggest a better way...

merlin
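
A sketch of this "try it and see" approach in PL/pgSQL (the function name
and signature are illustrative, not from the thread; note it tests whether
a particular text value casts to the target type, and that target_type is
interpolated unquoted, so it should only be fed trusted input):

```sql
CREATE OR REPLACE FUNCTION is_castable(src_val text, target_type text)
RETURNS boolean AS $$
BEGIN
   -- Build and run a throwaway cast; any failure lands in the handler
   EXECUTE 'SELECT ' || quote_literal(src_val) || '::' || target_type;
   RETURN true;
EXCEPTION WHEN OTHERS THEN
   RETURN false;
END;
$$ LANGUAGE plpgsql;

-- SELECT is_castable('42', 'int4');   -- true
-- SELECT is_castable('abc', 'int4');  -- false
```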



Re: [GENERAL] Data migration tool certification

2009-10-23 Thread Cjkalbente


Hi,

The Talend certification is available to all users, including
individuals. You just need to have thorough knowledge of Talend.

Pricewise, the Talend certification is free: you just need to take a
test on the Internet.




hfdabler wrote:
> 
> Hello to all, 
> 
> I have been using Talend now for a few months and am very happy with the
> software. I have seen on the website
> (http://www.talend.com/partners/index.php ) the page on the Talend
> certification and the exam. 
> 
> I'm pretty curious to see what it takes to take the exam, and whether you
> need to be REALLY good at Talend. Also I'm not sure if the exam is
> available to everyone, and whether it is free or paid. 
> 
> Maybe someone here has an idea about that? Thanks a lot.
> 



Re: [GENERAL] How to check whether a data type can be cast to another

2009-10-23 Thread Tom Lane
Kareem Sedki  writes:
> I am trying to write a function that takes one source and one target
> data type identifier, each as a 'text' argument, and returns true if
> the cast can be made, false otherwise.

I think this is fundamentally the wrong approach, because it ignores the
question of whether a specific value can be cast.  For example, there is
a cast from int4 to int2, but that doesn't mean that 100000::int2 will
succeed.  What I think you should do is just try the cast and catch any
error that occurs.

If you really want to try to do what you're saying, look into
parser/parse_coerce.c.  In particular, can_coerce_type() is the
authoritative answer.

regards, tom lane

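
Tom's pg_cast point can be inspected directly from SQL. If I recall
correctly, since 8.3 an explicit cast between any type and text is
performed via the types' I/O functions without needing a pg_cast entry at
all, which is why no int4 -> text row shows up. A sketch:

```sql
-- List the direct cast entries out of int4; castcontext shows whether a
-- cast is implicit ('i'), assignment ('a'), or explicit-only ('e')
SELECT castsource::regtype AS source,
       casttarget::regtype AS target,
       castcontext
FROM pg_cast
WHERE castsource = 'int4'::regtype;
```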


[GENERAL] pg 8.4 (Auto)-vacuumlo

2009-10-23 Thread Richard Broersma
I wanted to verify that the auto-vacuum mechanism of Postgres doesn't
apply to large objects.  Does vacuumlo still need to be called manually?

-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug



Re: [GENERAL] How to check whether a data type can be cast to another

2009-10-23 Thread Kareem Sedki
Thank you Merlin and Tom. I appreciate your answers.

I have tried to follow that approach before going this way. So I checked
the appendix of the documentation for error conditions. Searching for
cast-related conditions, I found INVALID_CHARACTER_VALUE_FOR_CAST. Would
that be the exception to catch? It doesn't sound right.

The problem I faced is that the type 'any' is not permitted in
procedural languages. If it were permitted, then we could have a
function like is_castable(value ANY, target_type TEXT) and then we
would catch exceptions if they were thrown. We didn't want to begin
working on C functions before the prototype is complete.  However, if
this is the only way, then C is the way to go for now.

If we can programmatically figure out whether a type can be cast to
another, we can then check each type's limits or valid values to
consider the cast valid.

Here is what we have: there is a field in which we should store
different data types, and another field in the same row to store the
identifier of the first field's original data type. The first field
should be polymorphic. Since I couldn't find how to make a field
polymorphic, I resorted to a 'text' field so that different data types
can be stored in it after being cast to 'text'. When a field is
retrieved, it should be cast back from 'text' to its original type.

Can you think of a better strategy?

Thank you, Merlin and Tom. I benefited from your answers. I appreciate
your help.

Regards,
Kareem

On 10/23/2009 04:00 PM, Tom Lane wrote:
> Kareem Sedki  writes:
>   
>> I am trying to write a function that takes one source and one target
>> data type identifier, each as a 'text' argument, and returns true if
>> the cast can be made, false otherwise.
>> 
> I think this is fundamentally the wrong approach, because it ignores the
> question of whether a specific value can be cast.  For example, there is
> a cast from int4 to int2, but that doesn't mean that 100000::int2 will
> succeed.  What I think you should do is just try the cast and catch any
> error that occurs.
>
> If you really want to try to do what you're saying, look into
> parser/parse_coerce.c.  In particular, can_coerce_type() is the
> authoritative answer.
>
>   regards, tom lane
>   


Re: [GENERAL] How to check whether a data type can be cast to another

2009-10-23 Thread Tom Lane
Kareem Sedki  writes:
> The problem I faced is that the type 'any' is not permitted in
> procedural languages. If it were permitted, then we could have a
> function like is_castable( value ANY, target_type TEXT) and then we
> would catch exceptions if they are thrown.

ANYELEMENT maybe?

> Here is what we have, there is a field in which we should store
> different data types and another field in the same row to store the
> identifier of the original data type of the first field. The first field
> should be polymorphic.

Although actually I think you should stop right here and rethink what
you are doing.  It sounds a whole lot to me like you are trying to build
an EAV store, and that seldom leads to anything good.  It is much better
to spend the necessary effort up-front to work out a concrete
fully-typed schema for your database.

regards, tom lane



Re: [GENERAL] pg 8.4 (Auto)-vacuumlo

2009-10-23 Thread Alvaro Herrera
Richard Broersma wrote:
> I wanted to verify that the auto-vacuum mechanism of postgres doesn't
> apply to lo objects.

It doesn't.

> Does vacuumlo still need to be called manually?

Not sure about this.  I thought that one was about deleting LOs that
were not referenced in any user table.

-- 
Alvaro Herrera        http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] How to check whether a data type can be cast to another

2009-10-23 Thread Alvaro Herrera
Kareem Sedki wrote:
> Thank you Merlin and Tom. I appreciate your answers.
> 
> I have tried to follow that approach before I go this way. So, I checked
> the appendix  of the documentation for error conditions. Searching for
> cast-related conditions, I found INVALID_CHARACTER_VALUE_FOR_CAST. Would
> that be the exception to catch? It doesn't sound right.

TIAS -- http://www.fetter.org/tias.html

-- 
Alvaro Herrera        http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] How to check whether a data type can be cast to another

2009-10-23 Thread Merlin Moncure
On Fri, Oct 23, 2009 at 11:14 AM, Kareem Sedki  wrote:
> Thank you Merlin and Tom. I appreciate your answers.
>
> I have tried to follow that approach before I go this way. So, I checked the
> appendix  of the documentation for error conditions. Searching for
> cast-related conditions, I found INVALID_CHARACTER_VALUE_FOR_CAST. Would
> that be the exception to catch? It doesn't sound right.
>
> The problem I faced is that the type 'any' is not permitted in procedural
> languages. If it were permitted, then we could have a function like
> is_castable( value ANY, target_type TEXT) and then we would catch exceptions
> if they are thrown. We didn't want to begin working on C functions before
> the prototype is complete.  However, if this is the only way, C is the way
> to go for now then.
>
> If we can programmatically figure out whether a type can be cast to
> another, we can then check each type's limits or valid values to consider
> the cast valid.
>
> Here is what we have, there is a field in which we should store different
> data types and another field in the same row to store the identifier of the
> original data type of the first field. The first field should be
> polymorphic. Since I couldn't find how to make a field polymorphic, I
> resorted to a 'text' field so that different data types can be stored in the
> field after being cast to 'text'. When retrieving a field, its original
> state should be cast back from 'text'.
>
> Do you think of a better strategy?

I think Tom is probably giving you the right advice here.  However,
if you decide to implement something around my 'catch the error'
approach (I was half joking when I suggested it), it should be a
simple matter to test a few casts and print out (RAISE NOTICE) the
SQLSTATE variable inside the exception handler.

merlin



Re: [GENERAL] pg 8.4 (Auto)-vacuumlo

2009-10-23 Thread Richard Broersma
On Fri, Oct 23, 2009 at 8:29 AM, Alvaro Herrera
 wrote:

>> Does vacuumlo still need to be called manually?
>
> Not sure about this.  I thought that one was about deleting LOs that
> were not referenced in any user table.

Thanks, this is correct: vacuumlo deletes orphaned LOs.

-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug



Re: [GENERAL] Incompatible library : Missing Magic Block

2009-10-23 Thread Merlin Moncure
On Fri, Oct 23, 2009 at 7:02 AM, Craig Ringer
 wrote:
> Massa, Harald Armin wrote:
>> Ambarsih,
>>
>>> // add.cpp : Defines the entry point for the DLL application.
>>
>> are you sure that you are using the C compiler and not the C++ compiler?
>> As far as I know about the defaults, that will be a C++-compiled
>> function, which is something totally different than a C-compiled
>> function. (And, to my knowledge, a C++ function is not easily callable
>> by PostgreSQL.)
>
> So long as the function uses C linkage (extern "C") it'll be callable
> directly from C using dlopen(), LoadLibrary, etc.
>
> A C++ function is callable from C via function pointer even if it has
> C++ linkage, though the C calling convention declared must handle stack
> management, parameter passing and return value popping in the same way
> the C++ calling convention on that compiler does. That means that it's
> unsafe to call a __thiscall C++ member function from C, but otherwise
> it's generally fine so long as the C and C++ compilers default to the
> same calling convention or you explicitly specify it on both ends.

exactly...also C++ exceptions are incompatible with Postgres
exceptions IIRC, and it's highly advisable to avoid using threads.
also there are a couple of keyword conflicts that are annoying (those
may have been fixed).  those gotchas aside, putting C++ in the backend
is trivially done if you understand the difference between C and C++
linkage.

merlin



[GENERAL] Question of using trigger's OLD in EXECUTE

2009-10-23 Thread Nim Li

Hello,

I'm new to PostgreSQL and wonder if anyone can help.

I'm creating an after-update trigger for all tables, which copies the
old record to a backup table.  In my test, the table with this trigger
has only two columns - one BIGINT and one VARCHAR.


Also I'd like to pass the backup table's name through a parameter,
because I may reuse this function for more than one backup table.


=
CREATE OR REPLACE FUNCTION cp_tbl() RETURNS TRIGGER AS $proc$
BEGIN
   EXECUTE 'INSERT INTO ' ||
           TG_ARGV[0] ||
           ' SELECT ' ||
           OLD;
   RETURN NEW;
END;
$proc$ LANGUAGE plpgsql;
=

At run time, it raises an error:


ERROR:  column "beginning" does not exist
LINE 1: INSERT INTO test_log SELECT (1,BEGINNING)
   ^
QUERY:  INSERT INTO test_log SELECT (1,BEGINNING)
CONTEXT:  PL/pgSQL function "cp_tbl" line 2 at EXECUTE statement


"beginning" is the actual data I stored in the second column of the table.

I think the issue is related to the use of OLD in the EXECUTE statement.

Does anyone have any idea how to fix it?

Many thanks!!

Nim



Re: [GENERAL] pg 8.4 (Auto)-vacuumlo

2009-10-23 Thread Tom Lane
Richard Broersma  writes:
> Thanks, this is correct: vacuumlo deletes orphaned LOs.

Right.  That functionality is not part of autovacuum --- if you need it
you'll still need to run the contrib script occasionally.

regards, tom lane



[GENERAL] Error when trying to use pg_dump on 8.3 after minor release update

2009-10-23 Thread Joshua Berry
Greetings,

I get the following error when I try to dump a database on a production server:

[jbe...@dms dms]$ pg_dump -U dms_user -s dms > dms_s.sql
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  could not open relation
with OID 27224
pg_dump: The command was: SELECT
pg_catalog.pg_get_viewdef('27289'::pg_catalog.oid) as viewdef


It is a RHEL 5 x86_64 server, running 8.3.7/8

Since the previous backup, we upgraded (via yum) the server from 8.3.7
to 8.3.8 without a restart.

With this error, I'm concerned about stopping and starting postgresql
to see if the error goes away: I don't want us to be left with a dead
database and no backup with the latest data.

So, is this expected behavior when you fail to restart after a minor
version upgrade, or is this a sign of hardware failure and we should
try dumping relations one at a time?

Regards,
-Joshua Berry



Re: [GENERAL] Question of using trigger's OLD in EXECUTE

2009-10-23 Thread Sam Mason
On Fri, Oct 23, 2009 at 11:27:11AM -0400, Nim Li wrote:
> CREATE OR REPLACE FUNCTION cp_tbl() RETURNS TRIGGER AS $proc$
>BEGIN
>   EXECUTE 'INSERT INTO ' ||
>   TG_ARGV[0] ||
>   ' SELECT ' ||
>   OLD;
>   RETURN NEW;
>END;   
> $proc$ LANGUAGE plpgsql;
> 
> At run-time, it prompts an error:

> Does anyone have any idea how to fix it?

You need to quote your literals! :) The details are a bit fiddly, but
the following thread covered similar things recently:

  http://archives.postgresql.org/pgsql-general/2009-09/msg01176.php
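For illustration, the quoting fix for the two-column table described
above might look like this (a sketch along the lines of that thread,
untested; column names `id` and `val` are hypothetical):

```sql
-- Quote each OLD value explicitly before splicing it into the dynamic
-- INSERT (8.3-era approach; assumes non-NULL values).
CREATE OR REPLACE FUNCTION cp_tbl() RETURNS TRIGGER AS $proc$
BEGIN
   EXECUTE 'INSERT INTO ' || quote_ident(TG_ARGV[0]) ||
           ' VALUES (' || quote_literal(OLD.id::text) || ', ' ||
                          quote_literal(OLD.val) || ')';
   RETURN NEW;
END;
$proc$ LANGUAGE plpgsql;
```

The explicit ::text cast matters on 8.3, where quote_literal() expects
a text argument.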

-- 
  Sam  http://samason.me.uk/



[GENERAL] Re: Error when trying to use pg_dump on 8.3 after minor release update

2009-10-23 Thread Joshua Berry
> It is a RHEL 5 x86_64 server, running 8.3.7/8
>
> Since the previous backup, we upgraded (via yum) the server from 8.3.7
> to 8.3.8 without a restart.

Sorry, my facts were wrong: no upgrade was done on this server.
It has been running 8.3.7, compiled from source and installed on
April 1. The current postmaster has been running for 32 days.

How can I assess the situation? What procedure would you recommend?

Regards,
-Joshua Berry



Re: [GENERAL] Error when trying to use pg_dump on 8.3 after minor release update

2009-10-23 Thread Tom Lane
Joshua Berry  writes:
> [jbe...@dms dms]$ pg_dump -U dms_user -s dms > dms_s.sql
> pg_dump: SQL command failed
> pg_dump: Error message from server: ERROR:  could not open relation
> with OID 27224
> pg_dump: The command was: SELECT
> pg_catalog.pg_get_viewdef('27289'::pg_catalog.oid) as viewdef

This looks like catalog corruption :-(.  Can you find a pg_class row
with that OID, ie
select * from pg_class where oid = 27224
I expect probably not, but then try it with enable_indexscan and
enable_bitmapscan turned off.  If that finds a row, then what you
have is a corrupt pg_class_oid_index and you can probably get out
of trouble by reindexing it.  You should also look at what 27289
is so you know what view is causing the problem.
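Spelled out as psql commands, that diagnosis might run like this (a
sketch using the OIDs from this report; reindexing a shared catalog
index can require extra care, e.g. a standalone backend on some
versions):

```sql
-- Force a sequential scan of pg_class so the (possibly corrupt)
-- pg_class_oid_index is not consulted:
SET enable_indexscan = off;
SET enable_bitmapscan = off;
SELECT relname, relkind FROM pg_class WHERE oid = 27224;

-- If the row shows up only with the indexes disabled, rebuild the index:
REINDEX INDEX pg_class_oid_index;

-- And identify which view was causing the error:
SELECT relname FROM pg_class WHERE oid = 27289;
```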

regards, tom lane



Re: [GENERAL] Question of using trigger's OLD in EXECUTE

2009-10-23 Thread Nim Li

Thank you Sam!!
The code in the posting solves my issue.  :)

Nim

On 10/23/2009 12:07 PM, Sam Mason wrote:

On Fri, Oct 23, 2009 at 11:27:11AM -0400, Nim Li wrote:

CREATE OR REPLACE FUNCTION cp_tbl() RETURNS TRIGGER AS $proc$
   BEGIN
  EXECUTE 'INSERT INTO ' ||
  TG_ARGV[0] ||
  ' SELECT ' ||
  OLD;
  RETURN NEW;
   END; 
$proc$ LANGUAGE plpgsql;

At run-time, it prompts an error:



Does anyone have any idea how to fix it?


You need to quote your literals! :) The details are a bit fiddly, but
the following thread covered similar things recently:

  http://archives.postgresql.org/pgsql-general/2009-09/msg01176.php





Re: [GENERAL] Error when trying to use pg_dump on 8.3 after minor release update

2009-10-23 Thread Joshua Berry
On Fri, Oct 23, 2009 at 1:43 PM, Tom Lane  wrote:
>
> This looks like catalog corruption :-(.  Can you find a pg_class row
> with that OID, ie
>select * from pg_class where oid = 27224
> I expect probably not, but then try it with enable_indexscan and
> enable_bitmapscan turned off.  If that finds a row, then what you
> have is a corrupt pg_class_oid_index and you can probably get out
> of trouble by reindexing it.  You should also look at what 27289
> is so you know what view is causing the problem.
>
>regards, tom lane
>

Thanks, that led me to a recently added view created by the customers. I
could not get the view definition:

dms=# select * from pg_class where oid = 27289;
[lots of columns]
 v_reporte_pdv_du_epin | 2200 |   27294 |16384 | 0 |
   27289 | 0 |0 | 0 | 0 |
   0 | f   | f   | v   |   27 | 0
|   0 |0 |0 |   0 | f  | f
 | t   | f  |0 ||
(1 row)

dms=# \d v_reporte_pdv_du_epin
ERROR:  could not open relation with OID 27224

However, once the view was dropped, I was able to perform a database
dump. Any ideas how this can happen, and how to avoid it? Should I assume
that there are more problems and recreate the database from a fresh
dump? Or will the system suffice in its current state?

Regards,
-Joshua Berry



Re: [GENERAL] drop view and recreate - for sync

2009-10-23 Thread Vick Khera
On Fri, Oct 23, 2009 at 2:45 AM, Sydney Puente  wrote:

> This data will be accessed a couple of times a second, and I have a cunning
> plan to have a view that points to the initial dataload, and then load up
> the new data into a shadow table, drop the view and then recreate it
> pointing to the shadow table ( which will then no longer be the shadow).

If it is only 100k rows, then within a transaction: 1) delete all
rows, 2) insert all new rows, 3) commit, 4) vacuum.

don't try to compact the table with cluster or vacuum full since
you'll just re-expand it on the next synchronization.

There should be no blocking of your read access.  This assumes your
copy is read-only, which you imply.
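For illustration, the refresh cycle described above might look like
this (table names `items` and `items_new` are hypothetical):

```sql
BEGIN;
DELETE FROM items;                          -- 1) delete all rows
INSERT INTO items SELECT * FROM items_new;  -- 2) insert the fresh copy
COMMIT;        -- 3) readers see either the old or the new data, never empty
VACUUM items;  -- 4) reclaim dead rows; does not take an exclusive lock
```

Because MVCC readers keep seeing the pre-commit snapshot until COMMIT,
concurrent SELECTs are never blocked and never see a half-loaded table.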



Re: [GENERAL] Data migration tool certification

2009-10-23 Thread Merlin Moncure
On Fri, Oct 23, 2009 at 8:26 AM, Cjkalbente  wrote:
>
>
> Hi,
>
> The Talend certification is available to all users including any
> individuals. You just need to have thorough knowledge on Talend.
>
> Pricewise, the Talend certification is free: you will need to take a test on
> the Internet.

how much does it cost to get talend to stop astroturfing this list?

merlin



Re: [GENERAL] How to check whether a data type can be cast to another

2009-10-23 Thread Kareem Sedki
Well, actually, yes, our work is based on an EAV model, and it has been
successful so far. However, the point I was discussing is not really
dependent on the EAV model. It is rather dependent on the fact that we
are employing an object-oriented design implemented on top of an RDBMS.

Regarding the fully-typed system point: if you allow types to be
automatically defined and constrained by previously defined rules of
suitability (what should define a type, and how a type may behave when
passed to type-test functions), an EAV store may be fully typed. This
doesn't apply only to the EAV model. A further step would be to infer
the suitability rules automatically from existing types, where possible.

Regarding the cast I was asking about, I guess we can get away
with a constraint on the field's type (which was supposed to be
polymorphic) and handle polymorphism in the dynamic parts of the
system rather than in static structures such as tables.

Thank you very much, Merlin and Tom. I appreciate your suggestions and help.

Kind Regards,
Kareem Sedki
 

On 10/23/2009 05:24 PM, Tom Lane wrote:
> Kareem Sedki  writes:
>   
>> The problem I faced is that the type 'any' is not permitted in
>> procedural languages. If it were permitted, then we could have a
>> function like is_castable( value ANY, target_type TEXT) and then we
>> would catch exceptions if they are thrown.
>> 
> ANYELEMENT maybe?
>
>   
>> Here is what we have, there is a field in which we should store
>> different data types and another field in the same row to store the
>> identifier of the original data type of the first field. The first field
>> should be polymorphic.
>> 
> Although actually I think you should stop right here and rethink what
> you are doing.  It sounds a whole lot to me like you are trying to build
> an EAV store, and that seldom leads to anything good.  It is much better
> to spend the necessary effort up-front to work out a concrete
> fully-typed schema for your database.
>
>   regards, tom lane
>   
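As an aside, the exception-catching approach mentioned above can be
approximated without the ANY type by passing the value as text (an
untested sketch; works for simple type names only, since quote_ident
would mangle names like "double precision"):

```sql
-- Returns true if the text representation of a value can be cast to
-- the target type, by attempting the cast and trapping any error.
CREATE OR REPLACE FUNCTION is_castable(val text, target_type text)
RETURNS boolean AS $$
BEGIN
    EXECUTE 'SELECT CAST(' || quote_literal(val) || ' AS '
            || quote_ident(target_type) || ')';
    RETURN true;
EXCEPTION WHEN others THEN
    RETURN false;
END;
$$ LANGUAGE plpgsql;

-- SELECT is_castable('4', 'int4');    -- true
-- SELECT is_castable('four', 'int4'); -- false
```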


[GENERAL] Research and EAV models

2009-10-23 Thread Leif B. Kristensen
I've followed this list for quite a long time, and I think that I've 
discovered a pattern that I would like to discuss.

It seems like there are two camps considering EAV models. On the one 
hand, there are researchers who think that EAV is a great way to meet 
their objectives. On the other hand, there are the "business" guys who 
think that EAV is crap.

I've seen this pattern often enough and consistently enough that I think 
there may be an underlying difference of objectives concerning the use 
of databases itself that may be responsible for this divergence.

I'm a researcher type, and I've made an EAV model that suits me well in 
my genealogy research. How can you associate an essentially unknown 
number of sundry "events" to a "person" without an EAV model?

It seems to me that data models made for research is a quite different 
animal than data models made for business. In research, we often need to 
register data that may be hard to pin down in exactly the right pigeon 
hole, but never the less need to be recorded. The most sensible way to 
do this, IMO, is frequently to associate the data with some already-
known or postulated entity. That's where the EAV model comes in really 
handy.
-- 
Leif Biberg Kristensen | Registered Linux User #338009
Me And My Database: http://solumslekt.org/blog/



Re: [GENERAL] Research and EAV models

2009-10-23 Thread Tom Lane
"Leif B. Kristensen"  writes:
> It seems like there are two camps considering EAV models. On the one 
> hand, there are researchers who think that EAV is a great way to meet 
> their objectives. On the other hand, there are the "business" guys who 
> think that EAV is crap.

Well, no, it's not that EAV is crap.  It's that EAV is strong evidence
that you're using the wrong tool for the job.  If a SQL database is
actually a good fit for your application, then it should be possible to
extract a stronger schema for your data.  If you cannot, then you
probably should be storing your data in something else.  Otherwise
you'll spend untold man-hours beating your head against assorted walls
while you try to persuade the SQL database to do things it was never
meant for, and coping with performance issues because the cases you need
are not optimized nor indeed optimizable.  (I can just see that other
guy trying to search on one of his "polymorphic" columns :-(.)

SQL isn't the be-all and end-all of data storage.  It does relational
stuff well, and other stuff poorly.

regards, tom lane



Re: [GENERAL] Research and EAV models

2009-10-23 Thread Peter Hunsberger
On Fri, Oct 23, 2009 at 5:04 PM, Tom Lane  wrote:
> "Leif B. Kristensen"  writes:
>> It seems like there are two camps considering EAV models. On the one
>> hand, there are researchers who think that EAV is a great way to meet
>> their objectives. On the other hand, there are the "business" guys who
>> think that EAV is crap.
>
> Well, no, it's not that EAV is crap.  It's that EAV is strong evidence
> that you're using the wrong tool for the job.  If a SQL database is
> actually a good fit for your application, then it should be possible to
> extract a stronger schema for your data.  If you cannot, then you
> probably should be storing your data in something else.  Otherwise
> you'll spend untold man-hours beating your head against assorted walls
> while you try to persuade the SQL database to do things it was never
> meant for, and coping with performance issues because the cases you need
> are not optimized nor indeed optimizable.  (I can just see that other
> guy trying to search on one of his "polymorphic" columns :-(.)

I can certainly see where Tom is coming from on this and I tend to
agree in general.  However, the reality of the research world is never
that clear cut. In particular, what you find is that you have some
split where some percentage of the domain is well understood and can
be placed in a conventional schema and some portion is not so well
understood and requires something more flexible.  You could try and
split the data between two databases but that itself is very
problematic.

My take on this, for the research world, is to not go pure EAV, but
rather normalize by some more generic concepts within the domain.  Eg.
"measurement", or "evaluation", etc. You might ultimately end up with
a sort of EAV model, but the "V" portion is strongly typed within the
database and you're not trying to cast a string into 20 conventional
data types. This still requires rigorous metadata management on the EA
side of the EAV model, but you can tackle that in many ways.
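One concrete shape for such a strongly-typed "V" side might be (a
sketch of the idea with hypothetical names, not a recommendation):

```sql
-- One value column per supported type plus a discriminator, instead of
-- casting a single text column back and forth.
CREATE TABLE measurement (
    pk         serial  PRIMARY KEY,
    entity_fk  integer NOT NULL,
    attribute  text    NOT NULL,
    value_type char(1) NOT NULL CHECK (value_type IN ('n', 't', 'd')),
    value_num  numeric,   -- used when value_type = 'n'
    value_text text,      -- used when value_type = 't'
    value_date date,      -- used when value_type = 'd'
    CHECK ((value_type = 'n') = (value_num  IS NOT NULL)),
    CHECK ((value_type = 't') = (value_text IS NOT NULL)),
    CHECK ((value_type = 'd') = (value_date IS NOT NULL))
);
```

The CHECK constraints ensure exactly the column matching the
discriminator is populated, so the database still type-checks values.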

> SQL isn't the be-all and end-all of data storage.  It does relational
> stuff well, and other stuff poorly.

You can build variations on EAV that are closer to a regular
relational schema.  These don't necessarily work well or poorly but
often, at least in the research world, the middle ground is good
enough.  You are, after all, talking about people who spit out MySQL
databases at the drop of a hat...

-- 
Peter Hunsberger



Re: [GENERAL] Research and EAV models

2009-10-23 Thread Karsten Hilbert
On Fri, Oct 23, 2009 at 11:53:26PM +0200, Leif B. Kristensen wrote:

> I'm a researcher type, and I've made an EAV model that suits me well in 
> my genealogy research. How can you associate an essentially unknown 
> number of sundry "events" to a "person" without an EAV model?

create table person (
pk serial primary key,
who_is_it text
);

create table event (
pk serial primary key,
fk_person integer references person(pk),
what_happened text
);

I'm sure you meant something else, no ?

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346



Re: [GENERAL] Research and EAV models

2009-10-23 Thread Greg Smith

On Fri, 23 Oct 2009, Leif B. Kristensen wrote:

I'm a researcher type, and I've made an EAV model that suits me well in 
my genealogy research. How can you associate an essentially unknown 
number of sundry "events" to a "person" without an EAV model?


CREATE TABLE events(person text,key text,value text);

You seem to think there's some sort of fundamental difference between the 
two models.  There isn't.  The tuples of a standard relational row are 
simply a fixed set of key/value pairs, but you can always store EAV data 
explicitly like this simple example.  Similarly, you can always decompose 
relational data into an equivalent EAV set, where the keys are the column 
names.


The difference between the two data models is that knowing the keys in 
advance allows you to do optimizations when looking up the data that 
result in faster queries than any really generic system can do.  Trying to 
optimize queries against huge EAV sets will just end up reinventing 
concepts like indexes if you want them to run well, and then you're back 
to having to specify which are the important keys/columns in advance.


Your thinking this is a business/research distinction isn't right, it 
comes down to the size of the data set and how fast/expressive the queries 
against them are.  In the research realm, you can watch this struggle play 
in things like bioinformatics, where there's giant stacks of very similar 
data they need to sort through.  What often ends up happening is you have 
scientists start with a simple EAV system, then watch it completely fall 
down under real-world load and complicated queries once it gets past 
prototype.  Then they try to patch it for a while by reinventing SQL query 
and storage concepts (badly).


Eventually someone with traditional database background comes along, notes 
that the data you really need is for the most part predetermined, rewrites 
that into something more amenable to standard SQL, and then the whole 
thing performs better for that subset.  But now you've got stuff like 
ALTER TABLE to add a column every time you want to track something new, 
and people don't like that when the schema is still fluid.  So they start 
putting stuff into EAV stores for their next project...and the cycle 
begins anew.


Ultimately you can be really flexible in how your store your data, or you 
can get good query performance, but it's quite hard to do both.


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD



[GENERAL] drop view and recreate - for sync

2009-10-23 Thread Sydney Puente
On Fri, Oct 23, 2009 at 2:45 AM, Sydney Puente  wrote:

> This data will be accessed a couple of times a second, and I have a cunning
> plan to have a view that points to the initial dataload, and then load up
> the new data into a shadow table, drop the view and then recreate it
> pointing to the shadow table ( which will then no longer be the shadow).

If it is only 100k rows, then within a transaction: 1) delete all
rows, 2) insert all new rows, 3) commit, 4) vacuum.

don't try to compact the table with cluster or vacuum full since
you'll just re-expand it on the next synchronization.

There should be no blocking of your read access.  This assumes your
copy is read-only, which you imply.

Ah I see what you mean - thanks very much that is v helpful!
Yes the copy will be read-only.
I will have 3 tables of data being read (read-only), and in the background
3 shadow tables populated from an unreliable db over an unreliable
network. Not quite sure how I can "insert all the rows" in SQL.
I have Postgres 8.03, BTW.

Syd


  

Re: [GENERAL] Research and EAV models

2009-10-23 Thread Peter Hunsberger
On Fri, Oct 23, 2009 at 5:30 PM, Greg Smith  wrote:
>
> Your thinking this is a business/research distinction isn't right, it comes
> down to the size of the data set and how fast/expressive the queries against
> them are.  In the research realm, you can watch this struggle play in things
> like bioinformatics, where there's giant stacks of very similar data they
> need to sort through.  What often ends up happening is you have scientists
> start with a simple EAV system, then watch it completely fall down under
> real-world load and complicated queries once it gets past prototype.  Then
> they try to patch it for a while by reinventing SQL query and storage
> concepts (badly).

I've got little idea what the OP is really dealing with but there is a
fundamental difference between the research world and the more
traditional business world.  On the research side the data models are
not well known in advance, there are few established best practices
and even data types are not always well known.  In a conventional
schema we would have about 2,000+ tables with 20,000+ columns.  This
is a 6-year-old system with 75% of our metadata having been added over
the last 3 years.  I expect growth to slow over the next 3 years, but
a 25% growth would not be unreasonable.  The current physical schema
is some 40 tables with maybe some 300 columns (we also render about
6000 customized presentations of the data and manage work flow and
validation with this schema).  Performance tuning can be a pain, but
we mostly got that mastered at this stage in the game: it's a trade
off, but using a conventional schema would have consumed far more
resources than we've spent over the last 6 years.  The first two
versions of the system used a conventional schema, and new research
protocols were taking 6 months to a year to add.  We can now add
simple ones in a day or two and complex ones in a couple of months (the
largest having some 60,000+ pieces of metadata, including validation
rules and workflow).

>
> Eventually someone with traditional database background comes along, notes
> that the data you really need is for the most part predetermined, rewrites
> that into something more amenable to standard SQL, and then the whole thing
> performs better for that subset.  But now you've got stuff like ALTER TABLE
> to add a column every time you want to track something new, and people don't
> like that when the schema is still fluid.  So they start putting stuff into
> EAV stores for their next project...and the cycle begins anew.
>
> Ultimately you can be really flexible in how your store your data, or you
> can get good query performance, but it's quite hard to do both.

Not much argument there!  However, it's a three way trade off: such a
schema can get good performance for a small number of users and / or a
small amount of data (we're about 300 users and 45gb total data).

-- 
Peter Hunsberger



[GENERAL] How can I get one OLD.* field in a dynamic query inside a trigger function ?

2009-10-23 Thread Bruno Baguette

Hello !

I'm trying to write a little trigger function with a variable number of
arguments (at least one, but possibly 2, 3, or 4).
These arguments are field names, so they are all varchar values.

Since it is a dynamic query, I use the EXECUTE statement as explained on


CREATE OR REPLACE FUNCTION delete_acl_trigger() RETURNS trigger AS
$delete_acl_trigger$
DECLARE
BEGIN
  FOR i IN 0 .. TG_NARGS LOOP
EXECUTE 'SELECT delete_acl(OLD.' || TG_ARGV[i] || ');';
  END LOOP;
  RETURN OLD;
END;
$delete_acl_trigger$ LANGUAGE plpgsql;

But, when the trigger is triggered, I receive this error message :
"Query failed: ERROR: OLD used in query that is not in a rule"

How can I get the value of the OLD.' || TG_ARGV[i] field ?

Many thanks in advance !

Kind regards,

--
Bruno Baguette - bruno.bague...@gmail.com



Re: [GENERAL] How can I get one OLD.* field in a dynamic query inside a trigger function ?

2009-10-23 Thread Pavel Stehule
2009/10/24 Bruno Baguette :
> Hello !
>
> I'm trying to write a little trigger function with variable arguments
> quantity (at least one, but can be 2,3,4 arguments).
> Theses arguments are fields name, so only varchar variable.
>
> Since it is a dynamic query, I use the EXECUTE statement as explained on
> 
>
> CREATE OR REPLACE FUNCTION delete_acl_trigger() RETURNS trigger AS
> $delete_acl_trigger$
> DECLARE
> BEGIN
>  FOR i IN 0 .. TG_NARGS LOOP
>    EXECUTE 'SELECT delete_acl(OLD.' || TG_ARGV[i] || ');';
>  END LOOP;
>  RETURN OLD;
> END;
> $delete_acl_trigger$ LANGUAGE plpgsql;
>
> But, when the trigger is triggered, I receive this error message :
> "Query failed: ERROR: OLD used in query that is not in a rule"
>
> How can I get the value of the OLD.' || TG_ARGV[i] field ?

OLD is a variable only inside the PL/pgSQL function; it doesn't exist
outside it. If you have 8.4, you can use the USING clause:

EXECUTE 'SELECT $1.' || TG_ARGV[i] INTO somevar USING OLD;

http://www.postgresql.org/docs/8.4/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
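Applied to the trigger above, an 8.4 version might look like this
(untested sketch; note the loop bound is TG_NARGS - 1, since TG_ARGV
is zero-based):

```sql
CREATE OR REPLACE FUNCTION delete_acl_trigger() RETURNS trigger AS
$delete_acl_trigger$
DECLARE
    fld_val text;
BEGIN
    FOR i IN 0 .. TG_NARGS - 1 LOOP
        -- Extract OLD.<column named in TG_ARGV[i]> dynamically; OLD is
        -- passed in via USING, so it is visible to the executed query.
        EXECUTE 'SELECT ($1).' || quote_ident(TG_ARGV[i])
            INTO fld_val USING OLD;
        PERFORM delete_acl(fld_val);
    END LOOP;
    RETURN OLD;
END;
$delete_acl_trigger$ LANGUAGE plpgsql;
```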

Regards
Pavel Stehule



>
> Many thanks in advance !
>
> Kind regards,
>
> --
> Bruno Baguette - bruno.bague...@gmail.com
>



Re: [GENERAL] auto-filling a field on insert

2009-10-23 Thread semi-ambivalent
On Oct 23, 5:10 am, cr...@postnewspapers.com.au (Craig Ringer) wrote:
> semi-ambivalent wrote:
> > At first blush that looks good but I need an index on that concatted
> > value and I don't seem to be able to index a field in a view. I'll
> > check the docs on views to see if I'm missing something.
>
> As others have noted, a multi-field index or a functional index is
> probably the best option.
>
> If you decide for some reason that you really, really need the
> concatenated fields stored in the table its self, you will need to use a
> BEFORE INSERT ... FOR EACH ROW trigger to populate field `D'.
>
> --
> Craig Ringer
>

Thanks everyone for the pointers. I like the idea of getting rid of
the concatenated field, even though it reduced the worst-case query
time to one tenth of what I had been seeing. But for now I'm going to
keep it, because I'm ignorant about triggers, so this will be a
good opportunity to learn about them before I drop the column for
something more efficient, assuming there is one.

sa



Re: [GENERAL] auto-filling a field on insert

2009-10-23 Thread Scott Marlowe
On Fri, Oct 23, 2009 at 9:32 PM, semi-ambivalent  wrote:
> On Oct 23, 5:10 am, cr...@postnewspapers.com.au (Craig Ringer) wrote:
>> semi-ambivalent wrote:
>> > At first blush that looks good but I need an index on that concatted
>> > value and I don't seem to be able to index a field in a view. I'll
>> > check the docs on views to see if I'm missing something.
>>
>> As others have noted, a multi-field index or a functional index is
>> probably the best option.
>>
>> If you decide for some reason that you really, really need the
>> concatenated fields stored in the table its self, you will need to use a
>> BEFORE INSERT ... FOR EACH ROW trigger to populate field `D'.
>>
>> --
>> Craig Ringer
>>
>
> Thanks everyone for the pointers. I like the idea of getting rid of
> the concatenated field even though it reduced the worst case query
> time to one tenth of what I had been seeing. But for now I'm going to
> keep it there because I'm ignorant about triggers so this will be a
> good opportunity to learn about them before I drop the column for
> something more efficient, assuming there is.

The multi column index should give you equivalent speed.
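For reference, the two index forms discussed in this thread look like
this (table and column names are hypothetical; the functional index
assumes text columns):

```sql
-- Multi-column index: also supports lookups on the leading column(s).
CREATE INDEX t_a_b_c_idx ON t (a, b, c);

-- Functional (expression) index: matches queries that use the same
-- concatenation expression, without storing a concatenated column.
CREATE INDEX t_concat_idx ON t ((a || b || c));
```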
