Re: [GENERAL] Research and EAV models

2009-10-24 Thread Johan Nel

Hi Peter,

I agree 100% with you.  EAV can be a good "middle of the road" approach as 
you suggest.


Peter Hunsberger wrote:

My take on this, for the research world, is to not go pure EAV, but
rather normalize by some more generic concepts within the domain.  Eg.
"measurement", or "evaluation", etc. You might ultimately end up with
a sort of EAV model, but the "V" portion is strongly typed within the
database and you're not trying to cast a string into 20 conventional
data types. This still requires rigorous metadata management on the EA
side of the EAV model, but you can tackle that in many ways.
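A minimal sketch of what such a strongly typed "V" side could look like (table and column names are hypothetical, not from Peter's message) — one nullable column per native type, with a check that exactly one is used:

```sql
-- Hypothetical sketch: a typed-value EAV table, so the "V" stays
-- strongly typed instead of casting strings to 20 different types.
CREATE TABLE measurement (
    entity_id    integer NOT NULL,
    attribute_id integer NOT NULL,  -- FK into a rigorously managed attribute catalog
    value_num    numeric,           -- exactly one of these is non-null,
    value_text   text,              -- depending on the attribute's declared type
    value_ts     timestamp,
    CHECK ((value_num  IS NOT NULL)::int +
           (value_text IS NOT NULL)::int +
           (value_ts   IS NOT NULL)::int = 1),
    PRIMARY KEY (entity_id, attribute_id)
);
```

The metadata management burden then lives in the attribute catalog, as described above.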


SQL isn't the be-all and end-all of data storage.  It does relational
stuff well, and other stuff poorly.


You can build variations on EAV that are closer to a regular
relational schema.  These don't necessarily work well or poorly, but
often, at least in the research world, the middle ground is good
enough.  You are, after all, talking about people who spit out MySQL
databases at the drop of a hat.


I use a very similar approach in managing meta-data: normalize the data 
that can be normalized and use EAV for the rest.  This eliminates as much 
text searching as possible; where text search is still necessary, an 
additional WHERE clause on some normalized columns can help a lot with 
performance.


One of my application meta-data frameworks uses only two tables to store 
all meta-data about an application, with basically the following structure:


CREATE TABLE controls (
  ctrl_no SERIAL PRIMARY KEY NOT NULL,
  app_id varchar(30) NOT NULL,
  ctrl_type varchar(30) NOT NULL,
  ctrl_id varchar(30) NOT NULL,
  ctrl_property text, -- This can also be hstore to add some intelligence
  CONSTRAINT controls_unique UNIQUE (app_id, ctrl_type, ctrl_id));

CREATE TABLE members (
  ctrlmember_no SERIAL PRIMARY KEY NOT NULL,
  ctrl_no INTEGER NOT NULL REFERENCES controls(ctrl_no),
  member_no INTEGER NOT NULL REFERENCES controls(ctrl_no),
  member_type varchar(30) NOT NULL,
  member_property text, -- This can also be hstore to add more intelligence
  CONSTRAINT member_unique UNIQUE (ctrl_no, member_no));

ctrl_property is used to store meta-data based on ctrl_type.

member_property stores meta-data based on member_type and/or overrides 
default ctrl_property values, based on the parent ctrl_no it is associated 
with.
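If the property columns were declared hstore (installed from contrib), as the comments in the DDL suggest, the override behaviour falls out of hstore concatenation, where keys from the right-hand operand win; a hypothetical sketch:

```sql
-- Hypothetical sketch, assuming ctrl_property and member_property are
-- hstore: member values override the parent control's defaults, because
-- in h1 || h2 the keys of h2 take precedence.
SELECT c.ctrl_id,
       m.member_type,
       c.ctrl_property || m.member_property AS effective_props
FROM   members m
JOIN   controls c ON c.ctrl_no = m.ctrl_no;
```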


Without this approach I would have to create more than 100 tables if I 
want to fully normalize this meta-data.


Many people have indicated that I am actually duplicating many of the 
catalog features of a relational database, and I agree.  However, it 
allows me to port this meta-data onto any user specified RDBMS without 
having to worry about database specifics.


With two [recursive] queries on the above two tables I can answer most of 
the fundamental questions regarding the meta-data:


1. Show me the members (parent -> child) of a specific feature.
2. Show me the owners (child -> parent) of a specific feature.
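Query 1 could be sketched with the WITH RECURSIVE support available from PostgreSQL 8.4 (the starting ctrl_no here is a made-up example); query 2 just swaps the join direction:

```sql
-- Hypothetical sketch of query 1 (parent -> child): walk the members
-- table down from one feature, carrying the nesting depth along.
WITH RECURSIVE tree AS (
    SELECT m.ctrl_no, m.member_no, 1 AS depth
    FROM   members m
    WHERE  m.ctrl_no = 42          -- the feature we start from (example value)
  UNION ALL
    SELECT m.ctrl_no, m.member_no, t.depth + 1
    FROM   members m
    JOIN   tree t ON m.ctrl_no = t.member_no
)
SELECT * FROM tree;
```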

Extending the above allows for an easy development plan for writing a 
generic application framework that not only manages the meta-data, but 
also runs/creates a user interface on the fly through a couple of 
variations on the above two queries, using the EAV (%_property) columns 
to supply all the default properties and behaviour of the application.


Changing the behavior of an application becomes primarily a database 
management issue, with a lot less application upgrade management in a 
distributed environment.


To come back to the original message: yes, there is a place for EAV, not 
only in research but even in business data.  I have an environmental 
software scenario where EAV on the business data gives me an edge over my 
competitors: far less time is needed to implement new features compared 
to doing the usual functional decomposition and system development life 
cycle.


In conclusion, I include an extract from an article by Dan Appleton 
(Datamation, 1983) on which my approach is based:


“The nature of end-user software development and maintenance will change 
radically over the next five years simply because 500 000 programmers will 
not be able to rewrite $400 billion of existing software (which is hostage 
to a seven- to 10-year life cycle).  They'll be further burdened by those 
new applications in the known backlog, as well as by those applications in 
the hidden backlog.  To solve the problem, dp (data processing) shops must 
improve productivity in generating end-user software and provide end-users 
with the means of generating their own software without creating 
anarchy...  The answer to this is a data-driven (meta-data) prototyping 
approach, and companies that do not move smoothly in this direction will 
either drown in their own information pollution or lose millions on 
systems that are late, cost too much, and atrophy too quickly.”


Regards,

Johan Nel
Pretoria, South Africa.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Research and EAV models

2009-10-24 Thread Merlin Moncure
On Fri, Oct 23, 2009 at 5:53 PM, Leif B. Kristensen  wrote:
> I've followed this list for quite a long time, and I think that I've
> discovered a pattern that I would like to discuss.
>
> It seems like there are two camps considering EAV models. On the one
> hand, there are researchers who think that EAV is a great way to meet
> their objectives. On the other hand, there are the "business" guys who
> think that EAV is crap.

I think where people get into trouble with EAV is that they tend to
oversimplify the data, so that the database can no longer deliver on
the requirements of the application.  They then have to support those
requirements in other layers on top of the database, which adds
complexity and eats performance.  But used judiciously to solve
particular problems, it can be a fine solution.

ISTM the EAV debate is an example of a much broader debate going on in
the information management world, which is whether to keep
data-managing logic in the database or outside of the database.  The
'in the database' camp tends to prefer things like views, rich
constraints, stored procedures and other database features that
support and maintain your data.  'out of the database' people tend to
try and keep the database simple so they can manage things in an
application framework or an ORM.  The EAV position taken to the
extreme could be considered the fundamentalist wing of the latter camp.
People here are obviously more database-biased...PostgreSQL is
probably the best open source solution out there that provides rich
database features.

Personally, I long ago came to the position that I don't like
code that writes SQL (with certain limited exceptions)...generated SQL
tends to lead to many problems IMO.

merlin



[GENERAL] Can the string literal syntax for function definitions please be dropped ?

2009-10-24 Thread Timothy Madden
Hello


Can the string literal syntax for the function body in a CREATE FUNCTION
statement please, please be dropped ?

http://www.postgresql.org/docs/8.4/interactive/plpgsql-structure.html

It is so annoying and not ISO/ANSI and not compatible with other DBMSs...

I have written a mail about SQL conformance on a list like this once
before, and I promptly got a detailed negative response back!

Now I can understand that the standard may be unrealistically demanding
for someone actually trying to build and implement a DBMS (although I
have yet to read or hear this argued), but for features already present
in PostgreSQL (binary objects, SQL functions), some effort to also make
the syntax conform to the standards would be worthwhile ...

Thank you,
Timothy Madden


[GENERAL] is postgres a good solution for billion record data

2009-10-24 Thread shahrzad khorrami
Is postgres a good solution for billion-record data?  Think of 300KB of
data inserted into the db each minute; I'm coding with PHP.
What do you recommend for managing these data?

-- 
Shahrzad Khorrami


[GENERAL] recovery mode

2009-10-24 Thread paulo matadr
Hi all,
my database went into recovery mode last week.

Analyzing the server's log file, I found this error:

cat /var/log/menssages 
 kernel: postmaster[1023]: segfault at fff0 rip 0060d993 
rsp 7fff15f53c28 error 4

What does that mean?

att

Paul


  


Re: [GENERAL] Can the string literal syntax for function definitions please be dropped ?

2009-10-24 Thread Tom Lane
Timothy Madden  writes:
> Can the string literal syntax for the function body in a CREATE FUNCTION
> statement please,
> please be dropped ?

No.  Since the function's language might be anything, there's no way to
identify the end of the function body otherwise.
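(Dollar quoting keeps this mostly painless in practice.  As an illustration, consider a body that is Python, not SQL — this is the kind of example the PostgreSQL docs use:

```sql
-- The body below is Python, not SQL; only the string-literal rule lets
-- the parser find the end of the body without understanding the language.
CREATE FUNCTION add_one(i integer) RETURNS integer AS $$
    return i + 1
$$ LANGUAGE plpythonu;
```

No bracketing syntax the SQL grammar could check would work for every possible embedded language.)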

regards, tom lane



Re: [GENERAL] Research and EAV models

2009-10-24 Thread Simon Riggs
On Fri, 2009-10-23 at 23:53 +0200, Leif B. Kristensen wrote:
> I've followed this list for quite a long time, and I think that I've 
> discovered a pattern that I would like to discuss.
> 
> It seems like there are two camps considering EAV models. On the one 
> hand, there are researchers who think that EAV is a great way to meet 
> their objectives. On the other hand, there are the "business" guys who 
> think that EAV is crap.
> 
> I've seen this pattern often enough and consistently enough that I think 
> there may be an underlying difference of objectives concerning the use 
> of databases itself that may be responsible for this divergence.
> 
> I'm a researcher type, and I've made an EAV model that suits me well in 
> my genealogy research. How can you associate an essentially unknown 
> number of sundry "events" to a "person" without an EAV model?
> 
> It seems to me that data models made for research are a quite different 
> animal than data models made for business. In research, we often need to 
> register data that may be hard to pin down in exactly the right pigeon 
> hole, but which nevertheless needs to be recorded. The most sensible way 
> to do this, IMO, is frequently to associate the data with some already-
> known or postulated entity. That's where the EAV model comes in really 
> handy.

This problem is common in many different areas, not just research. 

In most data models there will be parts that are well known and parts
that are changing rapidly. For example, in banking, a customer "account"
has been defined the same way for almost as long as computers have been
around. However, customer characteristics that the bank wishes to track
for fraud detection are newly invented each week.

The way you model data should not be only one or the other way. You
should model your well-known portions using relational models and the
faster changing/dynamically developing aspects using EAV models. You can
put both into an RDBMS, as Karsten shows. I've seen that implemented in
various ways, from including XML blobs to text strings, EAV tables or
something like hstore.
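Using the banking example above, the hybrid could be sketched like this (table and column names are hypothetical):

```sql
-- Hypothetical sketch of the hybrid approach: the stable, well-known
-- part stays relational; fast-changing characteristics go into an
-- EAV side table.
CREATE TABLE account (
    account_id serial PRIMARY KEY,
    holder     text NOT NULL,
    opened     date NOT NULL
);

CREATE TABLE account_characteristic (
    account_id integer NOT NULL REFERENCES account,
    name       text    NOT NULL,   -- e.g. a newly invented fraud signal
    value      text,
    PRIMARY KEY (account_id, name)
);
```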

-- 
 Simon Riggs   www.2ndQuadrant.com





[GENERAL] How to list a role's permissions for a given relation?

2009-10-24 Thread Kynn Jones
How can I list the permissions of a given user/role for a specific
relation/view/index, etc.?
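One possible approach (a hypothetical sketch, not an answer from this thread) is the built-in privilege-inquiry functions, or reading the ACL straight from the catalog:

```sql
-- Hypothetical sketch: ask about one privilege for one role...
SELECT has_table_privilege('someuser', 'some_view', 'SELECT');

-- ...or inspect the raw ACL of the relation from pg_class.
SELECT relacl FROM pg_class WHERE relname = 'some_view';
```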

Thanks!

Kynn


Re: [GENERAL] How can I get one OLD.* field in a dynamic query inside a trigger function ?

2009-10-24 Thread Bruno Baguette

On 24/10/09 06:46, Pavel Stehule wrote:

2009/10/24 Bruno Baguette :

Hello !

I'm trying to write a little trigger function with a variable number of
arguments (at least one, but there can be 2, 3 or 4).
These arguments are field names, so only varchar variables.

Since it is a dynamic query, I use the EXECUTE statement as explained on


CREATE OR REPLACE FUNCTION delete_acl_trigger() RETURNS trigger AS
$delete_acl_trigger$
DECLARE
BEGIN
 FOR i IN 0 .. TG_NARGS LOOP
   EXECUTE 'SELECT delete_acl(OLD.' || TG_ARGV[i] || ');';
 END LOOP;
 RETURN OLD;
END;
$delete_acl_trigger$ LANGUAGE plpgsql;

But, when the trigger is triggered, I receive this error message :
"Query failed: ERROR: OLD used in query that is not in a rule"

How can I get the value of the OLD.' || TG_ARGV[i] field ?


OLD is a variable only inside the PL/pgSQL procedure - outside it
doesn't exist.  If you have 8.4, you can use the USING clause:

EXECUTE 'SELECT $1.' || TG_ARGV[i] INTO somevar USING OLD;

http://www.postgresql.org/docs/8.4/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
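On 8.4 the whole trigger could then be written along these lines (a sketch of Pavel's suggestion, not tested code from the thread; note that TG_ARGV is indexed 0 .. TG_NARGS - 1):

```sql
-- Hypothetical 8.4 rewrite of the trigger, passing OLD in via USING
-- so the dynamic query can reference its fields as ($1).colname.
CREATE OR REPLACE FUNCTION delete_acl_trigger() RETURNS trigger AS
$delete_acl_trigger$
DECLARE
  dummy integer;
BEGIN
  FOR i IN 0 .. TG_NARGS - 1 LOOP   -- TG_ARGV is zero-based
    EXECUTE 'SELECT delete_acl(($1).' || quote_ident(TG_ARGV[i]) || ')'
      INTO dummy USING OLD;
  END LOOP;
  RETURN OLD;
END;
$delete_acl_trigger$ LANGUAGE plpgsql;
```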


Hello Pavel,

Thanks for your answer !

Unfortunately, I'm running PostgreSQL 8.3.7 (PostgreSQL 8.3.7 on 
i586-mandriva-linux-gnu, compiled by GCC i586-mandriva-linux-gnu-gcc 
(GCC) 4.2.3 (4.2.3-6mnb1)).


Since 8.4.1 is not available for Mandriva 2009.1, I can only have this 
PostgreSQL version. (I don't have root access on that server).


Is there another way, usable in PostgreSQL 8.3.7, to solve my problem ?

Many thanks in advance !

Kind Regards,

--
Bruno Baguette - bruno.bague...@gmail.com



[GENERAL] How to determine the operator class that an operator belongs to

2009-10-24 Thread Marc Munro
I'm trying to reconstruct create operator class statements from the
catalogs.  My problem is that access method operators are related only
to their operator families.

My current assumtion is that all operators for an operator class will
have the same left and right operand types as the data type (opcintype)
of the operator class, or will be None.

Here is my query:
-- List all operator family operators.
select am.amopfamily as family_oid,
   oc.oid as class_oid
from   pg_catalog.pg_amop am
inner join pg_catalog.pg_operator op
  on  op.oid = am.amopopr
left outer join pg_catalog.pg_opclass oc
  on  oc.opcfamily = am.amopfamily
  and (oc.opcintype = op.oprleft or op.oprleft = 0)
  and (oc.opcintype = op.oprright or op.oprright = 0);

Is my assumption correct?  Does the query seem correct?

Thanks in advance.

__
Marc



Re: [GENERAL] is postgres a good solution for billion record data.. what about mySQL?

2009-10-24 Thread Tawita Tererei
In addition to this, what about MySQL: how much data (records) can be
managed with it?

regards

On Sun, Oct 25, 2009 at 3:32 AM, shahrzad khorrami <
shahrzad.khorr...@gmail.com> wrote:

> is postgres a good solution for billion record data, think of 300kb data
> insert into db at each minutes, I'm coding with php
> what do you recommend to manage these data?
>
> --
> Shahrzad Khorrami
>



-- 
Mr Tererei A Aruee
IT Manager
MLPID


Re: [GENERAL] is postgres a good solution for billion record data.. what about mySQL?

2009-10-24 Thread Raymond O'Donnell
On 24/10/2009 20:46, Tawita Tererei wrote:
> In addition to this what about MySQL, how much data (records) that can be
> managed with it?
> 
> regards
> 
> On Sun, Oct 25, 2009 at 3:32 AM, shahrzad khorrami <
> shahrzad.khorr...@gmail.com> wrote:
> 
>> is postgres a good solution for billion record data, think of 300kb data
>> insert into db at each minutes, I'm coding with php
>> what do you recommend to manage these data?

I know that many people on this list manage very large databases with
PostgreSQL. I haven't done it myself, but I understand that with the
right hardware and good tuning, PG will happily deal with large volumes
of data; and 300kb a minute isn't really very much by any standards.

You can get a few numbers here: http://www.postgresql.org/about/

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



Re: [GENERAL] How can I get one OLD.* field in a dynamic query inside a trigger function ?

2009-10-24 Thread Pavel Stehule
2009/10/24 Bruno Baguette :
> Le 24/10/09 06:46, Pavel Stehule a écrit :
>>
>> 2009/10/24 Bruno Baguette :
>>>
>>> Hello !
>>>
>>> I'm trying to write a little trigger function with variable arguments
>>> quantity (at least one, but can be 2,3,4 arguments).
>>> Theses arguments are fields name, so only varchar variable.
>>>
>>> Since it is a dynamic query, I use the EXECUTE statement as explained on
>>>
>>> 
>>>
>>> CREATE OR REPLACE FUNCTION delete_acl_trigger() RETURNS trigger AS
>>> $delete_acl_trigger$
>>> DECLARE
>>> BEGIN
>>>  FOR i IN 0 .. TG_NARGS LOOP
>>>   EXECUTE 'SELECT delete_acl(OLD.' || TG_ARGV[i] || ');';
>>>  END LOOP;
>>>  RETURN OLD;
>>> END;
>>> $delete_acl_trigger$ LANGUAGE plpgsql;
>>>
>>> But, when the trigger is triggered, I receive this error message :
>>> "Query failed: ERROR: OLD used in query that is not in a rule"
>>>
>>> How can I get the value of the OLD.' || TG_ARGV[i] field ?
>>
>> OLD is variable only in PLpgSQL procedure, - outside doesn't exists.
>> If you have a 8.4, you can use USING clause
>>
>> EXECUTE 'SELECT $1.' || TG_ARGV[i] INTO somevar USING OLD;
>>
>>
>> http://www.postgresql.org/docs/8.4/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
>
> Hello Pavel,
>
> Thanks for your answer !
>
> Unfortunately, I'm running PostgreSQL 8.3.7 (PostgreSQL 8.3.7 on
> i586-mandriva-linux-gnu, compiled by GCC i586-mandriva-linux-gnu-gcc (GCC)
> 4.2.3 (4.2.3-6mnb1)).
>
> Since 8.4.1 is not available for Mandriva 2009.1, I can only have this
> PostgreSQL version. (I don't have root access on that server).
>
> Is there another way, usable in PostgreSQL 8.3.7, to solve my problem ?

you can use plperl or plpython for this task.

Pavel
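In PL/Perl the old row arrives as a plain hash, so the column name can be picked at runtime with no dynamic SQL at all; a hypothetical sketch (assuming delete_acl takes a varchar, as in the original trigger):

```sql
-- Hypothetical PL/Perl version: $_TD->{old} is a hash of the old row's
-- columns, and $_TD->{args} holds the trigger arguments, so dynamic
-- field access is just a hash lookup.
CREATE OR REPLACE FUNCTION delete_acl_trigger() RETURNS trigger AS $$
    my $plan = spi_prepare('SELECT delete_acl($1)', 'varchar');
    foreach my $col (@{$_TD->{args}}) {
        spi_exec_prepared($plan, $_TD->{old}{$col});
    }
    spi_freeplan($plan);
    return;    # proceed with the triggering operation unchanged
$$ LANGUAGE plperl;
```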

>
> Many thanks in advance !
>
> Kind Regards,
>
> --
> Bruno Baguette - bruno.bague...@gmail.com
>



Re: [GENERAL] is postgres a good solution for billion record data.. what about mySQL?

2009-10-24 Thread Scott Marlowe
On Sat, Oct 24, 2009 at 1:46 PM, Tawita Tererei  wrote:
> In addition to this what about MySQL, how much data (records) that can be
> managed with it?

That's a question for the mysql mailing lists / forums really.  I do
know there's some artificial limit in the low billions, and that you
have to create your table with some special string to get it to handle
more.

As for pgsql, we use one of our smaller db servers to keep track of
our stats.  It's got a 6 disk RAID-10 array of 2TB SATA 5400 RPM
drives (i.e. not that fast really) and we store about 2.5M rows a day
in it.  So in 365 days we could see 900M rows in it.  Each daily
partition takes about 30 seconds to seq scan.  On our faster servers,
we can seq scan all 2.5M rows in about 10 seconds.

PostgreSQL can handle it, but don't expect good performance with a
single 5400RPM SATA drive or anything.
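Daily partitions of the kind described above were built with table inheritance plus CHECK constraints in that era (before declarative partitioning); a minimal hypothetical sketch:

```sql
-- Hypothetical sketch of an inheritance-based daily partition, the
-- usual scheme at the time.  With constraint_exclusion enabled, a
-- query constrained to one day seq-scans only that child table.
CREATE TABLE stats (
    ts     timestamp NOT NULL,
    metric text      NOT NULL,
    value  bigint    NOT NULL
);

CREATE TABLE stats_2009_10_24 (
    CHECK (ts >= '2009-10-24' AND ts < '2009-10-25')
) INHERITS (stats);
```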



Re: [GENERAL] is postgres a good solution for billion record data

2009-10-24 Thread Scott Marlowe
On Sat, Oct 24, 2009 at 7:32 AM, shahrzad khorrami
 wrote:
> is postgres a good solution for billion record data, think of 300kb data
> insert into db at each minutes, I'm coding with php
> what do you recommend to manage these data?

You'll want a server with LOTS of hard drives spinning under it.  Fast
RAID controller with battery backed RAM.  Inserting the data is no
problem. 300kb a minute is nothing.  My stats machine that handles
about 2.5M rows a day during the week is inserting in the megabytes
per second (it's also the search database, so there's the indexer with
16 threads hitting it).  The stats part of the load is minuscule until
you start retrieving large chunks of data, then it's mostly sequential
reads in the 100+Megs a second.

The more drives and the better the RAID controller you throw at the
problem the better performance you'll get.  For the price of one
oracle license for one core, you can build a damned fine pgsql server
or pair of servers.



Re: [GENERAL] How can I get one OLD.* field in a dynamic query inside a trigger function ?

2009-10-24 Thread Bruno Baguette

On 24/10/09 22:09, Pavel Stehule wrote:

you can use plperl or plpython for this task.

Pavel


re-Hello Pavel,

I always used plpgsql because I *thought* it was the most powerful 
procedural language for stored procedures in PostgreSQL.


The fact that PL/pgSQL is better documented than PL/Tcl and PL/Perl (and 
PL/Python *seems* to have very poor documentation) has probably misled 
me.


Which one would you advise me to learn and use instead of PL/pgSQL ?

Can you explain why PL/Perl and PL/Python can do this task 
(using OLD in a dynamic query) where PL/pgSQL can't ?


Once again, thanks a lot for your tips ! :-)

Kind regards,

--
Bruno Baguette - bruno.bague...@gmail.com




Re: [GENERAL] is postgres a good solution for billion record data

2009-10-24 Thread Scott Marlowe
On Sat, Oct 24, 2009 at 2:43 PM, Scott Marlowe  wrote:
> On Sat, Oct 24, 2009 at 7:32 AM, shahrzad khorrami
>  wrote:
>> is postgres a good solution for billion record data, think of 300kb data
>> insert into db at each minutes, I'm coding with php
>> what do you recommend to manage these data?
>
> You'll want a server with LOTS of hard drives spinning under it.  Fast
> RAID controller with battery backed RAM.  Inserting the data is no
> problem. 300kb a minute is nothing.  My stats machine that handles
> about 2.5M rows a day during the week is inserting in the megabytes
> per second (it's also the search database, so there's the indexer with
> 16 threads hitting it).  The stats part of the load is minuscule until
> you start retrieving large chunks of data, then it's mostly sequential
> reads in the 100+Megs a second.
>
> The more drives and the better the RAID controller you throw at the
> problem the better performance you'll get.  For the price of one
> oracle license for one core, you can build a damned fine pgsql server
> or pair of servers.

Quick reference, you get one of these:

http://www.aberdeeninc.com/abcatg/Stirling-X888.htm

with dual 2.26GHz Nehalem CPUs, 48 GB of RAM, and 48 73GB 15kRPM Seagate
Barracudas for around $20,000.  That's the same cost as a single
oracle license for one CPU.  That's way overkill for what you're
talking about doing.  A machine with 8 or 16 disks could easily handle
the load you're talking about.



Re: [GENERAL] How can I get one OLD.* field in a dynamic query inside a trigger function ?

2009-10-24 Thread Pavel Stehule
2009/10/24 Bruno Baguette :
> On 24/10/09 22:09, Pavel Stehule wrote:
>>
>> you can use plperl or plpython for this task.
>>
>> Pavel
>
> re-Hello Pavel,
>
> I always used plpgsql because I *thought* it was the most powerfull
> procedural language for stored procedures in PostgreSQL.

plpgsql is the best language for stored procedures, but it simply is not
for everything. There are still some issues, but some significant limits
were removed in 8.4.

>
> The fact that PL/pgSQL is most documented than PL/Tcl and PL/Perl (and
> PL/Python *seems* to have very poor documentation) has probably made me
> wrong.
>
> Which one would you advise me to learn and use, instead of PL/pgSQL ?

It depends on what you know (whether you know Perl or Python better).
Usually I mainly use plpgsql, and for some functions plperl and C.
plpgsql is a good language as glue for SQL statements, plperl is good for
IO and CPAN libraries, and C helps with processor-intensive
functions.

>
> Can you explain me why does PL/Perl and PL/Python can does this task (using
> OLD in a dynamic query) where PL/pgSQL can't ?

External procedures (like procedures in plperl or plpython) don't have
100% integrated SQL. Also, an SQL row is mapped to a Perl hash, which
can simply be iterated over. Plpgsql is closer to classic programming
languages: it is a static language like C, Pascal or Ada, while Perl
and Python are dynamic languages, so you can do some tasks very
simply. And others not, because these languages are only partially
integrated with SQL.

>
> Once again, thanks a lot for your tips ! :-)

with pleasure

Pavel
>
> Kind regards,
>
> --
> Bruno Baguette - bruno.bague...@gmail.com
>
>



Re: [GENERAL] recovery mode

2009-10-24 Thread Tom Lane
paulo matadr  writes:
> analyzing log file of  server found this error:
> cat /var/log/menssages 
>  kernel: postmaster[1023]: segfault at fff0 rip 0060d993 
> rsp 7fff15f53c28 error 4

> which means that? 

Well, it's a crash, but there's not nearly enough information here to
say what caused it.  Do you have a core-dump file you could get a
stack trace from?

regards, tom lane



Re: [GENERAL] How to determine the operator class that an operator belongs to

2009-10-24 Thread Tom Lane
Marc Munro  writes:
> I'm trying to reconstruct create operator class statements from the
> catalogs.  My problem is that access method operators are related only
> to their operator families.

That's because operators might not be in any class.

If you really want to find this out, look in pg_depend for linkages
from operators to classes.
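A hypothetical sketch of that pg_depend lookup (unverified, column usage per the catalog docs):

```sql
-- Hypothetical sketch: find the operator class (if any) that each
-- pg_amop entry is recorded as depending on in pg_depend.
SELECT am.amopopr::regoperator AS operator,
       oc.opcname              AS opclass
FROM   pg_catalog.pg_amop   am
JOIN   pg_catalog.pg_depend d
  ON   d.classid = 'pg_catalog.pg_amop'::regclass
 AND   d.objid   = am.oid
JOIN   pg_catalog.pg_opclass oc
  ON   d.refclassid = 'pg_catalog.pg_opclass'::regclass
 AND   d.refobjid   = oc.oid;
```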

regards, tom lane



Re: [GENERAL] is postgres a good solution for billion record data.. what about mySQL?

2009-10-24 Thread Lew

Raymond O'Donnell wrote:

On 24/10/2009 20:46, Tawita Tererei wrote:

In addition to this what about MySQL, how much data (records) that can be
managed with it?

regards

On Sun, Oct 25, 2009 at 3:32 AM, shahrzad khorrami <
shahrzad.khorr...@gmail.com> wrote:


is postgres a good solution for billion record data, think of 300kb data
insert into db at each minutes, I'm coding with php
what do you recommend to manage these data?


I know that many people on this list manage very large databases with
PostgreSQL. I haven't done it myself, but I understand that with the
right hardware and good tuning, PG will happily deal with large volumes
of data; and 300kb a minute isn't really very much by any standards.

You can get a few numbers here: http://www.postgresql.org/about/


I know folks who've successfully worked with multi-terabyte databases with PG.

--
Lew
