Francisco Reyes wrote:
On Thu, 18 Mar 2004, Anton Nikiforov wrote:
But I'm worried about the mentioned central database that should store 240
million records daily and collect this data for years.
I have not worked with anything even remotely so big.
A few thoughts...
I think this
Imagine an Orkut-like site. Suppose we have a 'person' table of 100k
people. About 75% of these people fill in their location
(City/State/Country) information. We also have a 'city' table containing
a list of cities with their state, country and each city's
latitude/longitude. Assume all people's
On Friday 19 March 2004 08:10, Anton Nikiforov wrote:
I do not exactly know how to deal with such a huge amount of data. The disk
subsystem is a must and I do understand this. SMP architecture is a must
also. I was asking whether there is any way that data will migrate from a fast disk
subsystem to
On Friday 19 March 2004 07:29, Rajat Katyal wrote:
Hi:
The trigger function pasted below is executing slowly.
Please refer to the lines of code in BOLD, commented as performance
degradation.
A small point Rajat, but try and include a subject-line next time - it makes
it easier for people
On Fri 19/03/2004 at 11:05, David Garamond wrote:
Imagine an Orkut-like site. Suppose we have a 'person' table of 100k
people. About 75% of these people fill in their location
(City/State/Country) information. We also have a 'city' table containing
a list of cities with their state, country
Yannick Warnier wrote:
Imagine an Orkut-like site. Suppose we have a 'person' table of 100k
people. About 75% of these people fill in their location
(City/State/Country) information. We also have a 'city' table containing
a list of cities with their state, country and each city's
hello,
is there a way to access the statement firing a trigger
from within a function (C for example) ?
thanks,
razvan radu
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match
Hi:
I have a trigger function defined for the insert
(pasted below) for the table, say "transform_item_tgt". The
purpose of my trigger function is that before inserting each row it checks
whether the value in the primary key column exists or not. The checking is
done by executing the below
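The usual fix for this pattern is to test existence rather than count matching rows, so the scan can stop at the first hit. A minimal sketch, assuming a hypothetical primary key column named item_id (the post's actual column name is cut off):

```sql
-- Hypothetical sketch: stop at the first matching row instead of counting.
-- Assumes a primary key column item_id; substitute the real column name.
IF EXISTS (SELECT 1 FROM transform_item_tgt
           WHERE item_id = NEW.item_id) THEN
    RETURN NULL;   -- duplicate key: skip this insert
END IF;
RETURN NEW;
```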
* Nick Barr ([EMAIL PROTECTED]) wrote:
http://postgis.refractions.net
I second this recommendation. Additionally, consider checking out
GDAL, ogr2ogr and the TIGER dataset provided by the US Census (if you're
in the US anyway). It provides information about basically all the
streets,
I'm building some functions in plpgsql, and I got to a point where I thought:
How are transactions used in the function? Is it possible to roll back the
modifications that were made to the tables at some point in the plpgsql
function?
I have a table of 2mil records. The table consists of two columns, id
(BYTEA/GUID, PK) and i (INT, UNIQUE INDEX). Could someone explain why,
when using a bigint value like this:
select * from partition where i=30;
or
select * from partition where i in (1,2,3,30);
Postgres
Hi,
how can we do this ?
CREATE TABLE test (key1 int, key2 int , primary key (key1,key2), parent
?kind of data type? constraint parent references test );
Cédric BRINER
On Fri, 19 Mar 2004, Anton Nikiforov wrote:
Or were you talking about something else like storing different data in
different media speeds? (Like Hierarchical Storage Management)
I do not exactly know how to deal with such a huge amount of data. The disk subsystem
is a must and I do
Table of 2mil records, two columns: id (BYTEA/GUID, PK) and i (INT,
UNIQUE INDEX). i values range from 1 to 200.
I'm creating several partial index for i as follows:
create unique index i_partition_i_1to100k on partition(i)
where i >= 1 and i <= 100000;
create unique index
David Garamond wrote:
I have a table of 2mil records. The table consists of two columns, id
(BYTEA/GUID, PK) and i (INT, UNIQUE INDEX). Could someone explain why,
when using a bigint value like this:
select * from partition where i=30;
or
select * from partition where i in
I posted this mail yesterday to the hackers list as a reply/follow-up,
but it has stalled, so I re-post it here...
Hi postgresql experts,
I am new to the group so forgive any misunderstanding or repetition.
I noticed a previous mail concerning unsigned types, claiming
several solutions, which
On Fri, 19 Mar 2004, BRINER Cedric wrote:
CREATE TABLE test (key1 int, key2 int , primary key (key1,key2), parent
?kind of data type? constraint parent references test );
Foreign keys must reference a unique value. If the only uniqueness
constraint is (key1, key2) as above, then it
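Concretely, the reply above means the referencing side needs one column per key column, and the foreign key must target the full unique (primary) key. A sketch, with parent_key1/parent_key2 as hypothetical column names:

```sql
CREATE TABLE test (
    key1        int,
    key2        int,
    parent_key1 int,
    parent_key2 int,
    PRIMARY KEY (key1, key2),
    -- the self-reference must cover the whole unique key
    FOREIGN KEY (parent_key1, parent_key2) REFERENCES test (key1, key2)
);
```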
Hi.
I want to install postgres 7.4.2 on solaris 9.
I configure with: ./configure --enable-thread-safety
and it works fine (no error).
When I do gmake, I get this error:
thread.c: In function `pqGetpwuid':
thread.c:116: error: too many arguments to function `getpwuid_r'
make[3]: *** [thread.o]
David Garamond [EMAIL PROTECTED] writes:
I have a table of 2mil records. The table consists of two columns, id
(BYTEA/GUID, PK) and i (INT, UNIQUE INDEX). Could someone explain why,
when using a bigint value like this:
Postgres immediately switches from index scan to seq scan?
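For reference, this was a known planner limitation before PostgreSQL 8.0: a literal whose type does not exactly match the indexed column's type (e.g. an int8 constant against an int4 index, or vice versa) could not use the index. The usual workarounds, sketched under that assumption:

```sql
-- quote the constant so its type is resolved from the column
SELECT * FROM partition WHERE i = '30';

-- or cast the constant explicitly to the column's type
SELECT * FROM partition WHERE i = 30::int;
```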
Bas Scheffers [EMAIL PROTECTED] writes:
Once you have that, the rest is easy. Create a column of the type point
and store the grid coordinates in there. Then just use the contains
operator (~) in a query.
(http://www.postgresql.org/docs/7.4/static/functions-geometry.html)
Example:
My
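The approach quoted above can be sketched as follows, assuming a hypothetical places table with a point column named location holding (longitude, latitude):

```sql
CREATE TABLE places (
    name     text,
    location point    -- (longitude, latitude)
);

-- everything inside a bounding box; in 7.4's geometric operators,
-- ~ means "contains" (left argument contains right argument)
SELECT name
FROM places
WHERE box '((-0.5,51.2),(0.3,51.7))' ~ location;
```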
David Garamond [EMAIL PROTECTED] writes:
explain tells me it is using the partial index. But when I create
partial index on the id column (BYTEA):
create unique index i_partition_id_000 on partition(id)
where id like '\\000%';
explain select * from partition where id like
On Friday 19 March 2004 13:01, Martin Marques wrote:
I'm building some functions in plpgsql, and I got to a point where I
thought: How are transactions used in the function? Is it possible to
roll back the modifications that were made to the tables at some point in
the plpgsql function?
You can
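For reference: a plpgsql function always runs inside the caller's transaction, so it cannot COMMIT or ROLLBACK itself. Starting with PostgreSQL 8.0, a BEGIN ... EXCEPTION block undoes just that block's changes when an error is raised. A sketch, with accounts as a hypothetical table:

```sql
CREATE OR REPLACE FUNCTION transfer(src int, dst int, amount numeric)
RETURNS boolean AS '
BEGIN
    UPDATE accounts SET balance = balance - amount WHERE id = src;
    UPDATE accounts SET balance = balance + amount WHERE id = dst;
    RETURN true;
EXCEPTION WHEN others THEN
    -- both UPDATEs in this block are rolled back;
    -- the surrounding transaction keeps running
    RETURN false;
END;
' LANGUAGE plpgsql;
```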
I've done this with the debug level all the way up...
You can get the file here( 20 megs uncompressed, 130K compressed ):
http://www.papajohns.com/postgres/postgres.log.bz2
While this dump was running, top reported that pg_dump was taking up
around 500-550megs. Then the machine stopped
On Fri, 19 Mar 2004, Pierre Didelon wrote:
I posted this mail yesterday to the hackers list as a reply/follow-up,
but it has stalled, so I re-post it here...
Hi postgresql experts,
I am new to the group so forgive any misunderstanding or repetition.
I noticed a previous mail concerning
On Fri, Mar 19, 2004 at 17:05:11 +0700,
David Garamond [EMAIL PROTECTED] wrote:
Imagine an Orkut-like site. Suppose we have 'person' table of 100k
people. About 75% of these people fill in their location
(City/State/Country) information. We also have a 'city' table containing
list of
Hi,
I am pretty new to pgsql but have basic knowledge of SQL. I am trying
to figure out how to solve the following with a function:
I want to run a function (I guess written in pl/pgsql) that takes two
variables (username and nasname).
Depending on boxname I want two different results.
Max Ahston wrote:
create or replace function get_dns(varchar(40), varchar(40)) returns
setof holder as
'
declare
r holder%rowtype;
begin
IF ($2 == tigris) then
for r in select Attribute, op, value FROM radreply
WHERE username = $1 loop
return next r;
end
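The pasted function stops short and has two syntax slips: plpgsql comparison is = (not ==), and a string literal must be quoted, with the quotes doubled inside the single-quoted function body. A corrected sketch, where the second branch's table radreply2 is a hypothetical stand-in for whatever the cut-off original selects from:

```sql
CREATE OR REPLACE FUNCTION get_dns(varchar, varchar)
RETURNS SETOF holder AS '
DECLARE
    r holder%rowtype;
BEGIN
    IF $2 = ''tigris'' THEN
        FOR r IN SELECT Attribute, op, value FROM radreply
                 WHERE username = $1 LOOP
            RETURN NEXT r;
        END LOOP;
    ELSE
        -- radreply2 is a hypothetical table name; the original is cut off
        FOR r IN SELECT Attribute, op, value FROM radreply2
                 WHERE username = $1 LOOP
            RETURN NEXT r;
        END LOOP;
    END IF;
    RETURN;
END;
' LANGUAGE plpgsql;
```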
Hello list,
Have a table with a char(3) column; the column should contain values like
'001','002','003'... and so on. Keys must be consecutive; no gaps like
'001','005','007' should be allowed. Any idea how to get it done using a
trigger?
Thanks in advance
--
Josué Maldonado
Hello Richard,
Thanks for your idea. I had something similar but I was unable to write
it in plpgsql; now I have it done.
Thanks.
Richard Huxton wrote:
On Friday 19 March 2004 19:25, Josué Maldonado wrote:
Hello list,
Have a table with char(3) column, the column should contain values like
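One way to enforce consecutive codes, along the lines discussed above, is a BEFORE INSERT trigger that derives the next value from the current maximum. A sketch, with items/code as hypothetical table and column names:

```sql
CREATE OR REPLACE FUNCTION next_code() RETURNS trigger AS '
DECLARE
    last_code char(3);
BEGIN
    SELECT max(code) INTO last_code FROM items;
    IF last_code IS NULL THEN
        NEW.code := ''001'';
    ELSE
        NEW.code := to_char(to_number(last_code, ''999'') + 1, ''FM000'');
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER items_code BEFORE INSERT ON items
    FOR EACH ROW EXECUTE PROCEDURE next_code();
```

Note that concurrent inserts would need a lock on the table, since two sessions could otherwise read the same max() and collide.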
GeneralBits is published weekly on Mondays, for those
who do not know already. Every issue is entirely
about PostgreSQL.
There are many good (and some mediocre :-)
articles available; check out the archive area.
There are currently 66 issues available for
review--over a year's worth of
We do have quite a few triggers. Some are RI triggers Postgres
automagically creates. The remaining triggers are used for things like
audit tables, or logging who updated a row, etc...
I did what you suggested. I ran pg_dump dbname > file.dmp. Immediately
afterwards, kill -ABRT pid. I ended up