Re: [GENERAL] Performance problem with 50,000,000 rows

2001-10-10 Thread Feite Brekeveld

David Link wrote:

Just reading it; I always enter this kind of query like:

select   i.isbn,
 t.vendor,
 i.store,
 i.qty
from bk_inv i,
 bk_title t
where
   t.vendor = '01672708' and
   i.isbn = t.isbn;

Don't know if it makes a difference, but I can imagine that it could, because
the t.vendor = '...' predicate limits the possibilities in the first stage.
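
For what it's worth, the planner does not depend on the textual order of
WHERE predicates; what usually matters here is whether the selective column
is indexed. A sketch (the index name is assumed, table and column names are
from the thread):

    -- t.vendor is the selective filter, so an index on it lets the planner
    -- find the ~100k matching titles before touching the 50M-row table:
    CREATE INDEX bk_title_vendor_idx ON bk_title (vendor);
    VACUUM ANALYZE bk_title;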

Regards,

Feite

 I'm new to PG but this just seems wrong.  Can someone take a look:

    .-----------.     .-----------.
    | bk_inv    |     | bk_title  |
    |-----------|     |-----------|
    | isbn      |-----| isbn      |
    | store     |     | vendor    |
    | qty       |     |           |
    | week      |     `-----------'
    |           |     2,000,000 recs
    `-----------'
 50,000,000 recs

   Actual record numbers:
 bk_inv  : 46,790,877
 bk_title:  2,311,710

 VENDOR REPORT

   A list of Inventory items, for any one given vendor (e.g. 01672708)

 select   i.isbn,
          t.vendor,
          i.store,
          i.qty
 from     bk_inv i,
          bk_title t
 where    i.isbn = t.isbn
 and      t.vendor = '01672708';

 This query should be instantaneous.  Granted, that's 50 million rows, but
 I have created an index on the isbn column of both tables.
 After about 25 minutes (on a 4-processor Dell 6300 with 1 GB of memory) it
 spits out:

 ERROR:  Write to hashjoin temp file failed

 tiger=# explain select * from bk_inv i, bk_title t where i.isbn = t.isbn
 and t.vendor = '5029';
 NOTICE:  QUERY PLAN:

 Merge Join  (cost=0.00..11229637.06 rows=2172466 width=72)
   ->  Index Scan using bk_title_isbn_idx on bk_title t
         (cost=0.00..390788.08 rows=107331 width=24)
   ->  Index Scan using bk_inv_isbn_idx on bk_inv i
         (cost=0.00..10252621.38 rows=46790877 width=48)

 BIG COST!

 These explain queries show the existence of the indexes and give small
 costs:

 tiger=# explain select * from bk_title where isbn = '5029';
 NOTICE:  QUERY PLAN:

 Index Scan using bk_title_isbn_idx on bk_title  (cost=0.00..4.90 rows=1
 width=24)

 tiger=# explain select * from bk_inv where isbn = '0897474228';
 NOTICE:  QUERY PLAN:

 Index Scan using bk_inv_isbn_idx on bk_inv  (cost=0.00..225.53 rows=55
 width=48)

 Note: the same tables and the same query return instantaneously with Oracle
 8.1.  What I am hoping to show is that Postgres can do our job too.

 Any help on this is much obliged.  (Yes, I ran VACUUM ANALYZE.)

 David Link
 White Plains, NY

 ---(end of broadcast)---
 TIP 6: Have you searched our list archives?

 http://archives.postgresql.org

--
Feite Brekeveld
[EMAIL PROTECTED]
http://www.osiris-it.nl







[GENERAL] how to get the md5 result of a string ?

2001-09-10 Thread Feite Brekeveld

Hi,

Is there a function like:

select md5(attribute_name) from 
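
For reference: around the 7.x releases md5 was not a built-in SQL function;
the contrib/pgcrypto module provided digest(). Later releases (7.4 onward)
added a built-in md5(text). A sketch, with some_table standing in for the
elided table name:

    -- contrib/pgcrypto (digest returns bytea, so encode to hex):
    SELECT encode(digest(attribute_name, 'md5'), 'hex') FROM some_table;

    -- built-in in later releases:
    SELECT md5(attribute_name) FROM some_table;
    -- e.g. md5('abc') = '900150983cd24fb0d6963f7d28e17f72'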

Thanks,


--
Feite Brekeveld
[EMAIL PROTECTED]
http://www.osiris-it.nl







[GENERAL] slow update but have an index

2001-08-17 Thread Feite Brekeveld

Hi,

I have a table with no relations to other tables. It has a sequence
number field (integer) and a status field being a char.

There is a unique index on the seqno field.

Now this table has about 80,000 records. I need to update 74,000 status
fields. So I made a dump, and hacked the dump into SQL statements like:

update accounting set status = 'C' where seqno = 1566385;

and the other 74,000



This is awfully slow. How come? The index on the seqno field should give
speedy access to each record.
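
A hedged aside on the likely culprit: when each statement is sent on its
own, every UPDATE commits (and syncs to disk) separately. Wrapping the
whole batch in one transaction avoids 74,000 individual commits:

    BEGIN;
    UPDATE accounting SET status = 'C' WHERE seqno = 1566385;
    -- ... the other 74,000 statements ...
    COMMIT;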

Thanks,

--
Feite Brekeveld
[EMAIL PROTECTED]
http://www.osiris-it.nl







Re: [GENERAL] slow update but have an index

2001-08-17 Thread Feite Brekeveld

Martijn van Oosterhout wrote:

 On Fri, Aug 17, 2001 at 01:08:29PM +0200, Feite Brekeveld wrote:
  Now this table has about 80,000 records. I need to update 74,000 status
  fields. So I made a dump, and hacked the dump into SQL statements like:
 
  update accounting set status = 'C' where seqno = 1566385;
  
  and the other 74,000
 
  This is awfully slow. How come ? The index on the seqno should give
  speedy access to the record.

 Well, an index speeds it up, but that times 80,000 will still take a while.
 Is there any trickery or will this work?

 update accounting set status = 'C';

 If so, that will be much faster.

No, that will not work, because the other 6,000 need not be changed. Of
course I could update everything this way and change the other 6,000 back to
their original status, but the query I issued is so slow that I think
something is wrong.





 One sequential scan is faster than 80,000 index scans.

 --
 Martijn van Oosterhout [EMAIL PROTECTED]
 http://svana.org/kleptog/
  It would be nice if someone came up with a certification system that
  actually separated those who can barely regurgitate what they crammed over
  the last few weeks from those who command secret ninja networking powers.


--
Feite Brekeveld
[EMAIL PROTECTED]
http://www.osiris-it.nl







Re: [GENERAL] slow update but have an index

2001-08-17 Thread Feite Brekeveld

Martijn van Oosterhout wrote:

 On Fri, Aug 17, 2001 at 01:40:32PM +0200, Feite Brekeveld wrote:
   Well, an index speeds it up, but that times 80,000 will still take a while.
   Is there any trickery or will this work?
  
   update accounting set status = 'C';
  
   If so, that will be much faster.
 
  No, that will not work, because the other 6,000 need not be changed. Of
  course I could update everything this way and change the other 6,000 back
  to their original status, but the query I issued is so slow that I think
  something is wrong.

 Well, there's a bit of an issue here. Each time you do an insert, the table
 gets larger, the index gets larger, etc. Disk accesses everywhere. If you
 can do it in one query then the sequential scan is much friendlier to disk
 caches and the performance will be much more consistent.

 Can you codify in an SQL query how you decide which records to change? I've
 found the best way to improve performance is to minimise the number of
 queries, letting the database do the maximum optimisation possible.


I hacked it with perl into several

update ... where seqno between x and y

statements.

That went smoothly.
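
The range-batch pattern described above, plus a one-pass alternative that
loads the target seqnos into a work table first (bounds, file path, and the
work-table name are illustrative; table and column names are from the
thread):

    -- range batches, as described:
    UPDATE accounting SET status = 'C'
     WHERE seqno BETWEEN 1500000 AND 1600000;   -- illustrative bounds

    -- one-pass alternative: load the 74,000 seqnos, then update via a join:
    CREATE TEMP TABLE to_change (seqno integer PRIMARY KEY);
    COPY to_change FROM '/tmp/seqnos.txt';      -- illustrative path
    UPDATE accounting SET status = 'C'
     WHERE EXISTS (SELECT 1 FROM to_change t
                   WHERE t.seqno = accounting.seqno);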




 --
 Martijn van Oosterhout [EMAIL PROTECTED]
 http://svana.org/kleptog/
  It would be nice if someone came up with a certification system that
  actually separated those who can barely regurgitate what they crammed over
  the last few weeks from those who command secret ninja networking powers.


--
Feite Brekeveld
[EMAIL PROTECTED]
http://www.osiris-it.nl







Re: [GENERAL] Database Size

2001-06-06 Thread Feite Brekeveld



jackie wrote:

 Hello everybody,
 how could I get the size (MB) of the database?

du on the data directory where the database is located.

Feite
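
du on $PGDATA is the period answer; for the record, later releases (8.1
onward) grew a SQL-level function for this (the database name here is a
placeholder):

    SELECT pg_size_pretty(pg_database_size('mydb'));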






Re: [GENERAL] IpcMemoryCreate

2001-05-23 Thread Feite Brekeveld

Joakim Bomelin wrote:

 In Solaris 8, I get IpcMemoryCreate: shmget(key=5432001, size=1236992,
 03600) failed: Invalid argument when I try to start postmaster. I can
 see that I don't have enough shared memory, and that I should change the
 SHMMAX parameter in the kernel.
 I'm not really up for recompiling the kernel, since I'm kinda new to
 Solaris. Is there any other way to get rid of this error? I hope so :)

Solaris has a dynamic kernel, so you don't have to rebuild it; editing
/etc/system and rebooting is enough for these settings to take effect.

You can edit the file :

/etc/system
* Oracle shared memory
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=100
set semsys:seminfo_semmns=250
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767

These are some parameters from a Solaris machine on which we run Oracle.

Regards,

Feite





 --
 Joakim Bomelin
 Gröna Verket AB
 040-691 91 36


--
Feite Brekeveld
[EMAIL PROTECTED]
http://www.osiris-it.nl





Re: [GENERAL] IpcMemoryCreate

2001-05-23 Thread Feite Brekeveld

Joakim Bomelin wrote:

 In Solaris 8, I get IpcMemoryCreate: shmget(key=5432001, size=1236992,
 03600) failed: Invalid argument when I try to start postmaster. I can
 see that I don't have enough shared memory, and that I should change the
 SHMMAX parameter in the kernel.
 I'm not really up for recompiling the kernel, since I'm kinda new to
 Solaris. Is there any other way to get rid of this error? I hope so :)

You can also bring down the number of allowed simultaneous connections,
using for instance -N 16 -B 32 on the postmaster command line.

I have experienced the same thing on our Sun; this worked fine and is no
problem if you don't need that many connections.

Feite



 --
 Joakim Bomelin
 Gröna Verket AB
 040-691 91 36


--
Feite Brekeveld
[EMAIL PROTECTED]
http://www.osiris-it.nl






[GENERAL] update ... from where id in (..) question

2001-05-02 Thread Feite Brekeveld

Hi,

I have a table with approx. 2 million records.

There were a few for which I had to update the status field, so I did:

update table set statusflag = 'U' where id in ('id10', 'id20',
'id30');

this took so long that I cancelled it, and used separate

update table set statusflag = 'U' where id = 'id10';

statements, which were executed in a fraction of a second.


Does someone have an explanation for this?
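
A hedged guess at what happened: planners of that era did not always turn a
multi-value IN list into separate index probes. Rewriting the list as
explicit ORs (or as the separate statements used above) keeps each
comparison index-friendly ("table", "id", and "statusflag" are kept from
the post):

    -- equivalent to the IN list, phrased as OR'd equality tests:
    UPDATE table SET statusflag = 'U'
     WHERE id = 'id10' OR id = 'id20' OR id = 'id30';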


--
Feite Brekeveld
[EMAIL PROTECTED]






[GENERAL] problems with pgplsql looping through query results (FOR row IN)

2001-04-04 Thread Feite Brekeveld


Hi,

 Trying to get a grip on the FOR row IN select clause using this
test function.

The FOR row|record IN form is the one I can't get to work.

CREATE FUNCTION  dosomething (varchar)
RETURNS integer
AS '
DECLARE
_conn_id ALIAS FOR $1;
ts integer;
cdrrec record;
BEGIN
  ts := 0;
  FOR row IN
 SELECT *
 FROM cdr_accounting
 WHERE connection_id = _conn_id
  LOOP
ts := ts + row.time_stamp
  END LOOP;

  return ts;
END;'
LANGUAGE 'plpgsql';

How do I get this one to work?
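
A corrected sketch: the loop variable must be the declared record
(cdrrec), not the keyword row, and the accumulation statement needs its
semicolon (table and column names are from the post):

    CREATE FUNCTION dosomething(varchar) RETURNS integer AS '
    DECLARE
        _conn_id ALIAS FOR $1;
        ts integer;
        cdrrec record;
    BEGIN
        ts := 0;
        FOR cdrrec IN SELECT * FROM cdr_accounting
                      WHERE connection_id = _conn_id
        LOOP
            ts := ts + cdrrec.time_stamp;   -- semicolon was missing
        END LOOP;
        RETURN ts;
    END;' LANGUAGE 'plpgsql';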

Thanks,

--
Feite Brekeveld
[EMAIL PROTECTED]
http://www.osiris-it.nl







[GENERAL] how to do a select distinct within pgplsql

2001-04-04 Thread Feite Brekeveld


Hi,
I want to do the following:

create function ...
as
...
BEGIN
    FOR myrec IN select distinct id from table where condition
    LOOP
    END LOOP;
END;

Now this doesn't work, because the FOR ... IN statement expects
a complete record. How to deal with this?

Tried CURSOR but that led to errors too.
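
A sketch of one way this works: a record variable is happy to hold a
one-column row, so the single distinct column is reachable as myrec.id
(mytable and some_condition are placeholders):

    DECLARE
        myrec record;
    BEGIN
        FOR myrec IN SELECT DISTINCT id FROM mytable WHERE some_condition
        LOOP
            -- each distinct id is available here as myrec.id
        END LOOP;
    END;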
Thanks,

--
Feite Brekeveld
[EMAIL PROTECTED]





[GENERAL] to_timestamp question

2001-03-21 Thread Feite Brekeveld

Hi,

I need to convert this one into an integer timestamp value:

14:57:55.540 UTC Sun Mar 4 2001

How do I deal with the milliseconds part (.540) and the time zone UTC?
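
One approach, assuming the string parses under the default datestyle: cast
to timestamp (which keeps fractional seconds and understands UTC), then
take date_part('epoch', ...); casting the result to integer drops the
milliseconds:

    SELECT date_part('epoch',
           'Mar 4 2001 14:57:55.540 UTC'::timestamp)::integer;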


Thanks,

--
Feite Brekeveld
[EMAIL PROTECTED]






[GENERAL] embedded sql pointer to structure question

2001-03-16 Thread Feite Brekeveld


Hi,

I have an application using structures say

typedef struct {
    int id;
    char value[40];
} MYREC;


When passing a record to a function insert_record(MYREC *pmyrec),
how do I refer to the fields of the struct in the SQL statement?
When doing this (test program):
When doing this (TESTPROGRAM) :

exec sql whenever sqlerror sqlprint;

exec sql include sqlca;

exec sql type MYREC is struct {
int id;
char value;
} ;

int main()
{
    MYREC mr;
    mr.id = 10;
    strcpy(mr.value, "testval");
...

    exec sql begin declare section;
    MYREC *pmr = &mr;
    exec sql end declare section;

    EXEC SQL INSERT INTO test VALUES (:pmr->id, :pmr->value);
...
}

I get the message: ERROR: The variable id is not declared
--
Feite Brekeveld
[EMAIL PROTECTED]





[GENERAL] please some help on trigger creation

2001-03-12 Thread Feite Brekeveld

Hi,

Suppose:

create table rawrecords (
    myrec text
);


create table cookedrecords (
    id    varchar(10) not null,
    name  varchar(20) not null,
    value integer     not null
);


We insert a record, say the string "thisid/thisname/12345", into the
rawrecords table.

I would like to have a trigger on that table that splits the raw record
into:

thisid | thisname | 12345

and inserts those in the cookedrecords table.

It's easy to split such a string using a perl function, but how do I then
get the pieces into the table?

Separate functions to extract each attribute? That seems quite expensive!


Am I overlooking some possibilities here?
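
One possibility, sketched in plpgsql rather than perl (split_part() only
arrived in later releases, so this uses position()/substr(); table and
column names are from the post, the trigger wiring is assumed):

    CREATE FUNCTION cook_record() RETURNS opaque AS '
    DECLARE
        rest text;
        p integer;
        f_id text;
        f_name text;
        f_value integer;
    BEGIN
        rest := NEW.myrec;
        p := position(''/'' in rest);
        f_id := substr(rest, 1, p - 1);
        rest := substr(rest, p + 1);
        p := position(''/'' in rest);
        f_name := substr(rest, 1, p - 1);
        f_value := substr(rest, p + 1);  -- plpgsql coerces text to integer
        INSERT INTO cookedrecords (id, name, value)
            VALUES (f_id, f_name, f_value);
        RETURN NEW;
    END;' LANGUAGE 'plpgsql';

    CREATE TRIGGER cook_raw AFTER INSERT ON rawrecords
        FOR EACH ROW EXECUTE PROCEDURE cook_record();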


Thanks,


Feite Brekeveld





[GENERAL] calling a fuction within a function

2001-03-09 Thread Feite Brekeveld

Hi,

I have an application that receives data with attributes concatenated into
a string, separated by '/'.

I use perl code to split it into separate attributes and store it from
one table into another (Solid database).

What I would like to do is:

- Receive the data in a table that stores the concatenated string
- have a trigger on that table that splits the string into
pieces using plperl (some tricky regexps are involved, so perl would be
nice)
- have this function insert the new record into another table with the
attributes separated.

QUESTION:

Can I call a function from the perl-based trigger to store the new
record ?

Thanks,

Feite Brekeveld





[GENERAL] PHP 3.0.16 and pg_cmdtuples

2001-02-23 Thread Feite Brekeveld

Hi,

I have a
- development machine with postgresql 7.0.3 installed and php 3.0.16
with phplib 7.0.2
- production machine idemdito

When running my webapplication on the development machine I have no
problems, but when running it from the production machine I constantly
get the message:

-- This compilation does not support pg_cmdtuples() in
db_pgsql.inc on line 122 --

which is crap, because I built it with exactly the same config options.

What could possibly be wrong here?


Thanks,


Feite Brekeveld





[GENERAL] dump of dictionary

2001-02-17 Thread Feite Brekeveld

Hi,

Is there a way to get the data dictionary back in (or almost in) the same
form in which the database was created?

When defining:

create table xxx (
    id     varchar(10) not null,
    blabla varchar(100)
);


create table refxx (
    id varchar(10) not null,
    other values ...

    primary key(id),
    foreign key(id) references xxx(id)
);


After a pg_dump you get a file containing the definitions of the triggers
that were created as a result of the foreign key clause in the table
definition. With a complex database the file becomes almost unreadable.

Is there a way to get the dictionary back in the form stated above?


Thanks,

Feite Brekeveld