[GENERAL] Table partitioning?

2001-06-10 Thread Gerald Gutierrez


I was browsing some database mailing lists and found this message available 
at:

--
(http://www.phpbuilder.com/forum/read.php3?num=2&id=139678&thread=139671)
I don't know MSSQL, but if it tries to compete with Oracle, it should have 
this functionality (which Oracle does):

divide a table into smaller partitions, and decide at query time which 
partition to search based on a set field/rule.

An example of this would be: given the first character of a text column, 
insert the row into the table partition that holds the other rows starting 
with the same letter. When a query is run with WHERE mycol = 'axxs', the 
first thing Oracle does is decide which table partition to use, and only 
then does it evaluate the rest of the WHERE clause.

You should be able to design a database application without ever caring how 
many rows will be in each table...as I said, dealing with that is a DBA issue.
--

It seems that, depending on how data is actually stored on disk and 
accessed, this method of partitioning a table internally in the database 
could be a benefit. Has anyone investigated this for PostgreSQL?
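
(I suppose one could fake something like this by hand with table inheritance; 
a rough sketch of what I mean, with made-up table names and reusing the mycol 
example above. As far as I know the planner would not use the CHECK 
constraints to skip children, so the routing would have to be done by the 
application or by rules/triggers.)

---

CREATE TABLE items (mycol TEXT);

/* One child table per leading letter; the CHECK constraint documents
   which rows each child is supposed to hold. */
CREATE TABLE items_a (CHECK (mycol LIKE 'a%')) INHERITS (items);
CREATE TABLE items_b (CHECK (mycol LIKE 'b%')) INHERITS (items);

/* The application (or a rule/trigger) inserts into the matching child... */
INSERT INTO items_a (mycol) VALUES ('axxs');

/* ...while a SELECT against the parent sees the rows of all children. */
SELECT * FROM items WHERE mycol = 'axxs';

---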





Re: [GENERAL] inserting, index and no index - speed

2001-06-10 Thread zilch


The test script that set up the tables is the following:

---

/* Cleanup */

DROP SEQUENCE index_with_id_seq;
DROP SEQUENCE index_without_id_seq;

DROP INDEX name_index;

DROP TABLE index_with;
DROP TABLE index_without;

/* Create a table with an index */

CREATE TABLE index_with (

 id SERIAL,
 name TEXT
  
);
  
CREATE INDEX name_index ON index_with(name);
  
/* Create a table without an index */
  
CREATE TABLE index_without (
  
  id SERIAL,
  name TEXT

);

---

This is run just before each test,
then I have a little C++ program that inserts N rows into the tables and
measures how long it takes.

A DELETE FROM (on both tables) followed by a VACUUM is also run
before each test run (which consists of regular INSERT statements).

Am I doing anything wrong?

The postmaster (7.1.2) is run with the current Debian testing/unstable
standard options.
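
For reference, each run boils down to roughly this (a sketch only; the row
values and the timing are handled by the C++ driver, which just issues plain
INSERT statements):

---

/* Per-run preparation */
DELETE FROM index_with;
DELETE FROM index_without;
VACUUM;

/* The driver then repeats N statements of this form, each one running
   as its own implicit transaction: */
INSERT INTO index_with (name) VALUES ('some generated name');
INSERT INTO index_without (name) VALUES ('some generated name');

---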

Daniel Akerud

  I just noticed that inserting 10000 tuples in an indexed table took exactly
  the same amount of time as inserting 10000 tuples in a non-indexed table
  (194 seconds). Why is this? In MySQL the difference is about 50% longer for
  an indexed table.
 
 Surprises me too.  Which PG version, and what are the test conditions
 exactly?  (Table and index declarations; is table empty initially;
 how is backend being driven, and what commands are issued exactly?
 How many shared buffers, platform, etc)
 
 Under PG 7.1, it's possible that your test caused no actual I/O except
 to the WAL log ... but I'd still think that the volume of WAL I/O
 would be greater when writing an index.
 
   regards, tom lane
 




Re: [GENERAL] inserting, index and no index - speed

2001-06-10 Thread zilch


I just rerun the application to confirm that it was really like that. So,
using the test-environment previously described i got the following output:

Database vacuumed
pg: Trying 1000 inserts with indexing on...
Time taken: 24 seconds

pg: Trying 1000 inserts with indexing off...
Time taken: 22 seconds

Database vacuumed
pg: Trying 10000 inserts with indexing on...
Time taken: 220 seconds

pg: Trying 10000 inserts with indexing off...
Time taken: 220 seconds

Daniel Akerud

 
 Yes, actually...
 forgot to say that... VACUUM and VACUUM ANALYZE before each test run...
 
 Thanks
 
 Daniel Akerud
 
  Did you VACUUM ANALYZE as well, after you created the tables/indexes?
  



---
Daniel Åkerud, [EMAIL PROTECTED]











Re: [GENERAL] inserting, index and no index - speed

2001-06-10 Thread zilch


Thanks Tom,
really appreciate it!

Daniel Akerud

 [EMAIL PROTECTED] writes:
  CREATE TABLE index_with (
   id SERIAL,
   name TEXT
  );
  CREATE INDEX name_index ON index_with(name);
   
  CREATE TABLE index_without (
id SERIAL,
name TEXT
  );
 
 Actually, what you are comparing here is a table with two indexes to a
 table with one index.  Moreover, both of them incur a sequence nextval()
 operation for each insert.  So it's not two files updated versus one,
 it's four versus three.
 
 Also, given the small size of these tables, it's likely that most of the
 updates occur in in-memory disk buffers.  If you are running with fsync
 on, nearly all the actual I/O per insert will be the write and fsync of
 the WAL log.  The time required for that is not going to be very
 sensitive to the amount of data written, as long as it's much less than
 one disk block per transaction, which will be true in both these cases.
 You end up writing one block to the log per transaction anyway.
 
 You might try running the ten thousand inserts as a single transaction
 (do begin and end around them).  It'd also be educational to try it
 with fsync disabled, or with id declared as plain int not serial.
 
   regards, tom lane




Re: [GENERAL] inserting, index and no index - speed

2001-06-10 Thread zilch


 You might try running the ten thousand inserts as a single transaction
 (do begin and end around them).  

A HUGE difference (I also removed the id field (serial) completely, keeping
only name):

Database vacuumed
pg: Trying 25000 inserts on index_with...
Time taken: 12 seconds

Database vacuumed
pg: Trying 25000 inserts on index_without...
Time taken: 12 seconds --- MIGHT BE BECAUSE OF FSYNC!? (fsync was also on for 
the previous run)

Database vacuumed
pg: Trying 30000 inserts on index_with...
Time taken: 15 seconds

Database vacuumed
pg: Trying 30000 inserts on index_without...
Time taken: 12 seconds

Database vacuumed
pg: Trying 35000 inserts on index_with...
Time taken: 21 seconds

Database vacuumed
pg: Trying 35000 inserts on index_without...
Time taken: 14 seconds

I can't believe what a difference that made; it now takes only about a
hundredth of the time. How can putting the inserts in a transaction make them
faster? I thought that would make it slower.

Daniel Akerud




Re: [GENERAL] inserting, index and no index - speed

2001-06-10 Thread Tom Lane

[EMAIL PROTECTED] writes:
 I can't believe what a difference that made; it now takes only about a
 hundredth of the time. How can putting the inserts in a transaction make them
 faster? I thought that would make it slower.

Everything is always a transaction in Postgres.  If you don't say
begin/end, then there's an implicit begin and end around each individual
query.  So your first set of tests were paying transaction commit
overhead for each insert.
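
For concreteness, the two ways of running the same inserts look roughly like
this (a sketch only, reusing the index_with table from the earlier test
script):

---

/* Implicit transactions: each statement is its own transaction, so each
   INSERT pays a full commit (and WAL flush) on its own. */
INSERT INTO index_with (name) VALUES ('row 1');
INSERT INTO index_with (name) VALUES ('row 2');

/* Explicit transaction: one commit for the whole batch. */
BEGIN;
INSERT INTO index_with (name) VALUES ('row 1');
INSERT INTO index_with (name) VALUES ('row 2');
/* ... remaining rows ... */
END;

---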

regards, tom lane




[GENERAL] Greetings, Thinking about converting

2001-06-10 Thread Miguel Angel Heredia



Hi, I have a database whose two principal tables have 716,819 rows and 
43,157,442 rows respectively, related to each other. I have some triggers, 
stored procedures and views, with a frontend made in Visual Basic that uses 
ODBC and ADO to connect to the database. I'm considering moving from 
Microsoft SQL Server 7.0 to PostgreSQL because this setup is getting big and 
getting slower. I'm backing up most things, but I would like to know if I can 
get some improvement by changing from Win NT 4.0/MSSQL to Linux/PostgreSQL.

What do you all think? Should I change? 



Re: [GENERAL] Greetings, Thinking about converting

2001-06-10 Thread GH

On Sun, Jun 10, 2001 at 06:32:58PM -0600, some SMTP stream spewed forth: 
 [...]
 What do you all think? Should I change? 

Yes.

gh





Re: [GENERAL] foreign keys constraints, depending on each other

2001-06-10 Thread Stephan Szabo


On Sun, 10 Jun 2001 [EMAIL PROTECTED] wrote:

 
   I was just creating this little database to demonstrate the use of
   foreign key constraints.
   
   I was about to create 3 tables, namely mother, father and child. Mother has
   a foreign key pointing at father ( id ), and father has a foreign key
   pointing at mother ( id ). Child has one pointer to mother ( id ) and one
   pointer to father ( id ). How can I prevent the error message from occurring?
  
  You don't put the constraint at table creation time.  The table referenced
  by the REFERENCES clause has to exist.  Use ALTER TABLE to add the constraint
  after creating table father.
  
 
 I tried:
 
 BEGIN;
 SET CONSTRAINTS ALL DEFERRED;
 INSERT INTO mother (fatherID, name) VALUES ( 1, 'mamma' ) ;
 INSERT INTO father (motherID, name) VALUES ( 1, 'pappa' ) ;
 INSERT INTO child (motherID, fatherID, name) VALUES (1, 1, 'barn 1') ;
 INSERT INTO child (motherID, fatherID, name) VALUES (1, 1, 'barn 2') ;
 COMMIT;
  
 ...which did not work. It still complains that the key referenced from mother
 is not found in father.

Ah, that's because you didn't define the constraints DEFERRABLE.  SET
CONSTRAINTS ALL DEFERRED only changes the state of deferrable constraints.

If you don't specify a time, it's INITIALLY IMMEDIATE.  If it's initially
immediate, it's NOT DEFERRABLE unless DEFERRABLE is explicitly given.
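
For example (a sketch only; column names and types are guessed from the INSERT
statements above): create the tables without the cross-references, then add the
foreign keys with ALTER TABLE, declared DEFERRABLE, so that deferring actually
applies to them:

---

CREATE TABLE mother (id SERIAL PRIMARY KEY, fatherID INTEGER, name TEXT);
CREATE TABLE father (id SERIAL PRIMARY KEY, motherID INTEGER, name TEXT);
CREATE TABLE child  (id SERIAL PRIMARY KEY, motherID INTEGER,
                     fatherID INTEGER, name TEXT);

ALTER TABLE mother ADD CONSTRAINT mother_father_fk
  FOREIGN KEY (fatherID) REFERENCES father (id) DEFERRABLE INITIALLY DEFERRED;
ALTER TABLE father ADD CONSTRAINT father_mother_fk
  FOREIGN KEY (motherID) REFERENCES mother (id) DEFERRABLE INITIALLY DEFERRED;
ALTER TABLE child ADD CONSTRAINT child_mother_fk
  FOREIGN KEY (motherID) REFERENCES mother (id) DEFERRABLE INITIALLY DEFERRED;
ALTER TABLE child ADD CONSTRAINT child_father_fk
  FOREIGN KEY (fatherID) REFERENCES father (id) DEFERRABLE INITIALLY DEFERRED;

/* With the constraints deferred, the checks only run at COMMIT, so the
   circular inserts go through: */
BEGIN;
INSERT INTO mother (fatherID, name) VALUES (1, 'mamma');
INSERT INTO father (motherID, name) VALUES (1, 'pappa');
INSERT INTO child (motherID, fatherID, name) VALUES (1, 1, 'barn 1');
INSERT INTO child (motherID, fatherID, name) VALUES (1, 1, 'barn 2');
COMMIT;

---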





Re: [GENERAL] Greetings, Thinking about converting

2001-06-10 Thread GH

On Sun, Jun 10, 2001 at 07:31:10PM -0600, some SMTP stream spewed forth: 
  I suggest you check out FreeBSD: www.freebsd.org.
  If you have no experience with either FreeBSD or some Linux variant, I
  would say switch to FreeBSD, otherwise use whatever you are comfortable
  with other than Windows. PostgreSQL will run under Windows, but you
  should get better overall performance using Linux and much better
  performance using FreeBSD. Of course, if you need crappy hardware
  support, Linux is a hardware slut; but if you are working on a server
  farm or a serious, heavily loaded server, you would be better using
  high-end hardware in a FreeBSD box if not using Sun equipment and
  software which is probably not in your budget.
 
 Right now I'm using a ProliantLM350 with a SmartArray 431 and 4 SCSI
 non-hotswap Ultra3 disks, 20GB each, 1GB of memory and two 933MHz processors,
 and I have had experience with Linux.

You should be fine on Linux. I normally would strongly *not* say to use
Linux over FreeBSD, but as I said, -current (like a beta, only better)
has been in a sad state lately, and I do not know what later releases are
going to look like. It should still be better than Linux, but you
shouldn't have to learn something new if everything sucks anyway.

(Of course, the fine hackers behind PostgreSQL ensure that it runs well on
just about anything, so performance should not be an issue.)


gh

 
  Basically, all of the operating systems have been going to hell lately
  (FreeBSD included, I don't know what the developers are smoking, but damn...),
  so you should be fine on anything, even Windows.
 
 :/ ? too sad...
 
  It is unfortunate that everything has to suck so badly.
 Yes...
  Cheers.
  gh




Re: [GENERAL] Greetings, Thinking about converting

2001-06-10 Thread Miguel Angel Heredia

 You should be fine on Linux. I normally would strongly *not* say to use
 Linux over FreeBSD, but as I said, -current (like a beta, only better)
 has been in a sad state lately, and I do not know what later releases are
 going to look like. It should still be better than Linux, but you
 shouldn't have to learn something new if everything sucks anyway.


I'll be fine on FreeBSD too, I guess.

 (Of course, the fine hackers behind PostgreSQL ensure that it runs well on
 just about anything, so performance should not be an issue.)

An opinion that you don't share, as I see... but, OS and hardware apart, what
about the MSSQL vs MySQL vs PostgreSQL discussion? What do you think?

Speed is what I am looking for.








[GENERAL] Win32 Compilation

2001-06-10 Thread Raymond

From the 7.1 documentation, it appears that PostgreSQL can be compiled on
the Win32 platform via Visual C++.

Has anybody had experience using Watcom compilers for the Win32
environment?

I really don't wish to use MS tools in my current endeavors.

Raymond





Re: [GENERAL] Greetings, Thinking about converting

2001-06-10 Thread GH

 An opinion that you don't share, as I see... but, OS and hardware apart, what
 about the MSSQL vs MySQL vs PostgreSQL discussion? What do you think?

General consensus is that MySQL is the fastest for simple selects. If you
have a decent number of (more than 2 or 3) concurrent users, 
PostgreSQL will serve you better because a site with significant 
traffic will benefit from transactions and strong data integrity.

If you care about your data, use PostgreSQL.

I would use PostgreSQL anyway, just because it is overall better (which
is a *very* subjective measure) and it /feels/ better. Furthermore, a
stronger DBMS is more likely to serve your needs in the future.

 Speed is what I am looking for.

In what type of situation are you planning to use this setup?


gh

(Not a very strong argument, but MSSQL /is/ made by Microsoft, after all...)

