Re: [GENERAL] Big Tables vs. many Tables vs. many Databases

2004-02-19 Thread Lincoln Yeoh
At 01:44 AM 2/19/2004 +0100, Dirk Olbertz wrote:
I'm currently about to redesign a database which you could compare with a
database for managing a library. Now this solution will not only manage one
library, but 100 to 500 of them. Currently, e.g., all the data about the
inventory (books) is held in one table for all the libraries.
Is it useful to spread this out to one table per library, by e.g. giving each
table an id as a suffix?
If the libraries don't belong together (administered and used 
independently) then it may be worth splitting them into different databases.

If you are really managing ALL libraries together then keep them in the 
same database and even in the same tables.

Of course you have to be careful when sharing tables - if you screw up, 
data could be exposed to the wrong parties.
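
For instance, a minimal sketch of the shared-table layout, with a 
per-library view to reduce the risk of exposing rows to the wrong party 
(the table and column names here are made up):

CREATE TABLE book (
    library_id  integer NOT NULL,  -- which library owns this copy
    title       text,
    author      text
);
CREATE INDEX book_library_idx ON book (library_id);

-- one view per library limits what each front end can see
CREATE VIEW library_42_books AS
    SELECT title, author FROM book WHERE library_id = 42;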

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Inner join question

2004-02-19 Thread Jan Poslusny
Hi,
try this on psql console:
explain analyze select tq1.*, tq2.* from
cal_quat_1 tq1, cal_quat_2 tq2
where tq1.timestamp = tq2.timestamp
and tq1.timestamp > '2004-01-12 09:47:56. +0'::timestamp with time zone
and tq1.timestamp < '2004-01-12 09:50:44.7187 +0'::timestamp with time zone
order by tq1.timestamp;
... and examine the generated query plan (or post it)
regards, pajout
P.S.
And what about vacuum full analyze? :)
Randall Skelton wrote:

Greetings all,

I am trying to do what should be a simple join but the tables are 
large and it is taking a long, long time.  I have the feeling that I 
have stuffed up something in the syntax.

Here is what I have:

telemetry=> select (tq1.timestamp = tq2.timestamp) as timestamp, 
tq1.value as q1, tq2.value as q2 from cal_quat_1 tq1 inner join 
cal_quat_2 as tq2 using (timestamp) where timestamp > '2004-01-12 
09:47:56. +0' and timestamp < '2004-01-12 09:50:44.7187 +0' order 
by timestamp;

telemetry=> \d cal_quat_1
        Table "cal_quat_1"
  Column   |           Type           | Modifiers
-----------+--------------------------+-----------
 timestamp | timestamp with time zone |
 value     | double precision         |

telemetry=> \d cal_quat_2
        Table "cal_quat_2"
  Column   |           Type           | Modifiers
-----------+--------------------------+-----------
 timestamp | timestamp with time zone |
 value     | double precision         |
My understanding of an inner join is that the query above will 
restrict to finding tq1.timestamp, tq1.value and then move onto 
tq2.value to search the subset.  I have tried this with and without 
the '=' sign and it isn't clear if it is making any difference at all 
(the timestamps are identical in the range of interest).  I have not 
allowed the query to finish as it seems to take more than 10 minutes.  
Both timestamps are indexed and I expect about 150 rows to be 
returned.  At the end of the day, I have four identical tables of 
quaternions (timestamp, value) and I need to extract them all for a 
range of timestamps.

Cheers,
Randall
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] Inner join question

2004-02-19 Thread Nick Barr
Randall Skelton wrote:

Greetings all,

I am trying to do what should be a simple join but the tables are 
large and it is taking a long, long time.  I have the feeling that I 
have stuffed up something in the syntax.

Here is what I have:

telemetry=> select (tq1.timestamp = tq2.timestamp) as timestamp, 
tq1.value as q1, tq2.value as q2 from cal_quat_1 tq1 inner join 
cal_quat_2 as tq2 using (timestamp) where timestamp > '2004-01-12 
09:47:56. +0' and timestamp < '2004-01-12 09:50:44.7187 +0' order 
by timestamp;

telemetry=> \d cal_quat_1
        Table "cal_quat_1"
  Column   |           Type           | Modifiers
-----------+--------------------------+-----------
 timestamp | timestamp with time zone |
 value     | double precision         |

telemetry=> \d cal_quat_2
        Table "cal_quat_2"
  Column   |           Type           | Modifiers
-----------+--------------------------+-----------
 timestamp | timestamp with time zone |
 value     | double precision         |
My understanding of an inner join is that the query above will 
restrict to finding tq1.timestamp, tq1.value and then move onto 
tq2.value to search the subset.  I have tried this with and without 
the '=' sign and it isn't clear if it is making any difference at all 
(the timestamps are identical in the range of interest).  I have not 
allowed the query to finish as it seems to take more than 10 minutes.  
Both timestamps are indexed and I expect about 150 rows to be 
returned.  At the end of the day, I have four identical tables of 
quaternions (timestamp, value) and I need to extract them all for a 
range of timestamps.

Cheers,
Randall
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
We need more information to be able to help further. Can you supply:

1. Total number of rows in each table.
2. Results from running explain analyze on your query
3. Key configuration values from postgresql.conf
4. Basic hardware config. (CPU type and number, Total RAM, HDD type, 
size and speed)

But in the meantime, can you try the following query instead?

select (tq1.timestamp = tq2.timestamp) as timestamp, tq1.value as q1, 
tq2.value as q2 from cal_quat_1 tq1, cal_quat_2 as tq2 WHERE 
tq1.timestamp=tq2.timestamp AND tq1.timestamp BETWEEN '2004-01-12 
09:47:56. +0'::timestamp AND '2004-01-12 09:50:44.7187 
+0'::timestamp order by tq1.timestamp;

As far as I know, and someone please correct me, this allows the planner 
the most flexibility when figuring out the optimum plan.
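
Extending that rewrite to all four quaternion tables would look roughly 
like this (just a sketch; I am assuming the remaining tables are named 
cal_quat_3 and cal_quat_4):

select tq1.timestamp, tq1.value as q1, tq2.value as q2,
       tq3.value as q3, tq4.value as q4
from cal_quat_1 tq1, cal_quat_2 tq2, cal_quat_3 tq3, cal_quat_4 tq4
where tq1.timestamp = tq2.timestamp
  and tq1.timestamp = tq3.timestamp
  and tq1.timestamp = tq4.timestamp
  and tq1.timestamp between '2004-01-12 09:47:56 +0'
                        and '2004-01-12 09:50:44.7187 +0'
order by tq1.timestamp;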

Thanks

Nick



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] wishlist: dynamic log volume control

2004-02-19 Thread Tom Lane
Bruce Momjian <[EMAIL PROTECTED]> writes:
 The issue is that you can't set log_connections from psql.  Should we
 improve the doc wording?

I think the only thing missing is that in the introduction to section
16.4, where it says "The configuration file is reread whenever the
postmaster process receives a SIGHUP signal", we should add something
along the lines of "Some parameters can only be set at server start;
any changes to their entries in the configuration file will be ignored
until the server is restarted."  While a reasonable person could
probably figure that out ;-) it would not hurt to say it explicitly.
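
For example, after editing postgresql.conf, something along these lines
makes the postmaster re-read it (the data-directory path is illustrative):

pg_ctl reload -D /usr/local/pgsql/data
# or send the signal by hand:
kill -HUP `head -1 /usr/local/pgsql/data/postmaster.pid`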

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] VACUUM Question

2004-02-19 Thread scott.marlowe
On Thu, 19 Feb 2004, Alex wrote:

 Hi,
 just a few questions on the Vacuum
 
 I run a vacuum analyze on the database every night as part of a
 maintenance job.
 
 During the day I have a job that loads 30,000-70,000 records into two tables
 (each 30-70k).
 This job runs 2-3 times a day; the first time it is mainly inserts, the 2nd
 and 3rd times mostly updates.
 Both tables have in the area of 1-3 million records.
 
 How reasonable is it to run a Vacuum Analyze before and after the
 insert/update of the data?

Running it before probably gains you little.  In some circumstances 
(running analyze on an empty table is one) analyzing before loading data 
is counterproductive, because postgresql's query planner will be making 
decisions on the 30,000th of 70,000 inserts based on a table size of very 
few rows, and favoring seq scans when it should be using index scans.

vacuuming (and analyzing) after the import is a good thing.
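
i.e. something along these lines once the load finishes (table names 
made up):

VACUUM ANALYZE import_table_1;
VACUUM ANALYZE import_table_2;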

 Also, I noticed that I get quite a performance improvement if I run a
 count(*) on the two tables before the insert. Any reasons for that?

Likely it is loading the whole table into kernel cache.

 One more question; on one server the Vacuum Analyze before the insert
 takes approx. 2 min; after that, the same command takes 15 min.

Normal.  Beforehand, there are no dead tuples to harvest / put in the 
fsm, but afterward there are plenty to harvest.

Make sure your fsm settings are high enough to retain all the freed pages, 
or you'll wind up with table bloat.

Vacuum full every so often (off hours are best) to make sure.  Do a df on 
the database mount point before and after and see how much space it 
recovers.
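
Something like this, assuming the database lives under /var/lib/pgsql:

df /var/lib/pgsql              # before
psql mydb -c 'VACUUM FULL;'
df /var/lib/pgsql              # after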


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Edit Tables...

2004-02-19 Thread Richard Huxton
On Thursday 19 February 2004 15:50, Denny Morais wrote:
 Hi,

 I have some software that uses a postgresql database. I am using SuSE 8.2 and
 would like to edit the program tables... Is there any software to do that?

 I have never used postgresql before, so I am sorry for the newbie question,
 and thanks for the help,

Well, the command-line interface is psql, but if you want something more 
like Access, take a look at http://www.pgadmin.org/

There's a web-based (php) admin tool at: http://phppgadmin.sourceforge.net/

There are loads of others, but I've used both of these at one time or another, 
and quite like them.
-- 
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] PostgreSQL on Windows

2004-02-19 Thread Jan Wieck
John Small wrote:

I want to use PostgreSQL on a project that will have to run on
both Linux and Windows.  But using cygwin is not a viable option.
Does the dbExpert version of PostgreSQL run natively under Windows?
Last time I checked, the dbExperts version was the cygwin port, bundled 
with a stripped-down and renamed cygwin library (something like dbx.dll 
or so, containing tons and tons of cygwin symbols).

NuSphere just presented a new release of UltraSQL, which I have heard 
(but have not confirmed myself yet) is still PostgreSQL 7.2.1 
underneath. I would have expected a new release to be at least based on 
7.2.4.

I hope Bruce Momjian can add some information about the other Win32 
activities we are aware of.

Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] PostgreSQL on Windows

2004-02-19 Thread Bruce Momjian
Jan Wieck wrote:
 John Small wrote:
 
  I want to use PostgreSQL on a project that will have to run on
  both Linux and Windows.  But using cygwin is not a viable option.
  
  Does the dbExpert version of PostgreSQL run natively under Windows?
 
 Last time I checked, the dbExperts version was the cygwin port, bundled 
 with a stripped-down and renamed cygwin library (something like dbx.dll 
 or so, containing tons and tons of cygwin symbols).
 
 NuSphere just presented a new release of UltraSQL, which I have heard 
 (but have not confirmed myself yet) is still PostgreSQL 7.2.1 
 underneath. I would have expected a new release to be at least based on 
 7.2.4.
 
 I hope Bruce Momjian can add some information about the other Win32 
 activities we are aware of.

The ports I know of are:

dbExperts
Connx
NuSphere
SRA

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] backup and restore questions

2004-02-19 Thread Sally Sally
I had a few questions concerning the backup/restore process for pg. 

1) Is it possible to dump data onto an existing database that contains data (assuming the schema of both are the same). Has anyone done this? I am thinking of this in order to expedite the data load process

2) I read that when dumping and restoring data the insert option is safer but slower than copy? Does anyone know from experience how much slower (especially for a database containing millions of records).

3) can pg_restore accept a file that is not archived like a zipped file or plain text file (file.gz or file)

4) Is the general practice to have one whole dump of a database or several separate dumps (by table etc...)?

Thanks a lot
Sally


Re: [GENERAL] Support.

2004-02-19 Thread Al Hulaton
Louis P. Boudville wrote:

1. Where can I get end user support for PostgreSQL?

You could use this mailing list, the #postgresql channel on the Freenode 
IRC server or do a search for commercial Postgres support. The techdocs 
on the Postgresql.org site would be my first stop, and you can also 
search those techdocs and the O'Reilly 'Practical Postgresql' book on 
our site below.

2. What are the fees like?

You can get support contracts, pay-as-you-go, by the hour, or free, free, 
free on this and other mailing lists. We offer both contracts and 
pay-as-you-go support because different-sized companies need different 
levels of support. I think you'll find many postgres consultants offer 
the same.

--
Best,
Al Hulaton|  Sr. Account Engineer  |  Command Prompt, Inc.
503.667.4564  |  [EMAIL PROTECTED]
Home of Mammoth Replicator for PostgreSQL
Managed PostgreSQL, Linux services and consulting
Read and Search O'Reilly's 'Practical PostgreSQL' at
http://www.commandprompt.com
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


[GENERAL] postgresql unicode

2004-02-19 Thread Pavel aka crazy
hi, all!
how can I use unicode in postgresql tables? 
createdb -E UNICODE dbname works correctly, but when I put data in and then
read it back I see only garbage instead of cyrillic letters. What am I doing wrong?
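
(For reference, the usual first thing to check is the client-side encoding;
a minimal psql sketch, assuming a KOI8 terminal:)

-- check what psql thinks the client encoding is
\encoding
-- then tell the server what the terminal really speaks, e.g. KOI8:
SET client_encoding TO 'KOI8';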

-- 
FreeLSD
Best regards, Pavel aka crazy
Linux Registered User #303117

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Replication

2004-02-19 Thread Joshua D. Drake
Mammoth is something you would have to ask Joshua Drake about. Don't 
know anything about it.
Just FYI: There is very detailed information about Mammoth Replicator
located here:
http://www.commandprompt.com/entry.lxp?lxpe=304

Sincerely,

Joshua D. Drake




Is pgReplication ready for 7.4 yet? And are there any other *free*
options? If not maybe I'll have to find the money for a commercial
solution.


Doesn't look like it, based on the web site at gborg.

I'd appreciate any feedback from anyone who has a working replication
solution.
So far eRServer is working ok for me. If you use the unpatched version 
on gborg, be sure to
read the erserver list archives, as the current tarball has a problem or 
two. If you want to
work with my fork let me know. I like to think it works better, but I'm 
hardly unbiased...

Thanks

John Sidney-Woollett



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match



Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
Mammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Best replication options

2004-02-19 Thread Joshua D. Drake

rserv doesn't work with 7.4 AFAIK.  erserver seems to; I haven't
actually used it in production, although I'm about to.  dbmirror has
had reports of success.


We have several clients running erserver w/ 7.4, with another deploying
this coming weekend, and haven't had any reports of problems ...
We (Command Prompt) also have several large database clients using 
PostgreSQL Replicator. On both 7.4.x and 7.3.x.

Sincerely,

Joshua D. Drake






Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
   http://www.postgresql.org/docs/faqs/FAQ.html


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
Mammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] backup and restore questions

2004-02-19 Thread V i s h a l Kashyap @ [Sai Hertz And Control Systems]
Dear Sally Sally,

I had a few questions concerning the backup/restore process for pg.
 
1) Is it possible to dump data onto an existing database that contains 
data (assuming the schema of both are the same). Has anyone done 
this? I am thinking of this in order to expedite the data load process
This can work provided the databases are on the same system and have the 
same schema; I have not tried it, though.

 
2) I read that when dumping and restoring data the insert option is 
safer but slower than copy? Does anyone know from experience how much 
slower (especially for a database containing millions of records).
If you are really serious about your data, the best way AFAIK is insert, 
because with insert statements you can move the data around in case you 
upgrade your database or add a new column to a table and are trying to 
restore old data into the same table.
On a
Celeron 900
PostgreSQL 7.3.4
RH 9.0
a 151 KB tarred backup takes about 5 minutes.
Data restore depends about 99% on disk throughput and 1% on CPU in the 
case of a plain insert file, and about 90% on disk throughput and 10% on 
CPU in the case of a tarred file.

 
3) can pg_restore accept a file that is not archived like a zipped 
file or plain text file (file.gz or file)
You can use both zipped and plain files. Newer versions of pg_restore, 
i.e. 7.3 and later, identify the file format automatically.

 
4) Is the general practice to have one whole dump of a database or 
several separate dumps (by table etc...)?
One dump for the data and another dump for the schema will always help.
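
For example (a sketch, with a made-up database name):

pg_dump -s mydb > mydb.schema.sql    # schema only
pg_dump -a mydb > mydb.data.sql      # data only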

--
Regards,
Vishal Kashyap
Director / Lead Developer,
Sai Hertz And Control Systems Pvt Ltd,
http://saihertz.rediffblogs.com
~*~*~*~*~*~*~*
You Guys start coding I will take care of what this
customers needs.
~*~*~*~*~*~*~*
I Know you believe my words so logon to Jabber.org
and add [EMAIL PROTECTED] to your roster.
OR
Seek Me at 264360076
~*~*~*~*~*~*~*
I am usually called as Vishal Kashyap
but my Girlfriend calls me as Vishal CASH UP.
This is because others identify me because of my
generosity but my Girlfriend identify me because
of my CASH.
~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] wishlist: dynamic log volume control

2004-02-19 Thread scott.marlowe
On Thu, 19 Feb 2004, Ed L. wrote:

 On Thursday February 19 2004 9:13, Bruce Momjian wrote:
  
   The 7.3.4 doc (doc/html/runtime-config.html) does say the
   log_connections option can only be set at server start or in the
   postgresql.conf configuration file.  That seems a little ambiguous on
   the SIGHUP'ability of that option.  My 7.3.4 install here shows all the
   log_* options are dynamically reloadable.
 
  The issue is that you can't set log_connections from psql.  Should we
  improve the doc wording?
 
 Ah, I had no idea that was the issue.  Maybe a quick reference 
 table in the docs would be helpful:
 
 Option            SET option   Reloads on SIGHUP
 =================================================
 shared_buffers    no           no
 log_connections   no           yes
 log_statement     yes          yes
 ...
 
 Maybe it's already in there somewhere, I dunno.

The basic info is there, but not in a table.  That would come in quite 
handy...


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] backup and restore questions

2004-02-19 Thread scott.marlowe
On Thu, 19 Feb 2004, Sally Sally wrote:

 I had a few questions concerning the backup/restore process for pg.
  
 1) Is it possible to dump data onto an existing database that contains data 
 (assuming the schema of
 both are the same). Has anyone done this? I am thinking of this in order to 
 expedite the data load
 process

I do it all the time.  Note that if you have constraints that adding the 
new data would violate, it's likely to not import anything.

 2) I read that when dumping and restoring data the insert option is safer but slower 
 than copy? Does
 anyone know from experience how much slower (especially for a database containing 
 millions of
 records).

Depends, but usually anywhere from about twice as slow to as much as ten 
times slower.  It isn't really any safer, just more portable to other 
databases.

 3) can pg_restore accept a file that is not archived like a zipped file or plain 
 text file (file.gz
 or file)

Yes, plain text is fine.  To restore a .gz file you might have to do a 
gunzip first.  I usually just stick to plain text.
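
e.g. something like this (file and database names made up):

gunzip -c mydb.dump.gz | psql mydb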

 4) Is the general practice to have one whole dump of a database or several separate 
 dumps (by table
 etc...)?

It's normal to see a single large dump.  Where I work we run 80 databases 
(running on 7.2.x, so no schemas) with each database belonging to a 
particular application.  I wrote a custom wrapper for pg_dump that acts 
something like pg_dumpall but dumps each database to a separate file.  
That makes restoring one table or so for a single database much easier 
when you don't have to slog through gigabytes of unrelated data.
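
A minimal sketch of that kind of wrapper (paths made up):

#!/bin/sh
# dump each database (except templates) to its own file
for db in `psql -At -c "select datname from pg_database where not datistemplate" template1`
do
    pg_dump "$db" > /var/backups/pg/"$db".sql
done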


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Select for update, locks and transaction levels

2004-02-19 Thread Keary Suska
on 2/16/04 10:51 AM, [EMAIL PROTECTED] purportedly said:

 I am trying to gather stats about how many times a resource in our web
 app is viewed, i.e. just a COUNT. There are potentially millions of
 resources within the system.
 
 I thought of two methods:
 
 1.An extra column in the resource table which contains a count.

Not a good idea if you expect a high concurrency rate--you will create a
superfluous bottleneck in your app.

 2.A separate table that contains a count using an algorithm similar
 to the method presented here:
 
 http://archives.postgresql.org/pgsql-performance/2004-01/msg00059.php
 
 a.Each time a resource is viewed a new row is inserted with a count
 of 1.
 b.Each time the view count is needed, rows from the table are SUMmed
 together.
 c.A compression script runs regularly to group and sum the rows
 together.

I am assuming that you are concerned about storage size, which is why you
want to compress. You are probably better off (both by performance and
storage) with something like the following approach:

CREATE TABLE view_res (
res_id int8,
stamp  timestamp
) WITHOUT OIDS;

CREATE TABLE view_res_arch (
res_id int8,
cycle  date,
hits   int8
);

By using a timestamp instead of count you can archive using a date/time
range and avoid any concurrency/locking issues:

INSERT INTO view_res_arch (res_id, cycle, hits)
SELECT res_id, '2003-12-31', COUNT(res_id) FROM view_res
WHERE stamp >= '2003-12-01' AND stamp <= '2003-12-31 23:59:59'
GROUP BY res_id;

then:

DELETE FROM view_res
WHERE stamp >= '2003-12-01' AND stamp <= '2003-12-31 23:59:59'

With this kind of approach you have historicity and extensibility, so you
could, for example, show historical trends with only minor modifications.
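
For example, a month-by-month trend for a single resource then becomes 
(a sketch, with a made-up id):

SELECT cycle, SUM(hits) AS hits
FROM view_res_arch
WHERE res_id = 12345
GROUP BY cycle
ORDER BY cycle;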

Best regards,

Keary Suska
Esoteritech, Inc.
Leveraging Open Source for a better Internet


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] backup and restore questions

2004-02-19 Thread Richard Huxton
On Thursday 19 February 2004 17:41, Sally Sally wrote:
 I had a few questions
 concerning the backup/restore process for pg.

Please don't post HTML if you can avoid it.

 1) Is it possible to dump data onto an existing
 database that contains data (assuming the schema of both are the same).
 Has anyone done this? I am thinking of this in order to expedite the data
 load process

No reason why you can't copy the data in - assuming primary keys don't clash 
of course.

 2) I read that when dumping and restoring data the insert
 option is safer but slower than copy? Does anyone know from experience how
 much slower (especially for a database containing millions of
 records).

It's not safer so much as more standard - any database can handle a series of 
INSERT statements. Inserts are a lot slower.

 3) can pg_restore accept a file that is not archived like a
 zipped file or plain text file (file.gz or file)

You can't restore a whole database. It's straightforward enough to use COPY or 
\copy with psql to handle a simple tab (or whatever) separated file though. 
If you want to unzip the file first, well, that's what scripting is for.
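
e.g. (a sketch, assuming a tab-separated file):

\copy mytable from 'mytable.txt'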

 4) Is the general practice to have one whole dump
 of a database or several separate dumps (by table etc...)?

Personally, I dump the data+schema in one file and the schema (again) in 
another file. Makes it easy to scan through the schema. You can restore just 
a single table from a full dump anyway - see the manuals for full details.

-- 
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] Replication options

2004-02-19 Thread Simon Windsor
Hi

I am used to using Oracle (15 years) and MySQL (5 years), but
I am planning to move an existing application from MySQL to Postgres. The
reasons are very simple:

- New requirements mean we need views, or a significant re-write
- Better query/index performance is essential
- Postgres and MySQL share a very common implementation of SQL-92, and
  what is missing can easily be implemented in functions

The only negative issue is replication. I have checked
several Postgres replication options and am unsure which way to go. Can
anyone recommend a replication option that meets the following:

- Does not use triggers. These are usually slow, and one action that
  modifies several records can trigger many actions on slaves/peers
- Does use the WAL, or another log, so that the SQL DDL/DML is copied to
  the slave/peer, rather than the result of the DDL/DML
- Must provide master-master and master-slave replication
- Simple to configure and maintain

Many Thanx

Simon Windsor

Eml: [EMAIL PROTECTED]
Tel: 01454 617689
Mob: 07960 321599


Re: [GENERAL] Replication options

2004-02-19 Thread Andrew Rawnsley
On Feb 19, 2004, at 1:24 PM, Simon Windsor wrote:

Hi

I am used to using Oracle (15 years) and MySQL (5 years), but I am 
planning to move an existing application from MySQL to Postgres. The 
reasons are very simple:

- New requirements mean we need views, or a significant re-write
- Better query/index performance is essential
- Postgres and MySQL share a very common implementation of SQL-92, 
  and what is missing can easily be implemented in functions

The only negative issue is replication. I have checked several 
Postgres replication options and am unsure which way to go. Can anyone 
recommend a replication option that meets the following:

- Does not use triggers. These are usually slow, and one action that 
  modifies several records can trigger many actions on slaves/peers
- Does use the WAL, or another log, so that the SQL DDL/DML is copied 
  to the slave/peer, rather than the result of the DDL/DML

I think only Mammoth's ships WAL logs. Josh?

- Must provide master-master and master-slave replication

I don't think any of the solutions will do master-master.

- Simple to configure and maintain

Many Thanx

Simon Windsor

Eml: [EMAIL PROTECTED]
Tel: 01454 617689
Mob: 07960 321599


Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] book for postgresql

2004-02-19 Thread Al Hulaton
Stephen Howard wrote:

Also, as much as I really like O'Reilly books in general, I find the 
index in this one quite lacking.

Yeah, I agree, and I work with one of the authors and he says the same 
thing! (long story, not worth it)

In its stead, you can search and read the entire book at our website 
below. And since the book was written for 7.1, we put up a simultaneous 
search of the book with the current official PostgreSQL techdocs.

--
Best,
Al Hulaton|  Sr. Account Engineer  |  Command Prompt, Inc.
503.667.4564  |  [EMAIL PROTECTED]
Home of Mammoth Replicator for PostgreSQL
Managed PostgreSQL, Linux services and consulting
Read and Search O'Reilly's 'Practical PostgreSQL' at
http://www.commandprompt.com
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send unregister YourEmailAddressHere to [EMAIL PROTECTED])