Re: [GENERAL] Fault Tolerant Postgresql (two machines, two postmasters, one disk array)

2007-05-11 Thread Paul Lambert

Ron Johnson wrote:


Dinosaurist?

The big systems we use were last upgraded 5ish years ago, and are
scheduled (eventually) to be replaced with Oracle on Linux.



We've got some pretty new Alpha servers (around a year old) running VMS 
8.3 which was released about the same time we got the servers...or 
shortly before.


Sure it's been around nearly since the dawn of time, but it's still an 
actively developed operating system.


I've finally got my Alpha server at home up and running now too, and I 
hope to be getting PG running on it as part of my thesis project when I 
start that in the near future, if my schedule allows.


--
Paul Lambert
Database Administrator
AutoLedgers




Re: [UNSURE] Re: [GENERAL] Streaming large data into postgres [WORM like applications]

2007-05-11 Thread Tom Allison
One approach would be to spool all the data to a flat file and then
pull it into the database as you are able to. This would give you
extremely high peak capacity.
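A minimal sketch of that spool-then-COPY pattern (every file and table
name here is hypothetical, not from the thread):

  -- A sketch of the spool-then-load pattern; names are hypothetical.
  BEGIN;
  CREATE TEMP TABLE staging (LIKE master_table);   -- same columns as the target
  -- Bulk-load the spooled batch; the path is server-side, so COPY FROM
  -- a file needs superuser rights.
  COPY staging FROM '/var/spool/feed/batch_0001.dat';
  INSERT INTO master_table SELECT * FROM staging;  -- append to the big table
  COMMIT;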



On May 11, 2007, at 10:35 PM, Dhaval Shah wrote:


I do care about the following:

1. Basic type checking
2. Knowing failed inserts.
3. Non-corruption
4. Macro transactions, that is, minimal read consistency.

The following is not necessary

1. Referential integrity

In this particular scenario,

1. There is a sustained load and peak loads. As long as we can handle
peak loads, the sustained loads can be half of the quoted figure.
2. The rows have a limited number of columns: at most a dozen or so,
mostly integer or varchar.

It is more data-I/O-heavy than CPU-heavy.

Regards
Dhaval

On 5/11/07, Ben <[EMAIL PROTECTED]> wrote:

Inserting 50,000 rows a second is, uh... difficult to do, no matter
what database you're using. You'll probably have to spool the inserts
and insert them as fast as you can, and just hope you don't fall too
far behind.

But I'm suspecting that you aren't going to be doing much, if any,
referential integrity checking, at least beyond basic type checking.
You probably aren't going to care about multiple inserts affecting
each other, or worry about corruption if a given insert fails... in
fact, you probably aren't even going to need transactions at all,
other than as a way to insert faster. Is SQL the right tool for you?

On May 11, 2007, at 1:43 PM, Dhaval Shah wrote:

> Here is the straight dope: one of the internal teams at my customer site
> is looking into MySql and replacing its storage engine so that they
> can store large amounts of streamed data. The key here is that the data
> they are getting is several thousand rows in an extremely short
> duration. They say that only MySql provides them the ability to
> replace the storage engine, which, granted, is easier.
>
> If I go with the statement that postgres can basically do what they
> intend to do for handling large datasets, I need to prepare my talking
> points.
>
> The requirements are as follows:
>
> 1. A large volume of streamed rows, on the order of 50-100k rows per
> second. I was thinking that the rows could be written to a file, the
> file copied into a temp table using COPY, and those rows then appended
> to the master table, with the index dropped and recreated very lazily
> [at the first query hit or something like that].
>
> The table can grow extremely large, so it should be partitioned if
> possible, either by range or list.
>
> 2. Most of the streamed rows are very similar. Think syslog rows,
> where in most cases only the timestamp changes. If the data can be
> compressed, that will save disk space.
>
> The key issue here is that the ultimate data usage is Write Once Read
> Many, and in that sense I am looking for an optimal solution for bulk
> writes and for maintaining indexes during bulk writes.
>
> So with some intelligent design, it is possible to use postgres. Any
> help in preparing my talking points is appreciated.
>
> Regards
> Dhaval





--
Dhaval Shah






Re: [GENERAL] Streaming large data into postgres [WORM like applications]

2007-05-11 Thread Dhaval Shah

I do care about the following:

1. Basic type checking
2. Knowing failed inserts.
3. Non-corruption
4. Macro transactions, that is, minimal read consistency.

The following is not necessary

1. Referential integrity

In this particular scenario,

1. There is a sustained load and peak loads. As long as we can handle
peak loads, the sustained loads can be half of the quoted figure.
2. The rows have a limited number of columns: at most a dozen or so,
mostly integer or varchar.

It is more data-I/O-heavy than CPU-heavy.

Regards
Dhaval

On 5/11/07, Ben <[EMAIL PROTECTED]> wrote:

Inserting 50,000 rows a second is, uh... difficult to do, no matter
what database you're using. You'll probably have to spool the inserts
and insert them as fast as you can, and just hope you don't fall too
far behind.

But I'm suspecting that you aren't going to be doing much, if any,
referential integrity checking, at least beyond basic type checking.
You probably aren't going to care about multiple inserts affecting
each other, or worry about corruption if a given insert fails... in
fact, you probably aren't even going to need transactions at all,
other than as a way to insert faster. Is SQL the right tool for you?

On May 11, 2007, at 1:43 PM, Dhaval Shah wrote:

> Here is the straight dope: one of the internal teams at my customer site
> is looking into MySql and replacing its storage engine so that they
> can store large amounts of streamed data. The key here is that the data
> they are getting is several thousand rows in an extremely short
> duration. They say that only MySql provides them the ability to
> replace the storage engine, which, granted, is easier.
>
> If I go with the statement that postgres can basically do what they
> intend to do for handling large datasets, I need to prepare my talking
> points.
>
> The requirements are as follows:
>
> 1. A large volume of streamed rows, on the order of 50-100k rows per
> second. I was thinking that the rows could be written to a file, the
> file copied into a temp table using COPY, and those rows then appended
> to the master table, with the index dropped and recreated very lazily
> [at the first query hit or something like that].
>
> The table can grow extremely large, so it should be partitioned if
> possible, either by range or list.
>
> 2. Most of the streamed rows are very similar. Think syslog rows,
> where in most cases only the timestamp changes. If the data can be
> compressed, that will save disk space.
>
> The key issue here is that the ultimate data usage is Write Once Read
> Many, and in that sense I am looking for an optimal solution for bulk
> writes and for maintaining indexes during bulk writes.
>
> So with some intelligent design, it is possible to use postgres. Any
> help in preparing my talking points is appreciated.
>
> Regards
> Dhaval





--
Dhaval Shah



Re: [GENERAL] Streaming large data into postgres [WORM like applications]

2007-05-11 Thread Ben
Inserting 50,000 rows a second is, uh... difficult to do, no matter  
what database you're using. You'll probably have to spool the inserts  
and insert them as fast as you can, and just hope you don't fall too  
far behind.


But I'm suspecting that you aren't going to be doing much, if any,  
referential integrity checking, at least beyond basic type checking.  
You probably aren't going to care about multiple inserts affecting  
each other, or worry about corruption if a given insert fails... in  
fact, you probably aren't even going to need transactions at all,  
other than as a way to insert faster. Is SQL the right tool for you?


On May 11, 2007, at 1:43 PM, Dhaval Shah wrote:


Here is the straight dope: one of the internal teams at my customer site
is looking into MySql and replacing its storage engine so that they
can store large amounts of streamed data. The key here is that the data
they are getting is several thousand rows in an extremely short
duration. They say that only MySql provides them the ability to
replace the storage engine, which, granted, is easier.

If I go with the statement that postgres can basically do what they
intend to do for handling large datasets, I need to prepare my talking
points.

The requirements are as follows:

1. A large volume of streamed rows, on the order of 50-100k rows per
second. I was thinking that the rows could be written to a file, the
file copied into a temp table using COPY, and those rows then appended
to the master table, with the index dropped and recreated very lazily
[at the first query hit or something like that].

The table can grow extremely large, so it should be partitioned if
possible, either by range or list.

2. Most of the streamed rows are very similar. Think syslog rows,
where in most cases only the timestamp changes. If the data can be
compressed, that will save disk space.

The key issue here is that the ultimate data usage is Write Once Read
Many, and in that sense I am looking for an optimal solution for bulk
writes and for maintaining indexes during bulk writes.

So with some intelligent design, it is possible to use postgres. Any
help in preparing my talking points is appreciated.

Regards
Dhaval






[GENERAL] Streaming large data into postgres [WORM like applications]

2007-05-11 Thread Dhaval Shah

Here is the straight dope: one of the internal teams at my customer site
is looking into MySql and replacing its storage engine so that they
can store large amounts of streamed data. The key here is that the data
they are getting is several thousand rows in an extremely short
duration. They say that only MySql provides them the ability to
replace the storage engine, which, granted, is easier.

If I go with the statement that postgres can basically do what they
intend to do for handling large datasets, I need to prepare my talking
points.

The requirements are as follows:

1. A large volume of streamed rows, on the order of 50-100k rows per
second. I was thinking that the rows could be written to a file, the
file copied into a temp table using COPY, and those rows then appended
to the master table, with the index dropped and recreated very lazily
[at the first query hit or something like that].

The table can grow extremely large, so it should be partitioned if
possible, either by range or list (see the sketch at the end of this
message).

2. Most of the streamed rows are very similar. Think syslog rows,
where in most cases only the timestamp changes. If the data can be
compressed, that will save disk space.

The key issue here is that the ultimate data usage is Write Once Read
Many, and in that sense I am looking for an optimal solution for bulk
writes and for maintaining indexes during bulk writes.

So with some intelligent design, it is possible to use postgres. Any
help in preparing my talking points is appreciated.

Regards
Dhaval
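As a talking point, here is a hedged sketch of the range-partitioning
idea as it is usually done on the 8.x line (inheritance plus CHECK
constraints; every table and column name below is hypothetical):

  -- Hypothetical schema; 8.x-style range partitioning via inheritance.
  CREATE TABLE log_master (
      ts   timestamptz NOT NULL,
      host text,
      msg  text
  );
  -- One child per month; the CHECK constraint defines the range.
  CREATE TABLE log_2007_05 (
      CHECK (ts >= '2007-05-01' AND ts < '2007-06-01')
  ) INHERITS (log_master);
  -- Index children only after bulk loads, per the "lazy index" idea above.
  CREATE INDEX log_2007_05_ts ON log_2007_05 (ts);
  -- With constraint_exclusion = on, queries against log_master skip
  -- children whose CHECK constraint rules them out.

Bulk loads would then COPY directly into the current child table.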



Re: [GENERAL] [ADMIN] increasing of the shared memory does not solve the problem of "OUT of shared memory"

2007-05-11 Thread Tom Lane
"Sorin N. Ciolofan" <[EMAIL PROTECTED]> writes:
> Also I increased the max_locks_per_transaction from  64  to 10 000.

> I still receive the same error from Postgres:
> org.postgresql.util.PSQLException: ERROR: out of shared memory

Did you remember to restart the postmaster after doing that?  (pg_ctl
reload isn't good enough, you need an actual restart to enlarge shared
memory.)  You can confirm whether it's really taken effect with
"show max_locks_per_transaction".

regards, tom lane



Re: [GENERAL] Fault Tolerant Postgresql (two machines, two postmasters, one disk array)

2007-05-11 Thread Ron Johnson

On 05/11/07 12:08, Joshua D. Drake wrote:
> Geoffrey wrote:
>> Ron Johnson wrote:
>>
>>> Call me elitist, but I've used OpenVMS for so long that if it's not
>>> a VMS-style shared-disk cluster, it's a false usage of the word.
>>
>> Okay, you're an elitist...
> 
> People still use OpenVMS? ...

Sure.  We pump 6 million INSERT statements per day thru some of our
big OLTP systems.

>   elitist isn't the word I would choose ;)

Dinosaurist?

The big systems we use were last upgraded 5ish years ago, and are
scheduled (eventually) to be replaced with Oracle on Linux.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!




Re: [GENERAL] Fault Tolerant Postgresql (two machines, two postmasters, one disk array)

2007-05-11 Thread Paul Lambert

Ron Johnson wrote:


On 05/11/07 08:31, Geoffrey wrote:

Call me elitist, but I've used OpenVMS for so long that if it's not
a VMS-style shared-disk cluster, it's a false usage of the word.

Compute-clusters excluded, of course.



Hear here!
(I guess I'm elitist too)
:)


--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!





--
Paul Lambert
Database Administrator
AutoLedgers



[GENERAL] Database transaction related

2007-05-11 Thread Jasbinder Singh Bali

I have a transaction in a postgres database as follows:

1. Insert into table xyz
2. Trigger fired on table xyz after insert
3. Trigger calls a function func
4. func is an implementation of a client socket in perl

Steps 1-4 happen in the database environment only.

At step 4, a socket connection is opened to a unix tools server, which
is altogether a different machine. Some unix tools are run on that
machine, an ODBC connection is opened back to the database server
(where steps 1-4 took place), and the results of running the tools are
stored in various tables in the database.

Now my question is: is everything that happens on the unix tools server
part of the database transaction that started at step 1 above?

My assumption was yes, because the unix tools server was invoked from
the client socket that is part of the database transaction. I don't
know if my assumption is correct.

One more thing I would like to mention is that once the unix tools
server is done with its tool-running job, it never writes back to the
client socket. Communication from the unix tools server to the database
server is done over the ODBC connection, not the socket connection.

The reason I'm concerned about all this is that, of late, a database
insert failed on the unix tools server and the whole transaction
starting from step 1 above was not rolled back; it was still successful
through step 4.

So I'm just wondering, and confused, about the whole transaction
behaviour in such a scenario.

Any kind of help would be highly appreciated.

Thanks
~Jas



Re: [GENERAL] What about SkyTools?

2007-05-11 Thread Dmitry Koterov

Still looking for a possible (simpler) replacement for Slony in the
background.

1. It seems SkyTools does not have a utility to spread DDL changes among
all the replicas (like slonik does). So, if I want to ALTER TABLE, do I
have to run this command manually on each node?

2. The architecture of the system is not clear enough. Which daemon
should run on which machine? It seems we must have one PgQ daemon on
each machine (master and all slaves), but should we have a londiste
daemon on each machine too or not? (If yes, we have to take care of
londiste config synchronization on all replicas, which is not very
handy.)

It seems SkyTools was developed for static schemas only, without the
ability to modify the schema.
Am I wrong?

On 4/12/07, Robert Treat <[EMAIL PROTECTED]> wrote:


On Wednesday 11 April 2007 12:08, Dmitry Koterov wrote:
> Hello.
>
> Have anybody used SkyTools in production environment?
> What's the impression? In practice - is it now more preferrable than
Slony
> or not yet?

Well, Skype is using them in production...  I think the general
consensus of the postgresql community is that slony is still the
preferred choice, based on number of deployments and general community
knowledge, assuming you need master/slave style replication.
Everything else is still considered fairly green technology, though
that's no reason not to test it in your environment.
IMHO YMMV

--
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL




Re: [GENERAL] Fault Tolerant Postgresql (two machines, two postmasters, one disk array)

2007-05-11 Thread Bruno Wolff III
On Thu, May 10, 2007 at 20:43:20 -0500,
  John Gateley <[EMAIL PROTECTED]> wrote:
> Sorry if this is a FAQ, I did search and couldn't find much.
> 
> I need to make my Postgresql installation fault tolerant.
> I was imagining a RAIDed disk array that is accessible from two
> (or multiple) computers, with a postmaster running on each computer.
> (Hardware upgrades could then be done to each computer at different
> times without losing access to the database).
> 
> Is this possible?

You can't have two postmasters accessing the same data. Doing so will cause
corruption. You can have a failover system where another postmaster starts
after the normal one has stopped. But you need to be completely sure the
normal postmaster has stopped before starting the backup one.

> Is there another way to do this I should be looking at?

Depending on your needs replication might be useful.



Re: [GENERAL] UTF-8 to ASCII

2007-05-11 Thread Alvaro Herrera
Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > Why on earth is it talking about MULE_INTERNAL?
> 
> IIRC, a lot of the conversions translate through some common
> intermediate charset to save on code/table space.  In such cases
> the problem will usually be detected on the backend conversion...

Interesting, but it doesn't explain why the conversion doesn't work.
AFAICS the operation I am requesting is valid.

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] table change

2007-05-11 Thread Jeff Davis
On Wed, 2007-05-09 at 14:20 +0100, Darren Bird wrote:
> Is there a way of asking Postgresql if a particular table was changed
> without going back through the logs - like a last modified flag or
> even just a changed flag?

If the table change caused a rewrite of the table it will have a new
relfilenode (and a new file in the data directory). 

You might look at that file's creation time. This will not catch all
modifications to the table.
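For example (the table name is hypothetical), the backing file can be
located like this:

  -- Find the relfilenode for a table:
  SELECT relfilenode FROM pg_class WHERE relname = 'mytable';
  -- The file lives under $PGDATA/base/<database-oid>/<relfilenode>;
  -- the database OID comes from:
  SELECT oid FROM pg_database WHERE datname = current_database();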

Hope that helps,
Jeff Davis




Re: [GENERAL] Fault Tolerant Postgresql (two machines, two postmasters, one disk array)

2007-05-11 Thread Geoffrey

Joshua D. Drake wrote:

Geoffrey wrote:

Ron Johnson wrote:


Call me elitist, but I've used OpenVMS for so long that if it's not
a VMS-style shared-disk cluster, it's a false usage of the word.


Okay, you're an elitist...


People still use OpenVMS? ... elitist isn't the word I would choose ;)


):

--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin



Re: [GENERAL] tokenize string for tsearch?

2007-05-11 Thread Ottavio Campana
Magnus Hagander wrote:
>> I'm using 8.1.8 and I don't find plainto_tsquery in tsearch2.sql
>>
>> What can I do?
> 
> Yeah, you need 8.2 for that function. I don't think anybody has tried
> backpatching it, but if you want to you can look at the code in 8.2 and
> see if you can backpatch it yourself. But the easiest way is certainly
> to upgrade to 8.2.

doh! that's not possible. :-(

I'm solving with a custom stored procedure.
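For reference, a minimal sketch of what such a procedure could look
like on 8.1, assuming the goal is just to collapse runs of whitespace
into '&' before calling to_tsquery (the function name is made up):

  -- Hypothetical helper: turns 'hello   world' into 'hello&world',
  -- avoiding the 'hello&&world' error from naive substitution.
  CREATE OR REPLACE FUNCTION andify(text) RETURNS text AS $$
      SELECT regexp_replace(trim($1), '[[:space:]]+', '&', 'g');
  $$ LANGUAGE sql IMMUTABLE;

  -- usage:
  --   SELECT * FROM test_table WHERE ts_desc @@ to_tsquery(andify('hello   world'));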

Thanks

-- 
Non c'e' piu' forza nella normalita', c'e' solo monotonia.





Re: [GENERAL] UTF-8 to ASCII

2007-05-11 Thread Tom Lane
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Why on earth is it talking about MULE_INTERNAL?

IIRC, a lot of the conversions translate through some common
intermediate charset to save on code/table space.  In such cases
the problem will usually be detected on the backend conversion...

regards, tom lane



Re: [GENERAL] Fault Tolerant Postgresql (two machines, two postmasters, one disk array)

2007-05-11 Thread Joshua D. Drake

Geoffrey wrote:

Ron Johnson wrote:


Call me elitist, but I've used OpenVMS for so long that if it's not
a VMS-style shared-disk cluster, it's a false usage of the word.


Okay, you're an elitist...


People still use OpenVMS? ... elitist isn't the word I would choose ;)


Sincerely,

Joshua D. Drake





Compute-clusters excluded, of course.

--
Ron Johnson, Jr.
Jefferson LA  USA






--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [GENERAL] UTF-8 to ASCII

2007-05-11 Thread Martin Gainty

Apparently you will need to implement a UNICODE aware JDBC driver
http://archives.postgresql.org/pgsql-general/2004-01/msg01649.php
Martín

This email message and any files transmitted with it contain confidential
information intended only for the person(s) to whom this email message is
addressed.  If you have received this email message in error, please notify
the sender immediately by telephone or email and destroy the original
message without making a copy.  Thank you.

- Original Message - 
From: "Alvaro Herrera" <[EMAIL PROTECTED]>

To: "Martin Marques" <[EMAIL PROTECTED]>
Cc: 
Sent: Friday, May 11, 2007 9:33 AM
Subject: Re: [GENERAL] UTF-8 to ASCII



Martin Marques escribió:

I have a doubt about the function to_ascii() and what the documentation
says.

Basically, I passed my DB from latin1 to UTF-8, and I started getting an
error when using the to_ascii() function on a field of one of my DB [1]:

ERROR:  la conversión de codificación de UTF8 a ASCII no está soportada


Well, the to_ascii() documentation says that it only supports LATIN1,
LATIN2, LATIN9, and WIN1250.  This is on a footnote.

I do think that there's something strange in the vicinity anyway,
because using convert() with an explicit conversion function gives a
mismatching error for me (local environment is UTF8, as is
client_encoding):

alvherre=# select convert('Martín' using utf8_to_ascii);
ERROR:  character 0xc3 of encoding "MULE_INTERNAL" has no equivalent in 
"SQL_ASCII"



Why on earth is it talking about MULE_INTERNAL?

--
Alvaro Herrera 
http://www.CommandPrompt.com/

PostgreSQL Replication, Consulting, Custom Development, 24x7 support







Re: [GENERAL] are foreign keys realized as indexes?

2007-05-11 Thread Peter Childs

On 09/05/07, Lew <[EMAIL PROTECTED]> wrote:


Felix Kater wrote:
> I am not bound to indexes, however, wonder if foreign keys itself are
> non-atomic functionality. I mean: if foreign keys are based on some
> other lower level functionality like indexes or anything else which I
> could use as a substitute--in what way ever. Of course, I want to
> gain the same (referential integrity etc.).
>
> If foreign keys are, however, something unique which can't be replaced
> by any other pg function (I am of course not taking into account things
> like multiple queries bound together by transactions...) then I have to
> go though it and implement it into my pg interface (looking at the
> information_schema: This seems to be quite a bunch of work...).

Semantics are not a trivial thing.

Foreign keys are a fundamental semantic of the relational model.  They
do not mean the same thing as an index at all.

I find it strange that anyone would resist the notions of primary and
foreign keys, when they are the basis of the relational model.  Indexes
aren't even part of the relational model - they are a hack to enhance
performance.

Sure they ultimately break down to machine instructions, but that's in
a whole different domain of discourse.  A data model is built up from
primary keys, foreign keys and dependent data.  They are fundamental.
They /are/ the building blocks of your database.  Expressing these
molecular concepts in terms of their constituent atoms will not convey
the molecular properties; you lose a tremendous amount of information.

Just use the syntax that best expresses your structure: PRIMARY KEY and
FOREIGN KEY.





Apart from anything else, a unique constraint is NOT the same as a
unique index, as you need a not null constraint on the column as well.

Peter.


Re: [GENERAL] Fault Tolerant Postgresql (two machines, two postmasters, one disk array)

2007-05-11 Thread Geoffrey

Ron Johnson wrote:


Call me elitist, but I've used OpenVMS for so long that if it's not
a VMS-style shared-disk cluster, it's a false usage of the word.


Okay, you're an elitist...


Compute-clusters excluded, of course.

--
Ron Johnson, Jr.
Jefferson LA  USA



--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin



Re: [GENERAL] Fault Tolerant Postgresql (two machines, two postmasters, one disk array)

2007-05-11 Thread Ron Johnson

On 05/11/07 08:31, Geoffrey wrote:
> Ron Johnson wrote:
>> On 05/11/07 07:32, Geoffrey wrote:
>>> John Gateley wrote:
 Sorry if this is a FAQ, I did search and couldn't find much.

 I need to make my Postgresql installation fault tolerant.
 I was imagining a RAIDed disk array that is accessible from two
 (or multiple) computers, with a postmaster running on each computer.
 (Hardware upgrades could then be done to each computer at different
 times without losing access to the database).
>>> We are doing this, more or less.  We use the RH cluster suite on two
>>> machines that share a common data silo.  Basically, if one machine
>>> fails, the other fires up a postmaster and picks up where the other left
>>> off.
>>>
>>> That's real simple description because we actually have an active/active
>>> configuration with multiple postmasters running on each machine. Machine
>>> A is the active machine for databases 1-3 and machine B is the active
>>> machine for databases 4-6.   If machine A fails, postmasters are fired
>>> up on machine B to attend to databases 1-3.
>>
>> That's still not a cluster in the traditional sense.
>>
>> On a cluster-aware OS and RDBMS (like Rdb/VMS and Oracle RAC, which
>> imperfectly got it's technology from VMS), all the databases would
>> be open on both nodes and they would share locking over a (usually
>> dedicated, and used-to-be-proprietary) network link.
> 
> Regardless of what you want to call it, it certainly seems to reflect a
> solution the user might consider.  I don't believe I called it a
> cluster.  I stated we were using software called the 'cluster suite.'

Call me elitist, but I've used OpenVMS for so long that if it's not
a VMS-style shared-disk cluster, it's a false usage of the word.

Compute-clusters excluded, of course.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!




[GENERAL] increasing of the shared memory does not solve the problem of "OUT of shared memory"

2007-05-11 Thread Sorin N. Ciolofan
 

Hello!

I significantly increased the number of shared buffers, from 3000 to
100 000 (80Mb).

Also I increased max_locks_per_transaction from 64 to 10 000.

I still receive the same error from Postgres:
org.postgresql.util.PSQLException: ERROR: out of shared memory

Does this message point to the real cause of the problem, or could the
reason for the failure be something other than what is displayed in the
message?

With best regards,

Sorin N. Ciolofan



Re: [GENERAL] UTF-8 to ASCII

2007-05-11 Thread Alvaro Herrera
Martin Marques wrote:
> I have a doubt about the function to_ascii() and what the documentation 
> says.
> 
> Basically, I passed my DB from latin1 to UTF-8, and I started getting an 
> error when using the to_ascii() function on a field of one of my DB [1]:
> 
> ERROR:  la conversión de codificación de UTF8 a ASCII no está soportada

Well, the to_ascii() documentation says that it only supports LATIN1,
LATIN2, LATIN9, and WIN1250.  This is on a footnote.

I do think that there's something strange in the vicinity anyway,
because using convert() with an explicit conversion function gives a
mismatching error for me (local environment is UTF8, as is
client_encoding):

alvherre=# select convert('Martín' using utf8_to_ascii);
ERROR:  character 0xc3 of encoding "MULE_INTERNAL" has no equivalent in 
"SQL_ASCII"


Why on earth is it talking about MULE_INTERNAL?

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] Fault Tolerant Postgresql (two machines, two postmasters, one disk array)

2007-05-11 Thread Geoffrey

Ron Johnson wrote:


On 05/11/07 07:32, Geoffrey wrote:

John Gateley wrote:

Sorry if this is a FAQ, I did search and couldn't find much.

I need to make my Postgresql installation fault tolerant.
I was imagining a RAIDed disk array that is accessible from two
(or multiple) computers, with a postmaster running on each computer.
(Hardware upgrades could then be done to each computer at different
times without losing access to the database).

We are doing this, more or less.  We use the RH cluster suite on two
machines that share a common data silo.  Basically, if one machine
fails, the other fires up a postmaster and picks up where the other left
off.

That's real simple description because we actually have an active/active
configuration with multiple postmasters running on each machine. Machine
A is the active machine for databases 1-3 and machine B is the active
machine for databases 4-6.   If machine A fails, postmasters are fired
up on machine B to attend to databases 1-3.


That's still not a cluster in the traditional sense.

On a cluster-aware OS and RDBMS (like Rdb/VMS and Oracle RAC, which
imperfectly got it's technology from VMS), all the databases would
be open on both nodes and they would share locking over a (usually
dedicated, and used-to-be-proprietary) network link.


Regardless of what you want to call it, it certainly seems to reflect a 
solution the user might consider.  I don't believe I called it a 
cluster.  I stated we were using software called the 'cluster suite.'


--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin



Re: [GENERAL] UTF-8 to ASCII

2007-05-11 Thread Martin Marques

Albe Laurenz wrote:

[2]: 
http://www.postgresql.org/docs/8.1/interactive/functions-string.html#FTN.AEN7625


Well, the documentation for to_ascii states clearly:
  "The to_ascii function supports conversion from LATIN1, LATIN2,
   LATIN9, and WIN1250 encodings only."


Sorry, didn't see the footnote on the table.

--
 21:50:04 up 2 days,  9:07,  0 users,  load average: 0.92, 0.37, 0.18
-
Lic. Martín Marqués |   SELECT 'mmarques' ||
Centro de Telemática|   '@' || 'unl.edu.ar';
Universidad Nacional|   DBA, Programador,
del Litoral |   Administrador
-



Re: [GENERAL] Fault Tolerant Postgresql (two machines, two postmasters, one disk array)

2007-05-11 Thread Ron Johnson

On 05/11/07 07:32, Geoffrey wrote:
> John Gateley wrote:
>> Sorry if this is a FAQ, I did search and couldn't find much.
>>
>> I need to make my Postgresql installation fault tolerant.
>> I was imagining a RAIDed disk array that is accessible from two
>> (or multiple) computers, with a postmaster running on each computer.
>> (Hardware upgrades could then be done to each computer at different
>> times without losing access to the database).
> 
> We are doing this, more or less.  We use the RH cluster suite on two
> machines that share a common data silo.  Basically, if one machine
> fails, the other fires up a postmaster and picks up where the other left
> off.
> 
> That's real simple description because we actually have an active/active
> configuration with multiple postmasters running on each machine. Machine
> A is the active machine for databases 1-3 and machine B is the active
> machine for databases 4-6.   If machine A fails, postmasters are fired
> up on machine B to attend to databases 1-3.

That's still not a cluster in the traditional sense.

On a cluster-aware OS and RDBMS (like Rdb/VMS and Oracle RAC, which
imperfectly got it's technology from VMS), all the databases would
be open on both nodes and they would share locking over a (usually
dedicated, and used-to-be-proprietary) network link.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!




Re: [GENERAL] UTF-8 to ASCII

2007-05-11 Thread Albe Laurenz
> I have a doubt about the function to_ascii() and what the
> documentation says.
> 
> Basically, I passed my DB from latin1 to UTF-8, and I started

What do you mean by 'passed the DB from Latin1 to UTF8'?

> getting an error when using the to_ascii() function on a field
> of one of my DB [1]:
> 
> ERROR:  la conversión de codificación de UTF8 a ASCII no está soportada
> 
> OK, it's in spanish, but basically it says that the conversion
> UTF8 to ASCII is not supported, but in the documentation [2] I see 
> this in the "Table 9-7. Built-in Conversions":
> 
> utf8_to_ascii    UTF8    SQL_ASCII
> 
> Is the documentation wrong or something?
> 
> I'm on postgresql-8.1.8, and as you can see, I'm checking the 
> corresponding documentation.
>
> [1]: This I already solved using convert() to pass from UTF 
> to Latin1, and after that I do a to_ascii().
> [2]: 
> http://www.postgresql.org/docs/8.1/interactive/functions-string.html#FTN.AEN7625

Well, the documentation for to_ascii states clearly:
  "The to_ascii function supports conversion from LATIN1, LATIN2,
   LATIN9, and WIN1250 encodings only."

The table of conversions you quote belongs to the function convert().

So that should answer your question.

I am not sure what you are trying to achieve.
If you tell us, I might be able to tell you HOW to achieve it.

Yours,
Laurenz Albe



Re: [GENERAL] UTF-8 to ASCII

2007-05-11 Thread Arnaud Lesauvage

Martin Marques wrote:
I have a doubt about the function to_ascii() and what the documentation 
says.


Basically, I passed my DB from latin1 to UTF-8, and I started getting an 
error when using the to_ascii() function on a field of one of my DB [1]:


ERROR:  la conversión de codificación de UTF8 a ASCII no está soportada

OK, it's in spanish, but basically it says that the conversion UTF8 to 
ASCII is not supported, but in the documentation [2] I see this in the 
"Table 9-7. Built-in Conversions":


utf8_to_ascii    UTF8    SQL_ASCII

Is the documentation wrong or something?


Hi Martin,
I think the documentation of 8.1 is wrong.
It looks different in the documentation of 8.2:
to_ascii: Convert string to ASCII from another encoding (only supports
conversion from LATIN1, LATIN2, LATIN9, and WIN1250 encodings)

I ran into this problem too, and I wrote a function that converts from
the DB encoding to LATIN9 before doing the to_ascii conversion:
to_ascii(convert(mystring, 'LATIN9'), 'LATIN9')
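Wrapped up as a function, that workaround might look like this (a
sketch; the function name is made up):

  -- Hypothetical wrapper around the workaround above: convert from the
  -- database encoding to LATIN9, then strip accents with to_ascii.
  CREATE OR REPLACE FUNCTION to_ascii_latin9(text) RETURNS text AS $$
      SELECT to_ascii(convert($1, 'LATIN9'), 'LATIN9');
  $$ LANGUAGE sql IMMUTABLE;

  -- usage: SELECT to_ascii_latin9('Martín');  -- gives 'Martin'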

Regards
--
Arnaud



Re: [GENERAL] UTF-8 to ASCII

2007-05-11 Thread Martin Marques

LEGEAY Jérôme wrote:

To convert my DB, I use this process:


createdb -T "old_DB" "copy_old_DB"
dropdb "old_DB"
createdb -E LATIN1 -T "copy_old_DB" "new_DB_name"

maybe this process will help you.


As I said in my original mail, the DB conversion went OK, but I see
some discrepancies in the documentation.

My question is whether the documentation is correct, and if so, why I
don't get the right behavior.



--
 21:50:04 up 2 days,  9:07,  0 users,  load average: 0.92, 0.37, 0.18
-
Lic. Martín Marqués |   SELECT 'mmarques' ||
Centro de Telemática|   '@' || 'unl.edu.ar';
Universidad Nacional|   DBA, Programador,
del Litoral |   Administrador
-



Re: [GENERAL] Fault Tolerant Postgresql (two machines, two postmasters, one disk array)

2007-05-11 Thread Geoffrey

John Gateley wrote:

Sorry if this is a FAQ, I did search and couldn't find much.

I need to make my Postgresql installation fault tolerant.
I was imagining a RAIDed disk array that is accessible from two
(or multiple) computers, with a postmaster running on each computer.
(Hardware upgrades could then be done to each computer at different
times without losing access to the database).


We are doing this, more or less.  We use the RH cluster suite on two 
machines that share a common data silo.  Basically, if one machine 
fails, the other fires up a postmaster and picks up where the other left 
off.


That's real simple description because we actually have an active/active 
configuration with multiple postmasters running on each machine. 
Machine A is the active machine for databases 1-3 and machine B is the 
active machine for databases 4-6.   If machine A fails, postmasters are 
fired up on machine B to attend to databases 1-3.


--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin



Re: [GENERAL] UTF-8 to ASCII

2007-05-11 Thread LEGEAY Jérôme

To convert my DB, I use this process:


createdb -T "old_DB" "copy_old_DB"
dropdb "old_DB"
createdb -E LATIN1 -T "copy_old_DB" "new_DB_name"

maybe this process will help you.

regards

Jérôme LEGEAY

At 14:13 on 11/05/2007, you wrote:

I have a doubt about the function to_ascii() and what the documentation says.

Basically, I passed my DB from latin1 to UTF-8, and I started getting an 
error when using the to_ascii() function on a field of one of my DB [1]:


ERROR:  la conversión de codificación de UTF8 a ASCII no está soportada

OK, it's in spanish, but basically it says that the conversion UTF8 to 
ASCII is not supported, but in the documentation [2] I see this in the 
"Table 9-7. Built-in Conversions":


utf8_to_ascii    UTF8    SQL_ASCII

Is the documentation wrong or something?

I'm on postgresql-8.1.8, and as you can see, I'm checking the 
corresponding documentation.


[1]: This I already solved using convert() to pass from UTF to Latin1, and 
after that I do a to_ascii().
[2]: 
http://www.postgresql.org/docs/8.1/interactive/functions-string.html#FTN.AEN7625


--
 21:50:04 up 2 days,  9:07,  0 users,  load average: 0.92, 0.37, 0.18
-
Lic. Martín Marqués |   SELECT 'mmarques' ||
Centro de Telemática|   '@' || 'unl.edu.ar';
Universidad Nacional|   DBA, Programador,
del Litoral |   Administrador
-








[GENERAL] UTF-8 to ASCII

2007-05-11 Thread Martin Marques
I have a doubt about the function to_ascii() and what the documentation 
says.


Basically, I passed my DB from latin1 to UTF-8, and I started getting an 
error when using the to_ascii() function on a field of one of my DB [1]:


ERROR:  la conversión de codificación de UTF8 a ASCII no está soportada

OK, it's in spanish, but basically it says that the conversion UTF8 to 
ASCII is not supported, but in the documentation [2] I see this in the 
"Table 9-7. Built-in Conversions":


utf8_to_ascii    UTF8    SQL_ASCII

Is the documentation wrong or something?

I'm on postgresql-8.1.8, and as you can see, I'm checking the 
corresponding documentation.


[1]: This I already solved using convert() to pass from UTF to Latin1, 
and after that I do a to_ascii().
[2]: 
http://www.postgresql.org/docs/8.1/interactive/functions-string.html#FTN.AEN7625


--
 21:50:04 up 2 days,  9:07,  0 users,  load average: 0.92, 0.37, 0.18
-
Lic. Martín Marqués |   SELECT 'mmarques' ||
Centro de Telemática|   '@' || 'unl.edu.ar';
Universidad Nacional|   DBA, Programador,
del Litoral |   Administrador
-



Re: [GENERAL] Fault Tolerant Postgresql (two machines, two postmasters, one disk array)

2007-05-11 Thread Devrim GÜNDÜZ
Hi,

On Fri, 2007-05-11 at 06:24 -0400, Bill Moran wrote:
> PGCluster II does this.  I don't know if it's out of beta yet.

Mitani is injured (left thumb) and has been out of touch for 2 months.
The last time we talked (1 month ago) he said that he would continue
working on PGCluster-II after he feels better -- but there is no
up-to-date news until then. AFAIK, PGCluster-II is ready for testing,
but the SRA Europe guys will be doing an internal test before making
the code public.

He will be talking at PGCon, so we may expect to see some piece of code
by the end of this month.

Regards,
-- 
Devrim GÜNDÜZ
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, ODBCng - http://www.commandprompt.com/






Re: [GENERAL] Fault Tolerant Postgresql (two machines, two postmasters, one disk array)

2007-05-11 Thread Bill Moran
In response to John Gateley <[EMAIL PROTECTED]>:

> Sorry if this is a FAQ, I did search and couldn't find much.
> 
> I need to make my Postgresql installation fault tolerant.
> I was imagining a RAIDed disk array that is accessible from two
> (or multiple) computers, with a postmaster running on each computer.
> (Hardware upgrades could then be done to each computer at different
> times without losing access to the database).
> 
> Is this possible?

PGCluster II does this.  I don't know if it's out of beta yet.

-- 
Bill Moran
http://www.potentialtech.com



Re: [GENERAL] Windows installation; why is PL/python shown but disabled?

2007-05-11 Thread Magnus Hagander
Dave Page wrote:
> novnov wrote:
>> I have python 2.4 installed and python24\lib in the path; why is the PL\python
>> option presented but disabled in the 8.2 postgres installer? There must be some
>> conditions under which it (and the other langs) are enabled. The only one of
>> the 7 shown that is enabled is PL/pgsql. I've found this to be true for both
>> windows xp and vista.
> 
> To enable the PLs, the following files must be in the path:
> 
> ONEDLL("perl58.dll","HAS_PERL");
> ONEDLL("tcl84.dll","HAS_TCL");
> ONEDLL("python24.dll","HAS_PYTHON");
> ONEDLL("jvm.dll","HAS_JAVA");

Two more comments around that:

1) Those files *and any dependencies they have* must be in the PATH.

2) Remember that you have a per-user PATH and a per-system PATH. They
need to be in either the per-system PATH (recommended) or in the
per-user path *for the postgresql service account*. Having them in the
per-user path for Administrator or whatever account you are installing
from will not help.

//Magnus



Re: [GENERAL] Issue with database Postgresql :(

2007-05-11 Thread Albe Laurenz
> how can I set the client_encoding to what I need?

With SQL:

SET client_encoding = <encoding>;

Yours,
Laurenz Albe



Re: [GENERAL] Windows installation; why is PL/python shown but disabled?

2007-05-11 Thread Dave Page
novnov wrote:
> I have python 2.4 installed and python24\lib in the path; why is the PL\python
> option presented but disabled in the 8.2 postgres installer? There must be some
> conditions under which it (and the other langs) are enabled. The only one of
> the 7 shown that is enabled is PL/pgsql. I've found this to be true for both
> windows xp and vista.

To enable the PLs, the following files must be in the path:

ONEDLL("perl58.dll","HAS_PERL");
ONEDLL("tcl84.dll","HAS_TCL");
ONEDLL("python24.dll","HAS_PYTHON");
ONEDLL("jvm.dll","HAS_JAVA");

Regards, Dave.



Re: [GENERAL] Views- Advantages and Disadvantages

2007-05-11 Thread Alban Hertroys
Tom Lane wrote:
> Ron Johnson <[EMAIL PROTECTED]> writes:
>> On 05/10/07 21:24, Tom Lane wrote:
>>> My instinctive CS-major reply to that is "only if you've found a
>>> solution to the halting problem".  However, it's possible that we could
>>> detect this case for a useful subset of real-world functions ... not
>>> sure offhand what could be covered.
> 
>> If there are no INSERT, UPDATE or DELETE statements in the function?

And no EXECUTE or PERFORM statements probably, or you'd have to
determine if the succeeding string might evaluate to INSERT, UPDATE or
DELETE.

And of course DDL statements would be out of the question too...

> Nor any function calls ... which leaves about nothing ...

Wouldn't that only be the case if those were volatile?
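For context, volatility is a property the author declares when creating
the function, not something the system infers from its body; a
hypothetical example:

  -- STABLE promises the planner that the function performs no database
  -- modifications and gives consistent results within a single statement.
  CREATE FUNCTION lookup_rate(item_id integer) RETURNS numeric AS $$
      SELECT 0.05::numeric;   -- stand-in for a read-only lookup
  $$ LANGUAGE sql STABLE;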

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



Re: [GENERAL] backup and restore

2007-05-11 Thread anhtin

Thanks for the reply.
Can you show me how I would back up the functions or procedures in a
database?
-- 
View this message in context: 
http://www.nabble.com/backup-and-restore-tf3714247.html#a10427265
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] tokenize string for tsearch?

2007-05-11 Thread Magnus Hagander
Ottavio Campana wrote:
> Magnus Hagander wrote:
>> On Mon, May 07, 2007 at 05:31:02PM -0700, Ottavio Campana wrote:
>>> Hi, I'm trying to use tsearch2 for the first time and I'm having a
>>> problem setting up a query
>>>
>>> If I execute
>>>
>>> SELECT * from test_table where ts_desc @@ to_tsquery ('hello&world');
>>>
>>> it works, but I'm having the problem that the string used for the query
>>> is not 'hello&world' but 'hello world', Moreover, it can have an
>>> arbitrary number of spaces between the words, so I cannot just
>>> substitute the spaces with &, because 'hello&&world' gives error.
>>>
>>> What is the safest way transform a string into a list of words "anded"
>>> together?
>> Look at plainto_tsquery().
> 
> db=# SELECT plainto_tsquery('default', 'hello word');
> ERROR:  function plainto_tsquery("unknown", "unknown") does not exist
> HINT:  No function matches the given name and argument types. You may
> need to add explicit type casts.
> 
> I'm using 8.1.8 and I don't find plainto_tsquery in tsearch2.sql
> 
> What can I do?

Yeah, you need 8.2 for that function. I don't think anybody has tried
backpatching it, but if you want to you can look at the code in 8.2 and
see if you can backpatch it yourself. But the easiest way is certainly
to upgrade to 8.2.

//Magnus




Re: [GENERAL] backup and restore

2007-05-11 Thread Chris

anhtin wrote:

Can anybody show me?
I want to back up a database, and I read on the Internet about the function
pg_start_backup(C:\Program Files\MicrosoftSQLServer\MSSQL\BACKUP\abc.backup)
but I could not run it.
Can somebody show me how to run this function?


Read the documentation!

http://www.postgresql.org/docs/current/static/backup-dump.html


pg_start_backup is NOT for creating full backups - it is for creating 
partial backups (as you would see here: 
http://www.postgresql.org/docs/8.2/static/continuous-archiving.html).
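For what it's worth, pg_start_backup takes a text label (not a file
path) and only makes sense together with WAL archiving; a minimal
sketch of that usage:

  -- Used with continuous archiving (archive_command must be configured);
  -- the argument is just a label, not a destination path.
  SELECT pg_start_backup('nightly');
  -- ... copy the data directory at the filesystem level here ...
  SELECT pg_stop_backup();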


--
Postgresql & php tutorials
http://www.designmagick.com/



[Fwd: Re: [GENERAL] backup and restore]

2007-05-11 Thread Paul Lambert

anhtin wrote:

Can anybody show me?
I want to back up a database, and I read on the Internet about the function
pg_start_backup(C:\Program Files\MicrosoftSQLServer\MSSQL\BACKUP\abc.backup)
but I could not run it.
Can somebody show me how to run this function?


See:

http://www.postgresql.org/docs/8.2/static/continuous-archiving.html

Are you trying to use PG to backup an MS SQL server database?
--
Paul Lambert
Database Administrator
AutoLedgers




Re: [GENERAL] backup and restore

2007-05-11 Thread Paul Lambert

anhtin wrote:

Can anybody show me?
I want to back up a database, and I read on the Internet about the function
pg_start_backup(C:\Program Files\MicrosoftSQLServer\MSSQL\BACKUP\abc.backup)
but I could not run it.
Can somebody show me how to run this function?


See:

http://www.postgresql.org/docs/8.2/static/continuous-archiving.html

--
Paul Lambert
Database Administrator
AutoLedgers




Re: [GENERAL] backup and restore

2007-05-11 Thread anhtin

Can anybody show me?
I want to back up a database, and I read on the Internet about the function
pg_start_backup(C:\Program Files\MicrosoftSQLServer\MSSQL\BACKUP\abc.backup)
but I could not run it.
Can somebody show me how to run this function?
-- 
View this message in context: 
http://www.nabble.com/backup-and-restore-tf3714247.html#a10426704
Sent from the PostgreSQL - general mailing list archive at Nabble.com.



