On Wednesday 03 May 2006 04:31, pgdb wrote:
I'd like to know if I can replicate a PG db to another
lightweight platform like SQLite? The objective is to push
updates to remote devices(laptops, PDAs etc) from a central
server, so that the data can be used offline. These devices are
not expected to
I'm running Postgresql 8.1.3 on Windows 2003.
logfiles fill with the following line:
SSL SYSCALL error: A blocking operation was interrupted by a call to
WSACancelBlockingCall.
Any ideas?
Thanks.
Is there a way to make separate replacements in one field with
a single SQL command?
I need to remove all vowels (a,e,i,o,u) in a field. How would
I go about that?
Try something like
SELECT regexp_replace(your_string,'[aeiou]','','g') ...
(btw, if you want all vowels, don't forget 'y' :-P)
On Wed, May 03, 2006 at 09:38:47AM +0200, Magnus Hagander wrote:
SELECT regexp_replace(your_string,'[aeiou]','','g') ...
I'll be darned. I've been looking for that function. I expected to
find it in the docs under String Functions with the other replace
functions. I'm surprised to find it
I will add to my initial post:
This is a fresh install with a small database (30MB over 12 tables), no other
applications on this server and only one user - it's a development machine.
And when I say the logfiles fill up I mean the entire partition fills,
rapidly, until there is no room left
Do you actively use SSL on this server?
//Magnus
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Erik
Myllymaki
Sent: Wednesday, May 03, 2006 3:36 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] logfiles filling up
I will add to
yes I do.
Magnus Hagander wrote:
Do you actively use SSL on this server?
//Magnus
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Erik
Myllymaki
Sent: Wednesday, May 03, 2006 3:36 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL]
Hi all,
I've been searching around for an answer to this, but I couldn't find
anything. So here we go.
I am running PostgreSQL 8.1.3 on Red Hat on an Intel server with 2GB
of RAM and lot of free HD space.
I have a very large dump file, more than 4GB, to recreate a database.
When I run:
psql
Javier de la Torre wrote:
Hi all,
I've been searching around for an answer to this, but I couldn't find
anything. So here we go.
I am running PostgreSQL 8.1.3 on Red Hat on an Intel server with 2GB
of RAM and lot of free HD space.
I have a very large dump file, more than 4GB, to
On May 3, 2006, at 12:00 AM, David Fetter wrote:
Would be glad to hear if there are examples of existing
commercial/OSS products to serve the purpose.
If you're not using PostgreSQL for the smaller systems, you might be
able to rig something up with pg_dump using the --inserts option. The
It is inserts.
I create the inserts myself with a Python program I have created to
migrate MySQL databases to PostgreSQL (by the way, if someone wants
it...)
Thanks.
Javier.
On 5/3/06, Larry Rosenman [EMAIL PROTECTED] wrote:
Javier de la Torre wrote:
Hi all,
I've been searching around
Javier de la Torre wrote:
It is inserts.
I create the inserts myself with a Python program I have created to
migrate MySQL databases to PostgreSQL (by the way, if someone wants
it...)
Ok, that makes *EACH* insert a transaction, with all the overhead.
You need to batch the inserts between
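The batching being suggested can be sketched in plain SQL; the table and column names below are invented for illustration:

```sql
-- Each INSERT outside an explicit transaction commits on its own,
-- paying the per-transaction overhead every time. Grouping a batch
-- between BEGIN and COMMIT pays that overhead once per batch.
BEGIN;
INSERT INTO observations (id, species) VALUES (1, 'Parus major');
INSERT INTO observations (id, species) VALUES (2, 'Corvus corax');
-- ... several thousand more rows ...
COMMIT;
```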
You might want to look at Knoda (www.knoda.org). In particular the following
link-
http://hk-classes.sourceforge.net/tutorials/knodascriptingtutorial/bk01ch05s12.html
On Tuesday 02 May 2006 07:31 pm, pgdb wrote:
like to know if I can replicate a PG db to another
lightweight platform like SQLite?
Yes,
Thanks. I am doing this now...
It's definitely faster, but I will also discover now if there is a limit
on the transaction side... I am going to try to insert into one single
transaction 60 million records in a table.
In any case I still don't understand why PostgreSQL was not taking
Javier de la Torre wrote:
Yes,
Thanks. I am doing this now...
It's definitely faster, but I will also discover now if there is a limit
on the transaction side... I am going to try to insert into one single
transaction 60 million records in a table.
In any case I still don't understand why
On Wed, May 03, 2006 at 04:28:10PM +0200, Leif B. Kristensen wrote:
However, I'm wondering if there's a practical limit to how many rows you
can insert within one transaction?
There's a limit of (I think) 2-4 billion commands per transaction. Each
command can insert any number of tuples.
So
Great! Then there will be no problems.
I would use COPY but I think I cannot. While moving from MySQL to
PostgreSQL I am also transforming a pair of fields, latitude and
longitude, into a geometry field, POINT, that is understood by
PostGIS. I thought I would not be able to use COPY when inserting
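One possible workaround, assuming a PostGIS installation that provides ST_MakePoint (older releases spell the constructors differently): COPY the raw columns into a staging table, then do the transformation in a second, set-based step. All names below are invented:

```sql
-- Bulk-load the untransformed lat/long pairs first...
CREATE TABLE staging (latitude float8, longitude float8);
COPY staging FROM '/tmp/mysql_export.txt';

-- ...then build the POINT geometry in one statement.
INSERT INTO places (geom)
SELECT ST_MakePoint(longitude, latitude)
FROM staging;
```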
Hi all,
I wonder why this happens:
- postgres: 8.1.3
- the table has ~200 million rows;
- there is a primary key on (col_1, col_2);
- the table was ANALYZEd;
- the planner chooses seqscan+sort for the following query even with
enable_seqscan=off:
select * from table order by col_1;
Isn't
Csaba Nagy wrote:
select * from table order by col_1;
Isn't it supposed to choose the index scan at least when
enable_seqscan=off ? Even if it is indeed not faster to do the index
scan than seqscan+sort.
I think because you've asked for every row, it's going to have to scan
the whole table
Csaba Nagy [EMAIL PROTECTED] schrieb:
select * from table order by col_1;
Without a WHERE you get the whole table.
An index scan is, in this case, expensive.
HTH, Andreas
--
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.
On Wed, 2006-05-03 at 17:48, John D. Burger wrote:
Csaba Nagy wrote:
select * from table order by col_1;
Isn't it supposed to choose the index scan at least when
enable_seqscan=off ? Even if it is indeed not faster to do the index
scan than seqscan+sort.
I think because you've
Without a WHERE you get the whole table.
An index scan is, in this case, expensive.
I know that, and I would agree if I hadn't asked for an _ordered_
result and hadn't turned enable_seqscan=off.
Under these conditions I would have expected an index scan, even if it is
more expensive
hm. Ok.
The log file contains a PID field, IIRC. Does that PID:
1) Stay the same, or change?
2) Match postmaster, or postgres.exe? If postgres.exe, is it just a
backend?
3) If it's just a backend, is it enough to kill off that backend?
//Magnus
-Original Message-
From: Erik Myllymaki
I don't see any kind of PID in the logfiles at all, just lines and lines of:
2006-05-02 23:03:28 LOG: SSL SYSCALL error: A blocking operation was
interrupted by a call to WSACancelBlockingCall.
2006-05-02 23:03:28 LOG: SSL SYSCALL error: A blocking operation was
interrupted by a
But I also asked for _ordered_ results, which the seq scan is not
covering, but the index does... and I specifically disabled sequential
scan.
Docs say:
Enables or disables the query planner's use of sequential scan plan
types. It's not possible to suppress sequential scans entirely, but
Docs say:
Enables or disables the query planner's use of sequential scan plan
types. It's not possible to suppress sequential scans entirely, but
turning this variable off discourages the planner from using one if
there are other methods available.
Note the second sentence.
On Wed, May 03, 2006 at 06:42:00PM +0200, Csaba Nagy wrote:
OK, maybe that's the point... the cost boost given to the sequential
scan by enable_seqscan=off is not enough in this case to exceed the cost
of the index scan? The table is quite big, so that might be possible. I still
wonder why seqscan+sort would be
Tom Lane wrote:
I hope it said rules, because you can't put a trigger on a view.
regression=# create table t(f1 int, f2 text);
CREATE TABLE
regression=# create view v as select * from t;
CREATE VIEW
regression=# insert into v values(22, 'foo');
ERROR: cannot insert into a view
HINT:
On Sat, 2006-04-22 at 15:08 -0400, Tom Lane wrote:
Francisco Reyes [EMAIL PROTECTED] writes:
What resource do I need to increase to avoid the error above?
Process memory allowed to the client; this is not a server-side error.
I am experiencing an out of memory situation as well on large
Sven Willenberger [EMAIL PROTECTED] writes:
On Sat, 2006-04-22 at 15:08 -0400, Tom Lane wrote:
Francisco Reyes [EMAIL PROTECTED] writes:
What resource do I need to increase to avoid the error above?
Process memory allowed to the client; this is not a server-side error.
I am
On Wed, May 03, 2006 at 10:02:17AM -0700, Karen Hill wrote:
Tom Lane wrote:
Thanks Tom,
I tried it and it worked. Is it possible to do something a bit more
complex? Can you use rules to insert into a view that has multiple
tables as the source? For example:
CREATE VIEW v AS SELECT
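A minimal sketch of what such a rule could look like, with invented table and view names; real rewrite rules would need to match the view's actual definition:

```sql
CREATE TABLE person  (id int PRIMARY KEY, name text);
CREATE TABLE address (id int, city text);
CREATE VIEW v AS
    SELECT p.id, p.name, a.city
    FROM person p JOIN address a USING (id);

-- Rewrite an insert on the view into inserts on both base tables.
CREATE RULE v_insert AS ON INSERT TO v DO INSTEAD (
    INSERT INTO person  VALUES (NEW.id, NEW.name);
    INSERT INTO address VALUES (NEW.id, NEW.city);
);
```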
Csaba Nagy wrote:
Docs say:
Enables or disables the query planner's use of sequential scan plan
types. It's not possible to suppress sequential scans entirely, but
turning this variable off discourages the planner from using one if
there are other methods available.
Note the second sentence.
Csaba Nagy [EMAIL PROTECTED] writes:
OK, maybe that's the point... the cost boost given to the sequential
scan by enable_seqscan=off is not enough in this case to exceed the cost
of the index scan?
Looks that way to me. You could try setting enable_sort off as well,
which will penalize the
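Tom's experiment might look like the following in a psql session; the table and column names are the hypothetical ones from the question:

```sql
-- Discourage both the seqscan and the sort, then see what plan
-- the planner picks and what it is costed at.
SET enable_seqscan = off;
SET enable_sort = off;
EXPLAIN SELECT * FROM big_table ORDER BY col_1;
```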
On Wed, 2006-05-03 at 13:34, Tom Lane wrote:
Csaba Nagy [EMAIL PROTECTED] writes:
OK, maybe that's the point... the cost boost given to the sequential
scan by enable_seqscan=off is not enough in this case to exceed the cost
of the index scan?
Looks that way to me. You could try setting
How do folks handle the death of the postmaster in their applications?
Assuming the postmaster dies after an application has connected to the
database, but before it makes a request. What should I look for?
Currently our application that's in development does not handle the
situation well.
am 03.05.2006, um 20:20:55 +0200 mailte Florian G. Pflug folgendes:
of the index scan? The table is quite big, so that might be possible. I still
wonder why seqscan+sort would be faster than an index scan... the sort will
surely have to write to disk too, given the size of the table...
When using an
On Wed, 2006-05-03 at 13:16 -0400, Douglas McNaught wrote:
Sven Willenberger [EMAIL PROTECTED] writes:
On Sat, 2006-04-22 at 15:08 -0400, Tom Lane wrote:
Francisco Reyes [EMAIL PROTECTED] writes:
What resource do I need to increase to avoid the error above?
Process memory allowed to
On Wed, May 03, 2006 at 02:44:03PM -0400, Geoffrey wrote:
How do folks handle the death of the postmaster in their applications?
Assuming the postmaster dies after an application has connected to the
database, but before it makes a request. What should I look for?
Currently our application
Sven Willenberger wrote:
OK, that I do see; I guess I never noticed it on the other (i386)
machine as the memory never exceeded the max amount allowed by the
tunables. That raises a question though:
Using identical data and identical queries, why would the amd64 system
using postgresql
Geoffrey wrote:
How do folks handle the death of the postmaster in their applications?
Assuming the postmaster dies after an application has connected to the
database, but before it makes a request. What should I look for?
Currently our application that's in development does not handle the
On Wed, May 03, 2006 at 02:52:57PM -0400, Sven Willenberger wrote:
Using identical data and identical queries, why would the amd64 system
using postgresql 8.1.3 be using some 2/3s more memory to store the query
results before output than the i386 system using postgresql 8.0.4?
Is the amd64
Hi! When I try to connect to my database dspace, the following error message occurs:
FATAL: database dspace does not exist
My PostgreSQL version is 8.1.3, although the same error appears no matter what version of psql I have. My system is Windows XP Professional Ed. If someone got any ideas, then
Martijn van Oosterhout kleptog@svana.org writes:
However, I'm wondering if there's a practical limit to how many rows you
can insert within one transaction?
There's a limit of (I think) 2-4 billion commands per transaction. Each
command can insert any number of tuples.
So if you're doing
Christo Romberg [EMAIL PROTECTED] writes:
When I try to connect to my database dspace,
the following error message occurs:
FATAL: database dspace does not exist
The error message seems perfectly clear to me ;-).
You might be getting bit by case sensitivity --- perhaps the
database is really
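One way to check for a case-sensitivity mismatch; "Dspace" below is just an invented variant spelling:

```sql
-- List the exact, case-sensitive database names the server knows about.
SELECT datname FROM pg_database;

-- A database created with a quoted mixed-case name must be connected
-- to with the same quoting, e.g. in psql:
--   \c "Dspace"
```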
Hmm. I thought we had the PID in there by default; apparently we don't.
Try setting log_line_prefix to something containing %p (for process pid)
- probably %t %p , unless you have changed things from the default.
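The setting being described goes in postgresql.conf; this sketch assumes the stock 8.1 defaults:

```
# postgresql.conf: prefix every log line with timestamp (%t) and PID (%p)
log_line_prefix = '%t %p '
```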
Then restart the server and wait for the next time it happens :-(
//Magnus
-Original
It does not get mail for a long time. ;^)
It also cannot establish a connection to the listener.
If you are on the same machine as the db, you could check
to see if the process is running. You could also set up
an inetd listener that indicates the status of the
postmaster. I have not done that
Hello,
I'm trying to recreate my cluster by restoring a dump created by
pg_dumpall 8.0.3, but it's not working.
The problem is that just after the connection to template1 the script is
trying to use a schema that doesn't exist yet. The statement that creates
the schema is there, but much further down.
Is this
Is this
How do you know that the database exists? If you load up psql and
then \c dspace, does it let you? If you \l in psql, do you see dspace?
On 5/2/06, Christo Romberg [EMAIL PROTECTED]
wrote:
Hi! I have some problems with PostgreSQL v8.1.3. My system is Windows XP Professional Edition. An error occurs
Javier de la Torre wrote:
Great! Then there will be no problems.
I would use COPY but I think I cannot. While moving from MySQL to
PostgreSQL I am also transforming a pair of fields, latitude and
longitude, into a geometry field, POINT, that is understood by
PostGIS. I thought I would not be able
Hi,
Is there any way of mimicking the IS STRICT behavior
*without* declaring the function as STRICT?
E.g., I could use PG_ARGISNULL() to check for invocation
with a NULL argument. But, the function has already been
invoked at that point. I.e. I still have to *return*
something -- which might
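For illustration, here is the same manual check in a PL/pgSQL function (the function name is invented); a C-language function would test PG_ARGISNULL() and call PG_RETURN_NULL() analogously:

```sql
-- Not declared STRICT, so the body runs even for NULL input;
-- we detect the NULL ourselves and return NULL explicitly.
CREATE FUNCTION double_it(v integer) RETURNS integer AS $$
BEGIN
    IF v IS NULL THEN
        RETURN NULL;  -- the "mimicked" strict behavior
    END IF;
    RETURN v * 2;
END;
$$ LANGUAGE plpgsql;
```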