[GENERAL] pgloader - Can't find textreader/csvreader

2007-08-27 Thread Ow Mun Heng
I'm trying to see if pgloader will make my work easier for bulkloads.
I'm testing it out, and I'm stuck because it can't find the
module TextReader or CSVreader.

Googling doesn't help as there seems to be no reference to a module
named textreader or csvreader.

I'm on Python 2.4.4

Thanks.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] LDAP service lookup

2007-08-27 Thread Albe Laurenz
> I've just accidently stumbled upon
> http://www.postgresql.org/docs/8.2/static/libpq-ldap.html
> and thought "hey, this is what my friend, a huge BigRDBMS fan, was
> telling me about.
> 
> Now that I've read it, I think it could be very useful in an
> enterpisish sort of way
> (addressing databases as "services" not as 
> "host+port+database name"), and
> it could be used as an advocacy lever ("you think LDAP directory with
> DB-services
> is neat?  PostgreSQL already has it").

I'm glad that *somebody* else appreciates it :^)

> Then again, apart from libpq I don't see it mentioned 
> anywhere.  I would like
> to have [1] a Setting-up-LDAP-for-PgSQL-HOWTO.

Being the author of the code, I'd be willing to compose one if
there is a demand.
I thought that the documentation you quoted above would be enough -
it has samples and everything.
What information is missing in your opinion?

>  I would like to use it
> from DBD::Pg (I _guess_ its a matter of DBI->connect('dbi:Pg',
> 'service=foo', ...);,
> but its a wild guess).  And I would like to use it from psql 
> (this is a tricky
> part, since \c expects DBNAME, not a service name, and using both in
> this context would introduce ambiguity).

Good news - you can do all this.
Every client API and interface that uses libpq will be able to do
LDAP service lookup.

In 8.3 you will be able to say "psql service=myname", in 8.2 you
have to resort to "PGSERVICE=myname psql" because there is no
support for the service parameter.

Your DBD::Pg sample should work.

> Could anyone of you tell me about your setups if you use LDAP for
> this?  How do you feel about it?  Which LDAP server do you use?

We use Oracle's Internet Directory, but any LDAP server should do.
It's just a matter of organizing your databases in LDAP entries and
writing the corresponding LDAP searches into pg_service.conf.

The sample in the documentation works with standard LDAP objectclasses,
but you may want to use your own with more reasonable names.
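For the record, a hedged sketch of what this looks like (host, base DN, and attribute names are placeholders patterned after the sample in the libpq-ldap documentation):

```ini
# pg_service.conf -- a service entry whose connection info is looked up in LDAP
[customerdb]
ldap://ldap.example.com/cn=dbserver,cn=hosts?pgconnectinfo?base?(objectclass=*)
```

Clients then connect with `PGSERVICE=customerdb psql` (8.2) or `psql "service=customerdb"` (8.3).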

Yours,
Laurenz Albe



Re: [GENERAL] pgsql functions: get column headers; optional arguments

2007-08-27 Thread Albe Laurenz
Mark wrote:
> I am writing a function to extract data either from a table or a query

> and output it in xml. I have the logic down, but I cannot work out a
few 
> things.
> 
> 1. How can I read the column headings from the returned data set? I
have 
> resorted to writing the same function in tcl in which I can work out
how 
> to do this, but what about pgsql? I can't use the system tables for 
> this, since the data may not come from a table.

I don't think there is a way.

> 2. It it possible, either in tcl or pgsql, to have optional function 
> arguments?

No.

Before you do a lot of work on this, check out the xml2 contrib module.
It may provide a lot of what you need.

There will also be XML stuff in 8.3 core.
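As an illustration of why the 8.3 XML support may help with question 1: query_to_xml() emits the query's column names as element names, so the "headings" come for free. A sketch (the query is arbitrary):

```sql
-- query_to_xml(query, nulls, tableforest, targetns)
-- The resulting XML document uses the column aliases (id, name)
-- as element names, even when the data doesn't come from a table.
SELECT query_to_xml('SELECT 1 AS id, ''x'' AS name', true, false, '');
```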

Yours,
Laurenz Albe



Re: [GENERAL] Converting non-null unique idx to pkey

2007-08-27 Thread Kristo Kaiv


On 23.08.2007, at 16:10, Michael Glaesemann wrote:



On Aug 23, 2007, at 7:44 , Kristo Kaiv wrote:



On 23.08.2007, at 11:23, Alban Hertroys wrote:

Since you're setting up replication to another database, you  
might as
well try replicating to a newer release and swap them around once  
it's
done. I've seen that method of upgrading mentioned on this list a  
few times.
Don't try this. Believe me, you don't want to do it. We had our
fun with this 1.5 years ago


Care to share? Were you using Slony? AIUI, one of the motivations  
for Slony was to be able to do exactly that, so I'm sure there's  
interest in what didn't go as expected.
I think slony didn't have any problems with it. Something broke  
internally and we got rid of this by switching over (via Slony) to  
another server.


Kristo Kaiv
http://kaiv.wordpress.com (PostgreSQL blog)




Re: [GENERAL] Undetected corruption of table files

2007-08-27 Thread Albe Laurenz
Tom Lane wrote:
>> Would it be an option to have a checksum somewhere in each
>> data block that is verified upon read?
> 
> That's been proposed before and rejected before.  See the archives ...

I searched for "checksum" and couldn't find it. Could someone
give me a pointer? I'm not talking about WAL files here.

Thanks,
Laurenz Albe



Re: [GENERAL] Geographic High-Availability/Replication

2007-08-27 Thread Marko Kreen
On 8/26/07, Bill Moran <[EMAIL PROTECTED]> wrote:
> I'm curious as to how Postgres-R would handle a situation where the
> constant throughput exceeded the processing speed of one of the nodes.

Such a situation is not a problem specific to Postgres-R or to
synchronous replication in general.  Asynchronous replication
will break down too.

-- 
marko



Re: {Spam} [GENERAL] pgloader - Can't find textreader/csvreader

2007-08-27 Thread Dimitri Fontaine
On Monday 27 August 2007, Ow Mun Heng wrote:
> I'm trying to see if pgloader will make my work easier for bulkloads.
> I'm testing it out and I'm stucked basically because it can't find the
> module TextReader or CSVreader.
>
> Googling doesn't help as there seems to be no reference to a module
> named textreader or csvreader.

It's a pgloader-provided module, and the error arises because I forgot to make
sure you can use pgloader without installing it properly on the system.
The only installation supported right now is via the Debian package; if you're using
Debian, either use the .deb provided on pgFoundry or issue a debuild command.

You can also tweak the Makefile to add some mkdir -p commands before it
installs files into non-existent directories, and then install pgloader
with make install. (Debian packaging takes care of creating the system dirs,
hence the provided Makefile does not.)

I'll see about having pgloader usable without any system-wide installation...

Regards,
-- 
dim




Re: [GENERAL] Local authentication/security

2007-08-27 Thread Lange Marcus
> 
> Lange Marcus wrote:
> >> Not that it matters in your case. The password might as well be 
> >> "password" - if they get access to the files/application, 
> it's game 
> >> over.
> > 
> > What about having some of the columns encrypted in the database ? 
> > Will that improve things abit ?
> 
> Not unless you can keep the key protected. If you prevent 
> access to a file containing the key then you can do the same 
> to the database and application files.
> 
> 
> Can you tell us what you're trying to do? What are you going 
> to deploy, and what threats do you want to protect against?
> 

I have some data in an encrypted external data format that I wish to
insert into the database while keeping some level of protection/security,
to prevent or at least complicate reading and copying the data. That
is basically it.

Regards Marcus





Re: {Spam} [GENERAL] pgloader - Can't find textreader/csvreader

2007-08-27 Thread Ow Mun Heng
On Mon, 2007-08-27 at 12:22 +0200, Dimitri Fontaine wrote:
> On Monday 27 August 2007, Ow Mun Heng wrote:
> > I'm trying to see if pgloader will make my work easier for bulkloads.
> > I'm testing it out and I'm stucked basically because it can't find the
> > module TextReader or CSVreader.
> >
> It's a pgloader provided module, and the error arise because I forgot to make 
> sure you can use pgloader without installing it properly in the system.
> The only installation supported now is via the debian package, if you're 
> using 
> debian either use the .deb provided on pgfoundry or issue a debuild command.

I use gentoo.. so

> 
> You can also tweak the makefile to add some mkdir -p commands before it 
> installs files in non-existing directories, too, and then install pgloader 

I downloaded the 2.2.0 tarball and can't find any reference to it in
the tarball except for the import statement.

I'll try again with 2.2.1 (is there a 2.2.1? I thought I saw it) or take
a peek at the .deb package..

Thanks.

PS: (no internet at the moment.. only email access)




Re: {Spam} [GENERAL] pgloader - Can't find textreader/csvreader

2007-08-27 Thread Dimitri Fontaine
On Monday 27 August 2007, Ow Mun Heng wrote:
> > It's a pgloader provided module, and the error arise because I forgot to
> > make sure you can use pgloader without installing it properly in the
> > system. 

After some testing, it seems pgloader is still usable without any system
installation at all, but I forgot to include some new files in the 2.2.0 release.

> I'll try again with the (is there are 2.2.1, I thought I saw it) or take
> a peek at the deb package..

We've just made some tests here with 2.2.1 and as this release contains the 
missing files, it works fine without any installation.

Regards,
-- 
dim




PgLoader unable to handle pkey dups Was [Re: {Spam} [GENERAL] pgloader - Can't find textreader/csvreader]

2007-08-27 Thread Ow Mun Heng
On Mon, 2007-08-27 at 11:27 +0200, Dimitri Fontaine wrote:

> We've just made some tests here with 2.2.1 and as this release contains the 
> missing files, it works fine without any installation.

Yep.. I can confirm that it works.. I am using the csv example.

Goal: functionality similar to mysql's mysqlimport --replace
(overwrite any rows which have duplicate primary keys)

$ psql pgloader < csv/csv.sql
$ ../pgloader.py -Tvc examples/pgloader.conf csv

pgloader=# alter table csv add primary key (a,b,c);
pgloader=# \d csv
        Table "public.csv"
 Column |     Type     | Modifiers
--------+--------------+-----------
 a      | bigint       | not null
 b      | bigint       | not null
 c      | character(2) | not null
 d      | text         |
Indexes:
    "csv_pkey" PRIMARY KEY, btree (a, b, c)

pgloader=# select * from csv;
    a     |    b     | c  |       d
----------+----------+----+----------------
 33996344 | 33996351 | GB | United Kingdom
 50331648 | 68257567 | US | United States
 68257568 | 68257599 | CA | Canada
 68257600 | 68259583 | US | United States
 68259584 | 68259599 | CA | Canada

$cat csv/csv.data
"2.6.190.56","2.6.190.63","33996344","33996351","GB","Error Kingdom"
"4.17.143.0","4.17.143.15","68259584","68259599","CA","new Country"
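If pgloader itself has no --replace equivalent, a rough equivalent for 8.2 is to load into a staging table first and then merge. This is only a sketch; csv_staging is a hypothetical table with the same layout as csv, loaded by pgloader instead of the real table:

```sql
BEGIN;
-- delete any existing rows whose primary key is about to be re-imported
DELETE FROM csv
 USING csv_staging s
 WHERE csv.a = s.a AND csv.b = s.b AND csv.c = s.c;
-- then bring in the fresh copies
INSERT INTO csv SELECT * FROM csv_staging;
COMMIT;
```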


[GENERAL] Tables dissapearing

2007-08-27 Thread Kamil Srot

Hi guys!

I've been using pgsql for some time already and am happy with it. Heh,
of course this post has its big BUT :-)


A few months ago, one of our projects encountered the loss of one DB
table (at that time, it was version 8.0 or so...). I did some research
and found out that the vacuuming was set up wrong and a transaction
wraparound failure had occurred... OK, I upgraded to 8.2.1 (the latest at
that time) and set up autovacuum... (I still think it was set up correctly
and working)... but after less than one month the same thing happened,
and this time the age of all databases looked OK...
So I upgraded to 8.2.3 (the project is still running this version), but
the same thing happened again today...
This time I didn't rely on autovacuuming and wrote a cron-scheduled
job to do VACUUM FULL on all tables during nightly maintenance. I'm now
sure the wraparound didn't occur (am I?)... in fact, I'm lost...


One more thing:
The project runs a proprietary CMS system, and there are several instances
of it with the same database layout in different databases. Every time the
"lost" table is the same one, the busiest one (mostly read)... and
every time the "lost" table is gone from several projects at once... today
2 projects out of 7 on the same pgsql server were affected... it also looks
like the problem happens every time shortly after midnight (scary,
ha ha :-))


I have a backup of the DB files as they were this morning, but it's from
after the vacuum script ran...


My system is:
OS: Debian 3.0
Kernel: 2.4.26
PgSQL: 8.2.3

Any advice and/or hint to the right direction is greatly welcome!

Regards,
--
Kamil




Re: [GENERAL] Geographic High-Availability/Replication

2007-08-27 Thread Markus Schiltknecht

Hi,

Marko Kreen wrote:

Such a situation is not a problem specific to Postgres-R or to
synchronous replication in general.  Asynchronous replication
will break down too.


Agreed, except that I don't consider slowness as 'breaking down'.

Regards

Markus



[GENERAL] create or replace for tables and schemas

2007-08-27 Thread Marcelo de Moraes Serpa
For some weird reason, I can't use a stored function or return data from
SQL; I can only send SQL to the database. That's my constraint for now and I
have to deal with it.

I have to create a schema and, right after, a table in that schema. I can't
check for the existence of the table or the schema. If the SQL code to
create the table and the schema is run when the schema and/or the table
already exist, a SQL exception is thrown and my application halts.

With this in mind, I'd like to know if there is something like CREATE OR
REPLACE for tables and schemas so that if the object already exists, it will
just replace it.

For those curious, my company uses an application generator called GeneXus,
it has a "sql" "command" which allows you to send direct sql sentences to
the database **as long as** they don't return anything. Oh life.

Anyway, thanks in advance,

Marcelo.


Re: [GENERAL] create or replace for tables and schemas

2007-08-27 Thread Martijn van Oosterhout
On Mon, Aug 27, 2007 at 08:24:51AM -0300, Marcelo de Moraes Serpa wrote:
> With this in mind, I'd like to know if there is something like CREATE OR
> REPLACE for tables and schemas so that if the object already exists, it will
> just replace it.

Looks like DROP ... IF EXISTS was made for you.
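Something along these lines should work in 8.2 (names are placeholders; note that DROP ... CASCADE destroys any existing data in the schema, which may be exactly what "replace" implies here):

```sql
DROP SCHEMA IF EXISTS myschema CASCADE;  -- no error if it doesn't exist
CREATE SCHEMA myschema;
CREATE TABLE myschema.mytable (
    id   integer PRIMARY KEY,
    name text
);
```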

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] create or replace for tables and schemas

2007-08-27 Thread Marcelo de Moraes Serpa
Thanks Martijn

On 8/27/07, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote:
>
> On Mon, Aug 27, 2007 at 08:24:51AM -0300, Marcelo de Moraes Serpa wrote:
> > With this in mind, I'd like to know if there is something like CREATE OR
> > REPLACE for tables and schemas so that if the object already exists, it
> will
> > just replace it.
>
> Looks lke DROP IF EXISTS was made for you.
>
> Have a nice day,


[GENERAL] [ERROR] syntax error at end of input

2007-08-27 Thread Marcelo de Moraes Serpa
Hello list,

I'm trying to execute the following statements in a PL/pgSQL function.
aNomeProcAudita and pTabAudit are both variables.

   DROP FUNCTION IF EXISTS aNomeProcAudita;
   DROP TRIGGER IF EXISTS 'Audita_' || pTabAudit || '_trigger';


When I create the function without these two statements, everything
goes OK; however, when they are included, I get the following
error:

ERROR: syntax error at end of input

Is there anything wrong with the code?

Thanks in advance,

Marcelo.


Re: [GENERAL] SQL Diff ?

2007-08-27 Thread Alban Hertroys
Kevin Kempter wrote:
> Hi List;
> 
> I have a very large table (52million rows) - I'm creating a copy of it to rid 
> it of 35G worth of dead space, then I'll do a sync, drop the original table 
> and rename table2.
> 
> Once I have the table2 as a copy of table1 what's the best way to select all 
> rows that have been changed, modified in table1  since the initial laod from 
> table1 into table2?

I think you could get smart by having a few rules for insert/update/delete
on table1 that "keep track" of what happens during your work on table2.
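A minimal sketch of that idea, with illustrative names (it assumes table1 has a primary key column id):

```sql
-- side table that "keeps track" of modifications while table2 is built
CREATE TABLE table1_changes (
    id integer,
    op text,
    at timestamptz DEFAULT now()
);

CREATE RULE table1_log_ins AS ON INSERT TO table1
    DO ALSO INSERT INTO table1_changes (id, op) VALUES (new.id, 'I');
CREATE RULE table1_log_upd AS ON UPDATE TO table1
    DO ALSO INSERT INTO table1_changes (id, op) VALUES (new.id, 'U');
CREATE RULE table1_log_del AS ON DELETE TO table1
    DO ALSO INSERT INTO table1_changes (id, op) VALUES (old.id, 'D');
```

Afterwards, table1_changes tells you exactly which rows to re-copy into table2 before the swap.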

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



Re: [GENERAL] [ERROR] syntax error at end of input

2007-08-27 Thread A. Kretschmer
On Mon, 27 Aug 2007 at 09:40:45 -0300, Marcelo de Moraes Serpa wrote:
> Hello list,
> 
> I'm trying to execute the following sentences in a pl/pgsql function.
> aNomeProcAudita and pTabAudit are both variables.
> 
>DROP FUNCTION IF EXISTS aNomeProcAudita;

Which version? DROP object IF EXISTS is a new feature since 8.2. Do you
have 8.2?



>DROP TRIGGER IF EXISTS 'Audita_' || pTabAudit || '_trigger';

I guess you need to rewrite this to use EXECUTE for dynamic queries like
this.



Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



Re: [GENERAL] LDAP service lookup

2007-08-27 Thread Dawid Kuroczko
On 8/27/07, Albe Laurenz <[EMAIL PROTECTED]> wrote:
> > it could be used as an advocacy lever ("you think LDAP directory with
> > DB-services
> > is neat?  PostgreSQL already has it").
> I'm glad that *somebody* else appreciates it :^)

Oh, I do, I do. :)

> > Then again, apart from libpq I don't see it mentioned
> > anywhere.  I would like
> > to have [1] a Setting-up-LDAP-for-PgSQL-HOWTO.
> Being the author of the code, I'd be willing to compose one if
> there is a demand.
> I thought that the documentation you quoted above would be enough -
> it has samples and everything.
> What information is missing in your opinion?

Looking at the 8.3devel documentation...

I think it should be mentioned in 18. Server Configuration, probably
   somewhere in 18.3 Connections and Authentication, that there is
   the possibility of using service names instead of traditional connect
   strings -- and a link pointing to the libpq-ldap documentation.
   This would make people much less likely to miss this point, especially
   if they don't plan to code against the libpq C library. :-)
(Personally, I think it would fit in more places, like Managing Databases
(though LDAP is more generic in scope) or even HA (it makes it much
easier to promote a slave to master, just one update in one place), though
I wouldn't want to overpromote it ;)).

[...]
> In 8.3 you will be able to say "psql service=myname", in 8.2 you
> have to resort to "PGSERVICE=myname psql" because there is no
> support for the service parameter.

Somehow I've missed it while skimming through 8.2 docs.  I think
http://www.postgresql.org/docs/8.2/static/libpq-pgservice.html
could use this specific example to send a subliminal message:
  PGSERVICE=myname psql

> > Could anyone of you tell me about your setups if you use LDAP for
> > this?  How do you feel about it?  Which LDAP server do you use?
> We use Oracle's Internet Directory, but any LDAP server should do.
> It's just a matter of organizing your databases in LDAP entries and
> writing the corresponding LDAP searches into pg_service.conf.

And, while not belonging in the PostgreSQL documentation but definitely
belonging in techdocs, there should be a step-by-step guide to setting up
pg_services in OID and other LDAP servers.  Funny thing, I think I'll
be looking at putting pg_services in OID as well.

   Regards,
  Dawid



Re: [GENERAL] EXPLAIN ANALYZE not considering primary and unique indices!

2007-08-27 Thread Joseph S

How many rows are in this table?
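(Most likely the table is tiny: at a total cost of 1.31 the whole table fits in a single page, so a sequential scan is simply cheaper than an index scan. A quick way to confirm the primary-key index is usable, as a sketch:)

```sql
SET enable_seqscan = off;   -- temporarily make seq scans look very expensive
EXPLAIN ANALYZE SELECT * FROM website WHERE website_id = 1;
RESET enable_seqscan;
```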

Sanjay wrote:

Hi All,

Say I have a simple table WEBSITE(website_id int4 PRIMARY KEY, name
VARCHAR(30)). While I try this:

EXPLAIN ANALYZE SELECT * FROM WEBSITE WHERE website_id = 1

the output is:
--
Seq Scan on website  (cost=0.00..1.31 rows=1 width=162) (actual
time=0.047..0.051 rows=1 loops=1)
  Filter: (website_id = 1)
Total runtime: 0.102 ms
---
Wondering why it is not using the index, which would have been
automatically created for the primary key.

Needing help.

thanks
Sanjay





Re: [GENERAL] Out of Memory - 8.2.4

2007-08-27 Thread Jeff Amiel

--- "Joshua D. Drake" <[EMAIL PROTECTED]> wrote:


> Having log_line_prefix with at least %p and %m (or
> %t) plus a
> log_min_messages of DEBUG2 would be great.

I am getting the additional timestamp/PID on my log
lines now, but no additional debug output...
Is log_min_messages one of the settings that requires a
restart?



   




Re: [GENERAL] [ANNOUNCE] == PostgreSQL Weekly News - August 26 2007 ==

2007-08-27 Thread Joseph S



David Fetter wrote:


Tom Lane committed:
> 
- Restrict pg_relation_size to relation owner, pg_database_size to DB

  owner, and pg_tablespace_size to superusers.  Perhaps we could
  weaken the first case to just require SELECT privilege, but that
  doesn't work for the other cases, so use ownership as the common
  concept.

Is there going to be a way to turn this off easily?  This is going to 
break my admin tool.




Re: [GENERAL] [PERFORM] Partioning tsearch2 a table into chunks and accessing via views

2007-08-27 Thread Vivek Khera


On Aug 25, 2007, at 1:34 AM, Benjamin Arai wrote:

There has to be another way to do incremental indexing without  
loosing that much performance.


This is the killer feature that prevents us from using the tsearch2
full-text indexer on Postgres.  We're investigating making a foreign
table from a SOLR full-text index, so our app only talks to Pg but the
text search is held in a good index.





Re: [GENERAL] LDAP service lookup

2007-08-27 Thread Albe Laurenz
Dawid Kuroczko wrote:
>>> Then again, apart from libpq I don't see it mentioned anywhere.
[...]
> Looking at the 8.3devel documentation...
> 
> I think it should be mentioned in 18. Server Configuration. probably
> somewhere in 18.3 Connections and Authentication, that there is
> a possibility of using Service names instead of traditional connect
> strings -- and a link pointing to libpq-ldap documentation.
> This would make people much less likely to miss this point, especially
> if they don't plan to code in libpq C library. :-)

The server config options are not a good place.

But it could be mentioned in the 'psql' man page, under
'Connection to a database':

$ psql "service=myservice sslmode=require"


This way you can also use LDAP for connection parameter lookup (See
Section 30.15).

See Section 30.1 for more information on all the available connection
options. 

If there are no objections, I'd create a documentation patch for this.

> And, while not belonging to PostgreSQL documentation, but defenately
> belonging iin techdocs, whould be a step-by-step guide of setting up
> pg_services in OID and other LDAP servers.  Funny thing, I think I'll
> be looking at putting pg_services in OID as well.

Hmm, a tutorial for configuring LDAP servers would be quite off topic.
I think that the examples in Section 30.15 are sufficient for somebody
who is familiar with LDAP.

Yours,
Laurenz Albe



Re: [GENERAL] Bigtime scaling of Postgresql (cluster and stuff I suppose)

2007-08-27 Thread Vivek Khera


On Aug 25, 2007, at 8:12 AM, Phoenix Kiula wrote:


The sentence that caught my attention is "Nokia, Alcatel and Nortel
are all building real-time network nodes on top of MySQL Cluster."

My experiences with MySQL so far have been less than exhilerating
(only tried it for our web stuff, which is not much so far but
increasingly rapidly) but I have seen a lot of talk about MySQL
clusters.

Is there something similar in the PG world? PG Clusters?


MySQL Cluster is a specific product.  It is not just mysql databases  
lumped together to make a big mysql DB.  It is a memory-based  
database that requires at least 2 machines to run, and is not  
usefully redundant and distributed until you have many machines.  The  
telcos use it for their logging and switching infrastructure, where
you need 100% uptime and wicked-fast response (thus memory-based).
And you'll note it was developed by Ericsson...


There is no equivalent in Postgres.

There are several replication choices for Postgres. Google will find  
them for you, or just look on the postgres.org pages for the list.   
We use Slony-I and it works very well for our needs.





Re: [GENERAL] Restore v. Running COPY/INDEX seperatly

2007-08-27 Thread Tom Lane
Benjamin Arai <[EMAIL PROTECTED]> writes:
> Why is a trigger faster than doing a ALTER after table is created?  I  
> thought a trigger would be slower because it would be invoked every  
> iteration (a new row is inserted) during the COPY process.

Yeah, you'd have the trigger overhead, but the above argument ignores
the costs of the full-table UPDATE --- not to mention the VACUUM
you'll need after the UPDATE to clean up the dead rows.

regards, tom lane



Re: [GENERAL] Bigtime scaling of Postgresql (cluster and stuff I suppose)

2007-08-27 Thread Andrew Sullivan
On Sat, Aug 25, 2007 at 11:13:45AM -0400, Tom Lane wrote:
> In case you hadn't noticed the disconnect between these statements:
> if they have to be that close together, there *will* be a single point
> of failure.  Fire in your data center, for instance, will take out every
> copy of your data.  So as a "high availability" solution I don't find
> it all that compelling.

Indeed.  There's another issue, too, which you have to spend some
time reading the manual to get.  The clustering stuff is _yet
another_ table type, with subtly different semantics from other table
types.  As usual, this means that you can blow off your foot by
mixing table types in a transaction.  As near as I can tell, the
cluster table type (I disremember the name of it) cannot be run in
strict mode, either.

To answer the OP's question, you can do some "cluster-like" things by
doing hardware clustering -- two machines attached to a RAID with
some sort of hardware fail-over in place.

I think that the MySQL cluster stuff wasn't intended as an HA
feature, though (although they might well be selling it that way). 
It was a way to scale many small systems for certain kinds of
workloads.  My impression is that in most cases, it's a SQL-ish
solution to a problem where someone decided to use the SQL nail
because that's the hammer they had.  I can think of ways you could
use it, and I'm not surprised that Some Giant Corp is doing so.  But
I'd be astonished if someone used it for truly valuable data.  I
would think very hard about the qualifications of someone who
proposed using it for financial data.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
Users never remark, "Wow, this software may be buggy and hard 
to use, but at least there is a lot of code underneath."
--Damien Katz



Re: [GENERAL] Undetected corruption of table files

2007-08-27 Thread Tom Lane
"Albe Laurenz" <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>>> Would it be an option to have a checksum somewhere in each
>>> data block that is verified upon read?

>> That's been proposed before and rejected before.  See the archives ...

> I searched for "checksum" and couldn't find it. Could someone
> give me a pointer? I'm not talking about WAL files here.

"CRC" maybe?  Also, make sure your search goes all the way back; I think
the prior discussions were around the same time WAL was initially put
in, and/or when we dropped the WAL CRC width from 64 to 32 bits.
The very measurable overhead of WAL CRCs is the main thing that's
discouraged us from having page CRCs.  (Well, that and the lack of
evidence that they'd actually gain anything.)

regards, tom lane



Re: [GENERAL] Tables dissapearing

2007-08-27 Thread Tom Lane
Kamil Srot <[EMAIL PROTECTED]> writes:
> One more thing:
> The project runs proprietal CMS system and there are more instances of 
> it with the same database layout in different databases. Every time the 
> "lost" table is the same one - the bussiest one (mostly read)... and 
> everytime the "lost" table is gone from more projects at once... today 
> it was 2 projects affected out of 7 in the same pgsql server... it looks 
> also like the problem happens everytime short after midnight - scarry - 
> ha ha :-)

Tables don't just "disappear" like that.  Check your application code
to see if it's not dropping the table someplace.

regards, tom lane



Re: [HACKERS] [GENERAL] Undetected corruption of table files

2007-08-27 Thread Jonah H. Harris
On 8/27/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> that and the lack of evidence that they'd actually gain anything

I find it somewhat ironic that PostgreSQL strives to be fairly
non-corruptible, yet has no way to detect a corrupted page.  The only
reason for not having CRCs is that they would slow down performance...
which is exactly the opposite of conventional PostgreSQL wisdom (never
trade durability for performance).

-- 
Jonah H. Harris, Software Architect | phone: 732.331.1324
EnterpriseDB Corporation| fax: 732.331.1301
33 Wood Ave S, 3rd Floor| [EMAIL PROTECTED]
Iselin, New Jersey 08830| http://www.enterprisedb.com/



Re: [GENERAL] [ERROR] syntax error at end of input

2007-08-27 Thread Tom Lane
"Marcelo de Moraes Serpa" <[EMAIL PROTECTED]> writes:
>DROP FUNCTION IF EXISTS aNomeProcAudita;
>DROP TRIGGER IF EXISTS 'Audita_' || pTabAudit || '_trigger';

Neither of those match the documented syntax for the commands: you have
left off required information.  Also, as Andreas noted, if you want to
construct a name dynamically then you have to use EXECUTE.  The second
one should go something like

EXECUTE 'DROP TRIGGER IF EXISTS Audita_' || pTabAudit || '_trigger ON ' || 
tableName;
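To make the concatenation concrete, here is a sketch (in Python, purely illustrative; the table name "cliente" is a hypothetical placeholder) of the statement string that the EXECUTE ends up running. In real PL/pgSQL you would normally pass the names through quote_ident() to guard against unusual or malicious identifiers:

```python
def drop_trigger_sql(p_tab_audit, table_name):
    # Mirrors the PL/pgSQL concatenation:
    #   'DROP TRIGGER IF EXISTS Audita_' || pTabAudit || '_trigger ON ' || tableName
    # Assumes the names are already safe; quote_ident() would be safer.
    return ("DROP TRIGGER IF EXISTS Audita_" + p_tab_audit +
            "_trigger ON " + table_name)

print(drop_trigger_sql("cliente", "cliente"))
# DROP TRIGGER IF EXISTS Audita_cliente_trigger ON cliente
```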

regards, tom lane



Re: [GENERAL] Tables dissapearing

2007-08-27 Thread brian

Tom Lane wrote:

Kamil Srot <[EMAIL PROTECTED]> writes:


One more thing:
The project runs a proprietary CMS system and there are more instances of 
it with the same database layout in different databases. Every time the 
"lost" table is the same one - the busiest one (mostly read)... and 
every time the "lost" table is gone from more projects at once... today 
it was 2 projects affected out of 7 in the same pgsql server... it looks 
also like the problem happens every time shortly after midnight - scary - 
ha ha :-)



Tables don't just "disappear" like that.  Check your application code
to see if it's not dropping the table someplace.

regards, tom lane



grep -ri drop your_application_dir



Re: [GENERAL] Out of Memory - 8.2.4

2007-08-27 Thread Tom Lane
Jeff Amiel <[EMAIL PROTECTED]> writes:
> is log_min_messages one of them that requires a
> restart?

No, SIGHUP (pg_ctl reload) should be sufficient.

regards, tom lane



Re: [GENERAL] [ANNOUNCE] == PostgreSQL Weekly News - August 26 2007 ==

2007-08-27 Thread Tom Lane
Joseph S <[EMAIL PROTECTED]> writes:
>> Tom Lane committed:
>> - Restrict pg_relation_size to relation owner, pg_database_size to DB
>> owner, and pg_tablespace_size to superusers.  Perhaps we could
>> weaken the first case to just require SELECT privilege, but that
>> doesn't work for the other cases, so use ownership as the common
>> concept.
>> 
> Is there going to be a way to turn this off easily?

No.  If you want to make an argument for weaker restrictions than these,
argue away, but security restrictions that can be "easily turned off"
are no security at all.

regards, tom lane



Re: [HACKERS] [GENERAL] Undetected corruption of table files

2007-08-27 Thread Trevor Talbot
On 8/27/07, Jonah H. Harris <[EMAIL PROTECTED]> wrote:
> On 8/27/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> > that and the lack of evidence that they'd actually gain anything
>
> I find it somewhat ironic that PostgreSQL strives to be fairly
> non-corruptible, yet has no way to detect a corrupted page.  The only
> reason for not having CRCs is that they will slow down performance...
> which is exactly the opposite of conventional PostgreSQL wisdom (no
> performance trade-off for durability).

But how does detecting a corrupted data page gain you any durability?
All it means is that the platform underneath screwed up, and you've
already *lost* durability.  What do you do then?

It seems like the same idea as an application trying to detect RAM errors.



Re: [GENERAL] Undetected corruption of table files

2007-08-27 Thread Gregory Stark
"Tom Lane" <[EMAIL PROTECTED]> writes:

> "Albe Laurenz" <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
 Would it be an option to have a checksum somewhere in each
 data block that is verified upon read?
>
>>> That's been proposed before and rejected before.  See the archives ...
>
>> I searched for "checksum" and couldn't find it. Could someone
>> give me a pointer? I'm not talking about WAL files here.
>
> "CRC" maybe?  Also, make sure your search goes all the way back; I think
> the prior discussions were around the same time WAL was initially put
> in, and/or when we dropped the WAL CRC width from 64 to 32 bits.
> The very measurable overhead of WAL CRCs are the main thing that's
> discouraged us from having page CRCs.  (Well, that and the lack of
> evidence that they'd actually gain anything.)

I thought we determined the reason WAL CRCs are expensive is because we have
to checksum each WAL record individually. I recall the last time this came up
I ran some microbenchmarks and found that the cost to CRC an entire 8k block
was on the order of tens of microseconds.

The last time it came up was in the context of allowing turning off
full_page_writes but offering a guarantee that torn pages would be detected on
recovery and no later. I was a proponent of using writev to embed bytes in
each 512 byte block and Jonah said it would be no faster than a CRC (and
obviously considerably more complicated). My benchmarks showed that Jonah was
right and the CRC was cheaper than the added cost of using writev.

I do agree the benefits of having a CRC are overstated. Most times corruption
is caused by bad memory and a CRC will happily checksum the corrupted memory
just fine. A checksum is no guarantee. But I've also seen data corruption
caused by bad memory in an I/O controller, for example. There are always going
to be cases where it could help.
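Gregory's microbenchmark is easy to reproduce in spirit. A rough sketch, hedged accordingly: Python's zlib CRC-32 is not the implementation (or necessarily the polynomial) PostgreSQL uses for its WAL checksums, and wall-clock numbers depend entirely on the machine, so treat the timing as indicative only:

```python
import os
import time
import zlib

block = os.urandom(8192)  # one 8 kB page worth of data

start = time.perf_counter()
crc = zlib.crc32(block)   # checksum the whole block in one call
elapsed = time.perf_counter() - start

# On modern hardware this typically lands in the microsecond range,
# in the same ballpark as the "tens of microseconds" reported above.
print("CRC32 = %08x in %.1f us" % (crc, elapsed * 1e6))
```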

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com



Re: [HACKERS] [GENERAL] Undetected corruption of table files

2007-08-27 Thread Alban Hertroys
Jonah H. Harris wrote:
> On 8/27/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>> that and the lack of evidence that they'd actually gain anything
> 
> I find it somewhat ironic that PostgreSQL strives to be fairly
> non-corruptible, yet has no way to detect a corrupted page.  The only
> reason for not having CRCs is that they will slow down performance...
> which is exactly the opposite of conventional PostgreSQL wisdom (no
> performance trade-off for durability).

Why? I can't say I speak for the developers, but I think the reason is
that data corruption can (with the very rare exception of undetected
programming errors) only be caused by hardware problems.

If you have a "proper" production database server, your memory has error
checking, and your RAID controller has something of the kind as well. If
not, you would probably be running the database on a filesystem that has
reliable integrity verification mechanisms.

In the worst case (all the above mechanisms fail), you have backups.

IMHO the problem is covered quite adequately. The operating system and
the hardware cover for the database, as they should; it's _their_ job.

-- 
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //



Re: [HACKERS] [GENERAL] Undetected corruption of table files

2007-08-27 Thread Tom Lane
"Trevor Talbot" <[EMAIL PROTECTED]> writes:
> On 8/27/07, Jonah H. Harris <[EMAIL PROTECTED]> wrote:
>> I find it somewhat ironic that PostgreSQL strives to be fairly
>> non-corruptible, yet has no way to detect a corrupted page.

> But how does detecting a corrupted data page gain you any durability?
> All it means is that the platform underneath screwed up, and you've
> already *lost* durability.  What do you do then?

Indeed.  In fact, the most likely implementation of this (refuse to do
anything with a page with a bad CRC) would be a net loss from that
standpoint, because you couldn't get *any* data out of a page, even if
only part of it had been zapped.

regards, tom lane



Re: [GENERAL] Bigtime scaling of Postgresql (cluster and stuff I suppose)

2007-08-27 Thread Vivek Khera


On Aug 27, 2007, at 11:04 AM, Andrew Sullivan wrote:


It was a way to scale many small systems for certain kinds of
workloads.  My impression is that in most cases, it's a SQL-ish
solution to a problem where someone decided to use the SQL nail
because that's the hammer they had.  I can think of ways you could


The underlying table type can be used directly, similarly to how the  
"DB" table type is built on the DB library.  The issue is whether you  
can do that without some strange licensing problems.


I wouldn't use it for "permanent" data.  It would be awesome for a  
cache of the "hot" data in your app, and perhaps even for materialized  
views that speed up your interactive app.




Re: [GENERAL] [ANNOUNCE] == PostgreSQL Weekly News - August 26 2007 ==

2007-08-27 Thread Joshua D. Drake

Tom Lane wrote:
> Joseph S <[EMAIL PROTECTED]> writes:
>>> Tom Lane committed:
>>> - Restrict pg_relation_size to relation owner, pg_database_size to DB
>>> owner, and pg_tablespace_size to superusers.  Perhaps we could
>>> weaken the first case to just require SELECT privilege, but that
>>> doesn't work for the other cases, so use ownership as the common
>>> concept.
>>>
>> Is there going to be a way to turn this off easily?
> 
> No.  If you want to make an argument for weaker restrictions than these,
> argue away, but security restrictions that can be "easily turned off"
> are no security at all.

Sure, but you haven't made a security adjustment. You have made a
behavioral adjustment that is guaranteed to break remote applications.

Joshua D. Drake

> 
>   regards, tom lane


--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-27 Thread Stephen Ince
Postgres can't be embedded or serverless. Firebird has the embedded feature. 
Most databases have this capability (hsqldb, derby, oracle, mysql, 
firebird, and db2). Derby and hsqldb are the only free embedded databases 
for commercial use.


I recently ported a schema from postgres to firebird and found name size 
limitations. Firebird has a limitation on the size of its column names, 
table names, constraint names, and index names. I think the size limitation 
in Firebird is 31 characters. Postgres doesn't have this limitation.
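When porting to an engine with a 31-character identifier limit, a deterministic shortener helps keep names unique. This helper is hypothetical (not part of any porting tool); the hash suffix merely reduces collision risk between truncated names:

```python
import hashlib

def fit_identifier(name, max_len=31):
    # Shorten an identifier to max_len characters, appending a short
    # hash of the full name so distinct long names stay distinct.
    if len(name) <= max_len:
        return name
    suffix = hashlib.md5(name.encode()).hexdigest()[:6]
    return name[:max_len - 7] + "_" + suffix

print(fit_identifier("customer_order_line_item_history_audit_idx"))
```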


Steve 





Re: [GENERAL] Out of Memory - 8.2.4

2007-08-27 Thread Jeff Amiel
--- Tom Lane <[EMAIL PROTECTED]> wrote:

> Jeff Amiel <[EMAIL PROTECTED]> writes:
> > is log_min_messages one of them that requires a
> > restart?
> 
> No, SIGHUP (pg_ctl reload) should be sufficient.

Weird...
looks like some items are going to syslog and some to
my defined postgres logfile (from the -L option). 
Bizarre.
Anyway... I hope this helps someone.

At 10:46, I find this in my syslog files..

Aug 27 10:46:01 db-1 postgres[27008]: [ID 748848
local0.warning] [85355-1] 2007-08-27 10:46:01.888 CDT 
  27008ERROR:  out of memory
Aug 27 10:46:01 db-1 postgres[27008]: [ID 748848
local0.warning] [85355-2] 2007-08-27 10:46:01.888 CDT 
  27008DETAIL:  Failed on request of size 536870910.

and at the same time in my postgres logfile I see this
(and only this)

TopMemoryContext: 169608 total in 10 blocks; 18832
free (34 chunks); 150776 used
TopTransactionContext: 8192 total in 1 blocks; 7648
free (9 chunks); 544 used
CFuncHash: 8192 total in 1 blocks; 4936 free (0
chunks); 3256 used
Type information cache: 8192 total in 1 blocks; 1800
free (0 chunks); 6392 used
Operator class cache: 8192 total in 1 blocks; 4872
free (0 chunks); 3320 used
Autovacuum context: 8192 total in 1 blocks; 5416 free
(8 chunks); 2776 used
smgr relation table: 8192 total in 1 blocks; 2808 free
(0 chunks); 5384 used
TransactionAbortContext: 32768 total in 1 blocks;
32752 free (0 chunks); 16 used
Portal hash: 8192 total in 1 blocks; 3912 free (0
chunks); 4280 used
PortalMemory: 0 total in 0 blocks; 0 free (0 chunks);
0 used
Relcache by OID: 8192 total in 1 blocks; 3376 free (0
chunks); 4816 used
CacheMemoryContext: 659000 total in 19 blocks; 264904
free (15 chunks); 394096 used
sl_seqlog_idx: 1024 total in 1 blocks; 256 free (0
chunks); 768 used
PartInd_istream_replication_cluster_sl_log_1-node-1:
1024 total in 1 blocks; 392 free (0 chunks); 632 used
sl_log_1_idx1: 1024 total in 1 blocks; 256 free (0
chunks); 768 used
pg_index_indrelid_index: 1024 total in 1 blocks; 352
free (0 chunks); 672 used
pg_attrdef_adrelid_adnum_index: 1024 total in 1
blocks; 288 free (0 chunks); 736 used
pg_autovacuum_vacrelid_index: 1024 total in 1 blocks;
392 free (0 chunks); 632 used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 328
free (0 chunks); 696 used
pg_type_oid_index: 1024 total in 1 blocks; 352 free (0
chunks); 672 used
pg_trigger_tgrelid_tgname_index: 1024 total in 1
blocks; 288 free (0 chunks); 736 used
pg_statistic_relid_att_index: 1024 total in 1 blocks;
288 free (0 chunks); 736 used
pg_auth_members_member_role_index: 1024 total in 1
blocks; 328 free (0 chunks); 696 used
pg_auth_members_role_member_index: 1024 total in 1
blocks; 328 free (0 chunks); 696 used
pg_rewrite_rel_rulename_index: 1024 total in 1 blocks;
328 free (0 chunks); 696 used
pg_proc_proname_args_nsp_index: 1024 total in 1
blocks; 256 free (0 chunks); 768 used
pg_proc_oid_index: 1024 total in 1 blocks; 352 free (0
chunks); 672 used
pg_operator_oprname_l_r_n_index: 1024 total in 1
blocks; 192 free (0 chunks); 832 used
pg_operator_oid_index: 1024 total in 1 blocks; 352
free (0 chunks); 672 used
pg_opclass_oid_index: 1024 total in 1 blocks; 352 free
(0 chunks); 672 used
pg_opclass_am_name_nsp_index: 1024 total in 1 blocks;
216 free (0 chunks); 808 used
pg_namespace_oid_index: 1024 total in 1 blocks; 352
free (0 chunks); 672 used
pg_namespace_nspname_index: 1024 total in 1 blocks;
392 free (0 chunks); 632 used
pg_language_oid_index: 1024 total in 1 blocks; 392
free (0 chunks); 632 used
pg_language_name_index: 1024 total in 1 blocks; 392
free (0 chunks); 632 used
pg_inherits_relid_seqno_index: 1024 total in 1 blocks;
328 free (0 chunks); 696 used
pg_index_indexrelid_index: 1024 total in 1 blocks; 352
free (0 chunks); 672 used
pg_authid_oid_index: 1024 total in 1 blocks; 352 free
(0 chunks); 672 used
pg_authid_rolname_index: 1024 total in 1 blocks; 392
free (0 chunks); 632 used
pg_database_oid_index: 1024 total in 1 blocks; 352
free (0 chunks); 672 used
pg_conversion_oid_index: 1024 total in 1 blocks; 392
free (0 chunks); 632 used
pg_conversion_name_nsp_index: 1024 total in 1 blocks;
328 free (0 chunks); 696 used
pg_conversion_default_index: 1024 total in 1 blocks;
192 free (0 chunks); 832 used
pg_class_relname_nsp_index: 1024 total in 1 blocks;
328 free (0 chunks); 696 used
pg_class_oid_index: 1024 total in 1 blocks; 352 free
(0 chunks); 672 used
pg_cast_source_target_index: 1024 total in 1 blocks;
288 free (0 chunks); 736 used
pg_attribute_relid_attnum_index: 1024 total in 1
blocks; 288 free (0 chunks); 736 used
pg_attribute_relid_attnam_index: 1024 total in 1
blocks; 328 free (0 chunks); 696 used
pg_amproc_opc_proc_index: 1024 total in 1 blocks; 216
free (0 chunks); 808 used
pg_amop_opr_opc_index: 1024 total in 1 blocks; 288
free (0 chunks); 736 used
pg_amop_opc_strat_index: 1024 total in 1 blocks; 216
free (0 chunks); 808 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 392
free (0 chunks); 632 used
Per-database table: 122880 total in 4 blocks; 44680
free (19 chunks); 7


Re: [GENERAL] Out of Memory - 8.2.4

2007-08-27 Thread Jeff Amiel
--- "Joshua D. Drake" <[EMAIL PROTECTED]> wrote:

> We are actually diagnosing a similar problem on this
> end, where we get a
> failure at 1920... I am currently trying to get some
> DEBUG output.

I've been tracking this for the last few days.
It does not appear to happen when there is little or no
user activity (like Saturday). I don't know if that rules
out autovacuum or not (if no update thresholds are
reached, no vacuuming will take place anyway).

Aug 23 11:11:51 db-1 postgres[8455]: [ID 748848
local0.warning] [2-1] 2007-08-23 11:11:51 CDT   ERROR:
 out of memory
Aug 23 11:11:51 db-1 postgres[8455]: [ID 748848
local0.warning] [2-2] 2007-08-23 11:11:51 CDT  
DETAIL:  Failed on request of size 536870910.
--
Aug 23 11:34:49 db-1 postgres[8910]: [ID 748848
local0.warning] [2-1] 2007-08-23 11:34:49 CDT   ERROR:
 out of memory
Aug 23 11:34:49 db-1 postgres[8910]: [ID 748848
local0.warning] [2-2] 2007-08-23 11:34:49 CDT  
DETAIL:  Failed on request of size 536870910.
--
Aug 23 12:06:47 db-1 postgres[9562]: [ID 748848
local0.warning] [2-1] 2007-08-23 12:06:47 CDT   ERROR:
 out of memory
Aug 23 12:06:47 db-1 postgres[9562]: [ID 748848
local0.warning] [2-2] 2007-08-23 12:06:47 CDT  
DETAIL:  Failed on request of size 536870910.
--
Aug 23 12:58:47 db-1 postgres[10617]: [ID 748848
local0.warning] [2-1] 2007-08-23 12:58:47 CDT   ERROR:
 out of memory
Aug 23 12:58:47 db-1 postgres[10617]: [ID 748848
local0.warning] [2-2] 2007-08-23 12:58:47 CDT  
DETAIL:  Failed on request of size 536870910.
--
Aug 23 15:15:35 db-1 postgres[13400]: [ID 748848
local0.warning] [2-1] 2007-08-23 15:15:35 CDT   ERROR:
 out of memory
Aug 23 15:15:35 db-1 postgres[13400]: [ID 748848
local0.warning] [2-2] 2007-08-23 15:15:35 CDT  
DETAIL:  Failed on request of size 536870910.
--
Aug 23 16:50:47 db-1 postgres[15422]: [ID 748848
local0.warning] [2-1] 2007-08-23 16:50:47 CDT   ERROR:
 out of memory
Aug 23 16:50:47 db-1 postgres[15422]: [ID 748848
local0.warning] [2-2] 2007-08-23 16:50:47 CDT  
DETAIL:  Failed on request of size 536870910.
--
Aug 24 10:46:46 db-1 postgres[10508]: [ID 748848
local0.warning] [2-1] 2007-08-24 10:46:46 CDT   ERROR:
 out of memory
Aug 24 10:46:46 db-1 postgres[10508]: [ID 748848
local0.warning] [2-2] 2007-08-24 10:46:46 CDT  
DETAIL:  Failed on request of size 536870910.
--
Aug 24 11:29:00 db-1 postgres[11539]: [ID 748848
local0.warning] [2-1] 2007-08-24 11:29:00 CDT   ERROR:
 out of memory
Aug 24 11:29:00 db-1 postgres[11539]: [ID 748848
local0.warning] [2-2] 2007-08-24 11:29:00 CDT  
DETAIL:  Failed on request of size 536870910.
--
Aug 24 11:50:04 db-1 postgres[12051]: [ID 748848
local0.warning] [2-1] 2007-08-24 11:50:04 CDT   ERROR:
 out of memory
Aug 24 11:50:04 db-1 postgres[12051]: [ID 748848
local0.warning] [2-2] 2007-08-24 11:50:04 CDT  
DETAIL:  Failed on request of size 536870910.
--
Aug 24 12:00:33 db-1 postgres[12310]: [ID 748848
local0.warning] [2-1] 2007-08-24 12:00:33 CDT   ERROR:
 out of memory
Aug 24 12:00:33 db-1 postgres[12310]: [ID 748848
local0.warning] [2-2] 2007-08-24 12:00:33 CDT  
DETAIL:  Failed on request of size 536870910.
--
Aug 24 16:03:19 db-1 postgres[18263]: [ID 748848
local0.warning] [2493-1] 2007-08-24 16:03:19.296 CDT  
 18263ERROR:  out of memory
Aug 24 16:03:19 db-1 postgres[18263]: [ID 748848
local0.warning] [2493-2] 2007-08-24 16:03:19.296 CDT  
 18263DETAIL:  Failed on request of size 536870910.
--
Aug 24 16:45:46 db-1 postgres[19313]: [ID 748848
local0.warning] [3356-1] 2007-08-24 16:45:46.804 CDT  
 19313ERROR:  out of memory
Aug 24 16:45:46 db-1 postgres[19313]: [ID 748848
local0.warning] [3356-2] 2007-08-24 16:45:46.804 CDT  
 19313DETAIL:  Failed on request of size 536870910.
--
Aug 24 17:29:16 db-1 postgres[20379]: [ID 748848
local0.warning] [4238-1] 2007-08-24 17:29:16.926 CDT  
 20379ERROR:  out of memory
Aug 24 17:29:16 db-1 postgres[20379]: [ID 748848
local0.warning] [4238-2] 2007-08-24 17:29:16.926 CDT  
 20379DETAIL:  Failed on request of size 536870910.
--
Aug 24 17:40:02 db-1 postgres[20651]: [ID 748848
local0.warning] [4452-1] 2007-08-24 17:40:02.682 CDT  
 20651ERROR:  out of memory
Aug 24 17:40:02 db-1 postgres[20651]: [ID 748848
local0.warning] [4452-2] 2007-08-24 17:40:02.682 CDT  
 20651DETAIL:  Failed on request of size 536870910.
--
Aug 26 11:14:56 db-1 postgres[22161]: [ID 748848
local0.warning] [56115-1] 2007-08-26 11:14:56.077 CDT 
  22161ERROR:  out of memory
Aug 26 11:14:56 db-1 postgres[22161]: [ID 748848
local0.warning] [56115-2] 2007-08-26 11:14:56.077 CDT 
  22161DETAIL:  Failed on request of size 536870910.
--
Aug 26 11:27:41 db-1 postgres[22477]: [ID 748848
local0.warning] [56381-1] 2007-08-26 11:27:41.141 CDT 
  22477ERROR:  out of memory
Aug 26 11:27:41 db-1 postgres[22477]: [ID 748848
local0.warning] [56381-2] 2007-08-26 11:27:41.141 CDT 
  22477DETAIL:  Failed on request of size 536870910.
--
Aug 26 11:37:27 db-1 postgres[22729]: [ID 748848
local0.warning] [56603-1] 2007-08-26 11:37:27.476 CDT 
  22729ERROR:  out of memory
Aug 26 11:37:27 db-1 postgres[22729]: [ID 74

Re: [HACKERS] [GENERAL] Undetected corruption of table files

2007-08-27 Thread Jonah H. Harris
On 8/27/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> Indeed.  In fact, the most likely implementation of this (refuse to do
> anything with a page with a bad CRC) would be a net loss from that
> standpoint, because you couldn't get *any* data out of a page, even if
> only part of it had been zapped.

At least you would know it was corrupted, instead of getting funky
errors and/or crashes.

-- 
Jonah H. Harris, Software Architect | phone: 732.331.1324
EnterpriseDB Corporation| fax: 732.331.1301
33 Wood Ave S, 3rd Floor| [EMAIL PROTECTED]
Iselin, New Jersey 08830| http://www.enterprisedb.com/



Re: [GENERAL] Audit-trail engine inner-workings

2007-08-27 Thread Manuel Sugawara
"Marcelo de Moraes Serpa" <[EMAIL PROTECTED]> writes:

> I know that this PostgreSQL C module has a static var that in turn keeps the
> integer set by the function "set_session_id" - but is this var global to the
> server's service ? Does PostgreSQL mantain one "instance" of this var per
> requested connection ? I would like to know how this works.

PostgreSQL works using processes: each backend that serves a connection
is a process, and each process keeps its own instance of the variable. So,
no, this variable is not shared globally (otherwise it would not work as
expected).

> Take the following scenario:
>  - user enters the web app;
>  - web app takes a reference to a connection from the db connection pool;
>  - web app does some db operation
>
> When the app takes a reference to a connection and does the operation, just
> after that, in the application, I set the session id. Then the db operation
> is performed and the audit stored procedure is ran. The audit-trail engine
> performs its work and logs the operation and modifications the user did as
> well as **which application user did it** (which is the purpose of the
> set_session_id technique - being able to pass the application user who did
> the operation to the server so that that the audit-trail can log it
> altogether).
>
> Once the operation is done and the connection is back to the pool, does
> PostgreSQL discart the associated var ? Does it mantain one "instance" per
> request made ? That's what I would like to know.

PostgreSQL does not discard the associated var; its value is kept
until the connection is closed (really closed, not just returned to the
pool) and the process destroyed. That's why it is important to reset its
value, just to be sure that the next time the connection is used
(probably by some other session) it does not get confused by a value
left over from before.
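The reset discipline described above can be sketched with a stand-in pool. Everything here is hypothetical scaffolding (Pool and StubConnection stand in for your pooling library and for the server-side set_session_id() call); the point is only where the reset happens:

```python
class StubConnection:
    # Stand-in for a pooled DB connection; session_id models the
    # per-backend variable set by the C module's set_session_id().
    def __init__(self):
        self.session_id = None

class Pool:
    def __init__(self, size=2):
        self._free = [StubConnection() for _ in range(size)]

    def acquire(self, session_id):
        conn = self._free.pop()
        conn.session_id = session_id  # app would call set_session_id(...) here
        return conn

    def release(self, conn):
        conn.session_id = None        # reset before reuse, as advised above
        self._free.append(conn)

pool = Pool()
conn = pool.acquire(42)
pool.release(conn)
assert conn.session_id is None  # the next borrower starts clean
```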

BTW, having a C module has proved to be a PITA over the years, but
when this was constructed we didn't have any other option. Now I think
that the same idea can be built using custom variable classes

  http://www.postgresql.org/docs/8.1/static/runtime-config-custom.html

but I haven't gotten to the details yet.

Regards,
Manuel.



Re: [GENERAL] Tables dissapearing

2007-08-27 Thread Kamil Srot


brian wrote:

Tom Lane wrote:

Kamil Srot <[EMAIL PROTECTED]> writes:


One more thing:
The project runs a proprietary CMS system and there are more instances 
of it with the same database layout in different databases. Every 
time the "lost" table is the same one - the busiest one (mostly 
read)... and every time the "lost" table is gone from more projects 
at once... today it was 2 projects affected out of 7 in the same 
pgsql server... it looks also like the problem happens every time 
shortly after midnight - scary - ha ha :-)



Tables don't just "disappear" like that.  Check your application code
to see if it's not dropping the table someplace.

regards, tom lane



grep -ri drop your_application_dir

This was the first thing I checked... if anything, this is the one thing 
I'm 100% sure is not the case.
The application doesn't manipulate the database schema at all... so 
no CREATE/ALTER/DROP statements anywhere.
On top of that, there is the "disappearance" from more databases at once 
(they are absolutely not interconnected in any way - all live their own lives).


I don't say it's gone by itself; I'm asking for help debugging this 
situation and hopefully finding a solution. The first time it happened, 
it had the same symptoms - this specific table was missing and the 
transaction counter was wrapped around. Now it seems to be the same, but 
the transaction counters are far below the critical value...


It has happened four times already and the customer is starting to go 
crazy... the same as I am...


Thank you, Tom, Brian and all who read it, for your time!

Regards,
--
Kamil




Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-27 Thread Tony Caduto

Stephen Ince wrote:
Postgres can't be embedded or serverless. Firebird has the embedded 
feature. Most databases have this capability (hsqldb, derby, oracle, 
mysql, firebird, and db2). Derby and hsqldb are the only free embedded 
databases for commercial use.




A lot of Firebird users have been saying this as well, but the 
comparison is more for enterprise use.
Plus, if you need an embedded database, wouldn't it be better to use one 
built specifically for that purpose?  i.e. SQLite, for example.


Good call on the name limit, I remember running into that when porting 
something from MS SQL server to Firebird about 4 years ago.

I will have to check and see if this still applies to version 2.0

Later,

Tony



Re: [GENERAL] Tables dissapearing

2007-08-27 Thread Martijn van Oosterhout
On Mon, Aug 27, 2007 at 06:37:17PM +0200, Kamil Srot wrote:
> I don't say, it's gone by itself, I'm asking for help debuging this 
> situation and hopefully find a solution. For the first time it happened, 
> it had the same symptoms - this specific table was missing and 
> transaction counter was wrapped-around. Now it seems to be the same but 
> the transaction counters are far below the critical value...

You don't say explicitly, but when you say "vacuum all tables" you
didn't specify any table names, right? You just typed "vacuum". On each
database...

As for debugging, maybe something like: 

select xmin, age(xmin) from pg_class;

Just to check the wraparound issue...

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] Out of Memory - 8.2.4

2007-08-27 Thread Tom Lane
Jeff Amiel <[EMAIL PROTECTED]> writes:
> Tracking for last few days.
> Does not appear to happen when little or no user
> activity (like Saturday)  I don't know if that rules
> out autovacuum or not (if no update threshholds are
> reached, no vacuuming will take place anyway)

Can you correlate these occurrences with anything in the regular system
logs (kernel log in particular)?  The Postgres log shows nothing out of
the ordinary --- it's simply that the kernel won't give us 512M for some
reason.  I'm guessing it's got something to do with overall system load.

regards, tom lane



Re: [GENERAL] Tables dissapearing

2007-08-27 Thread Kamil Srot

Martijn van Oosterhout wrote:

> On Mon, Aug 27, 2007 at 06:37:17PM +0200, Kamil Srot wrote:
>> I don't say it's gone by itself; I'm asking for help debugging this
>> situation and hopefully finding a solution. The first time it happened,
>> it had the same symptoms - this specific table was missing and the
>> transaction counter was wrapped around. Now it seems to be the same, but
>> the transaction counters are far below the critical value...
>
> You don't say explicitly, but when you say "vacuum all tables" you
> didn't specify any table names, right? You just typed "vacuum". On each
> database...

Correct... the script does echo "vacuum full;" | $PGDIR/bin/psql -U 
postgres $db for each database...

Hope it's correct?
> As for debugging, maybe something like:
>
> select xmin, age(xmin) from pg_class;
>
> Just to check the wraparound issue...

Maybe that's it... I did check the wraparound with:
SELECT datname, age(datfrozenxid) FROM pg_database;

With your SQL I see several rows with very high numbers!

They are all "internal" objects like pg_toast_618854, views, and also some 
application-level indices etc.


So it seems it's not enough to vacuum all DBs (templateX included)

Can you please advise how to correctly vacuum PostgreSQL manually?
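
One way to cover every connectable database in a single manual pass, without a hand-rolled psql loop, is the vacuumdb utility that ships with PostgreSQL; a sketch, assuming it runs as the postgres superuser:

```shell
# Plain VACUUM ANALYZE (no FULL) over every connectable database.
# --all includes template1; template0 does not accept connections
# and is not reachable this way.
vacuumdb --all --analyze -U postgres
```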

Thank you!

Regards,
--
Kamil



Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-27 Thread Michael Glaesemann


On Aug 27, 2007, at 11:47 , Tony Caduto wrote:

> Good call on the name limit, I remember running into that when
> porting something from MS SQL server to Firebird about 4 years ago.

Just a quick note: PostgreSQL's identifiers are limited to 
NAMEDATALEN - 1 (IIRC), which by default is 64 - 1 = 63 characters:


test=# create table a  
(a23456789112345678921234567893123456789412345678951234567896123 text  
primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index  
"a_pkey" for table "a"

CREATE TABLE
test=# create table b  
(a234567891123456789212345678931234567894123456789512345678961234  
text primary key);
NOTICE:  identifier  
"a234567891123456789212345678931234567894123456789512345678961234"  
will be truncated to  
"a23456789112345678921234567893123456789412345678951234567896123"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index  
"b_pkey" for table "b"

CREATE TABLE

The upshot is that PostgreSQL does have a limit, but it's pretty big 
by default and is configurable at compile time by changing the 
definition of NAMEDATALEN in src/include/postgres_ext.h.
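
For completeness, the effective limit can also be read at runtime; it is a read-only parameter, so it simply reports what the server was compiled with:

```sql
SHOW max_identifier_length;  -- 63 on a default build
```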


Michael Glaesemann
grzm seespotcode net





Re: [GENERAL] Tables dissapearing

2007-08-27 Thread Kamil Srot


Martijn van Oosterhout wrote:

> On Mon, Aug 27, 2007 at 06:57:54PM +0200, Kamil Srot wrote:
>> Correct... the script does echo "vacuum full;" | $PGDIR/bin/psql -U
>> postgres $db for each database...
>> Hope it's correct?
>
> Well, I'd drop the "full" part, it tends to bloat indexes. Also, did
> you check it was actually completing (no errors)?

Yes, it completes w/o errors... just VACUUM as output...
OK, I'll drop the full part and do it less often...

>> Maybe that's it... I did check the wraparound with:
>> SELECT datname, age(datfrozenxid) FROM pg_database;
>>
>> With your SQL I see several rows with very high numbers!
>>
>> They are all "internal" objects like pg_toast_618854, views, and also
>> some application-level indices etc.
>
> Depends what you mean by too high. Anything with XID 1 and 2 is not a
> problem, and age returns a really big number for them. Can you give
> some examples?

By high numbers I mean near the 2-billion transaction ID limit.
Here you go:
# select xmin, age(xmin) from pg_class;
   xmin    |    age
-----------+------------
         2 | 2147483647
         2 | 2147483647
         2 | 2147483647
         2 | 2147483647
         2 | 2147483647
         2 | 2147483647
 236838019 |     539453
         2 | 2147483647
         2 | 2147483647
 231899309 |    5478163
etc.

Basically everything has the same age, 2147483647, with xmin = 1 or xmin = 
2... except for the two lines shown...


Actually the same problem happened several minutes ago :-(

Thank you for any hint!

Regards,
--
Kamil




Re: [GENERAL] Out of Memory - 8.2.4

2007-08-27 Thread Martijn van Oosterhout
On Mon, Aug 27, 2007 at 09:12:17AM -0700, Jeff Amiel wrote:
> Tracking for last few days.
> Does not appear to happen when little or no user
> activity (like Saturday)  I don't know if that rules
> out autovacuum or not (if no update threshholds are
> reached, no vacuuming will take place anyway)

I don't think I've seen it so far in this thread, but what are your memory
overcommit settings and allocated swap? At least on Linux you would
need a significant chunk of swap to be able to work with that much
memory, even with overcommit off. Check the rules for your system.

Another thing I haven't seen mentioned: you appear to be on a 32-bit
architecture, and with 2GB shared_buffers you've lost half your address
space on that alone. Perhaps you simply don't have enough contiguous
address space to alloc 512MB.
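
On Linux, the overcommit policy and swap headroom mentioned above can be inspected directly; a quick check (paths are Linux-specific, and the values are of course system-dependent):

```shell
# 0 = heuristic overcommit, 1 = always allow, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory
# What the kernel will let processes commit vs. what is already committed
grep -E 'SwapTotal|CommitLimit|Committed_AS' /proc/meminfo
```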

Hope this helps,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] Out of Memory - 8.2.4

2007-08-27 Thread Erik Jones


On Aug 27, 2007, at 12:15 PM, Martijn van Oosterhout wrote:

> On Mon, Aug 27, 2007 at 09:12:17AM -0700, Jeff Amiel wrote:
>> Tracking for last few days.
>> Does not appear to happen when little or no user
>> activity (like Saturday)  I don't know if that rules
>> out autovacuum or not (if no update thresholds are
>> reached, no vacuuming will take place anyway)
>
> I don't think I've seen it so far in this thread, but what are your memory
> overcommit settings and allocated swap? At least on Linux you would
> need a significant chunk of swap to be able to work with that much
> memory, even with overcommit off. Check the rules for your system.
>
> Another thing I haven't seen mentioned: you appear to be on a 32-bit
> architecture and with 2GB shared_buffers you've lost half your address
> space on that alone. Perhaps you simply don't have enough contiguous
> address space to alloc 512MB.

The X4600 runs with 64-bit Dual Opterons.

Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-27 Thread Scott Marlowe
On 8/27/07, Stephen Ince <[EMAIL PROTECTED]> wrote:
> I recently ported a schema from postgres to firebird and found name size
> limitations. Firebird has a limitation on the size of it's column names,
> table names, constraint names and index names. I think the size limitation
> on firebird is 31 characters. Postgres doesn't have this limitation.

Note that postgresql does have a limit.  It's just not as short as the
one in firebird.



Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-27 Thread Dave Page


> --- Original Message ---
> From: "Stephen Ince" <[EMAIL PROTECTED]>
> To: "Tony Caduto" <[EMAIL PROTECTED]>, "Greg Smith" <[EMAIL PROTECTED]>, 
> pgsql-general@postgresql.org
> Sent: 27/08/07, 17:02:21
> Subject: Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished
> 
> Postgres can't be embedded or serverless. Firebird has the embedded feature. 
> Most of the databases have this capability (hsqldb, derby,oracle,mysql, 
> firebird, and db2). Derby and hsqldb are the only free embedded databases 
> for commercial use.
> 

SQL Server CE is also free for commercial use iirc.

Regards, Dave



Re: [GENERAL] Tables dissapearing

2007-08-27 Thread Martijn van Oosterhout
On Mon, Aug 27, 2007 at 06:57:54PM +0200, Kamil Srot wrote:
> Correct...the script does echo "vacuum full;" | $PGDIR/bin/psql -U 
> postgres $db for each database...
> Hope it's correct?

Well, I'd drop the "full" part, it tends to bloat indexes. Also, did
you check it was actually completing (no errors)?

> Maybe it's it... I did check the wraparound with:
> SELECT datname, age(datfrozenxid) FROM pg_database;
> 
> In your SQL I see several rows with too high numbers!
> 
> They are all "internal" like pg_toast_618854, views and also some 
> application level indices etc.

Depends what you mean by too high. Anything with XID 1 and 2 is not a
problem, and age returns a really big number for them. Can you give
some examples?

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to 
> litigate.




Re: [GENERAL] [ANNOUNCE] == PostgreSQL Weekly News - August 26 2007 ==

2007-08-27 Thread Joseph S

Tom Lane wrote:

> Joseph S <[EMAIL PROTECTED]> writes:
>> Tom Lane committed:
>>> - Restrict pg_relation_size to relation owner, pg_database_size to DB
>>>   owner, and pg_tablespace_size to superusers.  Perhaps we could
>>>   weaken the first case to just require SELECT privilege, but that
>>>   doesn't work for the other cases, so use ownership as the common
>>>   concept.
>>
>> Is there going to be a way to turn this off easily?
>
> No.  If you want to make an argument for weaker restrictions than these,
> argue away, but security restrictions that can be "easily turned off"
> are no security at all.

I don't see how letting users see the size of a database or relation is a 
big security risk.  I do see how forcing me to log in as the superuser to 
see my db stats creates more of a security risk.




[GENERAL] PickSplit method of 2 columns ... error

2007-08-27 Thread Kevin Neufeld

Has anyone come across this error before?

LOG:  PickSplit method of 2 columns of index 
'asset_position_lines_asset_cubespacetime_idx' doesn't support secondary 
split


This is a multi-column GiST index on an integer and a cube (a data type 
from the postgres cube extension module).


I traced the error to the gistUserPicksplit function in 
gistsplit.c... I surmise that this method is called whenever a page 
split is necessary.


So, I know when this error occurs, but I don't know why.

Thoughts anyone?
Cheers,
Kevin

--
Kevin Neufeld
Software Developer
Refractions Research Inc.
300-1207 Douglas St.
Victoria, B.C., V8W 2E7

Phone: (250) 383-3022
Email: [EMAIL PROTECTED]




Re: [GENERAL] Tables dissapearing

2007-08-27 Thread Alvaro Herrera
Kamil Srot wrote:
>
> Martijn van Oosterhout wrote:
>> On Mon, Aug 27, 2007 at 06:57:54PM +0200, Kamil Srot wrote:
>>   
>>> Correct...the script does echo "vacuum full;" | $PGDIR/bin/psql -U 
>>> postgres $db for each database...
>>> Hope it's correct?
>>> 
>>
>> Well, I'd drop the "full" part, it tends to bloat indexes. Also, did
>> you check it was actually completing (no errors)?
>>
> Yes, it completes w/o errors... just VACUUM as output...
> OK, I'll drop the full part and do it less often...

What user is doing it?  Is it a superuser?  If it's not, then there's
your problem.

-- 
Alvaro Herrera   Valdivia, Chile   ICBM: S 39º 49' 18.1", W 73º 13' 56.4"
"If it wasn't for my companion, I believe I'd be having
the time of my life"  (John Dunbar)



Re: [GENERAL] Tables dissapearing

2007-08-27 Thread Kamil Srot

Alvaro Herrera wrote:

> Kamil Srot wrote:
>> Martijn van Oosterhout wrote:
>>> On Mon, Aug 27, 2007 at 06:57:54PM +0200, Kamil Srot wrote:
>>>> Correct... the script does echo "vacuum full;" | $PGDIR/bin/psql -U
>>>> postgres $db for each database...
>>>> Hope it's correct?
>>>
>>> Well, I'd drop the "full" part, it tends to bloat indexes. Also, did
>>> you check it was actually completing (no errors)?
>>
>> Yes, it completes w/o errors... just VACUUM as output...
>> OK, I'll drop the full part and do it less often...
>
> What user is doing it?  Is it a superuser?  If it's not, then there's
> your problem.

It's user postgres... the vacuum completes correctly...

Thank you,
--
Kamil




Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-27 Thread Thomas Kellerer

Stephen Ince wrote on 27.08.2007 18:02:

> Derby and hsqldb are the only free embedded databases for commercial use.

Well, there are some more:
H2 Database, OneDollarDB (the open-source version of DaffodilDB), Berkeley DB, and 
McKoi are free as well (although McKoi seems to be dead).

Then there are a couple of other Java-based engines (SmallSQL, TinySQL, Axion), 
but they do not compare feature-wise to the "big names".


And of course Firebird is free for commercial use as well.

Thomas




Re: [GENERAL] simple query runs 26 seconds

2007-08-27 Thread Andrus
> The time seems entirely spent in fetching rows from table "rid".
> Perhaps that table is bloated by lack of vacuuming --- can you
> show the output from "vacuum verbose rid"?

INFO:  vacuuming "firma1.rid"
INFO:  scanned index "rid_pkey" to remove 7375 row versions
DETAIL:  CPU 0.01s/0.39u sec elapsed 5.46 sec.
INFO:  scanned index "rid_dokumnr_idx" to remove 7375 row versions
DETAIL:  CPU 0.04s/0.39u sec elapsed 2.42 sec.
INFO:  scanned index "rid_toode_idx" to remove 7375 row versions
DETAIL:  CPU 0.20s/0.43u sec elapsed 15.59 sec.
INFO:  scanned index "rid_inpdokumnr_idx" to remove 7375 row versions
DETAIL:  CPU 0.03s/0.32u sec elapsed 4.37 sec.
INFO:  "rid": removed 7375 row versions in 2578 pages
DETAIL:  CPU 0.32s/0.20u sec elapsed 87.93 sec.
INFO:  index "rid_pkey" now contains 1517900 row versions in 4624 pages
DETAIL:  4210 index row versions were removed.
61 index pages have been deleted, 61 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO:  index "rid_dokumnr_idx" now contains 1517900 row versions in 4598 
pages
DETAIL:  3219 index row versions were removed.
145 index pages have been deleted, 142 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO:  index "rid_toode_idx" now contains 1517900 row versions in 9950 pages
DETAIL:  7375 index row versions were removed.
245 index pages have been deleted, 232 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "rid_inpdokumnr_idx" now contains 1517900 row versions in 5257 
pages
DETAIL:  7375 index row versions were removed.
319 index pages have been deleted, 311 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "rid": found 7375 removable, 1517900 nonremovable row versions in 
82560 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 292585 unused item pointers.
18375 pages contain useful free space.
0 pages are entirely empty.
CPU 1.98s/2.18u sec elapsed 157.82 sec.

Query returned successfully with no result in 158187 ms.

I installed Postgres using standard msi file.
postgresql.conf contains
autovacuum = on   # enable autovacuum subprocess?

However, the log files do not show any autovacuum messages,
so I suspect autovacuum is not running.

Any idea why autovacuum is not running?

Andrus. 
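
One thing worth checking on 8.2 (a guess from the symptoms, not confirmable from the log excerpt alone): autovacuum silently stays off unless the statistics-collector settings it depends on are also enabled in postgresql.conf:

```
# postgresql.conf - all three are needed for autovacuum on 8.2
stats_start_collector = on
stats_row_level = on
autovacuum = on
```

After changing these, restart the server and watch the log for autovacuum activity.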





Re: [GENERAL] simple query runs 26 seconds

2007-08-27 Thread Andrus
> Perhaps that table is bloated by lack of vacuuming --- can you
> show the output from "vacuum verbose rid"?

Thank you.

After running vacuum and analyze, the query takes 18 seconds.
This is still very slow given that my tables are indexed.
How can I speed this up?

set search_path to firma1,public;
explain analyze
select sum(taitmata) as ukogus
  from rid join dok using (dokumnr)
 where toode = 'NE TR'
   and doktyyp = 'U';

now returns:

"Aggregate  (cost=52251.18..52251.19 rows=1 width=8) (actual 
time=17898.967..17898.967 rows=1 loops=1)"
"  ->  Hash Join  (cost=1182.35..52249.39 rows=715 width=8) (actual 
time=4891.581..17898.897 rows=9 loops=1)"
"Hash Cond: (rid.dokumnr = dok.dokumnr)"
"->  Bitmap Heap Scan on rid  (cost=777.92..51694.85 rows=22874 
width=12) (actual time=24.079..17876.282 rows=21383 loops=1)"
"  Recheck Cond: (toode = 'NE TR'::bpchar)"
"  ->  Bitmap Index Scan on rid_toode_idx  (cost=0.00..772.20 
rows=22874 width=0) (actual time=11.474..11.474 rows=21388 loops=1)"
"Index Cond: (toode = 'NE TR'::bpchar)"
"->  Hash  (cost=365.33..365.33 rows=3128 width=4) (actual 
time=0.104..0.104 rows=7 loops=1)"
"  ->  Index Scan using dok_tasudok_unique_idx on dok 
(cost=0.00..365.33 rows=3128 width=4) (actual time=0.058..0.089 rows=7 
loops=1)"
"Index Cond: (doktyyp = 'U'::bpchar)"
"Total runtime: 17899.582 ms"

Andrus. 





[GENERAL] Removing pollution from log files

2007-08-27 Thread Andrus
I'm using PostgreSQL 8.2.4 from an ODBC 08.02.0300 client.

Postgres log files are polluted with messages like:

2007-08-27 06:10:38 WARNING:  nonstandard use of \\ in a string literal at
character 190
2007-08-27 06:10:38 HINT:  Use the escape string syntax for backslashes,
e.g., E'\\'.
2007-08-27 06:10:39 WARNING:  nonstandard use of \\ in a string literal at
character 197
2007-08-27 06:10:39 HINT:  Use the escape string syntax for backslashes,
e.g., E'\\'.
2007-08-27 06:10:47 WARNING:  nonstandard use of \\ in a string literal at
character 190
2007-08-27 06:10:47 HINT:  Use the escape string syntax for backslashes,
e.g., E'\\'.
2007-08-27 06:10:48 WARNING:  nonstandard use of \\ in a string literal at
character 197
2007-08-27 06:10:48 HINT:  Use the escape string syntax for backslashes,
e.g., E'\\'.

How can I stop Postgres from writing those messages to the log file?
Should I configure the ODBC driver, configure Postgres, or change my application?

Andrus.
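
If fixing the application's string literals isn't practical, 8.2 can silence just this warning in postgresql.conf; note this only hides the symptom, the nonstandard escapes themselves remain:

```
# postgresql.conf
escape_string_warning = off
```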






[GENERAL] EXPLAIN ANALYZE not considering primary and unique indices!

2007-08-27 Thread Sanjay
Hi All,

Say I have a simple table WEBSITE(website_id int4 PRIMARY KEY, name
VARCHAR(30)). While I try this:

EXPLAIN ANALYZE SELECT * FROM WEBSITE WHERE website_id = 1

the output is:
--
Seq Scan on website  (cost=0.00..1.31 rows=1 width=162) (actual
time=0.047..0.051 rows=1 loops=1)
  Filter: (website_id = 1)
Total runtime: 0.102 ms
---
Wondering why it is not using the index, which would have been
automatically created for the primary key.

Needing help.

thanks
Sanjay




[GENERAL] psql \copy command runs as a transcation?

2007-08-27 Thread Ow Mun Heng
Hi,

Does psql's \copy command run as a transaction? I think it does, but
somehow when I cancel a running import (in a script), it "seems" (I can't
seem to duplicate it on the CLI though) like a few lines/rows get
inserted anyway..
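
\copy is sent to the server as a single COPY statement, which is atomic; but when in doubt, wrapping it in an explicit transaction inside the psql script makes the all-or-nothing behavior explicit (table and file names below are hypothetical):

```sql
-- run inside psql; a cancel before COMMIT rolls everything back
BEGIN;
\copy my_table FROM 'data.csv' WITH CSV
COMMIT;
```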





[GENERAL] Detecting whether a point is in a box.

2007-08-27 Thread HST
Hello

I am trying to write a query to find all points that fall within a
given box. However, I cannot seem to find the functionality for
determining whether a point is within a box.

e.g.  select box '((0,0),(1,1))' @> point '(0.5,0.5)';

operator does not exist: box @> point

Is this operator for circles only? Is there a similar operator for
boxes? I have not been able to find it in the documentation.
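
For what it's worth, the geometric containment operators take the point on the left-hand side: point <@ box means "is contained in" (with @ as the older, deprecated spelling). A quick check:

```sql
SELECT point '(0.5,0.5)' <@ box '((0,0),(1,1))';  -- t
SELECT point '(2,2)'     <@ box '((0,0),(1,1))';  -- f
```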

Any help would be much appreciated
H




[GENERAL] How to restore my database ?

2007-08-27 Thread Franck Roubieu - AXCENTEO
Hello,

We tried many things but didn’t succeed.  

Our DB crashed without any recent backup.

We have 3 elements:
- a backup we did in February,
- 4 WAL files in the pg_xlog folder created in august,
- the “base” folder (in which there are table files created in august)

Q1 : Is it possible to copy (or modify) “base” folder files in order to
restore  ?

 

Other solution: Wal files 

Q2 : Is it possible to use a “restore_command” with these 4 WAL files and
our February backup ?  


We have already tried editing the recovery.conf file:

restore_command = "copy C:\archives\000100BC "%p""



… But, we had this error message:

«
2007-08-24 16:32:55 LOG: database system was shut down at 2007-08-24
16:32:01 Paris, Madrid
2007-08-24 16:32:55 LOG: starting archive recovery
2007-08-24 16:32:55 LOG: restore_command = "copy
C:\archives\000100BC "%p""
2007-08-24 16:32:56 LOG: restored log file "0001.history" from archive
2007-08-24 16:32:56 PANIC: syntax error in history file: \Ð
2007-08-24 16:32:56 HINT: Expected a numeric timeline ID.
»

Note: every WAL file begins with “\Ð”

Q3 : Do you know this error ? 
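
One observation on the quoted setup: that restore_command copies the same fixed file for every request, which would explain a "history file" that actually contains WAL data. recovery.conf normally uses %f for the file name being requested and %p for the destination path; a sketch (the archive directory is the one from the question):

```
# recovery.conf
restore_command = 'copy C:\\archives\\%f "%p"'
```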

thanks

Best regards

 

 

Cordialement,

 

Franck R. - Axcenteo

 

__
Franck Roubieu
[EMAIL PROTECTED]
tel :08 74 59 20 20 (tarif d'un appel local)
port :  06 12 58 80 62

 

-
AXCENTEO
Cap Alpha - Av. de L'Europe
34 940 Montpellier Cedex 9
--
www.axcenteo.com
www.eo-demo.com  

www.expresseo.com

 

 

 



Re: [GENERAL] Out of Memory - 8.2.4

2007-08-27 Thread Joshua D. Drake
Erik Jones wrote:
> 
> On Aug 27, 2007, at 12:15 PM, Martijn van Oosterhout wrote:
> 
>> On Mon, Aug 27, 2007 at 09:12:17AM -0700, Jeff Amiel wrote:
>>> Tracking for last few days.
>>> Does not appear to happen when little or no user
>>> activity (like Saturday)  I don't know if that rules
>>> out autovacuum or not (if no update threshholds are
>>> reached, no vacuuming will take place anyway)
>>
>> I don't think I've seen it so far this thread, but what are your memory
>> overcommit settings and allocated swap? At least on Linux you would
>> need a significant chunk of swap to be able to work with that much
>> memory, even with overcommit off. Check the rules for your system.
>>
>> Another thing I havn't seen mentioned: you appear to be on a 32-bit
>> architecture and with 2GB shared_buffers you've lost half your address
>> space on that alone. Perhaps you simply don't have enough contiguous
>> address space to alloc 512MB.
> 
> The X4600 runs with 64-bit Dual Opterons.

The machine we are tracking this problem on is also 64bit.

Joshua D. Drake



--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: [GENERAL] EXPLAIN ANALYZE not considering primary and unique indices!

2007-08-27 Thread Bill Moran
In response to Sanjay <[EMAIL PROTECTED]>:

> Hi All,
> 
> Say I have a simple table WEBSITE(website_id int4 PRIMARY KEY, name
> VARCHAR(30)). While I try this:
> 
> EXPLAIN ANALYZE SELECT * FROM WEBSITE WHERE website_id = 1
> 
> the output is:
> --
> Seq Scan on website  (cost=0.00..1.31 rows=1 width=162) (actual
> time=0.047..0.051 rows=1 loops=1)
>   Filter: (website_id = 1)
> Total runtime: 0.102 ms
> ---
> Wondering why it is not using the index, which would have been
> automatically created for the primary key.

Because PG thinks the seq scan is faster than an index scan.

Depending on other factors, it may be right.  If there's only a few
rows in the table, then a seq scan is going to be faster than
scanning the index, only to grab most of the table in to memory
anyway.

If it's wrong, it's either because your analyze data isn't up to date,
or your tuning parameters don't match your hardware.
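
To confirm it's a cost-based choice rather than a broken index, one session-local experiment (a diagnostic aid only, not a setting to leave on):

```sql
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM website WHERE website_id = 1;
RESET enable_seqscan;
```

If the index scan appears and is slower, the planner was right all along.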

-- 
Bill Moran
http://www.potentialtech.com



Re: [GENERAL] EXPLAIN ANALYZE not considering primary and unique indices!

2007-08-27 Thread Alan Hodgson
On Monday 27 August 2007 05:21, Sanjay <[EMAIL PROTECTED]> wrote:
>Wondering why it is not using the index, which would have
> been
> automatically created for the primary key.

Because you have only one row in the whole table, so 100% of the rows will 
match the query. In short, one page fetch for a seq scan is faster than 
first looking it up in an index and then fetching the same page.

set enable_seqscan=false;


-- 
"Government big enough to supply everything you need is big enough to take
everything you have ... the course of history shows that as a government
grows, liberty decreases." -- Thomas Jefferson




Re: [HACKERS] [GENERAL] Undetected corruption of table files

2007-08-27 Thread Decibel!
On Mon, Aug 27, 2007 at 12:08:17PM -0400, Jonah H. Harris wrote:
> On 8/27/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> > Indeed.  In fact, the most likely implementation of this (refuse to do
> > anything with a page with a bad CRC) would be a net loss from that
> > standpoint, because you couldn't get *any* data out of a page, even if
> > only part of it had been zapped.

I think it'd be perfectly reasonable to have a mode where you could
bypass the check so that you could see what was in the corrupted page
(as well as deleting everything on the page so that you could "fix" the
corruption). Obviously, this should be restricted to superusers.

> At least you would know it was corrupted, instead of getting funky
> errors and/or crashes.

Or worse, getting what appears to be perfectly valid data, but isn't.
-- 
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)




Re: [GENERAL] Out of Memory - 8.2.4

2007-08-27 Thread Jeff Amiel
--- "Joshua D. Drake" <[EMAIL PROTECTED]> wrote:
> 
> The machine we are tracking this problem on is also 64bit.

Hmm... looks like 3 different people are tracking a similar issue on 64-bit 
platforms... you, Erik, and myself.







   



Re: [GENERAL] Tables dissapearing

2007-08-27 Thread Andrew Sullivan
On Mon, Aug 27, 2007 at 07:15:44PM +0200, Kamil Srot wrote:
> OK, I'll drop the full part and do it less often...

This doesn't address your problem, but when you move from VACUUM FULL
to VACUUM, you want to do it _more_ often, not less.

But given what you've posted, I am not even a little bit convinced
your script is working as you think it is.  You're clearly not
managing to vacuum the entire database sometimes.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
A certain description of men are for getting out of debt, yet are
against all taxes for raising money to pay it off.
--Alexander Hamilton



Re: [GENERAL] Removing pollution from log files

2007-08-27 Thread Andrew Sullivan
On Mon, Aug 27, 2007 at 02:00:02PM +0300, Andrus wrote:
> Postgres  log files are polluted with messages
> 
> 2007-08-27 06:10:38 WARNING:  nonstandard use of \\ in a string literal at
> character 190
> 2007-08-27 06:10:38 HINT:  Use the escape string syntax for backslashes,
> e.g., E'\\'.

That's not pollution; it's telling you you need to fix your
application to escape the backslashes differently.  If you want to
suppress them, though, you can change your logging level to be higher
than "WARNING".

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
"The year's penultimate month" is not in truth a good way of saying
November.
--H.W. Fowler



[GENERAL] problem with transactions in VB.NET using npgsql

2007-08-27 Thread Owen Hartnett


Maybe someone here can figure it out.  Everything updates fine with 
this code, except when there's an exception, it's not being rolled 
back by the transaction.  What I'm trying to do:


Begin a transaction
Do the update, insert, delete checks on each of the data tables, 
using a different npgsqlcommandbuilder for each of the tables.

Commit
if any failure happens, roll back all the changes to the transaction beginning.

I assign the transaction object to each of the commands, but it seems 
that some tables still get updated even when I call Rollback.  Is 
something I'm calling secretly committing somewhere?


My code follows.  Thanks for checking it over.  Sorry about the 
length, but I wanted you to see that I'm updating multiple tables 
with multiple data adapters.


-Owen

Option Explicit On
Imports System.Windows.Forms
Imports npgsql
Imports System.Xml.Serialization
Imports System.IO
Imports System.Collections.Generic
Imports System.Configuration
' Note: some controls, in the forms designer, cover other controls, 
i.e. CommUsageCB covers styleCB

Public Class ParcelDisplayFrm

Public Sub WriteAllData()
    Dim trans As NpgsqlTransaction = Nothing
    Dim cmd As NpgsqlCommandBuilder
    Dim i As Integer
    Dim success As Boolean

    Try
        If Not statusReadOnly Then
            i = vbCancel
            success = priceIt(Me, full_DataSet, True, True, pc)
            dt = full_DataSet.Tables(currentSchema & ".parcel")
            dt.Rows(0).EndEdit()
            dt = full_DataSet.Tables(currentSchema & ".accounts")
            dt.Rows(0).EndEdit()
            dt = full_DataSet.Tables(currentSchema & ".bldg")
            For i = 0 To dt.Rows.Count - 1
                dt.Rows(i).EndEdit()
            Next i
            dt = full_DataSet.Tables(currentSchema & ".commcost")
            For i = 0 To dt.Rows.Count - 1
                dt.Rows(i).EndEdit()
            Next i
            dt = full_DataSet.Tables(currentSchema & ".outbuildings")
            For i = 0 To dt.Rows.Count - 1
                If dt.Rows(i).RowState = DataRowState.Added Then
                    dt.Rows(i).Item("maplot") = Form1.currentMapLot
                End If
                Debug.Print(dt.Rows.Count)
                dt.Rows(i).EndEdit()
            Next i
            If Not dirtySketch And Not full_DataSet.HasChanges Then
                Exit Sub ' Nothing to change
            End If

            Dim dg As New SaveChangesDlog
            If dg.ShowDialog = Windows.Forms.DialogResult.Cancel Then Exit Sub ' don't save

            writeFinalize()
            dt = full_DataSet.Tables(currentSchema & ".parcel")
            m_SqlConnection.Open()
            ' create a transaction for the rest of all the changes
            trans = m_SqlConnection.BeginTransaction

            cmd = New NpgsqlCommandBuilder(parcel_DataAdapter)

            Dim parcelchanges As DataTable = dt.GetChanges(DataRowState.Modified)
            If parcelchanges IsNot Nothing Then
                parcel_DataAdapter.UpdateCommand = cmd.GetUpdateCommand(dt.Rows(0))
                parcel_DataAdapter.UpdateCommand.Transaction = trans
                parcel_DataAdapter.Update(parcelchanges)
            End If
            parcelchanges = dt.GetChanges(DataRowState.Deleted)
            If parcelchanges IsNot Nothing Then
                parcel_DataAdapter.DeleteCommand = cmd.GetDeleteCommand(dt.Rows(0))
                parcel_DataAdapter.DeleteCommand.Transaction = trans
                parcel_DataAdapter.Update(parcelchanges)
            End If
            parcelchanges = dt.GetChanges(DataRowState.Added)
            If parcelchanges IsNot Nothing Then
                parcel_DataAdapter.InsertCommand = cmd.GetInsertCommand(dt.Rows(0))
                parcel_DataAdapter.InsertCommand.Transaction = trans
                parcel_DataAdapter.Update(parcelchanges)
            End If

            ' accounts table
            cmd = New NpgsqlCommandBuilder(accts_DataAdapter)
            dt = full_DataSet.Tables(currentSchema & ".accounts")
            Dim acctchanges As DataTable = dt.GetChanges(DataRowState.Modified)
            If acctchanges IsNot Nothing Then
                accts_DataAdapter.UpdateCommand = cmd.GetUpdateCommand(dt.Rows(0))
                accts_DataAdapter.UpdateCommand.Transaction = trans
                accts_DataAdapter.Update(acctchanges)
            End If
            acctchanges = dt.GetChanges(DataRowState.Deleted)
            If acctchanges IsNot Nothing Then
                accts_DataAdapter.DeleteCommand = cmd.GetDeleteCommand(dt.Rows(0))
                accts_DataAdapter.DeleteCommand.Transaction = trans
                accts_DataAdapt

[GENERAL] Windows Download

2007-08-27 Thread John K Masters
I have setup a Postgres server on Debian Etch and successfully connected
to it with various *nix clients but I now have to connect a WinXP
client. On accessing the Postgres site I am directed to a download page,
click on the appropriate link and get automatically directed to a
University of Kent site. No problem as I am on the Kent/Surrey border so
this is probably the nearest site. However, on looking at the U of K
page that has come up I see in the top left hand corner  the words "This
page does not exist". Now I could have sworn this page existed else how
could I have known it didn't exist?

Whilst I can appreciate the funny side of this situation I do need to be
able to install a PGSQL client on a windows machine. At the moment,
whichever 'national flag' I select from the Postgres site, I am
redirected to the Kent University page which doesn't exist.

Could the philosophy students please return control of the site to the
IT dept.? Thanks!

John
-- 
War is God's way of teaching Americans geography
Ambrose Bierce (1842 - 1914)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Tables dissapearing

2007-08-27 Thread Kamil Srot

Andrew Sullivan wrote:

On Mon, Aug 27, 2007 at 07:15:44PM +0200, Kamil Srot wrote:
  

OK, I'll drop the full part and do it less often...



This doesn't address your problem, but when you move from VACUUM FULL
to VACUUM, you want to do it _more_ often, not less.

  
Sure, I meant that I'll do the FULL vacuum less often than daily and 
run the plain VACUUM command daily.

But given what you've posted, I am not even a little bit convinced
your script is working as you think it is.  You're clearly not
managing to vacuum the entire database sometimes.
  
Well, I list all databases with the command "psql -U postgres -l -t" 
and with some shell scripting I vacuum all the databases shown...

I don't think there is any problem in it...

If you do see some, can you give me a hint?

Thank you,
--
Kamil


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-27 Thread Stephen Ince
Point taken for the enterprise comparison. The reason for having the 
embedded database is to hide the complexity of installing, using, and 
configuring the database from the user of the application. You don't 
want a scaled version of the database.





- Original Message - 
From: "Tony Caduto" <[EMAIL PROTECTED]>

Cc: 
Sent: Monday, August 27, 2007 12:47 PM
Subject: Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished



Stephen Ince wrote:
Postgres can't be embedded or serverless. Firebird has the embedded 
feature. Most of the databases have this capability (hsqldb, 
derby,oracle,mysql, firebird, and db2). Derby and hsqldb are the only 
free embedded databases for commercial use.




A lot of Firebird users have been saying this as well, but the comparison 
is more for Enterprise use.
Plus, if you need an embedded database, wouldn't it be better to use one 
built specifically for that purpose? SQLite, for example.


Good call on the name limit, I remember running into that when porting 
something from MS SQL server to Firebird about 4 years ago.

I will have to check and see if this still applies to version 2.0

Later,

Tony

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Tables dissapearing

2007-08-27 Thread Andrew Sullivan
On Mon, Aug 27, 2007 at 10:03:04PM +0200, Kamil Srot wrote:
> >  
> Sure, I meant that I'll do the FULL vacuum less often than daily and 
> run the plain VACUUM command daily.

If you have your servers set up correctly, you should never need to
perform VACUUM FULL. 

> Well, I list all databases with the command "psql -U postgres -l -t" 
> and with some shell scripting I vacuum all the databases shown...
> I don't think there is any problem in it...
> 
> If you do see some, can you give me a hint?

I don't think I've seen the script itself.  Without seeing it, I can
only guess.  But you don't actually need to do that.  You should be
able to run vacuumdb -a, and get the result you need.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
This work was visionary and imaginative, and goes to show that visionary
and imaginative work need not end up well. 
--Dennis Ritchie



Re: [GENERAL] Removing pollution from log files

2007-08-27 Thread Rainer Bauer
Andrew Sullivan wrote:

>On Mon, Aug 27, 2007 at 02:00:02PM +0300, Andrus wrote:
>> Postgres  log files are polluted with messages
>> 
>> 2007-08-27 06:10:38 WARNING:  nonstandard use of \\ in a string literal at
>> character 190
>> 2007-08-27 06:10:38 HINT:  Use the escape string syntax for backslashes,
>> e.g., E'\\'.
>
>That's not pollution; it's telling you you need to fix your
>application to escape the backslashes differently.  If you want to
>suppress them, though, you can change your logging level to be higher
>than "WARNING".

Or lookup
.

Apart from that: there was a bug in the ODBC driver prior to 08.02.0402 which
resulted in this error message whenever binary data of type SQL_LONGVARBINARY
was sent.

Rainer

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Tables dissapearing

2007-08-27 Thread Kamil Srot

Andrew Sullivan wrote:

On Mon, Aug 27, 2007 at 10:03:04PM +0200, Kamil Srot wrote:
  
 
  
Sure, I meant that I'll do the FULL vacuum less often than daily and 
run the plain VACUUM command daily.



If you have your servers set up correctly, you should never need to
perform VACUUM FULL. 

  

I see. Thank you for the hint...
Well, I list all databases with the command "psql -U postgres -l -t" 
and with some shell scripting I vacuum all the databases shown...

I don't think there is any problem in it...

If you do see some, can you give me a hint?



I don't think I've seen the script itself.  Without seeing it, I can
only guess.  But you don't actually need to do that.  You should be
able to run vacuumdb -a, and get the result you need.

  

The script is very simple one:

#!/bin/sh
PGDIR=/opt/pgsql

DATABASES=`$PGDIR/bin/psql -U postgres -l -t | cut -f 1 -d '|'`

for db in $DATABASES
do
   echo "vacuum;" | $PGDIR/bin/psql -U postgres $db
done

I can easily rewrite it to use the vacuumdb command, but I doubt it'll 
make any difference.


Is there a way to find out what the problem is? I scanned the pgsql log 
output for wraparound but didn't find any recent relevant 
entries... I'm pretty sure I'm missing something stupid :-(


Thank you very much for your help,
--
Kamil



Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-27 Thread Stephen Ince

Dave,
  Thx I will take a look. I was trying to port a postgres schema to a 
database that had embedded capability. I could not find any non-commercial 
databases that supported triggers, sequences, UDFs, and stored 
procedures. As I remember, Firebird has pretty weak UDF 
capability (C/C++ only) and the name size limitation was a killer.


Steve





--- Original Message ---
From: "Stephen Ince" <[EMAIL PROTECTED]>
To: "Tony Caduto" <[EMAIL PROTECTED]>, "Greg Smith" 
<[EMAIL PROTECTED]>, pgsql-general@postgresql.org

Sent: 27/08/07, 17:02:21
Subject: Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

Postgres can't be embedded or serverless. Firebird has the embedded 
feature.

Most of the databases have this capability (hsqldb, derby,oracle,mysql,
firebird, and db2). Derby and hsqldb are the only free embedded databases
for commercial use.



SQL Server CE is also free for commercial use iirc.

Regards, Dave







Re: [GENERAL] Out of Memory - 8.2.4

2007-08-27 Thread Erik Jones
Yes, but fortunately for me, unfortunately for the list, it's only  
happened to me once so I don't really have anything to go on wrt  
repeating the problem.  I can only say, "Yep!  It's happened!"  I am  
watching my db closely, though.  Well, my monitoring scripts are :)


On Aug 27, 2007, at 1:56 PM, Jeff Amiel wrote:


--- "Joshua D. Drake" <[EMAIL PROTECTED]> wrote:


The machine we are tracking this problem on is also 64bit.


Hmm... looks like 3 different people are tracking a similar  
issue on 64-bit platforms... you, Erik, and myself.










Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Re: [GENERAL] Tables dissapearing

2007-08-27 Thread Andrew Sullivan
On Mon, Aug 27, 2007 at 10:31:11PM +0200, Kamil Srot wrote:
> The script is very simple one:

Well, I don't see anything obvious, but. . .

> I can easily rewrite it to use the vacuumdb command, but I doubt it'll 
> make any difference.

The point is that you don't have to rewrite it.  Just run "vacuumdb
-a" and it vacuums _all_ databases.
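To double-check that a vacuum actually reached a given table, you can ask the statistics collector (a sketch, assuming 8.2's pg_stat_all_tables columns; run it in the database in question):

```sql
-- Most recently vacuumed tables first; NULL means never vacuumed
-- since statistics began.
SELECT relname, last_vacuum, last_autovacuum
FROM pg_stat_all_tables
ORDER BY last_vacuum DESC;
```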

> Is there a way to find out what the problem is? I scanned the pgsql log 
> output for wraparound but didn't find any recent relevant 
> entries... I'm pretty sure I'm missing something stupid :-(

Are you quite sure you're getting all your logs where you think you
are?

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
The plural of anecdote is not data.
--Roger Brinner



[GENERAL] pgsql Windows installer fixed registry key

2007-08-27 Thread Dizzy
Hello

I am currently working on creating a build system for an open source portable 
project that should be able to build the project on many platforms, POSIX and 
non-POSIX such as Windows. Our project has the option for using PostgreSQL.

Searching for PostgreSQL includes/libraries is very easy on POSIX and Unices 
but on Windows one has usually 3 options:
- have a relative location to the project source tree where those should be 
found (like a $project-root/pgsql)
- have the user specify the location to them
- try to read the location from registry settings

Note that these methods do not exclude each other.

The pgsql MSI installer does register a registry key but it's random every 
time it installs (probably something MSI chooses). I am asking if someone could 
make it (also) register a fixed registry key pointing to its install 
location (from which I could locate the includes/library to use when building 
our project). This could be useful for other projects too of course. In a 
similar way MySQL registers the key 
HKEY_LOCAL_MACHINE\SOFTWARE\MySQL AB\MySQL Server 5.0;Location for a 5.0.x 
version. Could it be possible to have PostgreSQL installer do something 
similar?

Thank you!

-- 
Mihai RUSU  Email: [EMAIL PROTECTED]
GPG : http://dizzy.roedu.net/dizzy-gpg.txt  WWW: http://dizzy.roedu.net
"Linux is obsolete" -- AST



Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-27 Thread Dann Corbit
There are some limitations to SQL Server Express:
http://www.microsoft.com/sql/downloads/trial-software.mspx

Download SQL Server 2005 Express Edition
Complete a SQL Server Express download, free. There are no time limits
and the software is freely redistributable (with registration). With a
database size limit of 4 gigabytes (GB) and support for 1 CPU and up to
1 GB of RAM, the SQL Server 2005 Express download provides software that
is suitable for application embedding or lightweight application
development.

I have never used the CE version and I do not know what the limitations
are.
They are not made clear from the download page:
http://www.microsoft.com/downloads/details.aspx?familyid=B9B12312-FE57-4817-A4BC-69992802732D&displaylang=en

This document:
http://download.microsoft.com/download/7/f/c/7fc20778-4e2e-4944-b432-ed74b404e542/sqlservercompactdatasheet_final.doc

mentions a maximum database size of 4 GB (like the Express version), but
the other limits are not clear.

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:pgsql-general-
> [EMAIL PROTECTED] On Behalf Of Stephen Ince
> Sent: Monday, August 27, 2007 1:30 PM
> To: Dave Page
> Cc: Tony Caduto; Greg Smith; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] PostgreSQL vs Firebird feature comparison
finished
> 
> Dave,
>Thx I will take a look. I was trying to port a postgres schema to a
> database that had embedded capability. I could not find any non-commercial
> databases that supported triggers, sequences, UDFs, and stored
> procedures. As I remember, Firebird has pretty weak UDF
> capability (C/C++ only) and the name size limitation was a killer.
> 
> Steve
> 
> >
> >
> >> --- Original Message ---
> >> From: "Stephen Ince" <[EMAIL PROTECTED]>
> >> To: "Tony Caduto" <[EMAIL PROTECTED]>, "Greg Smith"
> >> <[EMAIL PROTECTED]>, pgsql-general@postgresql.org
> >> Sent: 27/08/07, 17:02:21
> >> Subject: Re: [GENERAL] PostgreSQL vs Firebird feature comparison
> finished
> >>
> >> Postgres can't be embedded or serverless. Firebird has the embedded
> >> feature.
> >> Most of the databases have this capability (hsqldb,
derby,oracle,mysql,
> >> firebird, and db2). Derby and hsqldb are the only free embedded
> databases
> >> for commercial use.
> >>
> >
> > SQL Server CE is also free for commercial use iirc.
> >
> > Regards, Dave
> >
> > ---(end of
broadcast)---
> > TIP 2: Don't 'kill -9' the postmaster
> >
> 
> 
> ---(end of
broadcast)---
> TIP 5: don't forget to increase your free space map settings



Re: [GENERAL] Tables dissapearing

2007-08-27 Thread Kamil Srot

Andrew Sullivan wrote:
I can easily rewrite it to use the vacuumdb command, but I doubt it'll 
make any difference.



The point is that you don't have to rewrite it.  Just run "vacuumdb
-a" and it vacuums _all_ databases.

  

Oh, I get it now! It takes some time, but in the end I'll understand :-D
Sure, it's much better than coding some scripts etc. I've already used it 
in my script...
Is there a way to find out what the problem is? I scanned the pgsql log 
output for wraparound but didn't find any recent relevant 
entries... I'm pretty sure I'm missing something stupid :-(



Are you quite sure you're getting all your logs where you think you
are?
  
Yes, the log is correct... I can see statement errors from when I was 
selecting from system tables a few hours ago...

I have the default debug level... should I increase it?

I really need to make sure that the next problem (if it has to happen 
at all) will be the last one... :-(


Thank you,
--
Kamil




[GENERAL] Is this a bug? Insert float into int column inserts rounded value instead of error.

2007-08-27 Thread Matthew Schumacher
List,

One of the reasons why I use postgres is because you can insert data and
it will work or give you an error instead of converting, truncating,
etc... well I found a place where postgres makes an erroneous
assumption and I'm not sure this is by design.

When inserting a float such as 4.12322345 into a int column postgres
inserts 4 instead of returning an error telling you that your value
won't fit.  I would much rather have the error and check for it since I
can be sure I'll get 4.12322345 back out if I didn't get an error on insert.

Is this by design?  If so I think it should be changed so that postgres
will always return your data exactly as you entered it if there isn't an
error on insert.

template1=# create table test (number int);
CREATE TABLE
template1=# insert into test (number) values (4.123123123);
INSERT 0 1
template1=# select * from test;
 number
--------
      4
(1 row)


Thanks,
schu



Re: [GENERAL] Tables dissapearing

2007-08-27 Thread Tom Lane
Kamil Srot <[EMAIL PROTECTED]> writes:
> # select xmin, age(xmin) from pg_class;
>     xmin    |    age
> -----------+------------
>          2 | 2147483647
>          2 | 2147483647
>          2 | 2147483647
>          2 | 2147483647
>          2 | 2147483647
>          2 | 2147483647
>  236838019 |     539453
>          2 | 2147483647
>          2 | 2147483647
>  231899309 |    5478163
> etc.

I see no indication here that there's anything wrong at all.

The rows with xmin = 2 (ie, FrozenTransactionID) are evidently pg_class
rows that have never been modified since the database was created.
There isn't anything wrong with them.  age() is returning INT_MAX for
them to denote that they are "infinitely far in the past", but they are
in no danger of being invalidated by wraparound.

The newer rows indicate that your current transaction counter is around
237 million, which is barely a tenth of the way to wraparound.  While
it's conceivable that it already wrapped around (ie, you've really
executed 237 million plus some multiple of 2^32 transactions), I think
it's more probable that no wraparound has happened and your problem is
entirely unrelated to that.  I believe this in part because having just
one table "disappear" is not the likely result of an XID wraparound
problem, and in part because 8.2 has got very strong defenses against
allowing wraparound failure to occur: it will shut down first, and it
will do a whole lot of bleating and autovacuuming before that.
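For completeness, here is one way to see how far each database is from wraparound for yourself (a sketch; works on 8.1 and later):

```sql
-- The closer age(datfrozenxid) gets to 2 billion, the closer that
-- database is to forced anti-wraparound measures.
SELECT datname, age(datfrozenxid)
FROM pg_database
ORDER BY age(datfrozenxid) DESC;
```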

You haven't yet showed us any details of what happens.  What exactly do
you mean by the table "disappearing"?  Can you select from it?  Do you
see a row for it in pg_class?

regards, tom lane



Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished

2007-08-27 Thread Dave Page


> --- Original Message ---
> From: "Stephen Ince" <[EMAIL PROTECTED]>
> To: "Dave Page" <[EMAIL PROTECTED]>
> Sent: 27/08/07, 21:30:06
> Subject: Re: [GENERAL] PostgreSQL vs Firebird feature comparison finished
> 
> Dave,
>Thx I will take a look. I was trying to port a postgres schema to a 
> database that had embedded capability. I could not find any non-commerical 
> databases that supported triggers, sequences, udf function, and stored 
> procedure. I as I remembered firebird has pretty weak UDF function 
> capability(only C/C++) and the name size limitation was a killer.
> 

SQL CE is pretty limited as well - no sequences, triggers, or UDFs either. It 
works very well with .NET CF of course, and is a very useful datastore on a 
Pocket PC device, though.
Regards, Dave



Re: [GENERAL] pgsql Windows installer fixed registry key

2007-08-27 Thread Dave Page


> --- Original Message ---
> From: Dizzy <[EMAIL PROTECTED]>
> To: pgsql-general@postgresql.org
> Sent: 27/08/07, 21:12:55
> Subject: [GENERAL] pgsql Windows installer fixed registry key
> 
> The pgsql MSI installer does register a registry key but it's random every 
> time it installs (probably something MSI chooses).

No, it's not random. It uses the product ID which only changes between major 
versions (i.e. 8.2 - 8.3) or between distros (e.g. pgInstaller vs. EnterpriseDB 
Postgres).

Regards, Dave



Re: [GENERAL] Tables dissapearing

2007-08-27 Thread Kamil Srot

Tom Lane wrote:

Kamil Srot <[EMAIL PROTECTED]> writes:
  

# select xmin, age(xmin) from pg_class;
    xmin    |    age
-----------+------------
         2 | 2147483647
         2 | 2147483647
         2 | 2147483647
         2 | 2147483647
         2 | 2147483647
         2 | 2147483647
 236838019 |     539453
         2 | 2147483647
         2 | 2147483647
 231899309 |    5478163
etc.



I see no indication here that there's anything wrong at all.

The rows with xmin = 2 (ie, FrozenTransactionID) are evidently pg_class
rows that have never been modified since the database was created.
There isn't anything wrong with them.  age() is returning INT_MAX for
them to denote that they are "infinitely far in the past", but they are
in no danger of being invalidated by wraparound.


  

I think the same, but I'm at the end of my ideas...


You haven't yet showed us any details of what happens.  What exactly do
you mean by the table "disappearing"?  Can you select from it?  Do you
see a row for it in pg_class?
  
It's completely gone... I cannot do anything with this relation... I 
fixed it by recreating this specific table from a several-hours-old backup 
and it's fine (this table is nearly static)...
I didn't check whether the relation was in pg_class... but I have a complete 
backup of the database files after this issue from today, and also from the 
last time it happened (but they were vacuumed a few hours before the backup 
by the nightly script, so the data may have changed in these snapshots)... but 
if it can help and you'll guide me, I can fetch any data you want 
from it...


Thank you,
--
Kamil



Re: [GENERAL] Is this a bug? Insert float into int column inserts rounded value instead of error.

2007-08-27 Thread Tom Lane
Matthew Schumacher <[EMAIL PROTECTED]> writes:
> template1=# create table test (number int);
> CREATE TABLE
> template1=# insert into test (number) values (4.123123123);
> INSERT 0 1

Perhaps you'd be happier doing it like this:

regression=# insert into test (number) values ('4.123123123');
ERROR:  invalid input syntax for integer: "4.123123123"

Or if you use an integer-typed parameter, or COPY, the same thing will
happen.  The point here being that the integer input function is picky
in the way you want, but that has nothing to do with whether an
acknowledged non-integral value can be converted to int.  The original
case is allowed because float to int is an "assignment" cast.  You could
change it to an explicit cast if you like, but I think you'd soon find
that unpleasant; and it would be contrary to the SQL spec.  SQL92
section 4.6 saith:

 Values of the data types NUMERIC, DECIMAL, INTEGER, SMALLINT,
 FLOAT, REAL, and DOUBLE PRECISION are numbers and are all mutually
 comparable and mutually assignable. If an assignment would result
 in a loss of the most significant digits, an exception condition
 is raised. If least significant digits are lost, implementation-
 defined rounding or truncating occurs with no exception condition
 being raised. The rules for arithmetic are generally governed by
 Subclause 6.12, "".

regards, tom lane



Re: [GENERAL] Is this a bug? Insert float into int column inserts rounded value instead of error.

2007-08-27 Thread Andrew Sullivan
On Mon, Aug 27, 2007 at 12:48:34PM -0800, Matthew Schumacher wrote:
> When inserting a float such as 4.12322345 into a int column postgres
> inserts 4 instead of returning an error telling you that your value
> won't fit.  I would much rather have the error and check for it since I
> can be sure I'll get 4.12322345 back out if I didn't get an error on insert.

If you quote it, it works.  That is:

testing=# SELECT 4.123123123::int;
 int4
------
    4
(1 row)

testing=# SELECT '4.123123123'::int;
ERROR:  invalid input syntax for integer: "4.123123123"

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
I remember when computers were frustrating because they *did* exactly what 
you told them to.  That actually seems sort of quaint now.
--J.D. Baldwin



Re: [GENERAL] Tables dissapearing

2007-08-27 Thread Erik Jones

On Aug 27, 2007, at 4:08 PM, Tom Lane wrote:


Kamil Srot <[EMAIL PROTECTED]> writes:

# select xmin, age(xmin) from pg_class;
    xmin    |    age
-----------+------------
         2 | 2147483647
         2 | 2147483647
         2 | 2147483647
         2 | 2147483647
         2 | 2147483647
         2 | 2147483647
 236838019 |     539453
         2 | 2147483647
         2 | 2147483647
 231899309 |    5478163
etc.


I see no indication here that there's anything wrong at all.

The rows with xmin = 2 (ie, FrozenTransactionID) are evidently  
pg_class

rows that have never been modified since the database was created.
There isn't anything wrong with them.  age() is returning INT_MAX for
them to denote that they are "infinitely far in the past", but they  
are

in no danger of being invalidated by wraparound.

The newer rows indicate that your current transaction counter is  
around

237 million, which is barely a tenth of the way to wraparound.  While
it's conceivable that it already wrapped around (ie, you've really
executed 237 million plus some multiple of 2^32 transactions), I think
it's more probable that no wraparound has happened and your problem is
entirely unrelated to that.  I believe this in part because having  
just

one table "disappear" is not the likely result of an XID wraparound
problem, and in part because 8.2 has got very strong defenses against
allowing wraparound failure to occur: it will shut down first, and it
will do a whole lot of bleating and autovacuuming before that.

You haven't yet showed us any details of what happens.  What  
exactly do

you mean by the table "disappearing"?  Can you select from it?  Do you
see a row for it in pg_class?

regards, tom lane


Also, in your original post you mentioned a "proprietal CMS system".  
Is this proprietary to your company or one that you've purchased?  
The fact that the same table is going missing on multiple dbs all run by 
that CMS system certainly makes it worthy of suspicion.


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Re: [GENERAL] Bigtime scaling of Postgresql (cluster and stuff I suppose)

2007-08-27 Thread Philip Hallstrom

Bill Moran <[EMAIL PROTECTED]> writes:

First off, "clustering" is a word that is too vague to be useful, so
I'll stop using it.


Right.  MySQL Cluster, on the other hand, is a very specific technology.
http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster.html

It is, however, capable of being d*mn fast for read-mostly workloads
that can fit their whole dataset into RAM --- and with the price of


There are however some things that won't work (or work well) using NDB 
that will drive you crazy.


VARCHAR's aren't varchars.  They are fixed to the max length.  There's 
also a limit on overall row length which is pretty small (don't remember 
what it is off hand).  Cluster doesn't really enjoy processing queries 
with left outer joins or joins in general -- what will take <1s on a 
single mysql instance can take several seconds on the cluster.  Some of 
this is because the storage nodes can't do it so copy all the tables 
involved to the api nodes for processing.  Even on a fast network this 
takes a lot of time.  You can't have a query with two OR'd LIKE clauses. 
Instead you have to break them into their own query and UNION the result. 
You can't insert/update/delete more than 32000 rows at a time.  In 
practice (and no I don't understand why) sometimes this really means more 
like 1.


Most annoying however is that to make a change to the database schema you 
have to shut down all the nodes except one.  Not sure if this is typical 
of other systems or not, but it kind of sucks :/


There's other things too, but I don't remember what they are until I build 
something that works fine with a single mysql instance and then doesn't on 
the cluster...


-philip


