Hello all!
I'm working on a new base type. It's apparently installing fine via
PGXS. I'm using malloc and free for memory management within the
module code.
x=# select '+'::x_type.x_type;
 x_type
--------
 +
(1 row)
x=# select '+'::x_type.x_type;
 x_type
--------
 -
On 8/30/07, Madison Kelly <[EMAIL PROTECTED]> wrote:
> Hi all,
>
>I am pretty sure I've done this before, but I am drawing a blank on
> how I did it or even what commands I need. Missing the latter makes it
> hard to search. :P
>
>I've got Postfix working using PostgreSQL as the backend on a
On 8/30/07, Jeff Davis <[EMAIL PROTECTED]> wrote:
> I know already it's possible to audit changes to data in postgresql
> tables using triggers, etc.
>
> A lot of other things can also be logged using the logging mechanism,
> such as permission errors (by logging all error messages), etc.
>
> However, there are also other things that would be useful to audit
On Thu, 2007-08-30 at 15:39 -0600, Guy Fraser wrote:
> Below is the logging section from the postgresql.conf file. It
> would appear that you can configure PostgreSQL to log as much
> detail as you want to where you want. You can then write a
> program to parse the log file and present the information
Denis Gasparin wrote:
>> Yeah, you're wrong. The difference is that plain vacuum does not try
>> very hard to reduce the length of a table file --- it just frees up
>> space within the file for reuse. vacuum full will actually move things
>> from the end of the file to free space nearer the head of the file
I know already it's possible to audit changes to data in postgresql
tables using triggers, etc.
A lot of other things can also be logged using the logging mechanism,
such as permission errors (by logging all error messages), etc.
However, there are also other things that would be useful to audit
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> On Thu, Aug 30, 2007 at 03:32:40PM -0400, Tom Lane wrote:
>> difference is that SERIALIZABLE takes one snapshot at transaction start
>> and works with that for the whole transaction, whereas READ COMMITTED
>> takes a new snap for each statement.
> Oh, I get it.
On Aug 30, 2007, at 2:45 PM, André Volpato wrote:
Hello,
I need constraint definitions like pg_indexes.indexdef , in order
to drop / create them on the fly.
BTW, where can I find a complete doc about Postgres Metadata ?
- AV
The system catalogs are in the manual appendix. However, for w
Hello,
I need constraint definitions like pg_indexes.indexdef , in order to
drop / create them on the fly.
BTW, where can I find a complete doc about Postgres Metadata ?
- AV
On Thu, Aug 30, 2007 at 03:32:40PM -0400, Tom Lane wrote:
> difference is that SERIALIZABLE takes one snapshot at transaction start
> and works with that for the whole transaction, whereas READ COMMITTED
> takes a new snap for each statement.
Oh, I get it. This then explains why, in principle, READ
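The snapshot rule quoted above can be sketched as two concurrent psql sessions (the table name and row counts are illustrative):

```sql
-- Session A, READ COMMITTED: a fresh snapshot per statement
BEGIN;
SELECT count(*) FROM accounts;  -- suppose this sees 10 rows
-- ... Session B inserts a row and COMMITs here ...
SELECT count(*) FROM accounts;  -- now sees 11: new statement, new snapshot
COMMIT;

-- Session A, SERIALIZABLE: one snapshot for the whole transaction
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT count(*) FROM accounts;  -- sees 11
-- ... Session B inserts another row and COMMITs here ...
SELECT count(*) FROM accounts;  -- still 11: same snapshot as before
COMMIT;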
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> I had the impression that, when working in READ COMMITTED mode, you
> could see (for instance) _new_ rows that were INSERTed by others who
> were also doing work. In SERIALIZABLE, you couldn't. So in cases
> where the additional rows met criteria in y
On Thu, Aug 30, 2007 at 02:21:56PM -0400, Tom Lane wrote:
> SERIALIZABLE mode does not introduce any waits that wouldn't happen
> anyway. It only affects what happens after you stop waiting.
Ok, this makes me think I'm deeply confused about something. (Uh,
well, on this specific topic. Anyone
"John Prevost" <[EMAIL PROTECTED]> writes:
> So, what's the problem? Well, I have twelve tables that are
> partitioned by hour. There are 24 hours in a day, there are seven
> days in a week, and... you may see where I'm going here. PostgreSQL
> gets a lock on each individual table queried (in t
Madison Kelly wrote:
Hi all,
I am pretty sure I've done this before, but I am drawing a blank on
how I did it or even what commands I need. Missing the latter makes it
hard to search. :P
I've got Postfix working using PostgreSQL as the backend on a small,
simple test database where I hav
On Thu, Aug 30, 2007 at 08:03:07AM -0400, Paul Tilles wrote:
> We would like to use the 8.2 version of postgres with our next software
> build. The next 8.2 version will contain a software patch which is
> critical to our needs. When is the next release of 8.2 expected to occur?
Note that the
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> I think there's a reason why SERIALIZABLE could be slower, and that
> is that it's waiting on possibly-conflicting (but not actually
> conflicting) commits to happen in READ COMMITTED mode. No? Won't it
> have to check those things when it COMMITs?
S
"Tom Lane" <[EMAIL PROTECTED]> writes:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
>> Maybe we could load it in a more compact form after the heap cleanup
>> pass, instead of a plain TID list.
>
> Hmm ... a non-lossy bitmap would be an interesting alternative.
> I think I went with bsearch mainly
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> ! /* no need to allocate more space than we have pages */
> ! maxtuples = LAZY_ALLOC_TUPLES * relblocks;
Uh ... you need to guard against integer overflow in this calculation.
Perhaps leave the current initialization alone, and t
On Wed, Aug 29, 2007 at 10:22:32PM +0100, Gregory Stark wrote:
> mode. In fact I believe SERIALIZABLE mode is actually measurably faster in
> benchmarks but haven't run one in READ COMMITTED mode recently (for that
> reason).
I think there's a reason why SERIALIZABLE could be slower, and that
is that it's waiting on possibly-conflicting (but not actually
conflicting) commits to happen in READ COMMITTED mode. No? Won't it
have to check those things when it COMMITs?
First, read the Perl DBI documentation that is relevant:
perldoc DBD::Pg
perldoc DBI
Your examples do not make sense. You "prepare" a SQL statement, not just
data. E.g.:
$sth = $dbh->prepare("INSERT INTO test3(nameval, boolval) VALUES (?, ?)")
    or die($dbh->errstr);   # $sth is undef if prepare fails
foreach my $nm ('Joe', 'Fred') {
    $sth->execute($nm, 1) or die($sth->errstr);
}
Tom Lane escribió:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > I'm not having much luck really. I think the problem is that ANALYZE
> > stores reltuples as the number of live tuples, so if you delete a big
> > portion of a big table, then ANALYZE and then VACUUM, there's a huge
> > misestimat
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Maybe we could load it in a more compact form after the heap cleanup
> pass, instead of a plain TID list.
Hmm ... a non-lossy bitmap would be an interesting alternative.
I think I went with bsearch mainly because I knew I could generate
the TID list alr
> Yeah, you're wrong. The difference is that plain vacuum does not try
> very hard to reduce the length of a table file --- it just frees up
> space within the file for reuse. vacuum full will actually move things
> from the end of the file to free space nearer the head of the file,
> so that it
Woops, I wasn't careful enough when I wrote that email, sorry. The
results showed my real domains instead of 'test.com'. I had different
domains in the test and real DBs.
Madison Kelly wrote:
 email_file
-------------------------
 feneon.com/mkelly/inbox
and
 email_file
-------------------------
I've recently been developing for work a website backed by PostgreSQL,
showing information about network flows. This data is extremely high
volume in places, and I was hard pressed to come up with a good way to
speed up data loading times until I came across the recommendation to
use table inherit
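A minimal sketch of the table-inheritance partitioning setup being described (table names, columns, and the hour boundary are illustrative):

```sql
-- Parent holds no data; one child table per hour, each with a CHECK
-- constraint describing its time range.
CREATE TABLE flows (ts timestamp NOT NULL, bytes bigint);

CREATE TABLE flows_2007083014 (
    CHECK (ts >= '2007-08-30 14:00' AND ts < '2007-08-30 15:00')
) INHERITS (flows);

-- Bulk loads go straight into the right child with COPY; with
-- constraint_exclusion = on, a query filtered on ts only touches the
-- children whose CHECK constraints can match.
SELECT sum(bytes) FROM flows
WHERE ts >= '2007-08-30 14:00' AND ts < '2007-08-30 15:00';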
Tom Lane escribió:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> >> VACUUM getting "Out of memory" may not sound like a big problem, but
> >> the scary thing is - the last VACUUM's memory request may succeed and
> >> that means following queries start failing and that is big problem.
>
> > Maybe
Hi all,
I am pretty sure I've done this before, but I am drawing a blank on
how I did it or even what commands I need. Missing the latter makes it
hard to search. :P
I've got Postfix working using PostgreSQL as the backend on a small,
simple test database where I have a simple table calle
On Aug 30, 2007, at 4:03 AM, Ow Mun Heng wrote:
2. how do I perform a list of SQL using transactions. eg: like above,
but wrap it into a transaction.
assuming $dbh is your open handle to the database via DBI, then you
do something like this (the statement text is illustrative):
$dbh->begin_work() or die $dbh->errstr;
$sth = $dbh->prepare("INSERT INTO t (a) VALUES (?)");
$sth->execute($a_value) or do { $dbh->rollback(); die $sth->errstr; };
$dbh->commit() or die $dbh->errstr;
Günther Jedenastik <[EMAIL PROTECTED]> writes:
> What's wrong with fire a trigger event after/before the system catalog
> insert/update/delete?
This has been discussed before, see the archives.
regards, tom lane
Alvaro Herrera <[EMAIL PROTECTED]> writes:
>> VACUUM getting "Out of memory" may not sound like a big problem, but
>> the scary thing is - the last VACUUM's memory request may succeed and
>> that means following queries start failing and that is big problem.
> Maybe what we should do is spill the
"Roberto Icardi" <[EMAIL PROTECTED]> writes:
> Everything works perfectly... BUT the query always returns 0 rows affected,
> even if one record is inserted or deleted in friends_hobbies...why??
The rules for this are explained here:
http://www.postgresql.org/docs/8.2/static/rules-status.html
If
Denis Gasparin <[EMAIL PROTECTED]> writes:
> I'm a bit concerned about the autovacuum daemon.
> Today I ran a vacuum full during a normal maintenance task and I
> noticed that the size of pg_largeobject
> decreased from 14GB to 4GB...
> Every night we have a procedure that deletes large object
rafikoko <[EMAIL PROTECTED]> writes:
> Could you please suggest another combination of parameters for pg_dump, so
> that it dumps the complete database (including functions, triggers, procedures,
> operators, sequences, tables, views, etc. and obviously data)?
> It doesn't have to be stored in .tar arch
On 8/30/07, Nitin Verma <[EMAIL PROTECTED]> wrote:
>
> >> Then you aren't doing regular vacuum often enough and / or don't have high
> enough fsm settings.
>
> Right now it's just default, can you please point me to a document that
> elaborates on calculation of FSM for a given load (or to say aver
Tom Lane escribió:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > Tom Lane escribió:
> >> BTW, if an autovac worker gets an elog(ERROR) on one table, does it die
> >> or continue on with the next table?
>
> > It continues with the next table if interrupted (SIGINT), but the worker
> > exits on a
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Tom Lane escribió:
>> BTW, if an autovac worker gets an elog(ERROR) on one table, does it die
>> or continue on with the next table?
> It continues with the next table if interrupted (SIGINT), but the worker
> exits on any other error.
Hmm, that seems
Tom Lane escribió:
> BTW, if an autovac worker gets an elog(ERROR) on one table, does it die
> or continue on with the next table?
It continues with the next table if interrupted (SIGINT), but the worker
exits on any other error. I would ask you to review that code -- it's
in do_autovacuum, the
On 8/30/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Marko Kreen" <[EMAIL PROTECTED]> writes:
> > Note that it's much better to err on the smaller values.
>
> > Extra index pass is really no problem.
>
> I beg to differ ...
Well, if Postgres tries to cut down passes by using max memory
then admin is
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Tom Lane escribió:
>> Yeah ... so just go with a constant estimate of say 200 deletable tuples
>> per page?
> How about we use a constant estimate using the average tuple width code?
I think that's overthinking the problem. The point here is mostly fo
Andrus wrote:
> > That's not pollution; it's telling you you need to fix your
> > application to escape the backslashes differently.
>
> I haven't seen that the ODBC specification requires escaping strings.
> So that is the ODBC driver's task.
So complain to the ODBC guys. OTOH, maybe you are using ODBC
[EMAIL PROTECTED]
As far as i know, the reason a trigger doesn't work on system catalog
(e.g. DDL statements) is because it's not a INSERT SQL statement.
It seems 'simple_heap_insert' is used for inserts.
So i thought why not fire a trigger event after the simple_heap_insert
or the index upda
Denis Gasparin wrote:
> Another question/idea: why not put messages about what tables got
> vacuumed by the autovacuum daemon as normal log messages (instead of
> debug2)?
We did that for 8.3, actually.
> I think it could be useful because in this way you can also know what
> tables are used m
Tom Lane escribió:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > I'm not having much luck really. I think the problem is that ANALYZE
> > stores reltuples as the number of live tuples, so if you delete a big
> > portion of a big table, then ANALYZE and then VACUUM, there's a huge
> > misestimat
Tom Lane escribió:
> "Marko Kreen" <[EMAIL PROTECTED]> writes:
> > Note that it's much better to err on the smaller values.
>
> > Extra index pass is really no problem.
>
> I beg to differ ...
>
> What this may actually suggest is that autovacuum needs its own value of
> "maintenance_work_mem",
Marko Kreen escribió:
> On 8/29/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> > Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > > I'm not having much luck really. I think the problem is that ANALYZE
> > > stores reltuples as the number of live tuples, so if you delete a big
> > > portion of a big table
"Marko Kreen" <[EMAIL PROTECTED]> writes:
> Note that it's much better to err on the smaller values.
> Extra index pass is really no problem.
I beg to differ ...
What this may actually suggest is that autovacuum needs its own value of
"maintenance_work_mem", or that it should automatically divid
Hi all... I'm experimenting for the first time with rules to make a view
updatable, but I've found a strange "effect"...
I'll try to explain what I'm trying to achieve
Suppose I have a table "friends" and a table "hobbies". Each friend can be
interested in one or more hobbies:
CREATE TABLE fr
We tried to connect from a different gentoo machine.
both client and server are running version 8.2.4 of postgresql.
right now, we are trying to connect from gentoo, but we want to connect from
windows as well
Idan
On 8/30/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
>
> On Thu, Aug 30, 2007 a
Hi,
On Thu, 2007-08-30 at 08:03 -0400, Paul Tilles wrote:
> When is the next release of 8.2 expected to occur?
http://archives.postgresql.org/pgsql-general/2007-08/msg01803.php
Regards,
--
Devrim GÜNDÜZ
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Share
We would like to use the 8.2 version of postgres with our next software
build. The next 8.2 version will contain a software patch which is
critical to our needs. When is the next release of 8.2 expected to occur?
As always, TIA.
Paul Tilles
>> Then you aren't doing regular vacuum often enough and / or don't have high
enough fsm settings.
Right now it's just the default; can you please point me to a document that
elaborates on calculating FSM settings for a given load (or, say, an average load)?
I found ( http://www.varlena.com/GeneralBits/Tid
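For reference, these are the 8.2-era free space map settings being discussed (values are illustrative; the totals reported at the end of a database-wide VACUUM VERBOSE are the usual sizing guide):

# postgresql.conf
max_fsm_pages = 200000     # pages with free space tracked, cluster-wide
max_fsm_relations = 1000   # distinct tables and indexes tracked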
On Thu, Aug 30, 2007 at 02:07:13PM +0300, Idan Miller wrote:
> Hi everyone,
>
> I'm trying to configure PostgreSQL version 8.2.4 with Kerberos and Active
> Directory.
> The AD is run on a windows 2003 server, and the postgre on gentoo.
> The gentoo computer name is postgre and it's added to the wi
Hi everyone,
I'm trying to configure PostgreSQL version 8.2.4 with Kerberos and Active
Directory.
The AD is run on a windows 2003 server, and the postgre on gentoo.
The gentoo computer name is postgre and it's added to the windows 2003
server AD domain.
I did the following:
- I compiled postgre w
On 8/30/07, Nitin Verma <[EMAIL PROTECTED]> wrote:
> > Why are you doing FULL vacuums? Is there some problem that regular vacuums
> aren't solving?
>
> Using dump/restore from a live DB to a fresh DB, I get a DB that takes 61M
> of disk (49M + 12M; I have two tablespaces). Maximum size that I ca
Hi,
Thanks for the answer. I've checked and the archive is corrupted. Every time
I dump my database with the pg_dump command mentioned in the previous post, I
get the same error.
Could you please suggest another combination of parameters for pg_dump, so
that it dumps the complete database (including
> I'm just looking for the correct workaround.
While adding a new collation is the "correct" solution, it's a lot of work.
Even then, pg can't use a different collation anyway unless you re-init
the datadir using initdb.
One workaround is to cast the text value into a bytea value, and then it
will be
I'm a bit concerned about the autovacuum daemon.
Today I ran a vacuum full during a normal maintenance task and I
noticed that the size of pg_largeobject
decreased from 14GB to 4GB...
Every night we have a procedure that deletes large object no more
referenced using the vacuumlo program.
Thi
I have experimented with vacuuming a table while running transactions on the
same table. It works fine without pause injection. Is that because of MVCC?
Hi,
Thanks for the answer. I've checked and the archive is corrupted. Every time
I dump my database with pg_dump command mentioned in the previous post, I've
got the same error.
Could you please suggest me other compination of parameters for pg_dump, so
that it dumps complete database (inluding
Hi all,
I'm sure some of you use Perl DBI to access PG; I need some
pointers (Pg-specific, I guess).
1. Is it possible to execute queries to PG using multiple statements?
eg:
prepare("A")
bind_param($A)
execute()
prepare("BB")
bind_param($B)
execute()
prepare("CC")
bind_param($B)
execute()
right
On 8/29/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > I'm not having much luck really. I think the problem is that ANALYZE
> > stores reltuples as the number of live tuples, so if you delete a big
> > portion of a big table, then ANALYZE and then VACUUM,
> Why are you doing FULL vacuums? Is there some problem that regular vacuums
aren't solving?
Using dump/restore from a live DB to a fresh DB, I get a DB that takes 61M of
disk (49M + 12M; I have two tablespaces). The maximum size I can grow to, under
the quota allocated to the DB, is 100M.
A regular vacu
On Thu, 2007-08-30 at 09:14 +0200, A. Kretschmer wrote:
> am Thu, dem 30.08.2007, um 14:59:06 +0800 mailte Ow Mun Heng folgendes:
> > Is there a way to do a dump of a database using a select statement?
>
> A complete database or just a simple table?
a simple table.. couple million records, want
am Thu, dem 30.08.2007, um 14:59:06 +0800 mailte Ow Mun Heng folgendes:
> Is there a way to do a dump of a database using a select statement?
A complete database or just a simple table?
>
> eg: \copy trd to 'file' select * from table limit 10
Since 8.2 you can use COPY (SELECT * FROM table) TO 'file'.
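Spelled out (the file path is illustrative; note that COPY ... TO writes a file on the server, while psql's \copy writes on the client):

COPY (SELECT * FROM trd LIMIT 10) TO '/tmp/trd_sample.csv' WITH CSV;

The psql \copy form should also accept the parenthesized query in 8.2:
\copy (SELECT * FROM trd LIMIT 10) TO 'trd_sample.csv' WITH CSV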
Is there a way to do a dump of a database using a select statement?
eg: \copy trd to 'file' select * from table limit 10