Don Baccus writes:
> At 01:06 PM 8/1/00 +1000, Philip Warner wrote:
>
> >I agree; it's definitely a non-critical feature. But then, it is only 80
> >lines of code in one place (including 28 non-code lines). I am not totally
> >happy with the results it produces, so I have no objection to removing it.
Michael Talbot-Wilson wrote:
>
> > I want to alter the size of a column, say from char(40) to char(80),
> > but it seems that ALTER does not support such an operation, nor does
> > it support removing a column.
> >
> > How can I do this?
>
I would also like to know how to do both of these things.
> I want to alter the size of a column, say from char(40) to char(80),
> but it seems that ALTER does not support such an operation, nor does
> it support removing a column.
>
> How can I do this?
I would also like to know how to do both of these things.
Hi,
I want to alter the size of a column, say from char(40) to char(80),
but it seems that ALTER does not support such an operation, nor does
it support removing a column.
How can I do this?
Thanks
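For reference: PostgreSQL versions of this era had neither ALTER TABLE ... ALTER COLUMN TYPE nor DROP COLUMN, so the usual workaround was to rebuild the table. A minimal sketch of that pattern, shown with Python's stdlib sqlite3 so it is runnable here (the people/people_new names are invented; the same SQL sequence applies in PostgreSQL):

```python
import sqlite3

# Stand-in database; with PostgreSQL you would connect normally instead.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE people (name CHAR(40))")   # hypothetical table
cur.execute("INSERT INTO people VALUES ('Alice')")

# 1. Create a new table with the wider column.
cur.execute("CREATE TABLE people_new (name CHAR(80))")
# 2. Copy the existing rows across.
cur.execute("INSERT INTO people_new SELECT name FROM people")
# 3. Drop the old table and rename the new one into its place.
cur.execute("DROP TABLE people")
cur.execute("ALTER TABLE people_new RENAME TO people")
conn.commit()

print(cur.execute("SELECT name FROM people").fetchall())  # [('Alice',)]
```

The same trick drops a column: simply leave it out of the new table's definition and out of the copying SELECT. Indexes, constraints, and grants on the old table must be recreated by hand.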
> 1. In my PHP code, I have functions like
> inserttransaction(values...). I could just modify inserttransaction()
> so that it runs the same query (the INSERT) on two or more DB
> servers. This would probably work ok.
Why not have a proxy server t
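The dual-INSERT idea above can be sketched with two stdlib sqlite3 connections standing in for two PostgreSQL servers; inserttransaction (named after the post) and its table are illustrative assumptions, and this is not a true two-phase commit:

```python
import sqlite3

# Two in-memory databases stand in for two separate DB servers.
primary = sqlite3.connect(":memory:")
replica = sqlite3.connect(":memory:")
for db in (primary, replica):
    db.execute("CREATE TABLE transactions (id INTEGER, amount REAL)")

def inserttransaction(txn_id, amount):
    """Apply the same INSERT to every server; commit only if all succeed."""
    dbs = (primary, replica)
    try:
        for db in dbs:
            db.execute("INSERT INTO transactions VALUES (?, ?)",
                       (txn_id, amount))
        for db in dbs:
            db.commit()
    except sqlite3.Error:
        for db in dbs:
            db.rollback()
        raise

inserttransaction(1, 9.99)
print(primary.execute("SELECT COUNT(*) FROM transactions").fetchone()[0])  # 1
print(replica.execute("SELECT COUNT(*) FROM transactions").fetchone()[0])  # 1
```

A crash between the two commits can still leave the servers out of step; a proxy or a real replication layer exists precisely to close that window.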
Help! Is there any recent, comprehensive information for getting the openlink ODBC
drivers installed (i.e. something like a HOWTO) for Linux? The PostgreSQL site merely
links to openlink, but the information there is uselessly vague and/or out of date.
That is, installing the RPM plus the SD
"Fetter, David M" <[EMAIL PROTECTED]> writes:
> DBI->connect(dbname=template1) failed: PQconnectPoll() -- connect() failed:
> Connection refused
> Is the postmaster running (with -i) at 'mwabs504'
> and accepting connections on TCP/IP port '5432'?
> at test.pl line 59
"Connection
I guess if you don't do deletes, then something like selecting all the
records with an oid greater than the highest oid seen in the last
replication cycle would find the most recent additions.
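A runnable sketch of that select, with SQLite's implicit rowid standing in for PostgreSQL's oid column (the log table is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (msg TEXT)")
conn.executemany("INSERT INTO log VALUES (?)", [("a",), ("b",), ("c",)])

last_seen = 1  # highest rowid copied in the previous cycle

# Rows added since then. Inserts only: deletes and updates are
# invisible to this scheme, as the post notes.
new_rows = conn.execute(
    "SELECT rowid, msg FROM log WHERE rowid > ? ORDER BY rowid",
    (last_seen,),
).fetchall()
print(new_rows)  # [(2, 'b'), (3, 'c')]
```

One caveat with real PostgreSQL oids: they are assigned from a global counter that can eventually wrap, so they are not a reliable monotonic key over the long term; a serial/sequence column is the safer version of the same idea.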
Erich wrote:
>
> I am setting up a system that processes transactions, and it needs to
> be highly reliable. Once a transaction happens, it can never be lost.
I am setting up a system that processes transactions, and it needs to
be highly reliable. Once a transaction happens, it can never be
lost. This means that there needs to be real-time off-site
replication of data. I'm wondering what's the best way to do this.
One thing that might simplify thi
Hi,
Originally postgres had a "recursive select" to handle cases like this.
Some syntax like...
retrieve* (notice the "*"), which meant keep executing until you can't
anymore; with an appropriate where clause it would descend tree-like
structures.
This feature disappeared somewhere along the way.
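The standard-SQL descendant of that recursive retrieve is WITH RECURSIVE, which PostgreSQL gained in 8.4; sqlite3 also supports it, which makes this sketch runnable (the parts table is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (id INTEGER, parent INTEGER)")
conn.executemany("INSERT INTO parts VALUES (?, ?)",
                 [(1, None), (2, 1), (3, 1), (4, 2)])

# Descend the tree from node 1, keeping going "until you can't anymore".
rows = conn.execute("""
    WITH RECURSIVE subtree(id) AS (
        SELECT 1
        UNION ALL
        SELECT p.id FROM parts p JOIN subtree s ON p.parent = s.id
    )
    SELECT id FROM subtree
""").fetchall()
print(sorted(r[0] for r in rows))  # [1, 2, 3, 4]
```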
"Fetter, David M" wrote:
> I'm having issues with installing the postgres DBI for perl. Here is the
> output of make test and a verification postmaster is running:
>
> [~/DBD-Pg-0.95] dmfetter@mwabs505! make test
> PERL_DL_NONLAZY=1 /usr/local/bin/perl -Iblib/arch -Iblib/lib
> -I/usr/local/lib/perl5/5.00503/sun4-solaris
"Bryan White" <[EMAIL PROTECTED]> writes:
>> Shut down the postmaster and then copy the entire db (including pg_log
>> file) and it should work. The catch is to make sure pg_log is in sync
>> with your table files.
> I would rather not leave my database down long enough to copy the entire db
> (3.5GB). I have control over when changes are applied.
"Fetter, David M" <[EMAIL PROTECTED]> writes:
> and import the database, however the perl scripts that we were using
> (written by somebody who's gone now) no longer work. I've traced the
> problem to the following line:
> $dbh = DBI->connect("dbi:Pg:dbname=$PGDB;host=$PGHOST;port=$PGPORT", , );
I'm having issues with installing the postgres DBI for perl. Here is the
output of make test and a verification postmaster is running:
[~/DBD-Pg-0.95] dmfetter@mwabs505! make test
PERL_DL_NONLAZY=1 /usr/local/bin/perl -Iblib/arch -Iblib/lib
-I/usr/local/lib/perl5/5.00503/sun4-solaris -I/usr/loca
> Shut down the postmaster and then copy the entire db (including pg_log
> file) and it should work. The catch is to make sure pg_log is in sync
> with your table files.
I would rather not leave my database down long enough to copy the entire db
(3.5GB). I have control over when changes are applied.
"Bryan White" <[EMAIL PROTECTED]> writes:
> I am willing to spend some time to track this down. However I would prefer
> to not keep crashing my live database. I would like to copy the raw data
> files to a backup machine. Are there any catches in doing this?
Shut down the postmaster and then copy the entire db (including pg_log
file) and it should work. The catch is to make sure pg_log is in sync
with your table files.
> Hmm. Assuming that it is a corrupted-data issue, the only likely
> failure spot that I see in CopyTo() is the heap_getattr macro.
> A plausible theory is that the length word of a variable-length field
> (eg, text column) has gotten corrupted, so that when the code tries to
> access the next field
Tom Lane writes:
> Felipe Alvarez Harnecker <[EMAIL PROTECTED]> writes:
> > Hi, I wonder if one must activate the LIMIT clause somewhere,
>
> uh ... no ...
>
> > because for me it does nothing.
>
> Details? What query did you issue exactly, and what did you get?
>
>
Paul Caskey <[EMAIL PROTECTED]> writes:
> This query takes 206 seconds:
> [snip]
> If I change the last line to this, it takes 1 second:
What does EXPLAIN show for these queries?
regards, tom lane
Joseph Shraibman <[EMAIL PROTECTED]> writes:
> His question was how to extract the data from postgres, since it is
> there until a vacuum.
Depends. If the table hadn't been touched at all since the erroneous
transaction, he could go into the pg_log file and twiddle the two bits
that give the commit status of that transaction.
Paul Caskey wrote:
>
> This query takes 206 seconds:
>
> SELECT t1.blah, t1.foo, t2.id
> FROM t1, t2, t3
> WHERE t1.SessionId = 427
> AND t1.CatalogId = 22
> AND t1.CatalogId = t3.CatalogId
> AND t2.id = t3.SomeId
> AND t2.Active != 0
>
> If I change the last line to this, it takes 1 second:
>
I'm attempting to upgrade our version of postgres from 6.4.3, which has a
couple of not too good bugs, to 7.0.2. I can successfully upgrade, export
and import the database, however the perl scripts that we were using
(written by somebody who's gone now) no longer work. I've traced the
problem to the following line:
$dbh = DBI->connect("dbi:Pg:dbname=$PGDB;host=$PGHOST;port=$PGPORT", , );
Felipe Alvarez Harnecker <[EMAIL PROTECTED]> writes:
> Hi, I wonder if one must activate the LIMIT clause somewhere,
uh ... no ...
> because for me it does nothing.
Details? What query did you issue exactly, and what did you get?
regards, tom lane
"Bryan White" <[EMAIL PROTECTED]> writes:
>> I concur that this probably indicates corrupted data in the file. We
>> may or may not be able to guess how it got corrupted, but a stack trace
>> seems like the place to start.
> Here is the backtrace:
> #0 0x808b0e1 in CopyTo ()
Hmm. Assuming that it is a corrupted-data issue, the only likely
failure spot that I see in CopyTo() is the heap_getattr macro.
Tom Lane writes:
> g <[EMAIL PROTECTED]> writes:
> > Use the limit clause.
> > SELECT message_text FROM messages ORDER BY creation_date LIMIT $limit,
> > $offset.
>
> > LIMIT 10, 0 gets you the first batch.
> > LIMIT 10, 10 gets you the second batch.
> > LIMIT 10, 20 gets you the third, and so on.
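A runnable sketch of batched retrieval with stdlib sqlite3; the explicit LIMIT n OFFSET m spelling is used because the comma form's argument order differs between databases (the messages table contents are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (creation_date INTEGER, message_text TEXT)")
conn.executemany("INSERT INTO messages VALUES (?, ?)",
                 [(i, f"msg{i}") for i in range(1, 26)])

def batch(limit, offset):
    """Fetch one page of messages in creation order."""
    return [r[0] for r in conn.execute(
        "SELECT message_text FROM messages "
        "ORDER BY creation_date LIMIT ? OFFSET ?",
        (limit, offset),
    )]

print(batch(10, 0)[:3])   # ['msg1', 'msg2', 'msg3']
print(batch(10, 10)[:3])  # ['msg11', 'msg12', 'msg13']
```

The ORDER BY matters: without it, LIMIT/OFFSET pages are not guaranteed to be stable between queries.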
> Status 139 indicates a SEGV trap on most Unixen. There should be a core
> dump left by the crashed backend --- can you get a backtrace from it
> with gdb?
>
> I concur that this probably indicates corrupted data in the file. We
> may or may not be able to guess how it got corrupted, but a stack trace
> seems like the place to start.
This query takes 206 seconds:
SELECT t1.blah, t1.foo, t2.id
FROM t1, t2, t3
WHERE t1.SessionId = 427
AND t1.CatalogId = 22
AND t1.CatalogId = t3.CatalogId
AND t2.id = t3.SomeId
AND t2.Active != 0
If I change the last line to this, it takes 1 second:
AND t2.Active = 1
The "Active" field is 0 or 1.
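Assuming Active really only ever holds 0 or 1, the two predicates select the same rows, so the rewrite is safe; the equality form is also easier for a planner to match against an index. A quick runnable check with stdlib sqlite3 (data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t2 (id INTEGER, Active INTEGER)")
conn.executemany("INSERT INTO t2 VALUES (?, ?)",
                 [(1, 0), (2, 1), (3, 1), (4, 0)])

# The slow predicate and its equality rewrite return identical rows.
ne = conn.execute("SELECT id FROM t2 WHERE Active != 0 ORDER BY id").fetchall()
eq = conn.execute("SELECT id FROM t2 WHERE Active = 1 ORDER BY id").fetchall()
print(ne == eq, ne)  # True [(2,), (3,)]
```

In PostgreSQL, comparing the two plans with EXPLAIN (as suggested elsewhere in this thread) shows whether the equality form actually picked up an index scan.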
Herbert Liechti wrote:
>
> [EMAIL PROTECTED] wrote:
>
> >
> > > > How can I recover this table?
> > > >
> > > > Please help me.
> > >
> > > Put in your backup tape and restore.
> >
> > Unfortunately, I don't have the backup tape. :(
>
> see http://www.rocksoft.com/taobackup/
>
"Bryan White" <[EMAIL PROTECTED]> writes:
> As a result of this event, in the log file I see:
> ---
> Server process (pid 21764) exited with status 139 at Mon Jul 31 14:51:44
Status 139 indicates a SEGV trap on most Unixen. There should be a core
dump left by the crashed backend --- can you get a backtrace from it
with gdb?
Joseph Shraibman <[EMAIL PROTECTED]> writes:
> What do these mean?
> NOTICE: FlushRelationBuffers(message, 2): block 2 is referenced
> (private 0, global 1)
> FATAL 1: VACUUM (vc_repair_frag): FlushRelationBuffers returned -2
We've seen a couple of reports of this happening with 7.0. Apparently
Dear all,
I would like to define threads in a message system for replies to messages,
but if I define too many levels, I am afraid I will have problems with the
select...
Say, I have defined 3 levels:

      1
     / \
    2   3
   / \   \
  4   5   6

It means message 2 is a reply to 1, and
4 is a further reply to 2.
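However many levels deep the thread grows, a recursive CTE (WITH RECURSIVE, standard SQL; PostgreSQL has it from 8.4) can fetch the whole subtree in one select. A runnable sketch with stdlib sqlite3, using ids from the diagram and assuming 4 and 5 reply to 2 while 6 replies to 3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER, reply_to INTEGER)")
conn.executemany("INSERT INTO messages VALUES (?, ?)",
                 [(1, None), (2, 1), (3, 1), (4, 2), (5, 2), (6, 3)])

# Collect message 1 and every transitive reply, tracking the level,
# so the query needs no change when more levels appear.
rows = conn.execute("""
    WITH RECURSIVE thread(id, level) AS (
        SELECT 1, 1
        UNION ALL
        SELECT m.id, t.level + 1
        FROM messages m JOIN thread t ON m.reply_to = t.id
    )
    SELECT id, level FROM thread ORDER BY id
""").fetchall()
print(rows)  # [(1, 1), (2, 2), (3, 2), (4, 3), (5, 3), (6, 3)]
```

Without recursive SQL, the alternatives are one self-join per level (which hard-codes the maximum depth) or walking the tree with repeated queries from the application.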
[EMAIL PROTECTED] wrote:
>
> > > How can I recover this table?
> > >
> > > Please help me.
> >
> > Put in your backup tape and restore.
>
> Unfortunately, I don't have the backup tape. :(
see http://www.rocksoft.com/taobackup/
my heartfelt sympathy!
--
~~~