I've got a huge database table and I need to increase the size of a
varchar from like 100 to 200 characters. As I recall the size is just
a restriction and doesn't actually affect the format of the table
file.
Rather than dumping/restoring a 5Gb table with 20,000,000 rows which
will take all
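One commonly cited workaround from that era is to bump the length limit directly in the system catalog instead of rebuilding the table: varchar(n) stores its limit in pg_attribute.atttypmod as n + 4. This is an unsupported, at-your-own-risk hack, and the table/column names below (users.email) are only an illustration:

BEGIN;
-- atttypmod for varchar(n) is n + 4, so varchar(200) is 204
UPDATE pg_attribute
   SET atttypmod = 204
 WHERE attname = 'email'
   AND attrelid = (SELECT oid FROM pg_class WHERE relname = 'users');
COMMIT;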
I'm going to write a function in C that parses XML. Is there any way
to index the output of this function? I've got 10,000,000 rows that
contain XML data and I need to efficiently find the ones that contain
the proper keys. I tried pulling the values I want from the data and
putting it in its
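Functional indexes cover exactly this case, provided the C function is declared cachable so its output can be trusted by the planner. A rough sketch, with the function name, library path, and table layout (docs.xmldata) all invented for illustration:

-- hypothetical C function that extracts the key from an XML document
CREATE FUNCTION xml_get_key(text) RETURNS text
    AS '/usr/local/pgsql/lib/xmlfuncs.so' LANGUAGE 'c'
    WITH (iscachable);

-- index the function's output rather than the raw XML
CREATE INDEX docs_xmlkey_idx ON docs (xml_get_key(xmldata));

-- queries written against the same expression can then use the index
SELECT * FROM docs WHERE xml_get_key(xmldata) = 'orderid';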
I've been watching for this for some time. First it was 7.0, then
7.1. Does anyone have any idea on when the row re-use code will be
ready?
Currently I'm running into trouble with an OLTP database. It grows
like crazy, has only 3,000,000 rows and vacuum takes a good 1/2 hour.
Given trouble w
> "Michael Richards" <[EMAIL PROTECTED]> writes:
>> The reduction of the list doesn't seem to be terribly efficient.
>> Here are some strategies I've been looking at:
>
>> select id from users WHERE
>> id not in (select userid from sentlette
I've got 2 tables, one with a list of users (has only about 5000
entries) and another with a list of userids that have already been
sent letters. I'm trying to efficiently join these two so I get every
user who hasn't been sent a letter. The problem is, coupled with the
5 other joins on the us
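The usual rewrite for a slow NOT IN in 7.x is a correlated NOT EXISTS, or an outer join once 7.1 is available. Guessing at the table names (users, sentletters) from the truncated query above:

-- correlated NOT EXISTS instead of NOT IN
SELECT u.id
  FROM users u
 WHERE NOT EXISTS (SELECT 1 FROM sentletters s WHERE s.userid = u.id);

-- equivalent outer-join form (needs the outer-join support added in 7.1)
SELECT u.id
  FROM users u LEFT JOIN sentletters s ON s.userid = u.id
 WHERE s.userid IS NULL;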
Perfect! This is exactly what I needed. Didn't know postgres
supported subselects like that. Thanks.
-Michael
> select min(amtsum), max(amtsum), avg(amtsum)
> from (select sum(amount) as amtsum from payments group by userid)
> ss;
>
> In prior versions you'd need to do the initial select int
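The truncated line presumably refers to materializing the inner aggregate first. A sketch of that older approach, using the payments table from this thread:

-- pre-7.1 approach: run the grouped sum into a temp table first
SELECT userid, sum(amount) AS amtsum
  INTO TEMP ss
  FROM payments
 GROUP BY userid;

SELECT min(amtsum), max(amtsum), avg(amtsum) FROM ss;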
I have a table that looks like so:
userid | amount
---
1 | $500
2 | $400
2 | $-100
2 | $10
3 | $10
3 | $10
I run: select userid, sum(amount) from payments group by userid
userid | sum
--
1 | $500
2 | $310
3 | $20
I need to mo
I've got a select that pulls many values from the same table.
Basically, for a given formid there can be many fields, each one
depending on a definition. So form1 may be broken down as follows:
fieldid
1 firstname
2 lastname
3 postal code
Rather than sticking this data in
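One way to flatten this kind of fieldid/value layout without joining the table to itself once per field is conditional aggregation. A sketch, assuming a made-up formdata(formid, fieldid, value) table:

SELECT d.formid,
       max(CASE WHEN d.fieldid = 1 THEN d.value END) AS firstname,
       max(CASE WHEN d.fieldid = 2 THEN d.value END) AS lastname,
       max(CASE WHEN d.fieldid = 3 THEN d.value END) AS postal_code
  FROM formdata d
 GROUP BY d.formid;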
Ok, I've built the most insane query ever. It joins 11 tables, most
of which are the same table, just extracting different values. Here
is the query plan:
Nested Loop (cost=0.00..5011.89 rows=1 width=156)
  ->  Nested Loop (cost=0.00..4191.82 rows=1 width=140)
        ->  Nested Loop (cost=0.
Does anyone know how I can make a constraint on a key to enforce a 1
to n relationship where n>0?
I've invented an example to show the sort of constraint I need:
CREATE TABLE permissions (
    id int4,
    userid int4,
    perm int4,
    primary key (id, userid)
);
CREATE TABLE objects (
    id int4,
    per
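The objects definition is cut off above, so the column names below are guesses, but one way to force every object to have at least one matching permissions row is to make objects carry a NOT NULL reference to the permissions primary key:

CREATE TABLE objects (
    id     int4 PRIMARY KEY,
    permid int4 NOT NULL,
    userid int4 NOT NULL,
    FOREIGN KEY (permid, userid) REFERENCES permissions (id, userid)
);

With the referencing columns NOT NULL, each object is guaranteed at least one existing permissions row; enforcing the rule purely from the permissions side would need a trigger instead.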
I see that the REFERENCES constraint is a little more restrictive in
7.1. I need to have a table with a constraint so one of its columns
must exist in a second table. This is not a key, since there may be N
rows in the second table that match. Any ideas on how I should
create this?
CREATE
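Since a REFERENCES constraint must point at a unique or primary key, the usual fallback when the target column is not unique is a plpgsql trigger that checks for at least one matching row. A sketch with invented names (child.refval checked against parent.val), assuming plpgsql has been added to the database with createlang:

CREATE FUNCTION check_refval() RETURNS opaque AS '
BEGIN
    -- require at least one parent row with a matching value
    PERFORM 1 FROM parent WHERE val = NEW.refval;
    IF NOT FOUND THEN
        RAISE EXCEPTION ''refval % not found in parent'', NEW.refval;
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER child_refval_check BEFORE INSERT OR UPDATE ON child
    FOR EACH ROW EXECUTE PROCEDURE check_refval();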
>>> Best web-scripting (and compiling) language is PHP+Zend
>>> compiler PHP is extremely powerful as it combines the power of
>>> Perl, Java, C++, Javascript into one single language and it
>>> runs on all OSes - unixes and Windows NT/95.
>>
>> it seems that the author never used any other think
> from http://www.linux.org/docs/ldp/howto/PostgreSQL-HOWTO-11.html
>
>> Best web-scripting (and compiling) language is PHP+Zend
>> compiler PHP is extremely powerful as it combines the power of
>> Perl, Java, C++, Javascript into one single language and it
>> runs on all OSes - unixes and Windows
(999);
select * from objects;
select * from objcatalog;
commit;
-Michael
> Can you send the full schema of the tables you are using for
> this?
>
> On Tue, 16 Jan 2001, Michael Richards wrote:
>
>> Hi.
>>
>> I'm having trouble with committing a transaction.
Hi.
I'm having trouble with committing a transaction. Intuitively it
should work but does not.
I've got a table with 2 foreign keys, minrev and maxrev. They refer
to the revisionid value in another table. I need to update the
second table to reduce the revisionid, then insert a new row. At
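If the minrev/maxrev constraints are declared deferrable, both the update and the insert can run before the references are checked at commit. A sketch with invented table names (documents referencing revisions.revisionid):

CREATE TABLE revisions (revisionid int4 PRIMARY KEY);

CREATE TABLE documents (
    minrev int4 REFERENCES revisions (revisionid) DEFERRABLE INITIALLY DEFERRED,
    maxrev int4 REFERENCES revisions (revisionid) DEFERRABLE INITIALLY DEFERRED
);

BEGIN;
-- the foreign-key checks wait until COMMIT, so the data may be
-- temporarily inconsistent between these two statements
UPDATE revisions SET revisionid = 41 WHERE revisionid = 42;
INSERT INTO revisions (revisionid) VALUES (42);
COMMIT;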
Hi.
I've always had trouble understanding how to properly deal with the
following case.
I need to run an insert on a table but I don't want any other
transactions to commit the insert before me. Within the logic of the
program, I run a select to see if there is a value there, then the
insert
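The simplest guard is a unique constraint on the key, so a duplicate insert simply fails; if the check-then-insert logic has to stay, an explicit table lock closes the window between the two statements. A sketch with a made-up table slots(name):

BEGIN;
-- block other writers so nobody can insert between the check and the insert
LOCK TABLE slots IN SHARE ROW EXCLUSIVE MODE;
SELECT 1 FROM slots WHERE name = 'example';
-- run the insert only if the select above returned no row
INSERT INTO slots (name) VALUES ('example');
COMMIT;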
It appears that postgres 7.0 does not support repeatable read for
transaction isolation. Is this planned? If so, when?
-Michael
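For reference, 7.0 offers only READ COMMITTED (the default) and SERIALIZABLE; the latter is snapshot-based and at least as strict as repeatable read, so it is the usual stand-in:

BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- every statement in this transaction now sees one consistent snapshot
SELECT count(*) FROM payments;
COMMIT;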
Hi.
I've got a postgres system that keeps going into recovery mode. I
can't really find any docs on this. All of the postgres processes
will be in the STOP state and when I try to connect it will say "The
database is in recovery mode".
I suspect there is a query that is causing it to crash in
> As I mentioned earlier, your solution worked great. I am a bit
> puzzled about the syntax that is created by my old method, using
> "pg_dump -D -u -a". I wonder why it creates "create sequence ..."
> commands instead of "update sequence ..."?
That is a good question. I do not know the answer
Hi.
Here are the options for pg_dump:
-a dump out only the data, no schema
-c clean (drop) schema prior to create
-d dump data as INSERT, rather than COPY, commands
-D dump data as INSERT commands with attribute names
-h hostname  server host name
-i p
Hi.
I believe this to be a bug, but I am submitting it to the SQL list as
well in case I overlooked something.
I'm running
Postgres 7.0.2 on FreeBSD 4.0-STABLE/Intel
compiled with [gcc version 2.95.2 19991024 (release)]
I've got this interesting problem where the query plan is not what
> Should I use "PQexec()" or "conn.query()" for inserting information
> into my DB?
I wrote a class to encapsulate all the ugliness of it. Inside my
class I'm calling PQexec().
-Michael