Joseph Shraibman <[EMAIL PROTECTED]> writes:
> Do you really think I should do 1000 updates in a transaction instead of
> an IN with 1000 items? I can do my buffer flush any way I want, but I'd
> have to think the overhead of making 1000 calls to the backend would
> more than overwhelm the cost
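For reference, a hedged sketch of the two approaches under discussion (the table and column names are made up, and the id list is abbreviated; the real statement would carry the full list):

```sql
-- Approach 1: one statement, one round trip to the backend:
UPDATE table1 SET status = 2
WHERE id IN (101, 102, 103 /* ... the rest of the thousand ids ... */)
  AND status = 1;

-- Approach 2: many single-row updates batched in one transaction,
-- so the per-transaction commit overhead is at least paid only once:
BEGIN;
UPDATE table1 SET status = 2 WHERE id = 101 AND status = 1;
UPDATE table1 SET status = 2 WHERE id = 102 AND status = 1;
-- ... one statement per id ...
COMMIT;
```

The tradeoff is parse/round-trip overhead per statement versus the size of the expression the backend must evaluate for the IN list.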
I came across this patent in regard to main DB usage; am I the only one
who thinks this patent is ridiculous?
http://164.195.100.11/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=/netahtml/srchnum.htm&r=1&f=G&l=50&s1='5,832,497'.WKU.&OS=PN/5,832,497&RS=PN/5,832,497
Joseph Shraibman <[EMAIL PROTECTED]> writes:
> I recently tried to do a big update with postgres 7.1.2. The update was
> something like
> UPDATE table SET status = 2 WHERE id IN (a few thousand entries) AND
> status = 1;
> and I got:
> ERROR: Expression too complex: nesting depth exceeds max_ex
I don't know the cause, but if you only have to run this procedure once in
a while, you could select all the records that need to be updated, and use
a text editor to build a few thousand single update statements, then save
this file and echo it to the postgres backend through psql.
Good Luck!
I missed the first part, but if the numbers are rows in a table, why not
do something like:
numrows = select count(*) from table1 where some_condition
median_value = select some_col from table1 where some_condition order by
some_col limit numrows/2, 1
(or something very close to that)
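Spelled out in PostgreSQL syntax — which uses LIMIT n OFFSET m rather than the MySQL-style "LIMIT offset, count" shown above — the idea might look like this (table and column names are placeholders):

```sql
-- First count the qualifying rows:
SELECT count(*) FROM table1 WHERE some_condition;

-- Then fetch the middle row, substituting half the count for <numrows/2>:
SELECT some_col FROM table1 WHERE some_condition
ORDER BY some_col
LIMIT 1 OFFSET <numrows/2>;
```

This takes two round trips, since plain SQL has no way to feed the count of one query into the OFFSET of the next without client-side substitution.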
[EMAIL PROTECTED] (Konstantinos Agouros) writes:
> Since I must grant update/insert/delete access to this table to everybody
> that can use this application, how can I stop people from updating the data
> of the others.
Triggers that compare current_user to the userid column of the table,
perhaps
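As a rough sketch of such a trigger, in present-day PL/pgSQL syntax (7.x-era PostgreSQL declared trigger functions as returning opaque and used quoted function bodies; the table name accounts and a text-comparable userid column are assumptions here):

```sql
CREATE FUNCTION check_owner() RETURNS trigger AS $$
BEGIN
    -- OLD is the row being updated or deleted; refuse to touch other users' rows
    IF OLD.userid <> current_user THEN
        RAISE EXCEPTION 'you may only modify your own rows';
    END IF;
    IF TG_OP = 'DELETE' THEN
        RETURN OLD;   -- DELETE triggers must return OLD to let the delete proceed
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_owner
    BEFORE UPDATE OR DELETE ON accounts
    FOR EACH ROW EXECUTE PROCEDURE check_owner();
```

INSERTs would need a separate check on NEW.userid, since there is no OLD row at insert time.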
At 17:23 18.06.2001, you wrote:
>Thomas Seifert <[EMAIL PROTECTED]> writes:
> > Program received signal SIGSEGV, Segmentation fault.
> > 0x400252ef in resetPQExpBuffer () from /usr/local/pgsql/lib/libpq.so.2
>
> > Do you have any idea what went wrong?
>
>No ... could we see a debugger backtrace from the core dump?
Thomas Seifert <[EMAIL PROTECTED]> writes:
> #0 0x400252ef in resetPQExpBuffer () from /usr/local/pgsql/lib/libpq.so.2
> #1 0x4002537d in printfPQExpBuffer () from /usr/local/pgsql/lib/libpq.so.2
> #2 0x400213e7 in PQgetResult () from /usr/local/pgsql/lib/libpq.so.2
> #3 0x40021467 in PQexec (
Taken almost literally from the tutorial example
(http://www.postgresql.org/idocs/index.php?app-ecpg.html) the following code:
EXEC SQL DECLARE my_cursor CURSOR FOR SELECT a,b FROM lala WHERE a= :i;
EXEC SQL FETCH FORWARD NEXT FROM my_cursor INTO :tmpa,:tmpb;
throws the following error in the
"Thalis A. Kalfigopoulos" <[EMAIL PROTECTED]> writes:
> But the intermediate state cannot hold multiple values in an array
> (can it?)
Sure, why not? Might not scale too well to lots of values, however.
regards, tom lane
---(end of broadcast)---
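A sketch of such an array-state aggregate, in present-day syntax (the aggregate name is made up, and the 7.x-era CREATE AGGREGATE spelling differed; array_append is the built-in state transition function used here):

```sql
-- Collect all input values into an array. A finalfunc (not shown) could
-- then sort the array and pick the middle element to produce a median.
CREATE AGGREGATE collect(integer) (
    sfunc    = array_append,
    stype    = integer[],
    initcond = '{}'
);
```

As noted, the state array grows with the number of input rows, which is why this does not scale well to very large groups.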
"Vilson farias" <[EMAIL PROTECTED]> writes:
> Does anyone know what is this error?
> ERROR: cache lookup for userid 26 failed
Evidently pg_shadow has no entry with usesysid 26. Add it back...
regards, tom lane
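In the 7.x era the missing entry could be recreated with an explicit sysid (the username here is an assumption, and the WITH SYSID clause was dropped from later PostgreSQL releases):

```sql
-- Recreate the user whose pg_shadow row went missing, pinning usesysid to 26:
CREATE USER lostuser WITH SYSID 26;
```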
On Sat, Jun 16, 2001 at 12:41:57AM -0400, Alex Pilosov wrote:
> Actually, I just tried your original example, and it worked for me:
> use Apache::Session::Postgres;
>
> #if you want Apache::Session to open new DB handles:
>
> tie %hash, 'Apache::Session::Postgres', $id, {
Greetings,
Does anyone know what is this error?
ERROR: cache lookup for userid 26 failed
Some system tables are inaccessible in my database but I can access others. I tried to
check pg_tables, but it's blocked:
persona=> select * from pg_tables;
ERROR: cache lookup for userid 26 failed
per
Hi ppl,
I'm interested in calculating the median of a set of numbers. The algorithm
requires that all values are known in advance (ie stored in an array). So the question
is: how can I store everything first in an array so I can later process it given that
I'd like this to be an aggregat
"Thalis A. Kalfigopoulos" <[EMAIL PROTECTED]> writes:
> In the manual for creating aggregate functions
>(http://www.postgresql.org/idocs/index.php?sql-createaggregate.html) it reads:
> --->Alternatively, for an aggregate that does not examine its input
> values, the function takes just one argume
Alex Pilosov wrote:
>
> On Sat, 16 Jun 2001, will trillich wrote:
>
> > the manpages for Apache::Session::DBI still say that it
> > uses Apache::Session::DBIStore for its grunt work. whereas
> You still have the old manpages (and probably old scripts).
> CPAN's upgrades don't delete files that b
In the manual for creating aggregate functions
(http://www.postgresql.org/idocs/index.php?sql-createaggregate.html) it reads:
sfunc
The name of the state transition function to be called for each input data value.
This is normally a function of two arguments, the first being of type state
Thomas Seifert <[EMAIL PROTECTED]> writes:
> Program received signal SIGSEGV, Segmentation fault.
> 0x400252ef in resetPQExpBuffer () from /usr/local/pgsql/lib/libpq.so.2
> Do you have any idea what went wrong?
No ... could we see a debugger backtrace from the core dump?
"Tim Knowles" <[EMAIL PROTECTED]> writes:
> I have two machines one running 7.1.1 and the other 7.1.2. I've used
> pg_dump to dump the schema and data from the 7.1.1 system and loaded it into
> 7.1.2 for testing. Some of my queries now run a lot slower as the planner
> prefers to use a hash-join
I was under the impression that primary index is the same as clustered index i.e. the
order in the index matches the physical order the records are stored on disk thus
making it better when doing sequential accesses.
I assume that this is exactly the use of the CLUSTER command, to actually make
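Yes — as a sketch, with placeholder names (this is the 7.x spelling; later releases write CLUSTER tablename USING indexname):

```sql
-- Rewrite my_table in the physical order of my_index:
CLUSTER my_index ON my_table;
```

Note that this is a one-time reorder: rows inserted afterwards are not kept in index order, so the command has to be rerun periodically to maintain the clustering.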
"Gordan Bobic" <[EMAIL PROTECTED]> writes:
> Can somebody please explain to me why is it that 20 Gb of disk space
> is required to restore 5 GB of data?
You may wish to apply the patch at
http://www.ca.postgresql.org/mhonarc/pgsql-patches/2001-06/msg00061.html
> What is going on? I have had this
Hi,
I have two machines one running 7.1.1 and the other 7.1.2. I've used
pg_dump to dump the schema and data from the 7.1.1 system and loaded it into
7.1.2 for testing. Some of my queries now run a lot slower as the planner
prefers to use a hash-join instead of a nested loop. I have ru
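One way to investigate a plan change like this is the usual sketch: refresh the statistics, then compare EXPLAIN output with hash joins disabled for the session (the query below is a placeholder for the slow one):

```sql
VACUUM ANALYZE;              -- make sure the 7.1.2 planner has fresh statistics
SET enable_hashjoin TO off;  -- force the planner to consider the nested-loop plan
EXPLAIN SELECT * FROM t1 JOIN t2 ON t1.id = t2.id;
```

If the nested-loop plan really is faster, the EXPLAIN cost estimates with and without the setting show where the planner's row estimates went wrong.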