John D. Burger wrote:
D. Dante Lorenso wrote:
I'd really like to have ORDER BY and LIMIT for UPDATE and DELETE
commands. Is this possible?
UPDATE invoice i
SET reserve_ts = NOW() + '1 hour'::interval
FROM account a
WHERE a.acct_id = i.acct_id
AND i.reserve_ts < NOW()
AND
Kris Jurka, 13.12.2007 08:24:
Using a PreparedStatement with setBinaryStream() to insert the blob
and getBinaryStream() to read the BLOB works fine for me.
It depends how you want to handle binary data on the server side.
get/setBinaryStream only work with the bytea data type.
Uwe C. Schroeder wrote:
Ok, something I've been toying around with for a while.
Here's the scenario:
Imagine a blogging app.
I have a table for blogs with a blog_id (primary key)
and a table blog_comments also with a comment_id as primary key and a foreign
key holding the blog_id the post
On Thu, 2007-12-13 at 06:27 +, Gregory Stark wrote:
Tom Lane [EMAIL PROTECTED] writes:
Joshua D. Drake [EMAIL PROTECTED] writes:
Exactly. Which is the point I am making. Five minutes of transactions
is nothing (speaking generally). In short, if we are in recovery, and
we are not
Bill Moran wrote:
D. Dante Lorenso [EMAIL PROTECTED] wrote:
All,
I'd really like to have ORDER BY and LIMIT for UPDATE and DELETE
commands. Is this possible?
UPDATE invoice i
SET reserve_ts = NOW() + '1 hour'::interval
FROM account a
WHERE a.acct_id = i.acct_id
AND
On Wednesday 12 December 2007 11:57:48 Richard Broersma Jr wrote:
There is a power distribution hierarchy that they would like to analyze.
1) the average pF of all motors in a Motor Control Center (MCC).
2) the average pF of all MCCs that are fed from a Load Center (LC).
3) the average pF
On Wednesday 12 December 2007 03:42:55 pilzner wrote:
Does stuff like this cause any aches and pains to developers out there, or
do I just need to get in a new mindset??? Also, is there a way to be sure
the primary key is *ONLY* ever given a value by serial, and not subject to
updates???
Simon Riggs [EMAIL PROTECTED] writes:
On Thu, 2007-12-13 at 06:27 +, Gregory Stark wrote:
Heikki proposed a while back to use posix_fadvise() when processing logs to
read-ahead blocks which the recovery will need before actually attempting to
recover them. On a raid array that would bring
--- On Wed, 12/12/07, Uwe C. Schroeder [EMAIL PROTECTED] wrote:
The comments table also has a field that holds a
self-reference to comment id
for comments on comments (on comments) of a blog.
The model that you are referring to here is the adjacency list hierarchy
model. There are two
I'm using centos 5 as the OS so, there's no fancy dtrace to look at
which process is causing my disks to thrash.
I have 4 disks in the box. (all ide, 7200rpm)
1 OS disk [hda]
2 raided (1) disks [hdb/hdc]
1 pg_xlog disk (and also used as an alternate tablespace for [hdd]
temp/in-transit files
On Dec 13, 2007, at 6:55, Uwe C. Schroeder wrote:
What I would like to do is to create a view that sucks the comments
for a
given blog_id in the order they should be displayed (very commonly
seen in
pretty much all blogging apps), i.e.
Blog
comment 1
comment on comment 1
comment on
--- On Thu, 12/13/07, Jorge Godoy [EMAIL PROTECTED] wrote:
Actually pF is measured from -1 to 1.
they tell you that they want to do some pF correction on
the facility and you
need to have that counted as well.
Thanks for the correction, and good point. :-)
Regards,
Richard Broersma Jr.
On Dec 13, 2007, at 10:19, Jorge Godoy wrote:
On Wednesday 12 December 2007 03:42:55 pilzner wrote:
Does stuff like this cause any aches and pains to developers out
there, or
do I just need to get in a new mindset??? Also, is there a way to
be sure
the primary key is *ONLY* ever given
Good morning to everybody,
I have to resolve this situation: I have a collection of many different databases, all identical, and the names of those databases are stored inside a table in another central
management database.
In an ideal world, I'd like with a single query to be able to recover the name
Hi ,
Try this out
SELECT count(o.id)
FROM output_table o , parent_table p
WHERE o.pid=p.pid AND o_date < now()
GROUP BY p.pid ;
On Thu, 13 Dec 2007 10:00:56 + (GMT)
Ashish Karalkar [EMAIL PROTECTED] wrote:
Hello List member,
I have a table which has the following structure
my_table:
Gregory Stark wrote:
Simon Riggs [EMAIL PROTECTED] writes:
It's a good idea, but it will require more complex code. I prefer the
simpler solution of using more processes to solve the I/O problem.
Huh, I forgot about that idea. Ironically that was what I suggested when
Heikki described
Alban Hertroys wrote:
The problem the OP is pointing out seems difficult to solve. A
sequence doesn't know about existing records with a possibly higher
number than the sequence is at.
This may be worked around by keeping a list of numbers used up
beyond the current sequence value so the
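A common fix for a sequence that has fallen behind out-of-band inserts is to bump it to the table's current maximum; a sketch with a hypothetical table and sequence name:

```sql
-- Assumes a table my_table with a serial column id backed by the
-- (hypothetical) sequence my_table_id_seq. setval() moves the sequence
-- so the next nextval() returns max(id) + 1.
SELECT setval('my_table_id_seq', (SELECT max(id) FROM my_table));
```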
In response to D. Dante Lorenso [EMAIL PROTECTED]:
Bill Moran wrote:
D. Dante Lorenso [EMAIL PROTECTED] wrote:
All,
I'd really like to have ORDER BY and LIMIT for UPDATE and DELETE
commands. Is this possible?
UPDATE invoice i
SET reserve_ts = NOW() + '1 hour'::interval
On 12/13/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Good morning to everybody,
I have to resolve this situation: I have a collection of many different
databases, all identical, and the names of those databases are stored inside a
table in another central
management database.
In an ideal
Hello,
when I turn autovacuum on, where can I check what it does? Is there any
log-file?
Regards
Michaela
D. Dante Lorenso wrote:
Doesn't this create a race condition in the query where multiple
processes might find the same invoice_id while executing the inner
select. The update would then update the same record more than
once during the update step and 2 processes might get the same
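A workaround for both the missing ORDER BY/LIMIT and the race concern is to pick the row in a locking subquery; a sketch based on the invoice table from the quoted query (the FOR UPDATE placement is an assumption about the intended locking, and under READ COMMITTED it narrows but does not fully close the race between concurrent sessions):

```sql
UPDATE invoice
SET reserve_ts = NOW() + '1 hour'::interval
WHERE invoice_id IN (
    SELECT invoice_id
    FROM invoice
    WHERE reserve_ts < NOW()
    ORDER BY invoice_id
    LIMIT 1
    FOR UPDATE    -- lock the chosen row so a concurrent session blocks on it
);
```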
[EMAIL PROTECTED] (Scott Marlowe) writes:
On Dec 10, 2007 2:19 PM, Chris Browne [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] (Josh Harrison) writes:
Does Slony support Postgres major version upgrades? I.e., will it
replicate between different major versions?
Yes, that's one of the major use
On Thu, 2007-12-13 at 10:18 -0300, Alvaro Herrera wrote:
Gregory Stark wrote:
Simon Riggs [EMAIL PROTECTED] writes:
It's a good idea, but it will require more complex code. I prefer the
simpler solution of using more processes to solve the I/O problem.
Huh, I forgot about that
Hello All:
Ashish wants the latest output_date but less than now():
he wants to retrieve all the output_ids which have the latest
output_date but less than now() (no future output_dates)
The query written by Tirnath will return the count of all output ids for a given
parent id where output_date is less than
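One way to express "latest output_date per parent, but not in the future" is PostgreSQL's DISTINCT ON; a sketch using the table and column names from the quoted query (output_date is assumed from the description):

```sql
-- DISTINCT ON (o.pid) keeps the first row per pid; the ORDER BY makes
-- that first row the one with the most recent non-future output_date.
SELECT DISTINCT ON (o.pid) o.id, o.pid, o.output_date
FROM output_table o
WHERE o.output_date < now()
ORDER BY o.pid, o.output_date DESC;
```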
On Thu, 13 Dec 2007, Thomas Kellerer wrote:
Can you point me to the manual where these types of BLOBs are described? The
Data Types chapter does not list them as far as I can tell.
Apparently the only documentation is in the client interfaces
section...
On Mon, 10 Dec 2007, A. Ozen Akyurek wrote:
We have a large table (about 9,000,000 rows and total size is about 2.8 GB)
which is exported to a binary file.
How was it exported? With COPY tablename TO 'filename' WITH BINARY?
The BINARY key word causes all data to be stored/read as binary
From: Magnus Hagander [EMAIL PROTECTED]
Subject: Re: [GENERAL] timestamp with time zone
Date: Thu, 13 Dec 2007 07:45:18 +0100
Message-ID: [EMAIL PROTECTED]
Tatsuo Ishii wrote:
Hi,
test=# select t at time zone 'jst' from t2;
 timezone
-------------
 17:34:56+09
Stefan Schwarzer wrote:
Hi there,
I have two rather simple queries, which I would to UNION, but somehow I
always get an error message for the UNION (ERROR: syntax error at or
near UNION)
SELECT
ORDER BY
LIMIT 1
UNION ALL
...
I think it's complaining about the order by/limit. The UNION
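The usual fix is to parenthesize each branch so the ORDER BY/LIMIT bind to the individual SELECTs rather than to the whole UNION; a sketch with placeholder table and column names:

```sql
-- Each parenthesized SELECT keeps its own ORDER BY and LIMIT;
-- an unparenthesized ORDER BY would apply to the UNION as a whole.
(SELECT name, year_start, value FROM table_a ORDER BY year_start DESC LIMIT 1)
UNION ALL
(SELECT name, year_start, value FROM table_b ORDER BY year_start DESC LIMIT 1);
```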
On Dec 12, 2007 11:10 PM, Tatsuo Ishii [EMAIL PROTECTED] wrote:
Hi,
Can someone enlighten me?
I know that I can insert date/time data along with time zone info into
the timestamp with time zone data type. My question is, can I extract
the *original* time zone info afterward? It seems
Hello
use derived tables
SELECT *
FROM (SELECT c.name, d.year_start, d.value
FROM emissions_so2_total_rivm AS d
LEFT JOIN
countries AS c ON c.id = d.id_country
WHERE ((c.iso_2_code = 'CH') OR
Hi there,
I have two rather simple queries, which I would to UNION, but somehow
I always get an error message for the UNION (ERROR: syntax error at
or near UNION)
Each query selects the country name (Switzerland in this case), the
most recent year in the table and its value.
What is
On Dec 12, 2007 9:16 PM, Robert Treat [EMAIL PROTECTED] wrote:
On Wednesday 12 December 2007 10:53, Thomas Kellerer wrote:
At least for Oracle it's not mainly the order that improves the
performance, but the fact that all the data is kept in the index, so
Oracle does not need to go back to
On Dec 13, 2007 5:06 AM, Ow Mun Heng [EMAIL PROTECTED] wrote:
I'm using centos 5 as the OS so, there's no fancy dtrace to look at
which process is causing my disks to thrash.
I have 4 disks in the box. (all ide, 7200rpm)
1 OS disk [hda]
2 raided (1) disks [hdb/hdc]
1 pg_xlog disk (and
I'm running some compatibility and performance tests between two
servers with 8.1 and 8.3 as follows :
[1] 8.1: postgres 8.1.9 / 1GB Ram / P4 3Ghz / Debian 2.6.18-4-686
[2] 8.3: postgres 8.3 b4 / 1GB Ram / P4 3Ghz / Debian 2.6.18-4-686
[2] is faster for every single operation, but I found
Vacuum only writes normal operations messages to the logs if you have
log_min_messages set to DEBUG2 or DEBUG3 (with DEBUG3 giving more
output). I'm not sure what query stats are kept for autovacuum
activities in 8.3, but in versions <= 8.2.x you can run the following
query to see what
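The quoted query is cut off; as a sketch, 8.2's pg_stat_all_tables view exposes per-table vacuum and analyze timestamps:

```sql
-- last_autovacuum / last_autoanalyze are NULL for tables autovacuum
-- has not yet touched.
SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_all_tables
ORDER BY relname;
```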
Hi all.
In order to speed up the COPY ... FROM ... command, I've
disabled everything (primary key, not null, references, default and indexes)
in the table definition before doing the actual COPY.
Later I can restore them with ALTER TABLE ... and CREATE INDEX ...
My question is: is all this
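The pattern being described, sketched with a hypothetical table (the point is the ordering of operations, not the exact names):

```sql
-- 1. Drop constraints and indexes so COPY does no index maintenance.
ALTER TABLE big_table DROP CONSTRAINT big_table_pkey;
DROP INDEX big_table_val_idx;

-- 2. Bulk load.
COPY big_table FROM '/tmp/big_table.dat';

-- 3. Restore them afterwards.
ALTER TABLE big_table ADD PRIMARY KEY (id);
CREATE INDEX big_table_val_idx ON big_table (val);
```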
André Volpato [EMAIL PROTECTED] writes:
Besides the (expected) weak guess on rows for both servers on seq scan
on jtest, there is something nasty with [2] that prevents the planner from
using the index.
There isn't anything preventing either version from choosing any of
the
I've two tables related via a id-field.
Table public.fn_kat
  Column   |  Type
-----------+---------
 id        | integer
 kategorie | text
Table
Reg Me Please [EMAIL PROTECTED] writes:
In order to speed up the COPY ... FROM ... command, I've
disabled everything (primary key, not null, references, default and indexes)
in the table definition before doing the actual COPY.
Later I can restore them with ALTER TABLE ... and CREATE INDEX ...
Thanks everyone.
This was exactly what I needed. I went with connectby as Dante recommended
and it works like a charm.
UC
--
Open Source Solutions 4U, LLC 1618 Kelly St
Phone: +1 707 568 3056 Santa Rosa, CA 95401
Cell: +1 650 302 2405 United States
Fax:+1 707
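For reference, a minimal connectby() call for the comment tree discussed in this thread (connectby comes from contrib/tablefunc; the column names are assumptions about the blog_comments schema):

```sql
-- Requires contrib/tablefunc to be installed. Walks blog_comments
-- starting from comment_id 1, to unlimited depth (0), building a
-- '~'-delimited branch path for ordering the display.
SELECT *
FROM connectby('blog_comments', 'comment_id', 'parent_comment_id', '1', 0, '~')
  AS t(comment_id int, parent_comment_id int, level int, branch text);
```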
peter pilsl wrote:
But there is a problem now: there are also entries in fn_kat which don't
have corresponding entries in fn_dokumente, and these entries should be
listed too, with the proper count=0!
Your problem is not on the GROUP BY, but rather that you need an outer
join. Try something
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On Wed, 12 Dec 2007 23:08:35 -0800
Joshua D. Drake [EMAIL PROTECTED] wrote:
Tom Lane wrote:
Joshua D. Drake [EMAIL PROTECTED] writes:
Tom Lane wrote:
You sure about that? I tested CVS HEAD just now, by setting the
checkpoint_ parameters
André Volpato [EMAIL PROTECTED] writes:
And the query:
# select j.i, t.t from jtest j inner join test t on t.i = j.i where (j.i*1.5)
between 300 and 400;
Planner for [1]:
Nested Loop (cost=0.00..270192.02 rows=2 width=41) (actual
Planner for [2]:
Hash Join
Gregory Stark wrote:
André Volpato [EMAIL PROTECTED] writes:
And the query:
# select j.i, t.t from jtest j inner join test t on t.i = j.i where (j.i*1.5)
between 300 and 400;
Planner for [1]:
Nested Loop (cost=0.00..270192.02 rows=2 width=41) (actual
Planner for [2]:
Gregory Stark [EMAIL PROTECTED] writes:
But I'm curious if you turn off mergejoin whether you can get a Nested Loop
plan and what cost 8.3 gives it. It looks to me like 8.3 came up with a higher
cost for Nested Loop than 8.1.9 (I think 8.1.10 came out with some planner
fixes btw) and so it's
Tom Lane wrote:
Gregory Stark [EMAIL PROTECTED] writes:
But I'm curious if you turn off mergejoin whether you can get a Nested Loop
plan and what cost 8.3 gives it. It looks to me like 8.3 came up with a higher
cost for Nested Loop than 8.1.9 (I think 8.1.10 came out with some planner
Tom Lane [EMAIL PROTECTED] wrote:
Andreas Kretschmer [EMAIL PROTECTED] writes:
cast it to varchar(8):
As I said in a private mail to Scott (sorry): the suggested way doesn't
work, at least with 8.1. Maybe this works better in more recent
versions.
Yes, it works a lot better in 8.2:
Tom Lane wrote:
Gregory Stark [EMAIL PROTECTED] writes:
But I'm curious if you turn off mergejoin whether you can get a Nested Loop
plan and what cost 8.3 gives it. It looks to me like 8.3 came up with a higher
cost for Nested Loop than 8.1.9 (I think 8.1.10 came out with some
André Volpato [EMAIL PROTECTED] writes:
Gregory Stark wrote:
André Volpato [EMAIL PROTECTED] writes:
I think the answer is that if you have bad statistics you'll get a bad plan
and which bad plan is going to be pretty much random.
I believe the statistics are ok, I've run
On Thursday 13 December 2007 19:56:02 Tom Lane wrote:
Reg Me Please [EMAIL PROTECTED] writes:
In order to speed up the COPY ... FROM ... command, I've
disabled everything (primary key, not null, references, default and
indexes) in the table definition before doing the actual COPY.
On Wed, Dec 12, 2007 at 12:28:37PM -0800, pilzner wrote:
that's what I'm familiar with from discussion, just to get a feel of why it's done
that way, if I'm doing anything wrong, or if there is an accepted way to
lock it down.
It'd be easy to lock down with a trigger that RAISEs ERROR in case OLD.id
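Such a trigger could look like this (table and column names hypothetical; 8.x spells the last clause EXECUTE PROCEDURE):

```sql
-- Reject any UPDATE that tries to change id, so only the serial
-- sequence ever assigns it.
CREATE OR REPLACE FUNCTION forbid_id_update() RETURNS trigger AS $$
BEGIN
    IF NEW.id <> OLD.id THEN
        RAISE EXCEPTION 'id is assigned by the sequence and must not be updated';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER forbid_id_update
    BEFORE UPDATE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE forbid_id_update();
```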
peter pilsl [EMAIL PROTECTED] writes:
But there is a problem now: there are also entries in fn_kat which don't
have corresponding entries in fn_dokumente, and these entries should be
listed too, with the proper count=0!
How to achieve this?
LEFT JOIN before the group by?
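The shape of that fix, with an assumed join column between the two tables (kat_id is hypothetical):

```sql
-- count(d.id) counts only matched rows, so categories with no
-- documents come out with 0 rather than disappearing.
SELECT k.id, k.kategorie, count(d.id) AS doc_count
FROM fn_kat k
LEFT JOIN fn_dokumente d ON d.kat_id = k.id
GROUP BY k.id, k.kategorie;
```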
André Volpato [EMAIL PROTECTED] writes:
I think I found the answer!

8.1: likes nested loop even after vacuumdb on the database.

8.3: likes hash at first time but:
- after vacuumdb *on the database* (I was running it on the tables),
it turns out to:
  Merge Join
On Dec 13, 2007 4:31 PM, Reg Me Please [EMAIL PROTECTED] wrote:
On Thursday 13 December 2007 19:56:02 Tom Lane wrote:
Reg Me Please [EMAIL PROTECTED] writes:
In order to speed up the COPY ... FROM ... command, I've
disabled everything (primary key, not null, references, default and
Dann Corbit wrote:
What happens if you disable the net card on your server machine?
Hi Dann,
The connection is from an IIS worker process running on the same machine.
My problem is that I want to disable connections for a single database.
Disabling the pgsql service or disabling the card
Thomas H. wrote:
On Tue, Dec 11, 2007 at 05:50:46PM -0500, Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
Howard Cole wrote:
I take it from the lack of response that nobody knows how to kill
a connection from the postgresql side on windows?
You can't, short of sending a signal to
wasn't the OP asking for a way to kill active connections to a db?
afaik pgAdmin3 does provide this functionality:
pgAdmin3 -> Tools -> Server Status
there you can easily terminate connections / transactions to a
particular db. works pretty well in my test, i can kill active
connections and drop
Which you can do, no? I thought pg_ctl's kill option was invented
specifically to make this less painful on Windows.
I shall look into the pg_ctl options to see if the kill option does
what taskill cannot (thanks for the heads up on that)
Using
$ pg_ctl kill TERM [pid]
worked great. Since
Thomas H. wrote:
wasn't the OP asking for a way to kill active connections to a db?
afaik pgAdmin3 does provide this functionality:
pgAdmin3 -> Tools -> Server Status
there you can easily terminate connections / transactions to a
particular db. works pretty well in my test, i can kill active
Guys,
I've got a strange situation where the instance may crash sometimes on a
plpgsql function call that performs several inserts into tables with
triggers attached. Are there any known bugs regarding plpgsql and
triggers in 8.2.5 ? Thanks!
Best regards,
Alex Vinogradovs
Kris and all,
Here is the query I will call to get the names of columns by ordinal
position. Do you see any compatibility problems with older server
versions, or other issues?
SELECT column_name
FROM information_schema.columns
WHERE table_catalog=? AND table_schema=? AND table_name=?
ORDER
Alex Vinogradovs [EMAIL PROTECTED] writes:
I've got a strange situation where the instance may crash sometimes on a
plpgsql function call that performs several inserts into tables with
triggers attached. Are there any known bugs regarding plpgsql and
triggers in 8.2.5 ? Thanks!
Nope; can you
There is no solid test scenario, as it doesn't happen every time,
and I couldn't reproduce it by manually running the queries.
Not the entire thing was compiled with debugging symbols, here is
what I get for stack trace:
#0 0x005492a9 in spi_printtup ()
#1 0x0054a0bc in
Alex Vinogradovs [EMAIL PROTECTED] writes:
Not the entire thing was compiled with debugging symbols, here is
what I get for stack trace:
Not much help. We might get a little further if you recompile with
--enable-debug (and I'd suggest adding --enable-cassert too) and then
get a stack trace
On Thu, 13 Dec 2007, Ken Johanson wrote:
Here is the query I will call to get the names of columns by ordinal
position. Do you see any compatibility problems with older server versions, or
other issues?
SELECT column_name
FROM information_schema.columns
WHERE table_catalog=? AND
I'm starting to perform some basic housekeeping to try to trim some big
tables (~200 million rows - ~50GB+indexes) into separate partitions (via
inheritance).
The current issue which prompted me to do such housekeeping is due to
long database wide vacuum time. (it went from 2 hours to 4 hours to
Kris Jurka wrote:
Using pg_catalog tables is better than using information_schema because
of the way permissions work. For information_schema you must be the
table owner, while people who only have permissions to access a table
will most likely be able to read pg_catalog.
Do you have
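A pg_catalog equivalent of the information_schema query, sketched (the joins use stable catalog columns, but this is untested against very old servers):

```sql
-- Lists ordinary columns of one table in ordinal order, readable by
-- anyone who can see the catalogs, not just the table owner.
SELECT a.attname
FROM pg_catalog.pg_attribute a
JOIN pg_catalog.pg_class c ON c.oid = a.attrelid
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = ? AND c.relname = ?
  AND a.attnum > 0 AND NOT a.attisdropped
ORDER BY a.attnum;
```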
On Dec 13, 2007 6:06 AM, Ow Mun Heng [EMAIL PROTECTED] wrote:
I'm using centos 5 as the OS so, there's no fancy dtrace to look at
which process is causing my disks to thrash.
I have 4 disks in the box. (all ide, 7200rpm)
1 OS disk [hda]
2 raided (1) disks [hdb/hdc]
1 pg_xlog disk (and
Oh great. Thanks a lot.
But now, I have another problem in this context. If I use text in the
SELECT statement (so that the final output gives me the names of the
selected variables, plus the year and the value) then I get this
error message: ERROR: failed to find conversion function from
Merlin Moncure [EMAIL PROTECTED] writes:
there are a few things that I can think of that can cause postgres
to cause i/o on a drive other than the data drive:
* logging (eliminate this by moving logs temporarily)
* swapping (swap is high and changing, other ways)
* dumps, copy statement
On Thu, 13 Dec 2007, Ow Mun Heng wrote:
I'm using centos 5 as the OS so, there's no fancy dtrace to look at
which process is causing my disks to thrash.
Does plain old top show you anything interesting? If you hit 'c' after
starting it you'll get more information about the postgres
But now, I have another problem in this context. If I use text in the
SELECT statement (so that the final output gives me the names of the
selected variables, plus the year and the value) then I get this
error message: ERROR: failed to find conversion function from
unknown to text
[squint...]
On Fri, 2007-12-14 at 01:54 -0500, Tom Lane wrote:
Merlin Moncure [EMAIL PROTECTED] writes:
there are a few things that I can think of that can cause postgres
to cause i/o on a drive other than the data drive:
* logging (eliminate this by moving logs temporarily)
I'll have to try this
Stefan Schwarzer [EMAIL PROTECTED] writes:
But now, I have another problem in this context. If I use text in the
SELECT statement (so that the final output gives me the names of the
selected variables, plus the year and the value) then I get this
error message: ERROR: failed to find
Ow Mun Heng [EMAIL PROTECTED] writes:
vmstat would confirm or disprove that particular guess, since it tracks
swap I/O separately.
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa