On Apr 7, 2005, at 5:45 PM, Carlos Moreno wrote:
The thing seems to work -- I had to go in a shell as user
postgres and execute the command:
$ createlang -d dbname plpgsql
(I'm not sure I understand why that is necessary, or
what implications -- positive or negative -- it may have)
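A plausible reading of why that step is needed: procedural languages such as PL/pgSQL are installed per-database, not server-wide, so each database that defines plpgsql functions needs createlang run against it once. A minimal sketch confirming the language works afterward (function name is illustrative):

```sql
-- After: createlang -d dbname plpgsql
-- CREATE FUNCTION ... LANGUAGE plpgsql should now succeed:
CREATE FUNCTION add_one(i integer) RETURNS integer AS $$
BEGIN
    RETURN i + 1;
END;
$$ LANGUAGE plpgsql;

SELECT add_one(41);  -- 42
```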
As a security
Per a thread a while back, the discussion was along the lines
of serving data up to the web quickly.
Our app currently pulls a bunch of data to several query
pages.
I have tried and not gotten the queries to return as fast as
they do now, which is a huge disappointment, as the hardware is
On Apr 8, 2005, at 8:28 AM, John DeSoi wrote:
On Apr 7, 2005, at 5:45 PM, Carlos Moreno wrote:
The thing seems to work -- I had to go in a shell as user
postgres and execute the command:
$ createlang -d dbname plpgsql
(I'm not sure I understand why that is necessary, or
what implications --
On Apr 8, 2005, at 9:41 AM, Sean Davis wrote:
Just one detail, but in the form of a question. In the original
posting, I think the trigger was doing the logging for something
happening on a table as a before insert or update--I may be wrong on
that detail. I would think of doing such actions
Sean Davis [EMAIL PROTECTED] writes:
Just one detail, but in the form of a question. In the original
posting, I think the trigger was doing the logging for something
happening on a table as a before insert or update--I may be wrong on
that detail. I would think of doing such actions AFTER
On Fri, Apr 08, 2005 at 10:36:26AM -0400, Tom Lane wrote:
AFAICS the only way that you could get into a can't-roll-back situation
is if the trigger tries to propagate the update outside the database.
For instance, the proverbial trigger to send mail: once sent you can't
cancel it. But really
On Fri, Apr 08, 2005 at 09:29:13AM -0400, Joel Fradkin wrote:
My understanding was this gets slower as you move further into the data, but
we have several options to modify the search, and I do not believe our
clients will page very far into a dataset.
It gets slower because when you do an
Andrew Sullivan [EMAIL PROTECTED] writes:
On Fri, Apr 08, 2005 at 10:36:26AM -0400, Tom Lane wrote:
AFAICS the only way that you could get into a can't-roll-back situation
is if the trigger tries to propagate the update outside the database.
For instance, the proverbial trigger to send mail:
Tom,
Thanks for setting the record straight. It has been a while since I
have written a trigger and I forgot that you can't modify the row in
the AFTER trigger. Makes perfect sense.
For the record, here is what the docs say:
Typically, row before triggers are used for checking or modifying the
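A hedged sketch of the distinction being discussed (table and function names are made up): a BEFORE ROW trigger may modify NEW before it is stored, while an AFTER ROW trigger sees the final row but cannot change it, which is why it suits audit logging:

```sql
-- Illustrative audit pattern: log after the fact, never modify the row.
CREATE FUNCTION log_change() RETURNS trigger AS $$
BEGIN
    INSERT INTO audit_log (tbl, changed_at) VALUES (TG_RELNAME, now());
    RETURN NULL;  -- the return value of an AFTER ROW trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_audit AFTER INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE log_change();
```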
On Fri, Apr 08, 2005 at 11:35:47AM -0400, Tom Lane wrote:
What this does do for you is replace the risk of phantom emails (mail
sent but corresponding action inside the database never committed)
with the risk of duplicate emails (mail-sender sends you another one
after it restarts). In most
Tom Lane wrote:
The hypothetical mail-sending process
would presumably want to send mail and then delete the associated record
from the table of pending mails ... so what if it fails after sending
the mail and before committing the delete?
What this does do for you is replace the risk of phantom
On Fri, 2005-04-08 at 11:07 -0400, Andrew Sullivan wrote:
On Fri, Apr 08, 2005 at 09:29:13AM -0400, Joel Fradkin wrote:
Is there a fast way to get the count?
Not really, no. You have to perform a count() to get it, which is
possibly expensive. One way to do it, though, is to do
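One cheap alternative, when an exact number is not required, is the planner's own row estimate, which VACUUM/ANALYZE keeps roughly current. A sketch (this only estimates the whole table, not an arbitrary WHERE condition):

```sql
-- Exact but potentially slow:
SELECT count(*) FROM mytable WHERE somecondition;

-- Cheap approximation of the table's total row count:
SELECT reltuples FROM pg_class WHERE relname = 'mytable';
```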
On Fri, 2005-04-08 at 09:29 -0400, Joel Fradkin wrote:
Our app currently pulls a bunch of data to several query pages.
My idea is to use the limit and offset to return just the first 50
records, if they hit next I can set the offset.
My understanding was this gets slower as you move
Ragnar Hafstað [EMAIL PROTECTED] writes:
you might reduce the performance loss if your dataset is ordered by
a UNIQUE index.
select * from mytable where somecondition
ORDER by uniquecol limit 50;
and next:
select * from mytable where somecondition
Hello Everyone,
Currently, here at work, I am doing the whole
'advocacy' part of PostgreSQL. It's not really hard to
do, as the other databases are MySQL and Sybase ;)
There is obviously a whole spat of data munging
going on in the background, and I noticed that psql in
8.0.1 now
select * from mytable where somecondition AND uniquecol > ?
ORDER by uniquecol limit 50 OFFSET 50;
where the ? is placeholder for last value returned by last query.
Uh, you don't want the OFFSET there do you? But otherwise, yeah,
this is a popular solution for
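Putting the two messages together, a sketch of the pattern as corrected (no OFFSET on the follow-up query; the parameter, shown as $1, carries the last key value the previous page returned, so an index on uniquecol can seek straight to it):

```sql
-- First page:
SELECT * FROM mytable WHERE somecondition
    ORDER BY uniquecol LIMIT 50;

-- Next page: no OFFSET; filter past the last uniquecol value seen:
SELECT * FROM mytable WHERE somecondition AND uniquecol > $1
    ORDER BY uniquecol LIMIT 50;
```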
On Fri, 2005-04-08 at 12:32 -0400, Tom Lane wrote:
Ragnar Hafstað [EMAIL PROTECTED] writes:
you might reduce the performance loss if your dataset is ordered by
a UNIQUE index.
select * from mytable where somecondition
ORDER by uniquecol limit
On Apr 8, 2005, at 10:59 AM, Andrew Sullivan wrote:
wakes up the mail-sender client with the NOTIFY; the NOTIFY and the
commit to the mail-it table only happen in that case if the
transaction commits. And since mail is async anyway, the extra few
seconds shouldn't make any difference, right?
I
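A hedged sketch of the queue-plus-NOTIFY pattern being described (table, column, and function names are illustrative). The trigger only queues the mail inside the transaction; an external client listens, sends, then deletes the pending row. If the client crashes after sending but before its DELETE commits, the worst case is a duplicate mail rather than a phantom one:

```sql
CREATE TABLE mail_pending (
    id        serial PRIMARY KEY,
    recipient text,
    body      text
);

CREATE FUNCTION queue_mail() RETURNS trigger AS $$
BEGIN
    INSERT INTO mail_pending (recipient, body)
        VALUES (NEW.email, 'something changed');
    NOTIFY mail_pending;  -- delivered to listeners only if we commit
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;
```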
Hello Keith,
well, I can understand that 3265 appears to be a
string, but, I was under the impression that -everything-
in a CSV format file could have ' ' around it? Is this not
the case ?
Sorry if I am being completely insane here :)
Steph
On Fri, Apr 08, 2005 at
On Fri, Apr 08, 2005 at 02:25:13PM -0400, Stef wrote:
well, I can understand that 3265 appears to be a
string, but, I was under the impression that -everything-
in a CSV format file could have ' ' around it? Is this not
the case ?
See the documentation for COPY -- the default quote
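For context, COPY's CSV mode uses the double quote as its default quote character; single quotes are just ordinary data. A sketch of overriding it (file path is illustrative):

```sql
-- Default CSV behavior is equivalent to: WITH CSV QUOTE '"'
-- To treat single quotes as the quote character instead:
COPY mytable FROM '/tmp/data.csv' WITH CSV QUOTE '''';
```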
Vivek Khera [EMAIL PROTECTED] writes:
I have a lot of processing that could benefit from this type of
synchronization, except the fact that there's no Pg command to wait
until I get a notify message.
This is a client library deficiency, not a problem with the backend or
the protocol. In
On Apr 8, 2005 1:10 PM, Scott Marlowe [EMAIL PROTECTED] wrote:
On Fri, 2005-04-08 at 12:08, Joel Fradkin wrote:
Thanks all. I might have to add a button to do the count on command so they don't get the hit. I would want it to return the count of the condition, not the currently displayed number of
Bob Henkel [EMAIL PROTECTED] writes:
From a simple/high-level perspective, why is this? That is, why can't
PostgreSQL do aggregates as well across large chunks of data? I'm
assuming it's extremely complicated. Otherwise the folks around here
would have churned out a fix in a month or less and made
Believe me I just spent two months converting our app, I do not wish to give
up on that work. We do a great deal more than count. Specifically, many of
our queries run much slower on postgres. As mentioned I purchased a 4 proc
box with 8 gigs of memory for this upgrade (Dell may have been a poor
On Fri, Apr 08, 2005 at 03:23:25PM -0400, Joel Fradkin wrote:
Believe me I just spent two months converting our app, I do not wish to give
our queries run much slower on postgres. As mentioned I purchased a 4 proc
I suspect you want the -performance list. And it'd be real handy to
get some
On Apr 8, 2005 2:23 PM, Joel Fradkin [EMAIL PROTECTED] wrote:
Believe me I just spent two months converting our app, I do not wish to give up on that work. We do a great deal more than count. Specifically many of our queries run much slower on postgres. As mentioned I purchased a 4 proc box with 8
Quoting Scott Marlowe [EMAIL PROTECTED]:
On Fri, 2005-04-08 at 12:08, Joel Fradkin wrote:
I might have to add a button to do the count on command so they don't get
the hit.
I would want it to return the count of the condition, not the currently
displayed number of rows.
Judging
I have asked specific questions and paid
attention to the various threads on configuration.
I will take my config files and post on
the performance thread; that is a good suggestion (personally I have more faith
in this forum than a paid consultant, but at this point I am willing to try
I will also look at doing it the way you describe, they do have wide
liberty. Thanks so much for the ideas. Sorry I did not do a perusal of the
archives first (I normally try that, but think I am brain dead today).
Joel Fradkin
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel.
On Fri, 2005-04-08 at 15:23, Vivek Khera wrote:
On Apr 8, 2005, at 3:23 PM, Joel Fradkin wrote:
I set up the data on 4 10k scsi drives in a powervault and my wal on 2 15k drives. I am using links to those from the install directory. It starts and stops ok this way, but maybe it
On Apr 8, 2005, at 4:35 PM, Bob Henkel wrote:
desktop SATA drive with no RAID? I'm by no means as knowledgeable about I/O setup as many of you are, but my 2 cents wonders if the Dell RAID is really that much slower than a competitively priced/spec'd alternative? Would Joel's problems just fade
On Apr 8, 2005 3:42 PM, Scott Marlowe [EMAIL PROTECTED] wrote:
On Fri, 2005-04-08 at 15:35, Bob Henkel wrote:
On Apr 8, 2005 3:23 PM, Vivek Khera [EMAIL PROTECTED] wrote:
On Apr 8, 2005, at 3:23 PM, Joel Fradkin wrote:
I set up the data on 4 10k scsi drives in a powervault and my wal on 2 15k
On Fri, 2005-04-08 at 15:36, Vivek Khera wrote:
On Apr 8, 2005, at 4:31 PM, Scott Marlowe wrote:
Note that there are several different RAID controllers you can get with
a DELL. I had good luck with the PERC 4C (AMI MegaRAID based) at my
I've had bad luck regarding speed with *all* of
On Fri, 2005-04-08 at 15:41, Vivek Khera wrote:
On Apr 8, 2005, at 4:35 PM, Bob Henkel wrote:
desktop SATA drive with no RAID? I'm by no means as knowledgeable about I/O setup as many of you are, but my 2 cents wonders if the Dell RAID is really that much slower than a competitively
I turned off hyperthreading (I saw that on the list that it did not help on
Linux).
I am using a pretty lightweight Windows box, an Optiplex with 750-meg
internal IDE, 2.4 GHz CPU.
My desktop has 2 gig, so it might not be a bad idea to try it locally (I have
it installed), but methinks it's not totally a
Since it is a count of matched condition records I may not have a way
around.
What you could do is cache the search results (just caching the id's of
the rows to display is enough and uses little space) in a cache table,
numbering them with your sort order using a temporary sequence, so that
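A sketch of that caching idea (all names are illustrative): materialize just the matching ids once, numbered in sort order via a temporary sequence, then both the count and each page become cheap lookups against the small cache:

```sql
CREATE TEMP SEQUENCE cache_seq;
CREATE TEMP TABLE result_cache AS
    SELECT nextval('cache_seq') AS rownum, id
    FROM (SELECT id FROM mytable WHERE somecondition
          ORDER BY uniquecol) AS sorted;

SELECT count(*) FROM result_cache;          -- cheap count of all matches

SELECT m.*                                  -- page 2 (rows 51-100)
FROM result_cache c JOIN mytable m USING (id)
WHERE c.rownum BETWEEN 51 AND 100
ORDER BY c.rownum;
```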
Please run this disk throughput test on your system :
http://boutiquenumerique.com/pf/multi_io.py
It just spawns N threads which will write a lot of data simultaneously to
the disk, then measures the total time. Same for read. Modify the
parameters in the source... it's set to generate
Stef:
Why is iuserid numeric? Are you going to do any math on the field?
If not, change it to varchar. In the long run you'll be happier.
Stef wrote:
Hello Everyone,
Currently, here at work, I am doing the whole
'advocacy' part of PostgreSQL. It's not really hard to
do, as the