On 04/28/2017 01:34 AM, Andres Freund wrote:
On 2017-04-28 01:29:14 +0200, Tomas Vondra wrote:
I can confirm this observation. I bought the Intel 750 NVMe SSD last year,
the device has 1GB DDR3 cache on it (power-loss protected), can do ~1GB/s of
sustained O_DIRECT sequential writes. But when ru
On 2017-04-28 01:29:14 +0200, Tomas Vondra wrote:
> I can confirm this observation. I bought the Intel 750 NVMe SSD last year,
> the device has 1GB DDR3 cache on it (power-loss protected), can do ~1GB/s of
> sustained O_DIRECT sequential writes. But when running pgbench, I can't push
> more than ~3
On 04/27/2017 07:35 PM, Andres Freund wrote:
On 2017-04-27 10:29:48 -0700, Joshua D. Drake wrote:
On 04/27/2017 09:34 AM, Andres Freund wrote:
On 2017-04-27 09:31:34 -0700, Joshua D. Drake wrote:
On 04/27/2017 08:59 AM, Andres Freund wrote:
I would agree it isn't yet a widespread issue.
On 04/27/2017 06:34 PM, Andres Freund wrote:
On 2017-04-27 09:31:34 -0700, Joshua D. Drake wrote:
On 04/27/2017 08:59 AM, Andres Freund wrote:
Ok, based on the, few, answers I've got so far, my experience is
indeed skewed. A number of the PG users I interacted with over
the last couple years
On 2017-04-27 10:29:48 -0700, Joshua D. Drake wrote:
> On 04/27/2017 09:34 AM, Andres Freund wrote:
> > On 2017-04-27 09:31:34 -0700, Joshua D. Drake wrote:
> > > On 04/27/2017 08:59 AM, Andres Freund wrote:
> > >
>
> > > I would agree it isn't yet a widespread issue.
> >
> > I'm not yet sure ab
On 04/27/2017 09:34 AM, Andres Freund wrote:
On 2017-04-27 09:31:34 -0700, Joshua D. Drake wrote:
On 04/27/2017 08:59 AM, Andres Freund wrote:
I would agree it isn't yet a widespread issue.
I'm not yet sure about that actually. I suspect a large percentage of
people with such workloads ar
On 2017-04-27 09:31:34 -0700, Joshua D. Drake wrote:
> On 04/27/2017 08:59 AM, Andres Freund wrote:
>
> >
> > Ok, based on the, few, answers I've got so far, my experience is indeed
> > skewed. A number of the PG users I interacted with over the last couple
> > years had WAL write ranges somewhe
On 04/27/2017 08:59 AM, Andres Freund wrote:
Ok, based on the, few, answers I've got so far, my experience is indeed
skewed. A number of the PG users I interacted with over the last couple
years had WAL write ranges somewhere in the range of 500MB/s to 2.2GB/s
(max I've seen). At that point WA
Hi,
On 2017-04-24 21:17:43 -0700, Andres Freund wrote:
> I've lately seen more and more installations where the generation of
> write-ahead-log (WAL) is one of the primary bottlenecks. I'm curious
> whether that's primarily a "sampling error" of mine, or whether that's
> indeed more common.
>
>
On Tue, Apr 25, 2017 at 1:17 AM, Andres Freund wrote:
> Questions (answer as many you can comfortably answer):
> - How many MB/s, segments/s do you see on busier servers?
~20MB/s with FPW compression, with peaks of ~35MB/s. Writes become the
bottleneck without compression and it tops at about 40-
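For reference, the FPW compression mentioned above is the wal_compression setting (9.5+), and the WAL rate itself can be sampled from SQL. A minimal sketch, assuming the 9.4-9.6 function names (PostgreSQL 10 renames them to pg_current_wal_lsn and pg_wal_lsn_diff):
-- the current WAL insert position expressed as bytes; sample twice and
-- divide the difference by the elapsed seconds to get MB/s
SELECT pg_xlog_location_diff(pg_current_xlog_location(), '0/0') AS wal_bytes;
SHOW wal_compression;  -- the "FPW compression" being discussed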
Hi Andres.
> On 25 Apr 2017, at 07:17, Andres Freund wrote:
>
> Hi,
>
> I've lately seen more and more installations where the generation of
> write-ahead-log (WAL) is one of the primary bottlenecks. I'm curious
> whether that's primarily a "sampling error" of mine, or whether that's
> ind
Thanks to everyone for all the help.
I tried changing effective_cache_size to 24GB and it now runs well.
Tuan Hoang Anh
On Sat, Feb 20, 2016 at 8:46 AM, tuanhoanganh wrote:
> Hello
>
> I have a bad query on PostgreSQL 9.0.23 - 64bit - Windows 2012 R2 - 48GB Ram
9.0 is no longer supported. You should work toward upgrading to a
newer version. It might not solve this problem, but it would give you
better tools for
Here is the limits output for the pid:
postgres@symds-pg:~ $ cat /proc/15610/limits
Limit            Soft Limit   Hard Limit   Units
Max cpu time     unlimited    unlimited    seconds
Max file size    unlimited    unlimited    bytes
Max
On Thu, Jun 18, 2015 at 07:19:13PM +, Sheena, Prabhjot wrote:
> Hi Ken/ Will
>
> I have checked the ulimit value and we are nowhere hitting the max 4096
> that we have currently set. Is there any other explanation why we should be
> thinking of bumping it to like ulimit -n 5000
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Sheena, Prabhjot
Sent: Thursday, June 18, 2015 3:19 PM
To: k...@rice.edu; Will Platnick
Cc: pgsql-performa...@postgresql.org; pgsql-general@postgresql.org
Subject:
"Sheena, Prabhjot" writes:
> Hi Ken/ Will
>
> I have checked the ulimit value and we are nowhere hitting the max
> 4096 that we have currently set. Is there any other explanation why
> we should be thinking of bumping it to like ulimit -n 5000 ( Add
> ulimit -n 5000 to the
Hi Ken/ Will
I have checked the ulimit value and we are nowhere hitting the max 4096 that
we have currently set. Is there any other explanation why we should be thinking
of bumping it to like ulimit -n 5000 ( Add ulimit -n 5000 to the start of
whatever you use to start pgbouncer
On Thu, Jun 18, 2015 at 05:41:01PM +, Sheena, Prabhjot wrote:
> Here is the output of OS limits
>
> postgres@symds-pg:~ $ ulimit -a
>
> core file size (blocks, -c) 0
> data seg size (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size (b
Here is the output of OS limits
postgres@symds-pg:~ $ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 790527
max locked memory
On Thu, Jun 18, 2015 at 05:09:10PM +, Sheena, Prabhjot wrote:
> Guys
> I have an issue going on with PGBOUNCER which is slowing down the
> site
>
> PGBOUNCER VERSION: pgbouncer-1.5.4-2.el6 (Hosted on separate machine) (16
> cpu) 98GB RAM
> DATABASE VERION: postgresql 9.3
>
On Fri, Jun 12, 2015 at 09:37:36PM +, Sheena, Prabhjot wrote:
> Here is some more information
>
> pool_mode | transaction
>
> We have transactional pooling and our application is set up in such a way
> that we have one query per transaction. We have set default pool size to
Please do not cross-post on the PostgreSQL lists. Pick the most
appropriate list to post to and just post there.
cheers
andrew
On Fri, Jun 12, 2015 at 8:57 PM, Sheena, Prabhjot <
prabhjot.si...@classmates.com> wrote:
> Guys, we see a spike in pgbouncer during the peak hours and that was
> slowing down the application. We did bump up the connection limit and it is
> helpful, but now we again notice a little spike
2014-03-14 4:26 GMT+09:00 :
> PostgreSQL 9.3.3 RHEL 6.4
>
> Total db Server memory 64GB
(...)
> effective_cache_size = 48MB
I'm not sure if this will help directly, but is the value for
'effective_cache_size' intentional? 48 *GB* would be a more likely
setting.
Regards
Ian Barwick
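Since effective_cache_size is only a planner hint (nothing is allocated), a quick way to test the suggested correction before editing postgresql.conf is per-session; a minimal sketch:
SET effective_cache_size = '48GB';  -- session-only; no restart needed
SHOW effective_cache_size;          -- confirm, then re-run the slow query's EXPLAIN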
Sorry, I used AND-statements instead of OR-statements in the example.
I noticed that GIN is much faster than GiST, but I don't know why.
The query gets slow because there are many non-stop words which appear very
often in my sentences, like in 3% of all the sentences.
Do you think it could be wor
On 9/5/2013 3:42 PM, Merlin Moncure wrote:
My advice is to start
small and pick a very specific topic and focus on that.
My advice is to first familiarize yourself with the package from the
user perspective before even thinking of diving in and making any
changes. I say this because the
On Thu, Sep 5, 2013 at 10:39 AM, Kevin Grittner wrote:
> M Tarkeshwar Rao wrote:
>
>> I want to start contributing to Postgres at the code level.
>
> Welcome! You should probably start with this page and its links:
>
> http://wiki.postgresql.org/wiki/Developer_FAQ
>
>> Can you please suggest some li
M Tarkeshwar Rao wrote:
> I want to start contributing to Postgres at the code level.
Welcome! You should probably start with this page and its links:
http://wiki.postgresql.org/wiki/Developer_FAQ
> Can you please suggest some links where I can learn:
> 1. Transaction Isolation in Database
On 09/07/2013 05:20, Mike Christensen wrote:
> I was reading about Postgres stored procs in the FAQ:
>
> https://wiki.postgresql.org/wiki/FAQ#Does_PostgreSQL_have_stored_procedures.3F
>
> It claims that an alternative syntax to:
>
> SELECT theNameOfTheFunction(arg1, arg2);
>
> Is:
>
> PERFORM
Ah ok that makes sense. The FAQ wasn't exactly clear about that.
On Mon, Jul 8, 2013 at 9:38 PM, Tony Theodore wrote:
>
> On 09/07/2013, at 2:20 PM, Mike Christensen wrote:
>
>
> PERFORM MyInsert(1,101,'2013-04-04','2013-04-04',2,'f' );
>
> I get the error:
>
> ERROR: syntax error at or near "
On 09/07/2013, at 2:20 PM, Mike Christensen wrote:
>
> PERFORM MyInsert(1,101,'2013-04-04','2013-04-04',2,'f' );
>
> I get the error:
>
> ERROR: syntax error at or near "PERFORM"
> SQL state: 42601
> Character: 1
>
> Is the FAQ out of date or was this feature removed? I'm using 9.2.1. Thank
I was reading about Postgres stored procs in the FAQ:
https://wiki.postgresql.org/wiki/FAQ#Does_PostgreSQL_have_stored_procedures.3F
It claims that an alternative syntax to:
SELECT theNameOfTheFunction(arg1, arg2);
Is:
PERFORM theNameOfTheFunction(arg1, arg2);
However, when I try the followin
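The resolution later in the thread: PERFORM is PL/pgSQL syntax, not top-level SQL, so it only works inside a function body. A sketch reusing the call from this thread (DO blocks need 9.0+):
DO $$
BEGIN
  PERFORM MyInsert(1,101,'2013-04-04','2013-04-04',2,'f');
END;
$$;
-- from plain SQL, use SELECT MyInsert(...) and discard the result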
On Fri, May 31, 2013 at 7:32 PM, wrote:
> 1.) Is there any way to clear the cache so that we can ensure that when we
> run "explain analyze" on a query and make some minor adjustments to that
> query and re-execute, the plan is not cached.
PostgreSQL doesn't cache query plans if you do a normal
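For context on that answer: plans are only reused for prepared statements and PL/pgSQL, never for ad-hoc queries. A self-contained sketch of the distinction (uses a catalog table so it runs anywhere):
PREPARE q(int) AS SELECT relname FROM pg_class WHERE relpages > $1;
EXECUTE q(10);   -- may reuse a cached (generic) plan
DEALLOCATE q;    -- discards that cached plan
-- a plain EXPLAIN ANALYZE SELECT ... is planned fresh on every execution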
parameters. Un-cached the query ran in about 55 seconds.
Original Message
Subject: Re: [GENERAL] [PERFORM] Very slow inner join query Unacceptable latency.
From: Scott Marlowe <scott.marl...@gmail.com>
Date: Fri, May 24, 2013 3:03 pm
To: fburg...@radiantblue.com
Cc: Jaime Casano
On Fri, May 24, 2013 at 3:44 PM, wrote:
> Total runtime: 1606.728 ms 1.6 seconds <- very good response time
> improvement
>
> (7 rows)
>
> Questions:
>
> Any concerns with setting these conf variables you recommended; work_mem,
> random_page_cost dbserver wide (in postgresql.conf)?
>
> Thanks so
me=0.007..2.659 rows=3360 loops=441)
  Index Cond: (SARS_RUN_ID=tr1_.ID)
Total runtime: 1606.728 ms   1.6 seconds <- very good response time improvement
(7 rows)
Questions:
Any concerns with setting these conf variables you recommended; work_mem,
random_page_cost dbserver wide (in postgresql.conf
Looking at the execution plan makes me wonder what your work_mem is
set to. Try cranking it up to test and lowering random_page_cost:
set work_mem='500MB';
set random_page_cost=1.2;
explain analyze select ...
and see what you get.
On Thursday, May 23, 2013 10:51 PM fburgess wrote:
> serverdb=# set enable_hashjoin=off;
> SET
> serverdb=# explain select count(*) as y0_ from SARS_ACTS this_ inner join
> SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID=tr1_.ID where tr1.ALGORITHM='SMAT';
>
On Thu, May 23, 2013 at 12:21 PM, wrote:
>
> But what negative impact is disabling hash joins?
>
doing it just for a single query, could be a tool for solving
particular problems.
setting it in postgresql.conf, therefore affecting all queries, is
like using a hammer to change tv channel... it wi
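A sketch of the per-query scoping, using the query from this thread:
BEGIN;
SET LOCAL enable_hashjoin = off;  -- reverts automatically at COMMIT/ROLLBACK
EXPLAIN ANALYZE SELECT count(*) AS y0_
  FROM SARS_ACTS this_
  INNER JOIN SARS_ACTS_RUN tr1_ ON this_.SARS_RUN_ID = tr1_.ID
  WHERE tr1_.ALGORITHM = 'SMAT';
COMMIT;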
serverdb=# set enable_hashjoin=off;
SET
serverdb=# explain select count(*) as y0_ from SARS_ACTS this_ inner join
SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID=tr1_.ID where tr1.ALGORITHM='SMAT';
 QUERY PLAN
--
On Wed, May 22, 2013 at 7:41 AM, wrote:
> PostgreSQL 9.1.6 on linux
>
From the numbers in your attached plan, it seems like it should be doing a
nested loop from the 580 rows (it thinks) that match in SARS_ACTS_RUN
against the index on sars_run_id to pull out the 3297 rows (again, it
think, th
PostgreSQL 9.1.6 on linux
Original Message
Subject: Re: [PERFORM] Very slow inner join query Unacceptable latency.
From: Jaime Casanova
Date: Tue, May 21, 2013 2:59 pm
To: Freddie Burgess
Cc: psql performance list
On Wednesday, May 22, 2013 3:24 AM fburgess wrote:
> The SARS_ACTS table currently has 37,115,515 rows
> we have indexed: idx_sars_acts_acts_run_id ON SARS_ACTS USING btree
> (sars_run_id)
> we have pk constraint on the SARS_ACTS_RUN table; sars_acts_run_pkey PRIMARY
> KEY (id )
> serverdb=# e
On Tue, May 21, 2013 at 4:53 PM, wrote:
> The SARS_ACTS table currently has 37,115,515 rows
>
> we have indexed: idx_sars_acts_acts_run_id ON SARS_ACTS USING btree
> (sars_run_id)
> we have pk constraint on the SARS_ACTS_RUN table; sars_acts_run_pkey PRIMARY
> KEY (id )
>
> serverdb=# explain sel
On 27.09.2012 at 02:04, Jeff Janes wrote:
On Wed, Sep 26, 2012 at 1:21 PM, hubert depesz lubaczewski
wrote:
On Wed, Sep 26, 2012 at 02:38:09PM -0400, Robert Sosinski wrote:
The first query shows a cost of 190,169.55 and runs in 199,806.951 ms.
When I disable nested loop, I get a cost of
On Wed, Sep 26, 2012 at 1:21 PM, hubert depesz lubaczewski
wrote:
> On Wed, Sep 26, 2012 at 02:38:09PM -0400, Robert Sosinski wrote:
>> The first query shows a cost of 190,169.55 and runs in 199,806.951 ms.
>> When I disable nested loop, I get a cost of 2,535,992.34 which runs in
>> only 133,447.7
Edson Richter writes:
>> That said, looking at your actual query:
>>
>> SELECT COUNT(DISTINCT eu.id)
>> FROM exchange_uploads eu
>> JOIN upload_destinations ud ON ud.id = eu.upload_destination_id
>> LEFT JOIN uploads u ON u.id = eu.upload_id
>> LEFT JOIN import_errors ie ON ie.exchange_upload_id
On Wed, Sep 26, 2012 at 1:21 PM, hubert depesz lubaczewski <
dep...@depesz.com> wrote:
> On Wed, Sep 26, 2012 at 02:38:09PM -0400, Robert Sosinski wrote:
> > The first query shows a cost of 190,169.55 and runs in 199,806.951 ms.
> > When I disable nested loop, I get a cost of 2,535,992.34 which ru
On Wed, Sep 26, 2012 at 02:38:09PM -0400, Robert Sosinski wrote:
> The first query shows a cost of 190,169.55 and runs in 199,806.951 ms.
> When I disable nested loop, I get a cost of 2,535,992.34 which runs in
> only 133,447.790 ms. We have run queries on our database with a cost
> of 200K cost b
On 26/09/2012 17:03, Shaun Thomas wrote:
On 09/26/2012 01:38 PM, Robert Sosinski wrote:
I seem to be getting an inaccurate cost from explain. Here are two
examples for one query with two different query plans:
Well, there's this:
Nested Loop (cost=0.00..151986.53 rows=2817 width=4) (act
On 09/26/2012 01:38 PM, Robert Sosinski wrote:
I seem to be getting an inaccurate cost from explain. Here are two
examples for one query with two different query plans:
Well, there's this:
Nested Loop (cost=0.00..151986.53 rows=2817 width=4) (actual
time=163.275..186869.844 rows=43904 loop
On 07/06/2012 09:33 AM, Samuel Gendler wrote:
Some other potential issues - with only 10MB of work_mem, you might be
going to temp space on disk more than you realize. Explain analyze
might reveal that, but only if you happen to pick a query that exceeds
work_mem on at least one step.
Rather
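A self-contained sketch of what that reveals (sizes illustrative):
SET work_mem = '1MB';
EXPLAIN ANALYZE SELECT g FROM generate_series(1, 1000000) g ORDER BY g DESC;
-- the Sort node reports "Sort Method: external merge  Disk: ...kB" when it
-- spills; with a larger work_mem the same query sorts in memory instead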
On Sat, 2012-06-02 at 10:05 -0700, idc danny wrote:
> Now, if I want do do the following:
> select CombineStrings(ARRAY[SplitString2Array("SomeTextColumn"), 'New
> string to add']) from "SomeTable"
>
> i get the following error:
> array value must start with "{" or dimension information
This disc
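A guess at the intended call, keeping the poster's own function names: the || operator appends an element to an array, whereas ARRAY[some_array, 'text'] tries to build a nested array, which is what triggers that error:
select CombineStrings(SplitString2Array("SomeTextColumn") || 'New string to add')
from "SomeTable";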
Jayashankar K B writes:
> Hi All,
> I am trying to compile Postgres Source code for ARM cortex A8 architecture.
> While compiling, I got an error message which read "selected processor does
> not support `swpb r4,r4,[r3]' "
> One of the Postgres forums at the location
> "http://postgresql.104569
Rohan Malhotra wrote:
First off, for a one-row result with too many values to fit on one
line, you might want to use this in psql:
\x on
More importantly, you seem to be misinterpreting the numbers.
You've allocated 2,205,969,940 buffers. Of those allocations, the
allocating backend had t
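For reference, the counters being interpreted here come from pg_stat_bgwriter (column layout as of 8.3 through 16):
\x on
SELECT buffers_alloc,      -- total buffer allocations, the figure quoted above
       buffers_backend,    -- pages written directly by backends
       buffers_clean,      -- pages written by the background writer
       buffers_checkpoint  -- pages written during checkpoints
FROM pg_stat_bgwriter;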
If the query planner thinks it has the default amount of memory (128MB)
and the stats are out of date, then it will by no means be able to plan
proper execution.
I would recommend setting the effective_cache_size to an appropriate
value, running "analyze" on both tables with an appropriate stats
t
On Friday 08 July 2011 10:31:33 French, Martin wrote:
> If the query planner thinks it has the default amount of memory (128MB)
> and the stats are out of date, then it will by no means be able to plan
> proper execution.
>
> I would recommend setting the effective_cache_size to an appropriate
> v
On Thursday 07 July 2011 19:54:08 French, Martin wrote:
> How up to date are the statistics for the tables in question?
>
> What value do you have for effective cache size?
>
> My guess would be that planner thinks the method it is using is right
> either for its current row number estimations, o
How up to date are the statistics for the tables in question?
What value do you have for effective cache size?
My guess would be that planner thinks the method it is using is right
either for its current row number estimations, or the amount of memory
it thinks it has to play with.
-Origina
On Mon, Jun 13, 2011 at 6:33 PM, Willy-Bas Loos wrote:
> On Fri, Jun 10, 2011 at 9:58 PM, Josh Berkus wrote:
>>
>> It's not 10%. We use a fixed sample size, which is configurable on the
>> system, table, or column basis.
>
> It seems that you are referring to "alter column set statistics" and
>
On Fri, Jun 10, 2011 at 9:58 PM, Josh Berkus wrote:
> It's not 10%. We use a fixed sample size, which is configurable on the
> system, table, or column basis.
>
It seems that you are referring to "alter column set statistics" and
"default_statistics_target", which are the number of percentiles
Hi,
is there a way to change the sample size for statistics (that analyze
gathers)?
It is said to be 10%. I would like to raise that, because we are getting bad
estimates for n_distinct.
Cheers,
WBL
--
"Patriotism is the conviction that your country is superior to all others
because you were
On Mon, May 16, 2011 at 10:19 AM, Robert Klemme
wrote:
> On Fri, May 13, 2011 at 9:04 PM, Robert Haas
> wrote:
> Separating index and tables might not be a totally good idea
> generally. Richard Foote has an excellent article about Oracle but I
> assume at least a few things do apply to PostgreS
On 10/14/2010 05:18 AM, Brandon Casci wrote:
Hello
I have an application hosted on Heroku. They use postgres. It's more or
less abstracted away, but I can get some performance data from New
Relic. For the most part, performance is ok, but now and then some
queries take a few seconds, and spike u
Hi Kevin,
> Sorry; I didn't mean to be harsh.
I also overreacted, sorry about that.
Indeed the documentation is well done, as is the software itself =)
Thanks, Clemens
> Sometimes people coming from some other products aren't used to that
> -- I was just trying to point you in the direction of
I suppose some workaround would be to introduce temporary cursor:
CREATE OR REPLACE FUNCTION A3() RETURNS VOID AS $BODY$
declare _tmp record;
begin
select * from A1() as dummy ( x double precision ) into _tmp;
end;
$BODY$ LANGUAGE 'plpgsql';
But I'm not sure if this is more efficient than A3 r
2010/2/19 :
> I suppose some workaround would be to introduce temporary cursor:
>
> CREATE OR REPLACE FUNCTION A3() RETURNS VOID AS $BODY$
> declare _tmp record;
> begin
> select * from A1() as dummy ( x double precision ) into _tmp;
> end;
> $BODY$ LANGUAGE 'plpgsql';
>
> But I'm not sure if thi
On 19/02/2010 09:59, wilcza...@op.pl wrote:
> Hi Ray, thanks for the reply. Your solution needs to be modified with an
> alias to get executed properly:
>
> CREATE OR REPLACE FUNCTION A3() RETURNS VOID AS $BODY$
> begin
> select * from A1() as dummy ( x double precision );
> return;
> end;
> $BODY$ LA
And what about this:
http://www.postgresql.org/docs/8.4/interactive/xfunc-sql.html#XFUNC-SQL-FUNCTIONS-RETURNING-SET
Maybe my French English kept me from understanding the question correctly,
but I think this item could help!
On Friday 19 February 2010 at 11:04 +0100, Pavel Stehule wrote
Hi Ray, thanks for the reply. Your solution needs to be modified with an alias
to get executed properly:
CREATE OR REPLACE FUNCTION A3() RETURNS VOID AS $BODY$
begin
select * from A1() as dummy ( x double precision );
return;
end;
$BODY$ LANGUAGE 'plpgsql';
but when used: select * from A3() it genera
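A sketch combining the two fixes from this thread: PERFORM discards the rows (avoiding both earlier errors), and the column-definition list is still required because A1() returns SETOF record:
CREATE OR REPLACE FUNCTION A3() RETURNS VOID AS $BODY$
begin
perform * from A1() as dummy ( x double precision );
end;
$BODY$ LANGUAGE 'plpgsql';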
Hi Pavel, thanks for the reply. Your solution:
CREATE OR REPLACE FUNCTION A3() RETURNS VOID AS $BODY$
begin
return query select * from A1();
return;
end;
$BODY$ LANGUAGE 'plpgsql';
generates error "cannot use RETURN QUERY in a non-SETOF function" because A3
returns VOID.
"Pavel Stehule" napisa
2010/2/19 :
> Hi Pavel, thanks for the reply. Your solution:
>
> CREATE OR REPLACE FUNCTION A3() RETURNS VOID AS $BODY$
> begin
> return query select * from A1();
> return;
> end;
> $BODY$ LANGUAGE 'plpgsql';
>
> generates error "cannot use RETURN QUERY in a non-SETOF function" because A3
> returns
On 18/02/2010 12:05, wilcza...@op.pl wrote:
>
> CREATE OR REPLACE FUNCTION A3() RETURNS VOID AS $BODY$
> begin
> perform A1();
> end;
> $BODY$ LANGUAGE 'plpgsql';
You need to do:
select * from A1();
Ray.
--
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie
Hello
2010/2/18 :
> I have a function A1 that returns setof records, and I use it in two ways:
> 1) from function A2, where I need results from A1
> 2) from function A3, where I don't need these results, all I need is to
> execute logic from A1
>
> Here are very simple versions of my functions:
>
I have a function A1 that returns setof records, and I use it in two ways:
1) from function A2, where I need results from A1
2) from function A3, where I don't need these results, all I need is to
execute logic from A1
Here are very simple versions of my functions:
CREATE OR REPLACE FUNCTION A1()
Kevin Grittner (kevin.gritt...@wicourts.gov) wrote:
Could some of you please share some info on such scenarios- where
you are supporting/designing/developing databases that run into at
least a few hundred GBs of data (I know, that is small by todays'
standards)?
At NuevaSync we use PG in
* Kevin Grittner (kevin.gritt...@wicourts.gov) wrote:
> > Could some of you please share some info on such scenarios- where
> > you are supporting/designing/developing databases that run into at
> > least a few hundred GBs of data (I know, that is small by todays'
> > standards)?
Just saw this, so
Jayadevan M wrote:
> Could some of you please share some info on such scenarios- where
> you are supporting/designing/developing databases that run into at
> least a few hundred GBs of data (I know, that is small by todays'
> standards)?
I'm a database administrator for the Wisconsin Courts.
On Tue, 24 Nov 2009, Denis Lussier wrote:
Bouncing the app will roll back the transactions.
Depends on the application. Some certainly use a shutdown hook to flush
data out to a database cleanly.
Obviously if you kill -9 it, then all bets are off.
Matthew
--
Software suppliers are trying t
Bouncing the app will roll back the transactions. If there were any
pending updates/inserts, wouldn't he be able to see them in one of the
system tables...
On 11/24/09, Matthew Wakeling wrote:
> On Tue, 24 Nov 2009, Denis Lussier wrote:
>> IMHO the client application is already confused and it'
IMHO the client application is already confused and it's in Prod.
Shouldn't he perhaps terminate/abort the IDLE connections in Prod and
work on correcting the problem so it doesn't occur in Dev/Test??
On 11/24/09, Matthew Wakeling wrote:
> On Mon, 23 Nov 2009, Lorenzo Allegrucci wrote:
>> Anyway
On Tue, 24 Nov 2009, Denis Lussier wrote:
IMHO the client application is already confused and it's in Prod.
Shouldn't he perhaps terminate/abort the IDLE connections in Prod and
work on correcting the problem so it doesn't occur in Dev/Test??
The problem is, the connection isn't just IDLE - it
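A sketch of how to spot those sessions on modern versions (9.2+, where pg_stat_activity has the state and pid columns; on the 8.x releases of this thread's era, look for current_query = '<IDLE> in transaction' instead):
SELECT pid, usename, now() - xact_start AS in_txn_for
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY in_txn_for DESC;
-- pg_terminate_backend(pid) then ends a culprit cleanly; no kill -9 needed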
On Fri, 20 Nov 2009, Lorenzo Allegrucci wrote:
performance is degrading...
In normal conditions the postgres process uses about 3% of cpu time
but when is in "degraded" conditions it can use up to 25% of cpu time.
You don't really give enough information to determine what is going on
here.
2009/11/2 Grant Masan
> Size is "6154 MB". I have checked all queries, and those are as good as
> they can be in this situation. You think that this confs doesn't make really
> no difference at all ?
>
> you gotta hit 'reply all' next time ;)
configuration makes a difference, but you need to know w
William Scott Jordan wrote:
> Hi Andrew,
>
> That's a very good guess. We are in fact updating this table multiple
> times within the same triggered function, which is being called on an
> INSERT. Essentially, we're using this to keep a running total of the
> number of rows being held in an
>-Original Message-
>From: pgsql-performance-ow...@postgresql.org
>
>We use a typical counter within a transaction to generate
>order sequence number and update the next sequence number.
>This is a simple next counter - nothing fancy about it. When
>multiple clients are concurrently a
On Thu, 2009-07-16 at 00:11 -0600, Scott Marlowe wrote:
> As others have said, a serial is a good idea, HOWEVER, if you can't
> have gaps in sequences, or each customer needs their own sequence,
> then you get to lock the rows / table / etc that you're mucking with
> to make sure you don't issue t
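A sketch of that locking pattern, with a hypothetical per-customer counters table:
BEGIN;
SELECT next_val FROM counters WHERE customer_id = 42 FOR UPDATE;  -- blocks rivals
UPDATE counters SET next_val = next_val + 1 WHERE customer_id = 42;
COMMIT;  -- the row lock, and with it the gapless guarantee, holds until here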
On Thursday 16 July 2009 23:20:34 William Scott Jordan wrote:
> Hi Andrew,
>
> That's a very good guess. We are in fact updating this table multiple
> times within the same triggered function, which is being called on an
> INSERT. Essentially, we're using this to keep a running total of the
> num
Hi Andrew,
That's a very good guess. We are in fact updating this table multiple
times within the same triggered function, which is being called on an
INSERT. Essentially, we're using this to keep a running total of the
number of rows being held in another table. The function we're using
c
On Thursday 16 July 2009 19:56:47 William Scott Jordan wrote:
> Hey all!
>
> Is there a better way to increase or decrease the value of an integer
> than doing something like:
>
> ---
> UPDATE the_table SET the_int = the_int + 1 WHERE the_id = 123 ;
> ---
>
> We seem to be getting a lot of deadlock
Maybe a simple way would be to use a "SEQUENCE" database object. And
call nextval('your_sequence') to obtain the next unique value (of type
bigint).
According to PG docs
"http://www.postgresql.org/docs/8.4/interactive/functions-sequence.html";,
the sequence object has functions that "provide simpl
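The suggestion above as code (sequence name illustrative):
CREATE SEQUENCE order_seq;
SELECT nextval('order_seq');  -- concurrency-safe, but note that rolled-back
                              -- transactions leave gaps in the numbering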
Raji Sridar wrote:
> We use a typical counter within a transaction to generate
> order sequence number and update the next sequence number.
> This is a simple next counter - nothing fancy about it. When
> multiple clients are concurrently accessing this table and
> updating it, under extremely
On Wed, Jul 15, 2009 at 10:59 PM, Raji Sridar (raji) wrote:
> Hi,
>
> We use a typical counter within a transaction to generate order sequence
> number and update the next sequence number. This is a simple next counter -
> nothing fancy about it. When multiple clients are concurrently accessing
>
Hi,
Are you using automatic sequence increment in the table?
- Original Message -
From: Raji Sridar (raji)
To: pgsql-general@postgresql.org ; pgsql-performa...@postgresql.org
Sent: Thursday, July 16, 2009 10:29 AM
Subject: [PERFORM] Concurrency issue under very heavy loads
Hi,
2009/5/28 Eddy Ernesto Baños Fernández :
> Try Cybercluster
I looked into that. There is one piece of documentation that is less
than ten pages long. There is no users group, no listserve, no
community that I can discern.
Do you have experience with it, and if so, could you please share?
Thank
On Fri, Jun 26, 2009 at 4:36 PM, Brian
Troutwine wrote:
>> *) use indexes to optimize where and join conditions. for example,
>> update yadda set yadda where foo = bar, make sure that there is an
>> index on foo. As alan noted this is almost definitely your problem.
>
> To my knowledge, I have. a
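A sketch of the advice being checked, using the thread's placeholder names (schema hypothetical):
CREATE TABLE yadda (foo integer, val integer);
CREATE INDEX yadda_foo_idx ON yadda (foo);
EXPLAIN UPDATE yadda SET val = val + 1 WHERE foo = 42;
-- with enough rows, the plan should show an index scan on yadda_foo_idx
-- rather than a sequential scan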
> Depending on your exact needs, which the terminology you're using only allow
> to guess about, you might enjoy this reading:
> http://wiki.postgresql.org/wiki/Image:Moskva_DB_Tools.v3.pdf
Thanks. To be honest I don't even know myself what my needs are yet.
I've only been on the job here for a
Hi,
Le 27 mai 09 à 19:57, Alan McKay a écrit :
I have done some googling and found a few things on the matter. But
am looking for some suggestions from the experts out there.
Got any good pointers for reading material to help me get up to speed
on PostgreSQL clustering? What options are avai