--
Scott Mead
Sr. Architect
OpenSCG <http://openscg.com>
The OP is using:
autovacuum_vacuum_threshold | 100000

That means that vacuum won't consider a table to be 'vacuum-able' until
after 100k changes; that's nowhere near aggressive enough. Probably
what's happening is that when autovacuum finally DOES start on a table, it
just takes forever.
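The trigger rule Scott is describing can be sketched numerically. The formula below follows PostgreSQL's documented autovacuum condition (threshold plus scale factor times row count); the table sizes are made-up examples:

```python
# PostgreSQL considers a table "vacuum-able" once:
#   dead_tuples > autovacuum_vacuum_threshold
#                 + autovacuum_vacuum_scale_factor * reltuples

def vacuum_trigger(reltuples, threshold=50, scale_factor=0.2):
    """Dead tuples needed before autovacuum will touch the table."""
    return threshold + scale_factor * reltuples

# With the default threshold of 50, a 1M-row table is vacuumed after
# ~200,050 dead tuples; a huge threshold only delays that further.
for rows in (10_000, 1_000_000):
    print(rows, vacuum_trigger(rows))
```

Raising the base threshold to 100000 means even a tiny table accumulates 100k+ dead tuples before the first vacuum, which is why the eventual run takes so long.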
On Tue, Mar 23, 2010 at 12:12 AM, Greg Smith wrote:
> Carlo Stonebanks wrote:
>
>> So, we have the hardware, we have the O/S - but I think our config leaves
>> much to be desired. Typically, our planner makes bad decisions, picking a seq
>> scan over an index scan where the index scan gives a better result.
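Why the planner sometimes prefers a seq scan comes down to page-cost accounting. Below is a deliberately simplified toy model, not PostgreSQL's actual cost machinery; the page counts and the default `seq_page_cost`/`random_page_cost` values are illustrative:

```python
# Toy model: sequential pages are assumed cheaper (seq_page_cost=1.0)
# than randomly fetched ones (random_page_cost=4.0), so a seq scan can
# "win" on estimated cost even when an index exists.

def seq_scan_cost(pages, seq_page_cost=1.0):
    return pages * seq_page_cost

def index_scan_cost(pages_fetched, random_page_cost=4.0):
    return pages_fetched * random_page_cost

# Selecting 30% of a 10,000-page table:
print(seq_scan_cost(10_000))   # 10000.0 -- seq scan looks cheaper
print(index_scan_cost(3_000))  # 12000.0
```

On hardware where random I/O is genuinely cheap (good RAID, lots of cache), lowering `random_page_cost` is the usual knob for nudging the planner toward index scans.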
On Fri, Dec 11, 2009 at 4:39 PM, Nikolas Everett wrote:
> Fair enough. I'm of the opinion that developers need to have their unit
> tests run fast. If they aren't fast then you're just not going to test as
> much as you should. If your unit tests *have* to createdb then you have to
> do wh
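One common way to avoid paying createdb's full cost in every test run is to build the schema once and clone it from a template database; copying a template is far cheaper than re-running migrations. A hedged sketch (database names are made up):

```sql
-- Populate schema and fixtures once in the template, then clone per run.
CREATE DATABASE test_template;
-- ... run migrations / load fixtures into test_template ...
CREATE DATABASE test_run_1 TEMPLATE test_template;
```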
On Mon, Dec 7, 2009 at 1:12 PM, Ben Brehmer wrote:
> Hello All,
>
> I'm in the process of loading a massive amount of data (500 GB). After some
> initial timings, I'm looking at 260 hours to load the entire 500GB. 10 days
> seems like an awfully long time so I'm searching for ways to speed this
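The usual levers for a one-off bulk load of that size are COPY instead of row-by-row INSERTs, dropping indexes until the load finishes, and temporarily loosening durability. A hedged postgresql.conf sketch; the values are illustrative starting points, not tuned for this hardware, and should be reverted afterwards:

```ini
# Temporary settings for a one-off bulk load only.
maintenance_work_mem = 1GB     # faster index rebuilds after the load
checkpoint_segments = 64       # fewer checkpoints during the COPY
wal_buffers = 16MB
fsync = off                    # only safe if you can restart the load from scratch
```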
On Fri, Oct 23, 2009 at 11:38 AM, Michal J. Kubski wrote:
>
>
> Hi,
>
> Is there any way to get the query plan of the query run in the stored
> procedure?
> I am running the following one and it takes 10 minutes in the procedure
> when it is pretty fast standalone.
>
> Any ideas would be welcome!
>
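For plans executed inside stored procedures, the auto_explain contrib module is the usual answer, since plain EXPLAIN does not descend into PL/pgSQL. A hedged sketch of the relevant postgresql.conf settings (the duration cutoff is an arbitrary example):

```ini
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '30s'    # log plans for anything slower
auto_explain.log_nested_statements = on  # include statements run inside functions
```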
> If you run Red Hat, I would advise the most recent release, i.e., Red Hat
> Enterprise Linux 5, since point releases do not add new features and only
> correct errors. CentOS is the same as Red Hat, but you will probably get
> better support from Red Hat if you need it -- though you pay for it.
>
>
The other thin
On Wed, Jul 15, 2009 at 10:36 PM, Scott Marlowe wrote:
> I'd love to see it.
+1 for index organized tables
--Scott
On Wed, Jul 15, 2009 at 9:18 AM, Alex Goncharov
wrote:
> ,--- You/Suvankar (Wed, 15 Jul 2009 18:32:12 +0530) *
> | Yes, I have got 2 segments and a master host. So, in a way processing
> | should be faster in Greenplum.
>
> No, it should not: it all depends on your data, SQL statements and
> s
>
> You're right that it should be removed, but this explanation is wrong. The
> behavior as configured is actually "if there are >=100 other transactions in
> progress, wait 0.1 second before committing after the first one gets
> committed", in hopes that one of the other 100 might also join along
oes.
--Scott
>
>
> Brian
>
> On Fri, Jun 26, 2009 at 1:06 PM, Scott Mead
> wrote:
> > -- sorry for the top-post and short response.
> >
> > Turn commit delay and commit siblings off.
> >
> > --Scott
> >
> > On 6/26/09, Brian Troutwine
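For reference, these are the two settings under discussion, shown here with commit_delay disabled as Scott suggests (a sketch; defaults vary by version):

```ini
commit_delay = 0     # microseconds to wait before flushing WAL; 0 disables it
commit_siblings = 5  # minimum concurrent open transactions before delaying
```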
On Wed, Jun 10, 2009 at 9:39 AM, Matthew Wakeling wrote:
> On Wed, 10 Jun 2009, Gurjeet Singh wrote:
>
>> There is a limit on the size of the mail that you can send to different
>> mailing lists. Please try to remove/link your
>> attachments if you are trying to send any.
>>
>
> No, size is not an
On Fri, May 29, 2009 at 3:45 PM, Fabrix wrote:
>
> Which is better and more complete, and which has more features?
> What do you recommend? pgbouncer or pgpool?
>
>>
In your case, where you're looking to just get the connection overhead
off of the machine, pgBouncer is probably going to be more effi
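A minimal pgbouncer.ini sketch for shedding connection overhead; the host, database name, and pool sizes are made-up examples, and transaction pooling, while lightest, breaks session-level features such as prepared statements:

```ini
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
```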
On Fri, May 29, 2009 at 1:30 PM, Dave Dutcher wrote:
> > From: Anne Rosset
> > Subject: Re: [PERFORM] Unexpected query plan results
> > >
> > >
> > Thank Dave. We are using postgresql-server-8.2.4-1PGDG and
> > have work-mem set to 20MB.
> > What value would you advise?
> > thanks,
> >
> > Anne
>
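work_mem can be raised for a single session to test whether a sort or hash is spilling to disk, before touching the global setting. A hedged sketch; the value is an arbitrary experiment, not a recommendation:

```sql
-- Compare plans before and after; watch for "Sort Method: external merge"
-- turning into an in-memory quicksort.
SET work_mem = '64MB';
EXPLAIN ANALYZE SELECT ...;  -- the query under investigation
```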
2009/5/29 Greg Smith
> On Fri, 29 May 2009, Grzegorz Jaśkiewicz wrote:
>
>> if it is implemented somewhere else better, shouldn't that make it
>> obvious that postgresql should solve it internally?
>>
>
> Opening a database connection has some overhead to it that can't go away
> without losing *
On Thu, May 28, 2009 at 4:53 PM, Fabrix wrote:
>
>
>>
>> Wow, that's some serious context-switching right there - 300k context
>> switches a second mean that the processors are spending a lot of their
>> time fighting for CPU time instead of doing any real work.
>
>
There is a bug in the quad c
On Wed, May 27, 2009 at 1:57 PM, Alan McKay wrote:
> Hey folks,
>
> I have done some googling and found a few things on the matter. But
> am looking for some suggestions from the experts out there.
>
> Got any good pointers for reading material to help me get up to speed
> on PostgreSQL clusteri
On Tue, May 26, 2009 at 7:58 PM, Dave Page wrote:
> On 5/26/09, Greg Smith wrote:
> > I keep falling into situations where it would be nice to host a server
> > somewhere else. Virtual host solutions and the mysterious cloud are no
> > good for the ones I run into though, as disk performance is
On Thu, May 7, 2009 at 10:14 AM, David Brain wrote:
> Hi,
>
> Some context, we have a _lot_ of data, > 1TB, mostly in 1 'table' -
> the 'datatable' in the example below although in order to improve
> performance this table is partitioned (by date range) into a number of
> partition tables. Each
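Date-range partitioning in that era meant table inheritance plus CHECK constraints, with constraint exclusion letting the planner skip partitions that cannot match. A hedged sketch; the table and column names are made up:

```sql
-- Parent table plus one monthly partition. With
-- constraint_exclusion = partition, the planner prunes partitions
-- whose CHECK range cannot satisfy the query's date predicate.
CREATE TABLE datatable (id bigint, created date, payload text);

CREATE TABLE datatable_2009_05 (
    CHECK (created >= DATE '2009-05-01' AND created < DATE '2009-06-01')
) INHERITS (datatable);
```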