On Wed, Nov 6, 2013 at 8:35 AM, Scott Marlowe wrote:
> That's a mostly religious argument. I.e. you're going on feeling here
> that pooling in jdbc alone is better than either jdbc/pgbouncer or
> plain pgbouncer alone. My experience is that jdbc pooling is not in
> the same category as pgbouncer f
On Thu, May 16, 2013 at 7:46 AM, Cuong Hoang wrote:
> For our application, a few seconds of data loss is acceptable.
If a few seconds of data loss is acceptable, I would seriously look at
the synchronous_commit setting and think about turning that off rather
than risk silent corruption with non-e
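For reference, a minimal sketch of what that looks like (the "orders" table is a hypothetical stand-in):

  -- With synchronous_commit off, COMMIT returns before the WAL record is
  -- flushed to disk. A crash can lose the last few hundred milliseconds of
  -- committed transactions, but it cannot corrupt the database.
  SET synchronous_commit = off;  -- per-session; also settable in postgresql.conf
  BEGIN;
  INSERT INTO orders (payload) VALUES ('...');
  COMMIT;  -- returns immediately; the flush happens in the background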
On Thu, Mar 14, 2013 at 4:37 PM, David Boreham wrote:
> You might want to evaluate the performance you can achieve with a single SSD
> (use several for capacity by all means) before considering a RAID card + SSD
> solution.
> Again I bet it depends on the application but our experience with the ol
On Wed, Aug 24, 2011 at 10:23 AM, Andy wrote:
> According to the specs for database storage:
> "Random 4KB arites: Up to 600 IOPS"
> Is that for real? 600 IOPS is *atrociously terrible* for an SSD. Not much
> faster than mechanical disks.
Keep in mind that the 600 IOPS is over the entire disk. p
On Sun, Jul 17, 2011 at 7:30 PM, Craig Ringer wrote:
> On 18/07/2011 9:43 AM, Andy wrote:
>> Is BBU still needed with SSD?
>
> You *need* an SSD with a supercapacitor or on-board battery backup for its
> cache. Otherwise you *will* lose data.
>
> Consumer SSDs are like a hard disk attached to a RA
On Thu, Apr 21, 2011 at 1:28 AM, Tory M Blue wrote:
> this is a Fedora 12 system, 2.6.32.23-170. I've been reading and it
> appears this is yet another fedora bug, but so far I have not found
> any concrete evidence on how to fix it.
If it's a "fedora" bug, it's most likely related to the kernel whe
On Mon, Apr 11, 2011 at 6:04 AM, Glyn Astill wrote:
> The new server uses 4 x 8 core Xeon X7550 CPUs at 2GHz, our current servers
> are 2 x 4 core Xeon E5320 CPUs at 2GHz.
>
> What I'm seeing is when the number of clients is greater than the number of
> cores, the new servers perform better on f
On Wed, Apr 6, 2011 at 5:42 PM, Scott Carey wrote:
> On 4/5/11 7:07 AM, "Merlin Moncure" wrote:
>>One thing about MLC flash drives (which the industry seems to be
>>moving towards) is that you have to factor drive lifespan into the
>>total system balance of costs. Data point: had an ocz vertex 2
On Mon, Sep 20, 2010 at 2:54 PM, George Sexton wrote:
> I'll throw in my 2 cents worth:
>
> 1) Performance using RAID 1 for reads sucks. You would expect throughput to
> double in this configuration, but it doesn't. That said, performance for
> RAID 1 is not noticeably worse than Linux MD. My test
On Wed, Apr 7, 2010 at 7:06 PM, Craig James wrote:
> On 4/7/10 5:47 PM, Robert Haas wrote:
>> On Wed, Apr 7, 2010 at 6:56 PM, David Rees wrote:
>>>> synchronous_commit = off
>>>
>>> You are playing with fire here. You should never turn this off unless
>
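Worth adding: since 8.3 the setting is per-session and can even be scoped to a single transaction with SET LOCAL, so only the loss-tolerant transactions defer the WAL flush. A sketch, with a hypothetical audit_log table:

  BEGIN;
  SET LOCAL synchronous_commit = off;  -- applies to this transaction only
  INSERT INTO audit_log (msg) VALUES ('low-value row');
  COMMIT;  -- may be lost in a crash, but is never half-applied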
On Wed, Apr 7, 2010 at 3:57 PM, Craig James wrote:
> On 4/7/10 3:36 PM, Joshua D. Drake wrote:
>> My guess is that it is not CPU, it is IO and your CPU usage is all WAIT
>> on IO.
>>
>> To have your CPUs so flooded that they are the cause of an inability to
>> log in is pretty suspect.
>
> I thoug
On Wed, Apr 7, 2010 at 2:37 PM, Craig James wrote:
> Most of the time Postgres runs nicely, but two or three times a day we get a
> huge spike in the CPU load that lasts just a short time -- the load average
> jumps to 10-20. Today it hit 100. Sometimes days go by with no spike
> events.
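When one of those spikes hits, a snapshot of what the backends are doing is usually more telling than the load average alone. A minimal sketch against the 8.x-era pg_stat_activity layout (procpid and current_query were renamed in later releases):

  SELECT procpid, waiting, xact_start, current_query
    FROM pg_stat_activity
   WHERE current_query <> '<IDLE>'
   ORDER BY xact_start;  -- longest-running transactions first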
On Fri, Oct 9, 2009 at 9:45 AM, Alan McKay wrote:
> We've just discovered thanks to a new Munin plugin
> http://blogs.amd.co.at/robe/2008/12/graphing-linux-disk-io-statistics-with-munin.html
> that our production DB is completely maxing out in I/O for about a 3
> hour stretch from 6am til 9am
> Th
On Sat, Aug 29, 2009 at 1:46 AM, Greg Stark wrote:
> On Sat, Aug 29, 2009 at 5:20 AM, Luke Koops wrote:
>> RAID-5 can be much faster than RAID-10 for random reads and writes. It is
>> much slower than RAID-10 for sequential writes, but about the same for
>> sequential reads. For typical acce
On Wed, Jul 22, 2009 at 12:52 AM, Kelvin Quee wrote:
> I have been staring at *top* for a while and it's mostly been 40% in
> userspace and 30% in system. Wait is rather low and never ventures
> beyond 1%.
Certainly seems like you are CPU bound.
> My hardware is a duo core AMD Athlon64 X2 5000+,
On Fri, Jun 19, 2009 at 2:05 PM, Brian Cox wrote:
> David Rees [dree...@gmail.com] wrote:
>>
>> Along those lines, couldn't you just have the DB do the work?
>>
>> select max(ts_id), min(ts_id) from ... where ts_interval_start_time >=
>> ... and ...
>>
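The shape of the suggested query, with ts_table and the time bounds as hypothetical stand-ins for the elided pieces:

  SELECT min(ts_id), max(ts_id)
    FROM ts_table
   WHERE ts_interval_start_time >= '2009-06-19 00:00'
     AND ts_interval_start_time <  '2009-06-19 01:00';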
On Fri, Jun 19, 2009 at 1:05 PM, Brian Cox wrote:
> Thanks to all for the analysis and suggestions. Since the number of rows in
> an hour < ~500,000, brute force looks to be a fast solution:
>
> select ts_id from ... where ts_interval_start_time >= ... and ...
>
> This query runs very fast as does
On Sun, May 31, 2009 at 10:26 PM, S Arvind wrote:
> A question: we want to vacuum and reindex our ~50 most-used tables daily
> at a specific time. Is it best to have a function in postgres and call it
> from cron, or is there any other good way to run the two processes for
> specified tables at specif
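One wrinkle to flag before choosing the function route: VACUUM cannot run inside a transaction block, so it cannot be issued from within a plpgsql function. Driving per-table commands from cron via psql is the usual workaround; a sketch with a hypothetical table name:

  -- e.g. from cron:
  --   psql -d mydb -c 'VACUUM ANALYZE accounts; REINDEX TABLE accounts;'
  VACUUM ANALYZE accounts;  -- per-table vacuum + statistics refresh
  REINDEX TABLE accounts;   -- note: takes an exclusive lock while rebuilding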
On Thu, May 28, 2009 at 11:50 AM, Fabrix wrote:
> Monitoring (nmon, htop, vmstat) shows that everything is fine (memory, HD,
> eth, etc) except that the processors regularly climb to 100%.
What kind of load are you putting the server under when this happens?
> I can see that the processes are waiting
On Fri, Mar 27, 2009 at 10:30 AM, wrote:
> On Thu, 26 Mar 2009, Dave Cramer wrote:
>> So far using dd I am seeing around 264MB/s on ext3, 335MB/s on ext2 write
>> speed. So the question becomes what is the best filesystem for this drive?
>
> until the current mess with ext3 and fsync gets resolve
On Tue, Mar 24, 2009 at 6:48 PM, Scott Carey wrote:
> Your xlogs are occasionally close to max usage too -- which is suspicious at
> 10MB/sec. There is no reason for them to be on ext3, since they are a
> transaction log that syncs writes, so file system journaling doesn't mean
> anything. Ext2 th
On Fri, Feb 20, 2009 at 1:34 PM, Battle Mage wrote:
> The tps almost doubled, which is good, but I'm worried about the
> load. For my application, a load increase is bad and I'd like to keep it
> just like in 8.2.6 (a load average between 3.4 and 4.3). What parameters
> should I work w
On Tue, Feb 3, 2009 at 9:54 AM, Jeff wrote:
> Scalefactor 50, 10 clients: 900tps
>
> At scalefactor 50 the dataset fits well within memory, so I scaled it up.
>
> Scalefactor 1500: 10 clients: 420tps
>
> While some of us have arrays that can smash those numbers, that is crazy
> impressive for a pl
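For scale, each pgbench scale-factor unit is about 100,000 rows in the accounts table, roughly 15 MB on disk, which is why the two runs behave so differently. Rough arithmetic, plus a query to check the real size:

  -- scalefactor 50:    50 * ~15 MB ≈ 0.75 GB  -> fits in RAM
  -- scalefactor 1500: 1500 * ~15 MB ≈ 22 GB   -> mostly disk-bound
  SELECT pg_size_pretty(pg_database_size(current_database()));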
On Mon, Jan 26, 2009 at 12:27 PM, Jeff wrote:
> I'm quite excited about the feature. I'm still on 8.2 mostly because of the
> downtime of the dump & restore. I wrote up some plans a while back on doing
> the poor-mans parallel restore, but I haven't had the time to actually do
> it.
We use slon
On Mon, Jan 26, 2009 at 11:58 AM, Jeff wrote:
> On Jan 26, 2009, at 2:42 PM, David Rees wrote:
>> Lots of people have databases much, much bigger - I'd hate to imagine
>> having to restore from backup from one of those monsters.
>
> If you use PITR + rsync you can creat
On Mon, Jan 26, 2009 at 4:09 AM, Matthew Wakeling wrote:
> On Sun, 25 Jan 2009, Scott Marlowe wrote:
>>
>> More cores is more important than faster but fewer
>>
>> Again, more slower disks > fewer faster ones.
>
> Not necessarily. It depends what you are doing. If you're going to be
> running only
On Thu, Jan 22, 2009 at 1:27 PM, Ibrahim Harrani wrote:
> Version 1.93d       --Sequential Output-- --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
On Thu, Jan 15, 2009 at 2:36 PM, Bill Preston wrote:
> We are in Southern California.
> What I need is someone for when the SHTF again, so that if I can't handle
> it, I have some resource to get on the job right away. And it would help if
> they were a company that does this kind of thing, so that I can
On Tue, Jan 6, 2009 at 11:02 AM, Stefano Nichele wrote:
> BTW, why did you say I/O bound? Which parameters highlight that? Sorry for
> my ignorance
In addition to the percentage of time spent in wait as Scott said, you
can also see the number of processes which are blocked (b
On Tue, Dec 16, 2008 at 8:03 PM, Nimesh Satam wrote:
> We are trying to implement slony as a replication tool for one of our
> databases. The insert and update times have approximately doubled,
> making some of our important scripts slow.
What version of PostgreSQL are you running and on what
On Thu, Nov 6, 2008 at 4:03 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Thu, Nov 6, 2008 at 4:04 PM, Kevin Grittner <[EMAIL PROTECTED]> wrote:
>> "Scott Marlowe" <[EMAIL PROTECTED]> wrote:
>>> Without write barriers in my file system an fsync request will
>>> be immediately returned true, cor
On Thu, Nov 6, 2008 at 8:07 AM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Thu, Nov 6, 2008 at 8:47 AM, David Rees <[EMAIL PROTECTED]> wrote:
>>
>> In the case of the machines without a BBU on them, they are configured
>> to be in WriteBack, but are actually
On Thu, Nov 6, 2008 at 2:21 AM, Peter Schuller <[EMAIL PROTECTED]> wrote:
>> I also found that my write cache was set to WriteThrough instead of
>> WriteBack, defeating the purpose of having a BBU and that my secondary
>> server apparently doesn't have a BBU on it. :-(
>
> Note also that several RA
On Fri, Oct 31, 2008 at 4:14 PM, David Rees <[EMAIL PROTECTED]> wrote:
> Well, I'm pretty sure the delays are not checkpoint related. None of
> the slow commits line up at all with the end of checkpoints.
>
> The period of high delays occur during the same period of time ea
(Resending this, the first one got bounced by mail.postgresql.org)
On Wed, Oct 29, 2008 at 3:30 PM, David Rees <[EMAIL PROTECTED]> wrote:
> On Wed, Oct 29, 2008 at 6:26 AM, Greg Smith <[EMAIL PROTECTED]> wrote:
>> What you should do first is confirm
>> whether or not th
On Wed, Oct 29, 2008 at 6:26 AM, Greg Smith <[EMAIL PROTECTED]> wrote:
> The CentOS 4.7 kernel will happily buffer about 1.6GB of writes with that
> much RAM, and the whole thing can get slammed onto disk during the final
> fsync portion of the checkpoint. What you should do first is confirm
> whe
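A sketch of one way to do that confirmation: turn on checkpoint logging plus slow-statement logging and see whether the slow commits line up with checkpoint completion (values are illustrative, not a recommendation):

  -- in postgresql.conf (log_checkpoints requires 8.3+):
  --   log_checkpoints = on
  --   log_min_duration_statement = 100  -- log statements slower than 100 ms
  SELECT pg_reload_conf();  -- apply the change without a restart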
Hi,
I've got an OLTP application which occasionally suffers from slow
commit time. The process in question does something like this:
1. Do work
2. begin transaction
3. insert record
4. commit transaction
5. Do more work
6. begin transaction
7. update record
8. commit transaction
9. Do more work
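For concreteness, steps 2 through 8 sketched in SQL (table and column names are hypothetical):

  BEGIN;                                             -- steps 2-4
  INSERT INTO work_log (state) VALUES ('started');
  COMMIT;                                            -- occasionally slow
  -- ... do more work ...
  BEGIN;                                             -- steps 6-8
  UPDATE work_log SET state = 'done' WHERE id = 42;
  COMMIT;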