> You tested the correct branch, right? Which commit does "git rev-parse
> HEAD" show?
I applied the last two patches manually to PostgreSQL 9.2 Stable.
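For reference, confirming which commit and branch are actually checked out takes only standard git commands, nothing specific to these patches:

    git rev-parse HEAD                 # commit the working tree was built from
    git rev-parse --abbrev-ref HEAD    # name of the currently checked-out branch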
On 2013-12-05 17:46:44 +0200, Metin Doslu wrote:
> I tried your patches from the link below. As you suspected, I didn't see
> any improvements. I tested them on PostgreSQL 9.2 Stable.
You tested the correct branch, right? Which commit does "git rev-parse
HEAD" show?
But generally, as long as your profile hid
> You could try my lwlock-scalability improvement patches - for some
> workloads here, the improvements have been rather noticeable. Which
> version are you testing?
I tried your patches from the link below. As you suspected, I didn't see any
improvements. I tested them on PostgreSQL 9.2 Stable.
http://git.p
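A rough sketch of how a patch branch like this is typically tested (the remote URL and branch name below are placeholders, since the link above is cut off):

    git clone git://git.postgresql.org/git/postgresql.git
    cd postgresql
    git remote add patches <url-of-the-linked-repository>    # placeholder for the linked repo
    git fetch patches
    git checkout patches/<lwlock-scalability-branch>         # placeholder branch name
    ./configure --prefix=$HOME/pg-test && make -j8 && make install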
> You could try my lwlock-scalability improvement patches - for some
> workloads here, the improvements have been rather noticeable. Which
> version are you testing?
I'm testing with PostgreSQL 9.3.1.
On 2013-12-04 20:19:55 +0200, Metin Doslu wrote:
> - When we increased NUM_BUFFER_PARTITIONS to 1024, this problem
> disappeared on 8-core machines but came back on 16-core machines on
> Amazon EC2. Could it be related to PostgreSQL's locking mechanism?
You could try my lwlock-scalability improvement patches - for some
workloads here, the improvements have been rather noticeable. Which
version are you testing?
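For context, NUM_BUFFER_PARTITIONS is a compile-time constant, so experiments like the 1024 setting above require rebuilding the server. On 9.2/9.3 it should be defined in src/include/storage/lwlock.h:

    grep -rn "define NUM_BUFFER_PARTITIONS" src/include/storage/lwlock.h
    # expected on 9.2/9.3: "#define NUM_BUFFER_PARTITIONS  16" (stock default)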
> Didn't follow the thread from the start. So, this is EC2? Have you
> checked, with a recent enough version of top or whatever, how much time
> is reported as "stolen"?
Yes, this is EC2. "stolen" is occasionally reported as 1, but mostly as 0.
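For anyone reproducing this, steal time shows up as the "st" column at the end of vmstat's CPU block and as "%st" in top's CPU summary line:

    vmstat 1     # last cpu column, "st", is time stolen by the hypervisor
    top          # "%st" at the end of the "Cpu(s):" header line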
On 2013-12-04 16:00:40 -0200, Claudio Freire wrote:
> On Wed, Dec 4, 2013 at 1:54 PM, Andres Freund wrote:
> > All that time is spent in your virtualization solution. One thing to try
> > is to look on the host system, sometimes profiles there can be more
> > meaningful.
>
> You cannot profile th
> You could try HVM. I've noticed it fare better under heavy CPU load,
> and it's not fully-HVM (it still uses paravirtualized network and
> I/O).
I already tried with HVM (a cc2.8xlarge instance on Amazon EC2) and observed the
same problem.
On Wed, Dec 4, 2013 at 1:54 PM, Andres Freund wrote:
> On 2013-12-04 18:43:35 +0200, Metin Doslu wrote:
>> > I'd strongly suggest doing a "perf record -g -a ;
>> > perf report" run to check what's eating up the time.
>>
>> Here is one example:
>>
>> + 38.87% swapper [kernel.kallsyms] [k] hypercall_page
On 2013-12-04 18:43:35 +0200, Metin Doslu wrote:
> > I'd strongly suggest doing a "perf record -g -a ;
> > perf report" run to check what's eating up the time.
>
> Here is one example:
>
> + 38.87% swapper [kernel.kallsyms] [k] hypercall_page
> + 9.32% postgres [kernel.kallsyms] [k] hypercall_page
> I'd strongly suggest doing a "perf record -g -a ;
> perf report" run to check what's eating up the time.
Here is one example:
+ 38.87% swapper [kernel.kallsyms] [k] hypercall_page
+ 9.32% postgres [kernel.kallsyms] [k] hypercall_page
+ 6.80% postgres [kernel.kallsyms] [k] xen_
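Spelled out, the suggested profiling run looks roughly like this (the 30-second window is arbitrary; it just needs to cover the concurrent queries):

    perf record -g -a -- sleep 30    # system-wide sample with call graphs for 30s
    perf report                      # interactive breakdown of where CPU time went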
> Notice the huge %sy
> What kind of VM are you using? HVM or paravirtual?
This instance is paravirtual.
On 2013-12-04 14:27:10 -0200, Claudio Freire wrote:
> On Wed, Dec 4, 2013 at 9:19 AM, Metin Doslu wrote:
> >
> > Here are the results of "vmstat 1" while running 8 parallel TPC-H Simple
> > (#6) queries. Although there is no need for I/O, "wa" fluctuates between 0
> > and 1.
> >
> > procs ---
On Wed, Dec 4, 2013 at 9:19 AM, Metin Doslu wrote:
>
> Here are the results of "vmstat 1" while running 8 parallel TPC-H Simple
> (#6) queries. Although there is no need for I/O, "wa" fluctuates between 0
> and 1.
>
> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
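The command under discussion is just the stock vmstat sampling loop; the columns of interest here are "wa" (CPU time waiting on I/O) and "sy" (kernel time):

    vmstat 1     # one line per second; watch the us/sy/id/wa/st cpu columns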
> I think all of this data cannot fit in shared_buffers, you might want
> to increase shared_buffers to a larger size (not 30GB but close to your
> data size) to see how it behaves.
When I use a shared_buffers setting larger than my data size, such as 10 GB,
results scale nearly as expected, at least for this
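A minimal sketch of the suggested change (the 10GB figure comes from the message above; shared_buffers only takes effect after a server restart):

    # postgresql.conf
    shared_buffers = 10GB    # sized to cover the benchmark's working set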