On 08/21/2014 02:26 PM, Scott Marlowe wrote:
> I'm running almost the exact same setup in production as a spare. It
> has 4 of those CPUs, 256G RAM, and is currently set to use HT. Since
> it's a spare node I might be able to do some testing on it as well.
> It's running a 3.2 kernel right now. I c
> HT off is common knowledge for better benchmarking results
It's wise to use the qualifier 'for better benchmarking results'.
It's worth keeping in mind here that a benchmark is not the same as normal
production use.
For example, where I work we do lots of long-running queries in parallel over
On Thu, Aug 21, 2014 at 3:02 PM, Josh Berkus wrote:
> On 08/20/2014 07:40 PM, Bruce Momjian wrote:
>
>> I am also
>> unclear exactly what you tested, as I didn't see it mentioned in the
>> email --- CPU type, CPU count, and operating system would be the minimal
>> information required.
>
> Ooops!
On Thu, Aug 21, 2014 at 02:17:13PM -0700, Josh Berkus wrote:
> >> Actually, I don't know that anyone has posted the benefits of HT. Link?
> >> I want to compare results so that we can figure out what's different
> >> between my case and theirs. Also, it makes a big difference if there is
> >> an
On 08/20/2014 07:40 PM, Bruce Momjian wrote:
> On Wed, Aug 20, 2014 at 12:13:50PM -0700, Josh Berkus wrote:
>> On a read-write test, it's 10% faster with HT off as well.
>>
>> Further, from their production machine we've seen that having HT on
>> causes the machine to slow down by 5X whenever you g
On Thu, Aug 21, 2014 at 7:19 PM, Eli Naeher wrote:
> However, when I try to do a
> test self-join using it, Postgres does two seq scans across the whole table,
> even though I have indexes on both id and previous_stop_event:
> http://explain.depesz.com/s/ctck. Any idea why those indexes are not being used?
Oops, I forgot to include the test self-join query I'm using. It is simply:
SELECT se1.stop_time AS curr, se2.stop_time AS prev
FROM stop_event se1
JOIN stop_event se2 ON se1.previous_stop_event = se2.id;
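Worth noting: because this join reads every row of stop_event on both
sides, two seq scans feeding a hash join is normally the cheapest plan
the planner can pick, and the indexes only start to pay off once a
predicate narrows the input. A sketch of that, with the time-window
filter assumed for illustration rather than taken from the thread:

EXPLAIN ANALYZE
SELECT se1.stop_time AS curr, se2.stop_time AS prev
FROM stop_event se1
JOIN stop_event se2 ON se1.previous_stop_event = se2.id
WHERE se1.stop_time >= now() - interval '1 hour';
-- With a selective filter, index lookups on se2.id (and on stop_time,
-- if such an index exists) become cheaper than scanning everything.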
On Thu, Aug 21, 2014 at 11:19 AM, Eli Naeher wrote:
Upping work_mem did roughly halve the time, but after thinking about
Shaun's suggestion, I figured it's better to calculate this stuff once and
then store it. So here is how the table looks now:
Table "public.stop_event"
Column|T
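A minimal sketch of that precompute-and-store approach, assuming a
previous_interval column and the existing previous_stop_event link
(the actual column list is cut off above):

ALTER TABLE stop_event ADD COLUMN previous_interval interval;

-- One-off backfill: each event's interval is the gap to its predecessor.
UPDATE stop_event se
SET previous_interval = se.stop_time - prev.stop_time
FROM stop_event prev
WHERE se.previous_stop_event = prev.id;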
On 08/20/2014 06:14 PM, Mark Kirkwood wrote:
Notwithstanding the above results, my workmate Matt made an interesting
observation: the scaling graph for (our) 60 core box (HT off), looks
just like the one for our 32 core box with HT *on*.
Hmm. I know this sounds stupid and unlikely, but has any
On 08/21/2014 08:29 AM, Eli Naeher wrote:
With around 1.2 million rows, this takes 20 seconds to run. 1.2 million
rows is only about a week's worth of data, so I'd like to figure out a
way to make this faster.
Well, you'll probably be able to reduce the run time a bit, but even
with really go
On Thu, Aug 21, 2014 at 4:29 PM, Eli Naeher wrote:
> Clearly the bulk of the time is spent sorting the rows in the original
> table, and then again sorting the results of the subselect. But I'm afraid I
> don't really know what to do with this information. Is there any way I can
> speed this up?
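Sort cost like that is bounded by work_mem, so a session-local bump is
the quickest way to test whether the sorts are spilling to disk (the
256MB value below is illustrative, not from the thread):

-- Applies to the current session only; larger sort memory lets big
-- sorts run in memory instead of as "external merge" passes on disk.
SET work_mem = '256MB';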
I have a table called stop_event (a stop event is one bus passing one bus
stop at a given time for a given route and direction), and I'd like to get
the average interval for each stop/route/direction combination.
A few hundred new events are written to the table once every minute. No
rows are ever
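One way to express that average is with lag() over each
stop/route/direction partition; a sketch, with the column names
(stop_id, route_id, direction, stop_time) assumed rather than taken
from the actual schema:

SELECT stop_id, route_id, direction,
       avg(stop_time - prev_time) AS avg_interval
FROM (
    SELECT stop_id, route_id, direction, stop_time,
           lag(stop_time) OVER (
               PARTITION BY stop_id, route_id, direction
               ORDER BY stop_time) AS prev_time
    FROM stop_event
) s
WHERE prev_time IS NOT NULL
GROUP BY stop_id, route_id, direction;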
On 21/08/14 11:14, Mark Kirkwood wrote:
You didn't mention what cpu this is (or how many sockets etc); that
would be useful to know.
Just to clarify - while you mentioned that the production system was 40
cores, it wasn't immediately obvious that the same system was the
source of the measurements.