Re: Exclusive core for a process, is it reasonable?

2018-04-10 Thread John Hening
Thanks for your responses 




Re: Exclusive core for a process, is it reasonable?

2018-04-09 Thread Wojciech Kudla
John,

What I'm referring to is basically reducing the topological distance between
the relevant nodes (i.e. co-location, alternative network routes) or changing
the means/medium (copper vs fiber vs microwave). For instance, microwave has
gained a lot of popularity recently because radio waves propagate faster
through air than light does through fiber.
There are a lot of interesting articles on the topic; just Google it.
The transmission technology (e.g. InfiniBand) may also make a difference.
Hope this helps.





Re: Exclusive core for a process, is it reasonable?

2018-04-09 Thread John Hening
 

> Tangentially, there's more to gain from shaving off latency on network 
> paths than there is from affinitizing work to cores/dies. But that's 
> digressing from the OP. 
>

@Wojciech Kudla,

That is a digression, but a very interesting one. Can you point me to
somewhere I could read more about it? [Assuming you don't mean solutions
based on bypassing the kernel network stack in hardware.]



Re: Exclusive core for a process, is it reasonable?

2018-04-09 Thread Greg Young
Yep. Most of the ASIC stuff I have seen has been trivial algorithms where
speed was the most important thing.



-- 
Studying for the Turing test



Re: Exclusive core for a process, is it reasonable?

2018-04-09 Thread Wojciech Kudla
Some of the stuff I had a chance to work on managed to handle market data
in single-digit microseconds and trading in the low tens. That's Java/C++.
With modern-day hardware it would be extremely hard (and costly) to push it
much further.
I can easily imagine how going for ASIC and staying under 1 microsecond
produces an edge worth investing in.
Tangentially, there's more to gain from shaving off latency on network
paths than there is from affinitizing work to cores/dies. But that's
digressing from the OP.






Re: Exclusive core for a process, is it reasonable?

2018-04-09 Thread Greg Young
To be fair, many of the FPGA-based things have also moved to ASICs. You know
you are in for fun when an FPGA is too slow.




-- 
Studying for the Turing test



Re: Exclusive core for a process, is it reasonable?

2018-04-09 Thread Avi Kivity

Seriously, people are trading on ASICs?


The amount of effort going into this is astounding. I can't help 
thinking that it won't end well.







Re: Exclusive core for a process, is it reasonable?

2018-04-08 Thread Jean-Philippe BEMPEL
Hi John,

As always, it depends :)

In my previous job, we had a system processing orders in 100 µs. There,
thread affinity and core isolation were mandatory to achieve the SLA. The
measured difference was 2x to 4x.
But not all systems need this kind of tuning. If you are accessing a lot of
memory and it does not fit in the CPU caches most of the time, I don't think
it will make as big a difference as it did for us.
So measure, and see what the figures are. As Gil pointed out, it's relatively
easy to set up a test by pinning your critical threads by hand, with taskset
for example; a sketch of the code-level variant follows below.
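
For illustration, here is a minimal sketch of that kind of before/after test.
It assumes the net.openhft.affinity.AffinityLock API from the
Java-Thread-Affinity library linked in the original post is on the classpath;
the class name, the stand-in workload and the crude timing are only
illustrative, so treat the numbers with suspicion until the real critical path
is measured with a proper harness (e.g. JMH). A code-free variant of the same
experiment is to run the program twice, once under taskset -c <cpu>.

import net.openhft.affinity.AffinityLock;

public class PinningABTest {

    public static void main(String[] args) {
        // Baseline: the scheduler is free to move this thread between cores.
        long unpinnedNanos = timeWorkload();

        // Pin the current thread to one of the CPUs the library is allowed
        // to hand out, then repeat the measurement while pinned.
        AffinityLock lock = AffinityLock.acquireLock();
        long pinnedNanos;
        try {
            pinnedNanos = timeWorkload();
        } finally {
            lock.release();
        }

        System.out.printf("unpinned: %d ns, pinned: %d ns%n",
                unpinnedNanos, pinnedNanos);
    }

    // Stand-in workload: replace with the real critical path before trusting
    // any numbers.
    private static long timeWorkload() {
        long start = System.nanoTime();
        long acc = 0;
        for (int i = 0; i < 50_000_000; i++) {
            acc += i * 31L;
        }
        if (acc == 42) {
            System.out.println(acc); // defeat dead-code elimination
        }
        return System.nanoTime() - start;
    }
}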

Regards





Re: Exclusive core for a process, is it reasonable?

2018-04-08 Thread Martin Thompson
5+ years ago it was pretty common for folks to modify the Linux kernel or
run cut-down OS implementations when pushing the edge of HFT. These days
the really fast stuff is all in FPGAs in the switches. However, there is
still work done on isolating threads to their own exclusive cores. This is
often done by exchanges, or by those who want good, predictable performance
but do not necessarily need to be the fastest.

A simple way I look at it: you are either predator or prey. If you are a
predator, then you are most likely on FPGAs and doing some pretty advanced
stuff. If you are prey, then you don't want to be at the back of the herd
where you get picked off. For the avoidance of doubt: if you are not sure
whether you are prey or predator, then you are prey. ;-)




Re: Exclusive core for a process, is it reasonable?

2018-04-08 Thread Gil Tene
“Reasonable people adapt themselves to the world. Unreasonable people
attempt to adapt the world to themselves. All progress, therefore, depends
on unreasonable people.” ― George Bernard Shaw

To your question, though: there are plenty of tools available in Linux today
to control how cores are used across processes. E.g. between numactl, cpusets,
taskset, and isolcpus, you can shape the way the scheduler chooses which
cores are used by which processes and threads pretty well.
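
As a rough sketch of how those pieces can combine to approximate the
"exclusive core" from the original question without modifying the kernel: the
AffinityLock call below is from the OpenHFT library linked in the original
post (which CPU it hands out is governed by that library's own configuration),
the class and method names around it are only illustrative, and the
isolcpus/cpuset setup in the comments is the usual companion step done outside
the program, not something the library does for you.

import net.openhft.affinity.AffinityLock;

public class ExclusiveCoreWorker {

    public static void main(String[] args) {
        // One-time OS-level setup, outside this program: keep a CPU away from
        // general scheduling, e.g. boot with isolcpus=<n> or carve out an
        // exclusive cpuset, so nothing else gets load-balanced onto it.
        //
        // In-process: bind the critical thread to a reserved core so it is
        // the only application thread running there.
        AffinityLock lock = AffinityLock.acquireCore();
        try {
            runHotLoop();
        } finally {
            lock.release();
        }
    }

    private static void runHotLoop() {
        // Placeholder for the latency-critical work; a real system would
        // busy-spin or poll an input source here rather than block.
        while (!Thread.currentThread().isInterrupted()) {
            // poll here
        }
    }
}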




Exclusive core for a process, is it reasonable?

2018-04-08 Thread John Hening
Hello,

I've read about thread affinity and I see that it is popular in
high-performance libraries (for example
https://github.com/OpenHFT/Java-Thread-Affinity). OK, juggling a thread
between cores generally hurts performance, so it is reasonable to bind a
specific thread to a specific core.

*Intro*:
The obvious best case would be to make it possible for a process to own a
core [let's call it X] (on a multi-core CPU). I mean that the main thread of
the process would be the one and only thread executed on core X, so there
would be no problem with context switching and cache flushing [except for
system calls].
I know that this requires a special scheduler implementation, i.e. a
modification of the [Linux] kernel, and that it is not so easy, and so on.

*Question*:
But we know there are systems that need high performance, so getting rid of
context switches once and for all could be a solution. Why is there no such
solution? My suspicions are:

* it is pointless, the bottleneck is elsewhere [however, it is still
meaningful to use thread affinity]
* it is too hard, and too risky to get wrong
* there is no need
* forking your own Linux kernel doesn't sound like a good idea.

