I believe there is no definitive methodology; it is all experimental and
depends on the applications you are running and, as Valdis said, on the
changes you would like to make. Basically, when a paper offers a new
algorithm or design, the authors should have tested it on their own testbed.
They may repo
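In practice the experiment usually reduces to repeated timed runs of the
same workload on the baseline kernel and on the patched one, then comparing
the distributions. A minimal sketch of such a harness (the ./workload
command is a placeholder for whatever benchmark you choose):

/* Minimal repeated-run timing harness: time the same workload N times,
 * report mean/stddev, and run it once per kernel (baseline vs. patched).
 * Hypothetical sketch, not from any paper. Build: cc -O2 harness.c -lm
 */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define RUNS 10

int main(void)
{
    double t[RUNS], mean = 0.0, var = 0.0;
    struct timespec t0, t1;

    for (int i = 0; i < RUNS; i++) {
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (system("./workload") != 0) {   /* placeholder benchmark */
            fprintf(stderr, "workload failed\n");
            return 1;
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        t[i] = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        mean += t[i] / RUNS;
    }
    for (int i = 0; i < RUNS; i++)
        var += (t[i] - mean) * (t[i] - mean) / (RUNS - 1);
    printf("mean %.3f s, stddev %.3f s over %d runs\n", mean, sqrt(var), RUNS);
    return 0;
}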
Basically, I am looking for methodology guidelines for testing a number of
techniques from different papers myself and seeing what the overall
performance impact is. Are there guidelines for doing this sort of thing?
On Tue, Oct 16, 2018 at 3:19 AM wrote:
On Tue, 16 Oct 2018 01:23:45 +0800, Carter Cheng said:
> I am actually looking at some changes that litter the kernel with short
> code snippets and thus, according to papers I have read, can result in CPU
> overheads of around 48% when applied in userspace.
You're going to need to be more specific. Not
I am actually looking at some changes that litter the kernel with short
code snippets and thus, according to papers I have read, can result in CPU
overheads of around 48% when applied in userspace. I am curious how you
would best measure the impact of similar modifications (since obviously one
isn't always
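To make that concrete, in userspace I would measure the per-call cost of
such a snippet roughly like this (a rough sketch of my own; the check
inside the loop is only a stand-in for the snippets from the papers):

/* Rough userspace sketch of measuring the cost of a short inserted
 * snippet (e.g. a bounds/integrity check) on a hot path. Illustrative
 * only. Build: cc -O2 snippet_cost.c
 */
#include <stdio.h>
#include <time.h>

#define ITERS 100000000UL

static volatile unsigned long sink;   /* keeps the loop from being elided */

static unsigned long run_ns(int instrumented)
{
    struct timespec t0, t1;
    unsigned long x = 1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned long i = 0; i < ITERS; i++) {
        if (instrumented && (x & 0xff) == 0)   /* stand-in for the snippet */
            x ^= i;
        x = x * 2862933555777941757UL + 3037000493UL;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    sink = x;
    return (t1.tv_sec - t0.tv_sec) * 1000000000UL
           + (unsigned long)(t1.tv_nsec - t0.tv_nsec);
}

int main(void)
{
    unsigned long base = run_ns(0), inst = run_ns(1);
    printf("baseline %lu ns, instrumented %lu ns, overhead %.1f%%\n",
           base, inst,
           100.0 * ((double)inst - (double)base) / (double)base);
    return 0;
}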
On Mon, 15 Oct 2018 23:42:03 +0800, Carter Cheng said:
> I was wondering what are some good ways to assess the performance impact of
> kernel modifications. Are there some papers in the literature where this is
> done? Does one need to differentiate between CPU-bound and different types
> of I/O-bound processes, etc.?
Hi,
I was wondering what are some good ways to assess the performance impact of
kernel modifications. Are there some papers in the literature where this is
done? Does one need to differentiate between CPU-bound and different types
of I/O-bound processes, etc.?
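For example, I picture the two classes looking roughly like the toy
workloads below (a hypothetical sketch, not from any paper; reads from
/dev/zero stand in for real I/O, i.e. they mostly exercise the syscall
path):

/* Toy workloads illustrating the CPU-bound vs. I/O-bound distinction.
 * Hypothetical example. Build: cc -O2 workloads.c
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* CPU-bound: stays in userspace arithmetic, makes almost no syscalls,
 * so kernel-side changes should barely show up here. */
static unsigned long cpu_bound(unsigned long iters)
{
    unsigned long x = 1;
    for (unsigned long i = 0; i < iters; i++)
        x = x * 6364136223846793005UL + 1442695040888963407UL;
    return x;
}

/* Syscall-heavy: crosses the user/kernel boundary on every iteration,
 * so changes on the syscall/VFS path should show up strongly here. */
static void io_bound(unsigned long iters)
{
    char buf[4096];
    int fd = open("/dev/zero", O_RDONLY);
    if (fd < 0) { perror("open"); exit(1); }
    for (unsigned long i = 0; i < iters; i++)
        if (read(fd, buf, sizeof(buf)) < 0) { perror("read"); exit(1); }
    close(fd);
}

int main(void)
{
    printf("%lu\n", cpu_bound(100000000UL));  /* print to defeat DCE */
    io_bound(1000000UL);
    return 0;
}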
Regards,
Carter.