I don't see how lowering the frequency helps. If I lower it, then I'll
need to run longer to capture the same number of samples and reach a
given variance.
I've already tried lowering the size of each sample from 8 KiB to 512
bytes, but that only leaves room for 2 or 3 parent function calls,
which isn't enough for me.
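For reference, here is a sketch of the trade-off I mean, using perf's
`--call-graph dwarf,<size>` syntax (the program name and `-F 99`
frequency are just placeholders):

```shell
# Lower the per-sample user-stack dump from the 8 KiB default to 512
# bytes. Smaller dumps shrink perf.data, but the post-processing DWARF
# unwinder can only recover the few frames that fit in the dump, so
# deep call chains get truncated.
perf record --call-graph dwarf,512 -F 99 -- ./myprog

# Default dump size (8 KiB): larger perf.data, deeper call chains.
perf record --call-graph dwarf -F 99 -- ./myprog
```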

"don't use dwarf unwinding."
The main reason I'm trying to switch from Zoom to perf is that it
supports DWARF unwinding! That's very convenient, because otherwise
I'd need to compile with frame pointers to get stack traces. That
isn't the default and can slow down the program. As long as the DWARF
unwinding doesn't perturb the profilee (by doing it at the end), it
would be OK.
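To make the comparison concrete, this is the frame-pointer alternative
I'd like to avoid (assuming a C program `myprog.c`; gcc and clang omit
frame pointers at -O1 and above by default on x86-64):

```shell
# Rebuild with frame pointers kept, costing a register and a little
# speed, so the kernel's cheap fp walker can unwind at sample time.
gcc -O2 -fno-omit-frame-pointer -o myprog myprog.c

# Record with the frame-pointer unwinder instead of DWARF: samples are
# small (just a chain of return addresses), no stack dumps needed.
perf record --call-graph fp -F 99 -- ./myprog
perf report
```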


On Mon, Jan 5, 2015 at 9:39 PM, Andi Kleen <[email protected]> wrote:
> Yale Zhang <[email protected]> writes:
>
>> Perf developers,
>> I'm also very interested in reducing the size of the data files when
>> reporting call stacks. Currently, profiling 30s of execution with call
>
> The simplest way is to lower the frequency. Especially don't
> use adaptive frequency (which is the default). This also
> lowers overhead quite a bit. Also don't use dwarf unwinding.
> It is very expensive.
>
> There are some other ways.
>
> I suspect you would get most of what you want by just running
> a fast compressor (snappy or LZO) at perf record time. That would
> be likely a useful addition.
>
> -Andi
>
> --
> [email protected] -- Speaking for myself only
--
To unsubscribe from this list: send the line "unsubscribe linux-perf-users" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html