Thanks all for the review, merged into master.

Mon, Dec 6, 2021 at 23:26, Ivan Daschinsky <ivanda...@gmail.com>:

> You are not wrong, it is built from source every night, and on every TC run.
> I don't understand why the NUMA allocator cannot be treated the same way.
> Moreover, it is built using Maven, with a Maven plugin, and just needs gcc
> and libnuma-dev. All of these are already on the TC agents and the builds
> are ready. I don't see any difficulties in building it.
>
> But I still don't understand how to organize Anton's proposal. It seems
> that if we follow this way, we cannot release the allocator in 2.13.
>
> Mon, Dec 6, 2021, 23:18 Ilya Kasnacheev <ilya.kasnach...@gmail.com>:
>
>> Hello!
>>
>> Maybe I am wrong, but the ODBC installer is built from source and may be
>> improved from release to release.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Mon, Dec 6, 2021 at 20:41, Ivan Daschinsky <ivanda...@gmail.com>:
>>
>> > Only one reason -- nowadays almost all hardware platforms use NUMA.
>> >
>> > Another reason -- there is no release process for extensions.
>> >
>> >
>> > BTW, the Apache Ignite release is shipped with an ODBC binary installer
>> > for Windows, and nobody complains about it.
>> >
>> > But maybe we should listen to the others?
>> >
>> > пн, 6 дек. 2021 г., 19:49 Anton Vinogradov <a...@apache.org>:
>> >
>> > > Any reason to release the same cpp sources with each release?
>> > > Any reason to increase the number of requirements with each new release?
>> > > Any reason to increase release complexity and duration?
>> > > All answers are "definitely no".
>> > >
>> > > What we should do is release the cpp part once and use it as a
>> > > dependency. Extensions are a good location.
>> > >
>> > > On Mon, Dec 6, 2021 at 3:11 PM Zhenya Stanilovsky <arzamas...@mail.ru.invalid> wrote:
>> > >
>> > > >
>> > > >
>> > > > +1 with Ivan. Let's keep it in the core product, because it looks
>> > > > like inalienable functionality, and the release cycle of extensions
>> > > > is a little bit different.
>> > > >
>> > > >
>> > > >
>> > > > >Anton, I disagree.
>> > > > >
>> > > > >1. This should be released with the main distro.
>> > > > >2. This should not be abandoned.
>> > > > >3. There is no release process in ignite-extensions.
>> > > > >4. Everything is working now, and working well.
>> > > > >
>> > > > >
>> > > > >So let's not do this :)
>> > > > >
>> > > > >Mon, Dec 6, 2021 at 14:49, Anton Vinogradov <a...@apache.org>:
>> > > > >
>> > > > >> Let's move all GCC-related parts to ignite-extensions, release
>> > > > >> them, and use them as a Maven dependency.
>> > > > >>
>> > > > >>
>> > > > >> On Fri, Dec 3, 2021 at 1:08 PM Ivan Daschinsky <ivanda...@gmail.com> wrote:
>> > > > >>
>> > > > >> > OK, the TC suite is ready [1].
>> > > > >> > If there are no objections, I will merge it soon.
>> > > > >> >
>> > > > >> > A possible concern -- it is now required to install
>> > > > >> > build-essential and libnuma-dev in order to build Ignite on
>> > > > >> > 64-bit Linux. I suppose that this is not a big deal, but maybe
>> > > > >> > someone will object?
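A quick probe for these prerequisites might look like the sketch below; the package names assume Debian/Ubuntu agents, and the header path is the usual install location of libnuma-dev, so adapt as needed:

```shell
# Probe for the build prerequisites mentioned above.
# Assumes Debian/Ubuntu packaging: build-essential provides gcc,
# libnuma-dev installs the numa.h header.
gcc_status=$(command -v gcc >/dev/null 2>&1 && echo present || echo missing)
numa_status=$([ -e /usr/include/numa.h ] && echo present || echo missing)
echo "gcc: $gcc_status"
echo "libnuma-dev: $numa_status"
```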
>> > > > >> >
>> > > > >> >
>> > > > >> > [1] --
>> > > > >> > https://ci.ignite.apache.org/buildConfiguration/IgniteTests24Java8_NumaAllocator/?mode=builds
>> > > > >> >
>> > > > >> > Thu, Dec 2, 2021 at 12:03, Ivan Daschinsky <ivanda...@gmail.com>:
>> > > > >> >
>> > > > >> > > >> Our runs show about 7-10 speedup,
>> > > > >> > > Sorry, typo: 7-10% speedup.
>> > > > >> > >
>> > > > >> > > Thu, Dec 2, 2021 at 12:01, Ivan Daschinsky <ivanda...@gmail.com>:
>> > > > >> > >
>> > > > >> > >> Andrey, thanks!
>> > > > >> > >>
>> > > > >> > >> This allocator can be tested on every NUMA system.
>> > > > >> > >> Our runs show about a 7-10% speedup if we use the allocator
>> > > > >> > >> with the interleaved strategy + -XX:+UseNUMA.
>> > > > >> > >> But unfortunately our yardstick benches don't use offheap a
>> > > > >> > >> lot, usually not above one GB.
>> > > > >> > >> We are trying to do more benches with real data and will
>> > > > >> > >> share them, possibly at a meetup.
>> > > > >> > >>
>> > > > >> > >> AFAIK, GG lab servers are two-socket machines, aren't they?
>> > > > >> > >> So it is worth running benches with a lot of data on them,
>> > > > >> > >> using the allocator with the interleaved strategy (you can
>> > > > >> > >> skip specifying NUMA nodes; by default it will use all
>> > > > >> > >> available ones) and the -XX:+UseNUMA JVM flag.
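Put together, the suggested flags could look like the command line below. Only the -XX flags come from the thread; the heap sizes, classpath, and config file name are illustrative placeholders (the sketch only prints the command rather than launching a node):

```shell
# Assemble (but do not run) a bench invocation with the suggested JVM flags.
# -XX:+UseG1GC is the default collector on Java 11; shown explicitly since
# the NUMA-awareness discussion is G1-specific.
bench_cmd='java -XX:+UseG1GC -XX:+UseNUMA -Xms16g -Xmx16g \
  -cp "libs/*" org.apache.ignite.startup.cmdline.CommandLineStartup bench-config.xml'
echo "$bench_cmd"
```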
>> > > > >> > >>
>> > > > >> > >>
>> > > > >> > >>
>> > > > >> > >> Thu, Dec 2, 2021 at 11:48, Andrey Mashenkov <andrey.mashen...@gmail.com>:
>> > > > >> > >>
>> > > > >> > >>> Ivan,
>> > > > >> > >>>
>> > > > >> > >>> Great job. PR looks good.
>> > > > >> > >>>
>> > > > >> > >>> > This allocator in interleaved mode and passing
>> > > > >> > >>> > `-XX:+UseNUMA` flag to jvm show promising results on
>> > > > >> > >>> > yardstick benches. Technically, G1 is not a numa aware
>> > > > >> > >>> > collector for java versions less than 14, but allocation
>> > > > >> > >>> > of heap in interleaved mode shows good results even on
>> > > > >> > >>> > java 11.
>> > > > >> > >>>
>> > > > >> > >>> Can you share benchmark results?
>> > > > >> > >>> I'm not sure I'll have an Optane on my notebook in a
>> > > > >> > >>> reasonable time ;)
>> > > > >> > >>>
>> > > > >> > >>>
>> > > > >> > >>> On Thu, Dec 2, 2021 at 10:41 AM Ivan Daschinsky <ivanda...@gmail.com> wrote:
>> > > > >> > >>>
>> > > > >> > >>> > Semyon D. and Maks T. -- thanks a lot for the review.
>> > > > >> > >>> >
>> > > > >> > >>> > BTW, Igniters, I would appreciate any opinions and feedback.
>> > > > >> > >>> >
>> > > > >> > >>> > Mon, Nov 29, 2021 at 10:13, Ivan Daschinsky <ivanda...@apache.org>:
>> > > > >> > >>> >
>> > > > >> > >>> > > Hi, Igniters!
>> > > > >> > >>> > >
>> > > > >> > >>> > > It is no big secret that nowadays NUMA is quite common
>> > > > >> > >>> > > in multiprocessor systems, and this memory architecture
>> > > > >> > >>> > > should be treated in specific ways.
>> > > > >> > >>> > >
>> > > > >> > >>> > > Support for NUMA is present in many commercial and
>> > > > >> > >>> > > open-source products.
>> > > > >> > >>> > >
>> > > > >> > >>> > > I've implemented a NUMA-aware allocator for Apache
>> > > > >> > >>> > > Ignite [1]. It is a JNI wrapper around `libnuma` and
>> > > > >> > >>> > > supports different allocation options, e.g. interleaved,
>> > > > >> > >>> > > local, interleaved_mask, and so on. For more
>> > > > >> > >>> > > information, see [2], [3].
>> > > > >> > >>> > > This allocator in interleaved mode, combined with
>> > > > >> > >>> > > passing the `-XX:+UseNUMA` flag to the JVM, shows
>> > > > >> > >>> > > promising results on yardstick benches. Technically, G1
>> > > > >> > >>> > > is not a NUMA-aware collector for Java versions less
>> > > > >> > >>> > > than 14, but allocating the heap in interleaved mode
>> > > > >> > >>> > > shows good results even on Java 11.
>> > > > >> > >>> > >
>> > > > >> > >>> > > Currently, all the libraries and tools needed for
>> > > > >> > >>> > > building this module are available on the TC agents;
>> > > > >> > >>> > > setup of a specific test suite is in progress [4].
>> > > > >> > >>> > >
>> > > > >> > >>> > > So I am asking for a review of my patch.
>> > > > >> > >>> > >
>> > > > >> > >>> > > [1] -- https://issues.apache.org/jira/browse/IGNITE-15922
>> > > > >> > >>> > > [2] -- https://man7.org/linux/man-pages/man3/numa.3.html
>> > > > >> > >>> > > [3] -- https://man7.org/linux/man-pages/man2/mbind.2.html
>> > > > >> > >>> > > [4] -- https://issues.apache.org/jira/browse/IGNITE-15994
>> > > > >> > >>> >
>> > > > >> > >>> >
>> > > > >> > >>> > --
>> > > > >> > >>> > Sincerely yours, Ivan Daschinskiy
>> > > > >> > >>> >
>> > > > >> > >>>
>> > > > >> > >>>
>> > > > >> > >>> --
>> > > > >> > >>> Best regards,
>> > > > >> > >>> Andrey V. Mashenkov
>> > > > >> > >>>
>> > > > >> > >>
>> > > > >> > >>
>> > > > >> > >> --
>> > > > >> > >> Sincerely yours, Ivan Daschinskiy
>> > > > >> > >>
>> > > > >> > >
>> > > > >> > >
>> > > > >> > > --
>> > > > >> > > Sincerely yours, Ivan Daschinskiy
>> > > > >> > >
>> > > > >> >
>> > > > >> >
>> > > > >> > --
>> > > > >> > Sincerely yours, Ivan Daschinskiy
>> > > > >> >
>> > > > >>
>> > > > >
>> > > > >--
>> > > > >Sincerely yours, Ivan Daschinskiy
>> > > >
>> > > >
>> > > >
>> > > >
>> > >
>> >
>>
>

-- 
Sincerely yours, Ivan Daschinskiy
