[hpx-users] HPX 1.4.1 released

2020-02-25 Thread Simberg Mikael
Dear HPX users,

We have just released HPX 1.4.1. This release fixes problems found in the 1.4.0 
release. Among other things it fixes:


  *   Compilation issues on various platforms and compilers
  *   MPI finalization if HPX has not initialized MPI
  *   A few CMake configuration problems
  *   Installation of pdb files on Windows

The release can be downloaded from our release 
page or by checking out the 
1.4.1 tag. Please see our release 
notes
 for a full list of fixed issues and merged pull requests. If you have any 
questions, comments, or exploits to report you can reach us on IRC (#stellar on 
Freenode), or email us at hpx-users@stellar.cct.lsu.edu. We depend on your 
input!
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] HPX community survey

2020-02-12 Thread Simberg Mikael
This is just a reminder that if you haven't yet filled in the survey, we'd very 
much appreciate you doing so! We'll keep the survey open for approximately 
another two weeks.


Thanks!


From: hpx-users-boun...@stellar.cct.lsu.edu 
 on behalf of Simberg Mikael 

Sent: Monday, January 27, 2020 4:13:12 PM
To: hpx-users@stellar.cct.lsu.edu
Subject: [hpx-users] HPX community survey


Dear HPX users,


We've put together a short survey to get to know our userbase a bit better. 
Please take a few minutes to fill in the survey over here: 
https://docs.google.com/forms/d/e/1FAIpQLSfJfiQt8EUQxtIqFqDdWPhj70pWal1M7Brgp52FN1TGOpsUGw/viewform.
 Only the first question is compulsory, but the more you fill in the more you 
can help us! We'll keep the survey up for about a month, and then we'll of 
course share the results with you.


Thank you for your help!
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


[hpx-users] HPX community survey

2020-01-27 Thread Simberg Mikael
Dear HPX users,


We've put together a short survey to get to know our userbase a bit better. 
Please take a few minutes to fill in the survey over here: 
https://docs.google.com/forms/d/e/1FAIpQLSfJfiQt8EUQxtIqFqDdWPhj70pWal1M7Brgp52FN1TGOpsUGw/viewform.
 Only the first question is compulsory, but the more you fill in the more you 
can help us! We'll keep the survey up for about a month, and then we'll of 
course share the results with you.


Thank you for your help!
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] parallel_executor::post dominating APEX trace

2019-12-18 Thread Simberg Mikael
Kor,


I think you've just found a bug (all my fault...) at a very good time. There 
are indeed configuration options that affect how the scheduler steals tasks and 
it looks like I've set them to very inappropriate values. Stay tuned for a PR.


On the second point, you're probably seeing your hpx_main/main function as 
run_helper. Because of some changes in APEX, the task names in the OTF2 file get 
the first name of the task. We don't have a good solution for this yet. As a 
hack you could try doing this as the first thing in your main function (note: 
that's main if you include hpx_main.hpp or hpx_main if you include hpx_init.hpp 
or hpx_start.hpp):


hpx::util::annotate_function annotation("my_main");

hpx::this_thread::yield();


Once the task is rescheduled it should have the label "my_main".
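For reference, a complete minimal version of that hack might look roughly like the
sketch below (not code from this thread; it assumes the umbrella <hpx/hpx.hpp>
header provides hpx::util::annotate_function and hpx::this_thread::yield):

#include <hpx/hpx_main.hpp>
#include <hpx/hpx.hpp>

int main()
{
    // Re-annotate the initial task (shown as run_helper in the trace) and
    // yield once so the scheduler picks it back up under the new label.
    hpx::util::annotate_function annotation("my_main");
    hpx::this_thread::yield();

    // ... rest of the application ...
    return 0;
}

With hpx_init.hpp or hpx_start.hpp the same two lines would go at the top of
hpx_main instead.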


Mikael


From: hpx-users-boun...@stellar.cct.lsu.edu 
 on behalf of Jong, K. de (Kor) 

Sent: Wednesday, December 18, 2019 3:20:59 PM
To: hpx-users@stellar.cct.lsu.edu
Subject: Re: [hpx-users] parallel_executor::post dominating APEX trace

Hi Mikael,

Thank you for your detailed and helpful answer! It starts to make sense
to me now. My program almost behaves as I expect it should. I still have
two questions though. Maybe you or someone else can point me to the
right direction?

1. I run my program on a single node and expect that all threads receive
about the same number of same-sized tasks. The APEX trace in Vampir
shows that all threads start busy but after a while gaps appear on some
OS threads during which nothing seems to happen, while other threads are
still performing tasks. I would expect tasks to be more evenly
distributed and/or to be stolen from the task queues of other OS
threads. Is this assumption correct? Can I increase the tendency of the
scheduler to steal tasks to keep OS threads busy?

2. I perform scaling tests, and each time my tasks run in parallel there
is a serial 'run_helper' task that runs on a single OS thread. What is
this and can I somehow keep it out of my timings? Based on a quick look
at the HPX code I concluded that run_helper has to do with initializing
the HPX run time. But even if I run my tasks multiple times (from the
same running process), run_helper spends time before my tasks do.

I focus on a single node now (1-96 OS threads) and am not doing anything
too clever, I think. I don't tweak the bindings and scheduler yet.

Thanks!

Kor
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] parallel_executor::post dominating APEX trace

2019-12-17 Thread Simberg Mikael
Hi Kor,


hpx::parallel::execution::parallel_executor::post is what eventually gets 
called if you use hpx::apply (or future::then) to create tasks. post itself 
should not take up much time, even in extreme cases. It's hard to say for sure 
without more information about your application but there are at least a few 
possibilities:


  1.  You haven't annotated your tasks and your tasks end up with the default 
name set by the executor. Have you annotated your tasks and do you see them at 
all in your traces? (See the sketch after this list for how annotations look.)
  2.  You have annotated your tasks and we haven't fixed the annotations for 
all use cases. Are you actually using actions or just plain local functions? 
hpx::apply or hpx::async?
  3.  Your task size is too small for our overheads. What is a typical task 
size in your case? We typically recommend a task size of at least 1 ms to be on 
the safe side, but you can most likely go a bit smaller than that, especially 
if you don't have too many cores on your machine. I think if this were the case 
the symptoms would be a bit different though, so it's most likely not this.
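
As a purely illustrative sketch of points 1 and 3 above (not code from this
thread; the umbrella include and the numbers are assumptions), this is roughly
what annotating plain local tasks and checking their size looks like:

#include <hpx/hpx_main.hpp>
#include <hpx/hpx.hpp>

#include <chrono>
#include <cstdint>
#include <iostream>
#include <vector>

std::uint64_t work(std::uint64_t n)
{
    std::uint64_t acc = 0;
    for (std::uint64_t i = 0; i != n; ++i)
        acc += i * i;
    return acc;
}

int main()
{
    auto start = std::chrono::steady_clock::now();

    std::vector<hpx::future<std::uint64_t>> results;
    for (int i = 0; i != 100; ++i)
    {
        // The name given here is what APEX/Vampir displays instead of the
        // executor's default task name.
        results.push_back(hpx::async(
            hpx::util::annotated_function(&work, "work_task"), 10000000));
    }
    hpx::wait_all(results);

    std::chrono::duration<double> dt = std::chrono::steady_clock::now() - start;
    std::cout << "100 tasks took " << dt.count() << " s\n";
    return 0;
}

If the per-task time here drops well below a millisecond, HPX overheads
(including post) will start to show up prominently in the profile.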

Mikael

From: hpx-users-boun...@stellar.cct.lsu.edu 
 on behalf of Jong, K. de (Kor) 

Sent: Tuesday, December 17, 2019 4:14:51 PM
To: hpx-users@stellar.cct.lsu.edu
Subject: [hpx-users] parallel_executor::post dominating APEX trace

Hi list,

I am using HPX commit bbc3ad7 (1.4.0-rc2 + APEX linking fix) and APEX to
gain insights into the run time behavior of my program (and hopefully
improve it, based on that). The trace I am looking at shows that by far
most of the time is spent by

hpx::parallel::execution::parallel_executor::post

instead of in my actions. Maybe this makes complete sense. Can someone
maybe explain in what situations parallel_executor::post would take up a
lot of time?

Thanks!

Kor
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] HPX 1.4.0 release candidate 2

2019-12-10 Thread Simberg Mikael
Hi Kor,


You'll also want to export APEX_OTF2=1 to actually generate the OTF2 files. It 
should end up in OTF2_archive/APEX.otf2 in the directory from which you run the 
executable. Let us know if that doesn't work. Also keep in mind that to get 
meaningful names for your tasks you'll want to annotate them with e.g. 
hpx::util::annotated_function (wrap your functions at hpx::async/apply 
callsites).
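
For illustration only (the exact includes are an assumption; the umbrella header
is used for brevity), wrapping callables at hpx::async/hpx::apply/then call
sites looks roughly like this:

#include <hpx/hpx_main.hpp>
#include <hpx/hpx.hpp>

int main()
{
    // Fire-and-forget task with an explicit name.
    hpx::apply(hpx::util::annotated_function(
        [] { /* produce something */ }, "produce_input"));

    // Named task plus a named continuation.
    hpx::future<int> f = hpx::async(
        hpx::util::annotated_function([] { return 42; }, "compute"));
    hpx::future<int> g = f.then(hpx::util::annotated_function(
        [](hpx::future<int>&& r) { return r.get() + 1; }, "postprocess"));

    return g.get() == 43 ? 0 : 1;
}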


Mikael


From: hpx-users-boun...@stellar.cct.lsu.edu 
 on behalf of Jong, K. de (Kor) 

Sent: Tuesday, December 10, 2019 11:23:12 AM
To: hpx-users@stellar.cct.lsu.edu
Subject: Re: [hpx-users] HPX 1.4.0 release candidate 2

On 12/6/19 5:28 PM, Simberg Mikael wrote:
> APEX integration has also
> been updated and can be enabled with -DHPX_WITH_APEX=ON and
> -DHPX_WITH_APEX_TAG=develop. The latter option will be unnecessary for
> the final release.

I am interested in generating traces in OTF2 format to load into Vampir.
For that I am testing 1.4.0-rc2 with support for APEX and OTF2.

Building HPX with support for APEX works fine, great! I have also built
OTF2 (version 2.2) and passed in the APEX_WITH_OTF2 flag. I noticed that
otf2_listener.cpp.o was being built. I have no reason to believe the
resulting HPX does not have support for APEX+OTF2. Also:

./fibonacci --hpx:info | grep APEX
   HPX_WITH_APEX=ON
   HPX_WITH_APEX_NO_UPDATE=OFF

When I try this, no trace is printed, though:

APEX_SCREEN_OUTPUT=1 ./fibonacci

When I try this, a trace is printed:

APEX_SCREEN_OUTPUT=1 ./apex_fibonacci

I expected the first command to also print a trace. Is automatic
instrumentation of HPX with APEX maybe not working yet in 1.4.0-rc2?

Kor
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


[hpx-users] HPX 1.4.0 release candidate 2

2019-12-06 Thread Simberg Mikael
The second release candidate of the 1.4.0 release is now available. Please get 
it by cloning the 1.4.0-rc2 tag or by downloading it from GitHub: 
https://github.com/STEllAR-GROUP/hpx/releases/tag/1.4.0-rc2.


Since the previous release candidate we have fixed multiple build and 
configuration issues. In addition you can now enable the hpxMP option by 
passing -DHPX_WITH_HPXMP=ON as a CMake option. APEX integration has also been 
updated and can be enabled with -DHPX_WITH_APEX=ON and 
-DHPX_WITH_APEX_TAG=develop. The latter option will be unnecessary for the 
final release.


As usual please let us know about any issues that you encounter!
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


[hpx-users] HPX 1.4.0 release candidate 1

2019-11-14 Thread Simberg Mikael
Dear HPX users,

The first release candidate of HPX 1.4.0 is now available. Get it by cloning 
the repository and checking out the 1.4.0-rc1 tag or downloading an archive 
from our releases page: https://github.com/STEllAR-GROUP/hpx/releases.

This is a big release with many changes and improvements. Most relevant in 
terms of your builds is that we have refactored our CMake setup and continued 
the modularization efforts. We intend to make this a non-breaking change. While 
we believe we've sorted out most issues, we ask you to try out the release 
candidate and let us know of any issues that you find. As part of the 
modularization we have moved many headers around. The old header locations will 
still work for a couple of releases but will cause warnings in your builds. We 
encourage you to update your includes to the new header locations as soon as 
possible. If you can't update your includes just yet you can also turn off the 
warnings per module with the CMake option 
HPX_<module>_WITH_DEPRECATION_WARNINGS=OFF. Conversely, you can turn off 
the compatibility headers with HPX_<module>_WITH_COMPATIBILITY_HEADERS=OFF.


In the first release candidate the APEX and hpxMP integrations do not yet work.


Please let us know of any issues, small or big, that you might find by opening 
an issue on GitHub: https://github.com/STEllAR-GROUP/hpx/issues.


Thanks for your help!
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Link error building HPX with support for APEX

2019-06-13 Thread Simberg Mikael
I think the real problem here is that the latest HPX release actually pulls 
latest APEX develop, and not the tag we specify. Specifying a tag is supposed 
to avoid situations exactly like this but that doesn't work (master may 
sometimes not work until the dependencies have been updated in APEX, but that's 
okish. Actually, if we update dependencies in APEX first, tag it, and update 
the tag in HPX when a module is introduced even master won't be broken).

The workaround for 1.3.0 is to do as Kevin suggested, but with the 2.1.3 tag. 
For HPX master you'll need APEX develop.

I'll try to fix the way we fetch APEX, and we'll maybe make a patch release 
with that fix.

Mikael

From: hpx-users-boun...@stellar.cct.lsu.edu 
[hpx-users-boun...@stellar.cct.lsu.edu] on behalf of Hartmut Kaiser 
[hartmut.kai...@gmail.com]
Sent: Tuesday, June 04, 2019 5:47 PM
To: hpx-users@stellar.cct.lsu.edu; Biddiscombe, John A.
Subject: Re: [hpx-users] Link error building HPX with support for APEX

> Yeah, HPX changed build dependencies without warning, these are now fixed.

Yes, sorry guys. When we started with the modularization effort, we didn't
realize that APEX and hpxMP would require special handling (as those are
essentially modules in their own right).

Regards Hartmut
---
http://stellar.cct.lsu.edu
https://github.com/STEllAR-GROUP/hpx


> You'll need to add an extra option to the HPX config to pull the latest
> from APEX:
>
> -DHPX_WITH_APEX_NO_UPDATE=FALSE -DHPX_WITH_APEX_TAG=develop
>
> Thanks -
> Kevin
>
> > On Jun 4, 2019, at 1:24 AM, Biddiscombe, John A. 
> wrote:
> >
> >> I use the FetchContent module to integrate HPX into my project, which
> means my project is the main project. I think some paths in APEX'
> CMakeLists.hpx need to be adjusted, which I have done locally. Once I have
> the whole build working I can provide a patch or pull request with my
> edits.
> >
> > Please do submit a PR (to the APEX repo though if it needs fixes) - I
> thought I had fixed all those ages ago, but I guess some new ones have
> crept in.
> >
> 
> > Second, my build of HPX with support for APEX almost succeeds, but not
> completely (repro-case at bottom of this message):
> > <<<
> >
> > My local projects that build using HPX are also failing with the similar
> problems related to hpx_cache and also hpx/config.hpp include directory
> problems.
> > Can you confirm if you are using master branch, or a 1.3 release - the
> cache stuff was not present in the 1.3 release.
> >
> > Thanks
> >
> > JB
> >
> > /usr/bin/ld: cannot find -lhpx_cache
> > collect2: error: ld returned 1 exit status
> > _deps/hpx-build/src/CMakeFiles/hpx.dir/build.make:2769: recipe for
> > target 'lib/libhpx.so.1.3.0' failed
> > make[2]: *** [lib/libhpx.so.1.3.0] Error 1
> > CMakeFiles/Makefile2:1112: recipe for target
> > '_deps/hpx-build/src/CMakeFiles/hpx.dir/all' failed
> >
> > Building with HPX_WITH_APEX=OFF succeeds, building with HPX_WITH_APEX=ON
> > fails for lack of hpx_cache. I cannot find any reference to the hpx_cache
> > library in the HPX and APEX sources. It is mentioned in three files in
> > HPX' build directory, though:
> > ./CMakeFiles/Export/lib/cmake/HPX/HPXTargets.cmake
> > ./lib/cmake/HPX/HPXTargets.cmake
> > ./src/CMakeFiles/hpx.dir/link.txt
> >
> > The CMake scripts contain this snippet:
> >
> > set_target_properties(apex PROPERTIES
> >   INTERFACE_COMPILE_OPTIONS "-std=c++17"
> >   INTERFACE_LINK_LIBRARIES "hpx_cache"
> > )
> >
> > Does anyone know how I can make my build succeed?
> >
> > Thanks!
> >
> > Kor
> >
> >
> >
> > The simplest CMakeLists.txt with which I can recreate the issue is this
> one:
> >
> > # dummy/CMakeLists.txt
> > cmake_minimum_required(VERSION 3.12)
> > project(dummy LANGUAGES CXX)
> >
> > include(FetchContent)
> >
> > set(HPX_WITH_EXAMPLES OFF CACHE BOOL "")
> > set(HPX_WITH_TESTS OFF CACHE BOOL "")
> > set(HPX_WITH_APEX ON CACHE BOOL "")
> >
> > FetchContent_Declare(hpx
> > GIT_REPOSITORY https://github.com/STEllAR-GROUP/hpx
> > GIT_TAG 1.3.0
> > )
> >
> > FetchContent_GetProperties(hpx)
> >
> > if(NOT hpx_POPULATED)
> >
> > FetchContent_Populate(hpx)
> >
> > if(HPX_WITH_APEX)
> > # I think something like this should be done in
> > # APEX' CMakeLists.hpx
> > include_directories(
> > ${hpx_SOURCE_DIR}/libs/preprocessor/include
> > ${hpx_SOURCE_DIR}/apex/src/apex
> > ${hpx_SOURCE_DIR}/apex/src/contrib)
> >
> > endif()
> >
> > add_subdirectory(${hpx_SOURCE_DIR} ${hpx_BINARY_DIR})
> >
> > endif()
> > # / dummy/CMakeLists.txt
> >
> > The project can be built (until the above link error) like this:
> >
> > mkdir dummy/build
> > cd dummy/build
> > cmake ..
> > make
> > ___
> > hpx-users mailing list
> > hpx-users@stellar.cct.lsu.edu
> > https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
> >
>
> --
> Kevin Huck, PhD
> Research Associate / Computer Scientist
> 

Re: [hpx-users] C++20 Modules and HPX library modularization

2019-06-12 Thread Simberg Mikael
Hi Michael,

I'm one of the CSCS people Hartmut mentioned below. Sorry for not joining the 
discussion earlier. I've been away the last two weeks.

I don't have too much to add to the discussion at this point, other than: yes, 
there's definitely interest! The main goal of the modularization effort is to 
simply improve the structure of HPX, and not necessarily to enable C++ modules. 
But being able to use C++ modules might turn out to be a very nice bonus. And 
keeping a one-to-one correspondence between C++ modules and our current 
"modules" is most likely the only sane thing to do. Anything else will cause 
confusion and pain.

I haven't looked too much into the mechanisms and syntax of enabling C++ 
modules so your help would definitely be appreciated there. We'll be getting 
more modules (not sure if we need to call our current modules something other 
than modules to avoid confusion...) in the near future, so there'll be plenty 
to play around with. So far there's:
- A preprocessor module: this one is interesting because it only "exports" 
macros. As far as I understand macros can't be exported from C++ modules, so 
this would already be a first special case to deal with.
- A config module. Same thing as above.
- A cache module. This is the first module that actually implements functions 
and classes that are exported. This is likely the closest to a "typical" module 
we'll have.

The question is how much extra boilerplate we'd have to add to support 
importing and exporting modules. E.g. how we can avoid duplicating "imports" in 
the form of including headers and actual module imports. I think the modules in 
master (right now and in the near future) should be enough to figure out if and 
how C++ modules could be used in HPX.
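
To make the discussion a bit more concrete, below is a purely speculative sketch
of what a C++20 module interface unit for a cache-like module could look like;
the module name, namespace and class are illustrative assumptions, not existing
HPX code:

// Global module fragment: ordinary #includes (and any macros) still live here,
// since macros cannot be exported from a C++ module.
module;
#include <map>
#include <utility>

export module hpx.cache;    // hypothetical module name

export namespace hpx { namespace util { namespace cache {

    // Illustrative stand-in for an exported cache class.
    template <typename Key, typename Entry>
    class local_cache
    {
        std::map<Key, Entry> entries_;

    public:
        bool holds(Key const& k) const
        {
            return entries_.count(k) != 0;
        }

        void insert(Key const& k, Entry e)
        {
            entries_[k] = std::move(e);
        }
    };

}}}    // namespace hpx::util::cache

The open question from above remains how consumers would avoid duplicating
"imports" between such module imports and the existing header includes.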

Does this help at all? It might be easiest to continue on IRC if you have more 
questions about this.

Kind regards,
Mikael

From: hpx-users-boun...@stellar.cct.lsu.edu 
[hpx-users-boun...@stellar.cct.lsu.edu] on behalf of Hartmut Kaiser 
[hartmut.kai...@gmail.com]
Sent: Thursday, June 06, 2019 7:16 PM
To: 'Michael Levine'; 'HPX Users'
Subject: Re: [hpx-users] C++20 Modules and HPX library modularization

Michael,

> Thanks for the quick reply - unfortunately it was deemed spam and since my
> spam folder does not sync on my phone, I didn't see it until I forced an
> update.
>
> I'd certainly be interested and willing to contribute to this effort as
> well as to looking into a possible solution for GPGPUs, although I am
> going to need some guidance and help.
>
> I will respond to each topic in its appropriate thread for continuity.
>
> Given that the modularization of HPX seems to be the key to the successful
> integration of C++ modules, presumably, this should be fairly
> straightforward - the library module boundaries should probably correspond
> to the C++ module boundaries.
>
> Having said that, I will need someone to help me / review with me how to
> map the hpx modules into Modules TD (e.g. "super" - modules which just
> import other modules, which modules might be best broken into submodules,
> etc.)
>
> Is there anyone in the community who might be interested and able to
> provide some guidance?

The modularization effort is spearheaded by the guys at CSCS (Mikael and
Auriane, both should be subscribed here). They might be the best to contact in
order to coordinate things. Alternatively, they normally can be reached
through our IRC channel.

Thanks!
Regards Hartmut
---
http://stellar.cct.lsu.edu
https://github.com/STEllAR-GROUP/hpx


>
> Thanks,
> Michael
>
> --
> On May 30, 2019 7:50:57 a.m. "Hartmut Kaiser" 
> wrote:
>
> > Hey Michael,
> >
> >> I am wondering whether there are any plans in the short or mid-term
> >> to provide support in HPX for the new Modules TS which is expected to
> >> be part of C++20?
> >>
> >>
> >> While compile times are not really the main benefit of modules, I
> >> understand that we can expect to see some variable degree of
> >> improvement when using modules as compared to the existing approach.
> >>
> >>
> >> At a very early stage of the Modules TS, I took a quick look through
> >> the HPX codebase to see whether I thought it might be possible at all
> >> to try and add some conditional code to allow for possible modules TS
> >> support and was quickly discouraged.
> >>
> >>
> >> However, it also seems to me that the current modularization
> >> initiative might make it possible to export HPX as modules.
> >>
> >>
> >> Has anyone given this any consideration?
> >> Is this an ultimate goal in anyone's vision?
> >> Would an attempt to incorporate the features of the modules TS be
> >> welcomed by the community? At least, perhaps, as a proof-of-concept?
> >>
> >>
> >> I'd appreciate your thoughts and feedback
> >
> > This topic has not been discussed so far and I personally have not
> > looked much into C++20 modules. However, I believe that the current
> > modularization effort 

[hpx-users] HPX 1.3.0 released!

2019-05-23 Thread Simberg Mikael
The STE||AR Group is proud to announce the release of HPX 1.3.0!

This release focuses on performance and stability improvements. Make sure to 
read the full release 
notes
 to see all new and breaking changes. Thank you once again to everyone in the 
STE||AR Group and all the volunteers who have provided fixes, opened issues, 
and improved documentation.

Download the release from our download 
page, or GitHub 
page.

The highlights of this release are:

  *   Significant performance improvements. Thanks to improvements in the 
schedulers and executors most parallel algorithms have reduced overheads and 
perform better especially with small grain sizes.
  *   Many stability improvements. Most notably many issues reported by Clang 
sanitizers have been fixed.
  *   To improve usability in single-node usage, HPX now defaults to not 
turning on networking if running on a single node. This means that it is now 
possible to run multiple instances of HPX on a single node by default.
  *   We have added back single-page 
HTML
 documentation after the move to Sphinx. We also generate 
PDF 
documentation now.

For a complete list of new features and breaking changes please see our release 
notes.
 If you have any questions, comments, or exploits to report you can reach us on 
IRC (#stellar on Freenode), or email us at 
hpx-users@stellar.cct.lsu.edu. We value your input!
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] [Stellar] HPX 1.3.0 release candidate 1

2019-05-13 Thread Simberg Mikael
Dear all,

I've just tagged release candidate 2. Get it by checking out the `1.3.0-rc2` 
tag.

Patrick, if you can start the builds for Fedora again that would be great!

The biggest change is the addition of John's new numa_binding_allocator which 
we need to check is correctly disabled on platforms that don't support it.

Mikael

From: Patrick Diehl [pdi...@cct.lsu.edu]
Sent: Saturday, May 04, 2019 2:56 PM
To: Simberg  Mikael; hpx-users@stellar.cct.lsu.edu; 
hpx-de...@stellar.cct.lsu.edu; stel...@cct.lsu.edu
Subject: Re: [Stellar] HPX 1.3.0 release candidate 1

Dear HPX users,

I started a build on the Fedora build servers for future fedora 31 [0].

Most of the architectures were successful and we need to wait for arm
and aarch64.

Best,

Patrick

[0] https://koji.fedoraproject.org/koji/taskinfo?taskID=34612962

On 5/3/19 10:56 AM, Simberg  Mikael wrote:
> Dear HPX users,
>
> The first release candidate of HPX 1.3.0 is now available. Get it by
> cloning the repository and checking out the 1.3.0-rc1 tag.
>
> Please let us know about any problems that you find by opening an issue
> on GitHub
> <https://github.com/STEllAR-GROUP/hpx/issues>.
> Currently still known issues are:
> - A breaking API change in Boost small_vector. This means that the
> local_dataflow_boost_small_vector test does not build.
> - A potential regression with the latest Slurm (18.08). We are still
> investigating the cause for the regression.
>
> ___
> Stellar mailing list
> stel...@mail.cct.lsu.edu
> https://mail.cct.lsu.edu/mailman/listinfo/stellar
>

--
Patrick Diehl
Center for Computation and Technology
Louisiana State University
Digital Media Center
340 E Parker Blvd, Baton Rouge, LA 70803, USA
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


[hpx-users] HPX 1.3.0 release candidate 1

2019-05-03 Thread Simberg Mikael
Dear HPX users,

The first release candidate of HPX 1.3.0 is now available. Get it by cloning 
the repository and checking out the 1.3.0-rc1 tag.

Please let us know about any problems that you find by opening an issue on 
GitHub. Currently still known 
issues are:
- A breaking API change in Boost small_vector. This means that the 
local_dataflow_boost_small_vector test does not build.
- A potential regression with the latest Slurm (18.08). We are still 
investigating the cause for the regression.
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Regarding GSoC2019

2019-02-25 Thread Simberg Mikael
Hi,

The first thing you should do is have a look through our wiki on project ideas 
and writing successful proposals: 
https://github.com/STEllAR-GROUP/hpx/wiki/GSoC-2019-Project-Ideas,
 and https://github.com/STEllAR-GROUP/hpx/wiki/Hints-for-Successful-Proposals. 
The second thing would be to compile HPX (and Phylanx if relevant for the 
project) and play around with it. Try to write some small programs using HPX. 
Once you've done that (or while you're doing it) you might want to join our IRC 
channel (ste||ar on freenode) where we can discuss your interests and potential 
issues for you to solve.

Mikael

From: hpx-users-boun...@stellar.cct.lsu.edu 
[hpx-users-boun...@stellar.cct.lsu.edu] on behalf of MALLA SAI YASWANTH REDDY 
[msy...@iitbbs.ac.in]
Sent: Sunday, February 24, 2019 3:11 PM
To: hpx-users@stellar.cct.lsu.edu
Subject: [hpx-users] Regarding GSoC2019

Hello Everyone,
 my name is Yashwanth. I am a third year undergraduate from IIT Bhubaneswar. I 
am proficient in C++ and Python, and I have experience working with Machine 
Learning algorithms. I am planning to apply for GSoC 2019 with your 
organisation. I need help to get started, so please assign me some task/subgoal 
to start contributing and work along with you.

M. Yashwanth
msy...@iitbbs.ac.in


Disclaimer: This email and any files transmitted with it are confidential and 
intended solely for the use of the individual or entity to whom they are 
addressed. If you have received this email in error please notify the system 
manager. This message contains confidential information and is intended only 
for the individual named. If you are not the named addressee you should not 
disseminate, distribute or copy this e-mail. Please notify the sender 
immediately by e-mail if you have received this e-mail by mistake and delete 
this e-mail from your system. If you are not the intended recipient you are 
notified that disclosing, copying, distributing or taking any action in 
reliance on the contents of this information is strictly prohibited.
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Irc

2019-02-19 Thread Simberg Mikael
Hi,

I don't know for sure what's causing that error, but make sure you're on the 
correct network (freenode). In addition, you'll need to register to be allowed 
to join: https://freenode.net/kb/answer/registration.

Kind regards,
Mikael

From: hpx-users-boun...@stellar.cct.lsu.edu 
[hpx-users-boun...@stellar.cct.lsu.edu] on behalf of Abhishek Kashyap 
[abhishek.kasya...@gmail.com]
Sent: Tuesday, February 19, 2019 3:39 PM
To: hpx-users@stellar.cct.lsu.edu
Subject: [hpx-users] Irc

Hi
I am not able to connect to the IRC channel #ste||ar.
It gives the error "you are not an IRC operator", while for other organisations 
I am able to connect.
Please help
best
Abhishek

___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Google Summer of Code 2019

2019-01-15 Thread Simberg Mikael
Hi Rashul,

A good starting point is to simply get familiar with HPX, meaning compile HPX, 
look around the source code, and try to implement your favorite algorithm using 
HPX and see how well you can get it to scale.
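
For example, a first toy program along those lines (a sketch only; it assumes an
HPX installation found via CMake or pkg-config) could be the futurized fibonacci
that also ships as an HPX quickstart example:

#include <hpx/hpx_main.hpp>
#include <hpx/hpx.hpp>

#include <cstdint>
#include <iostream>

std::uint64_t fibonacci(std::uint64_t n)
{
    if (n < 2)
        return n;

    // Spawn one branch as an HPX task and compute the other one inline.
    hpx::future<std::uint64_t> lhs = hpx::async(&fibonacci, n - 1);
    std::uint64_t rhs = fibonacci(n - 2);
    return lhs.get() + rhs;
}

int main()
{
    std::cout << "fibonacci(20) = " << fibonacci(20) << std::endl;
    return 0;
}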

Make sure you join our IRC channel (#ste||ar on freenode) where you can ask 
questions if you get stuck. Once you've done the above we can try to find you a 
suitable issue to work on, and discuss what project you would be interested in.

Kind regards,
Mikael Simberg

From: hpx-users-boun...@stellar.cct.lsu.edu 
[hpx-users-boun...@stellar.cct.lsu.edu] on behalf of Rashul Chutani 
[rashul.chutani.mt...@maths.iitd.ac.in]
Sent: Monday, January 14, 2019 6:29 PM
To: hpx-users@stellar.cct.lsu.edu
Subject: [hpx-users] Google Summer of Code 2019

Dear Sir

I am Rashul Chutani , 1st year student of Indian Institute of
Technology, Delhi majoring in Btech and Mtech Mathematics and Computing.

To give a brief about myself, I am well versed in programming, with knowledge 
of C, C++, Python, HTML, CSS and Javascript. I have also acquired good knowledge 
of git and Linux. I have qualified the National Standard Examination in Physics 
and Chemistry, and was a KVPY scholar in both class 11th and 12th.
To show my web front-end development skills, I have made my website:
https://rashulchutani.github.io/

I have been following Ste||ar HPX for a long time, and I wish to contribute to 
this wonderful open source community. I saw the GSoC 2019 project ideas for the 
Ste||ar Group and HPX. Fortunately, there are a lot of ideas which match the 
knowledge I already have.

Also, I request you to please give me some assignment to start with, as I am new 
to this area. I would like to show my skills to whatever extent possible so that 
it benefits both me and the organization.

Thanks and regards

Rashul Chutani
Btech & Mtech
Mathematics and Computing
IIT Delhi
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] GSoC 2019

2018-12-18 Thread Simberg Mikael
Hi,

I'm forwarding this to the hpx-users mailing list to keep the discussion public 
and to let others chime in as well.

I'd suggest you start getting familiar with HPX itself by building it, running 
examples, trying to write some small programs of your own. Once you've done 
that you can start looking at the current functionality we have for writing csv 
files, both interface and implementation. Since this is most likely quite an 
easy project technically, there will be more freedom for you to choose 
something interesting to do with the output. The project description already 
has some ideas, but you're free to come up with your own ideas there.

Lastly, I suggest you join us on IRC (#ste||ar on freenode) or Slack (#hpx 
channel on the cpplang Slack) since it can be easier to discuss details there. 
The mailing list is also good. The important thing is we keep these discussions 
open so that they can benefit others interested in the same projects.

Kind regards,
Mikael


From: Mohita Bipin [mohitabi...@gmail.com]
Sent: Wednesday, December 19, 2018 8:01 AM
To: Simberg Mikael
Subject: GSoC 2019

Sir

I had seen a project proposal to augment CSV files under GSoC 2019 and I was 
hoping to get involved with this particular project.

I have a reasonable base in C++ and am familiar with Python and the basics of 
pandas, and I was hoping you could point me in the right direction and give me a 
starting point.

Hope to hear from you soon
Mohita Liza Bipin
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


[hpx-users] HPX 1.2.0 released!

2018-11-12 Thread Simberg Mikael
The STE||AR Group is proud to announce the release of HPX 1.2.0!


This release is the first in our more frequent release schedule. We are aiming 
to produce one release every six months in an effort to get new features and 
stable releases out to users more quickly. As a result this release is smaller 
than many previous releases, but nevertheless contains many important 
improvements. This release includes, among other things, performance improvements, a 
new implementation of hpx_main.hpp, scheduler hints, and many stability 
improvements. This release also removes many previously deprecated features. 
Make sure you read the full release 
notes
 to see which deprecated features were removed.


Thank you to everyone who has contributed from all over the world!


Download the release from our download 
page, or GitHub 
page.


The highlights of this release are:

  *   Thanks to the work of our Google Summer of Code student, Nikunj Gupta, we 
now have a new implementation of hpx_main.hpp on supported platforms (Linux, 
BSD and MacOS). This is intended to be a less fragile drop-in replacement for 
the old implementation relying on preprocessor macros. The new implementation 
does not require changes if you are using CMake or pkg-config (a minimal 
example follows after this list). The old behaviour can be restored by setting 
HPX_WITH_DYNAMIC_HPX_MAIN=OFF during CMake configuration. The implementation on 
Windows is unchanged.
  *   We have added functionality to allow passing scheduling hints to our 
schedulers. These will allow us to create executors that for example target a 
specific NUMA domain or allow for HPX threads to be pinned to a particular 
worker thread.
  *   We have significantly improved the performance of our futures 
implementation by making the shared state atomic.
  *   We have replaced Boostbook by Sphinx for our documentation. This means 
the documentation is easier to navigate with built-in search and table of 
contents. We have also added a quick start section and restructured the 
documentation to be easier to follow for new users. The latest stable 
documentation can always be found 
here.
  *   HPXMP is a portable, scalable, and flexible application programming 
interface using the OpenMP specification that supports multi-platform shared 
memory multiprocessing programming in C and C++. HPXMP can be enabled within 
HPX by setting HPX_WITH_HPXMP=ON during CMake configuration.
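
To illustrate the first bullet, a minimal sketch (not taken from the release
notes; the umbrella include is an assumption) of the drop-in hpx_main.hpp usage:

#include <hpx/hpx_main.hpp>
#include <hpx/hpx.hpp>

#include <iostream>

int main()
{
    // hpx::async works here because main() already runs as an HPX thread.
    hpx::future<int> f = hpx::async([] { return 6 * 7; });
    std::cout << f.get() << std::endl;
    return 0;
}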

For a complete list of new features and breaking changes please see our release 
notes.
 If you have any questions, comments, or exploits to report you can reach us on 
IRC (#stellar on Freenode), or email us at 
hpx-users@stellar.cct.lsu.edu. We value your input!
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Contributing to HPX

2018-10-25 Thread Simberg Mikael
Hi Ahmed,

One thing that you could work on is adding more (CDash) timing outputs to our 
benchmarks. John already started this with 
https://github.com/STEllAR-GROUP/hpx/pull/3421 and 
https://github.com/STEllAR-GROUP/hpx/issues/3422. I know this is not 
necessarily technically very challenging, but it is something we would 
appreciate and it would expose you to parts of HPX that you might not have seen 
before. Maybe you'll find a part of HPX that you'd be interested in working on, 
either unrelated to benchmarking or directly related by improving the 
performance of something in HPX.

If you think this is interesting a good starting point are the two links above. 
After that essentially everything in "tests/performance" could have timing 
output added (at least everything that benchmarks HPX performance). Some unit 
tests could also have timing output added (as John already started doing).
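
As a rough sketch of what such a timing output amounts to (this is not the
helper from the linked PR; the measurement name is made up), a benchmark can
time a region and print a CDash <DartMeasurement> line that CTest forwards to
the dashboard:

#include <hpx/hpx_main.hpp>
#include <hpx/hpx.hpp>

#include <chrono>
#include <iostream>

int main()
{
    auto start = std::chrono::steady_clock::now();

    // ... the benchmarked HPX code goes here ...
    hpx::future<void> f = hpx::async([] {});
    f.get();

    std::chrono::duration<double> dt = std::chrono::steady_clock::now() - start;

    // CTest captures standard output; CDash records this as a named metric.
    std::cout << "<DartMeasurement name=\"benchmark_time\" "
              << "type=\"numeric/double\">" << dt.count()
              << "</DartMeasurement>" << std::endl;
    return 0;
}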

Kind regards,
Mikael


From: hpx-users-boun...@stellar.cct.lsu.edu 
[hpx-users-boun...@stellar.cct.lsu.edu] on behalf of Ahmed Samir 
[asami...@hotmail.com]
Sent: Thursday, October 25, 2018 7:20 PM
To: hpx-users@stellar.cct.lsu.edu
Subject: Re: [hpx-users] Contributing to HPX

Patrick,

Great. Do you recommend a specific issue that I could start with? I took a look 
at the issues and found that I need someone to point me at the key things needed 
to start on any of them. So could you give me some pointers on an issue to get 
started?

Best,
Ahmed Samir

Sent from Mail for Windows 10


From: hpx-users-boun...@stellar.cct.lsu.edu 
 on behalf of Patrick Diehl 

Sent: Thursday, October 25, 2018 2:17:28 PM
To: hpx-users@stellar.cct.lsu.edu
Subject: Re: [hpx-users] Contributing to HPX

Ahmed,

I would recommend to look at the issues for HPX. One first step could be to 
understand one of the issues and start to work on it.

Best,

Patrick

On Thu, Oct 25, 2018, 3:55 AM Ahmed Samir <asami...@hotmail.com> wrote:
John,

Currently I am not working on a project in which I could integrate HPX. I’ll 
think about that.

It would be great if you have any ideas about small parts of HPX that I could 
work on without going too deeply into the HPX architecture. If you have any 
ideas, please send them to me.

Best,
Ahmed Samir

Sent from Mail for Windows 10


From: hpx-users-boun...@stellar.cct.lsu.edu
<hpx-users-boun...@stellar.cct.lsu.edu> on behalf of Biddiscombe, John A.
<biddi...@cscs.ch>
Sent: Wednesday, October 24, 2018 10:14:17 PM
To: hpx-users@stellar.cct.lsu.edu; hartmut.kai...@gmail.com

Subject: Re: [hpx-users] Contributing to HPX

Ahmed

I'm a little bit frightened by the idea of you working on hwloc+hpx - the 
reason is the same as the one that caused problems in gsoc.

* we don't have a well defined plan on what we want to do with hwloc yet. I 
would like to clean it up and make it better, but only because it is missing a 
couple of functions that I would like - but I don't really know hwloc that well 
either. (same as I didn't really know fflib)

If you are working on a problem and the tools you have don't quite solve the 
problem, then you can tweak the tools to work - for this reason I am looking at 
hwloc and asking myself - how do I find out which socket on the node is 
'closest' to the GPU? Should I create a thread pool for GPU management on socket 
0, or socket 1 (or 2, 3, 4, ...)? I will be working on a machine with 6 GPUs and 
2 processors. It is probably safe to assume that GPUs 0, 1, 2 are closest to 
socket 0, and 3, 4, 5 to socket 1 - but maybe it's 0, 2, 4 and 1, 3, 5 - I'd 
like to be able to query hwloc, get this info, and build it into our resource 
partitioner in such a way that the programmer can just say "which GPUs should I 
use from this socket" or vice versa.
So I know what I want - but I don't really have a plan yet, and usually one 
knocks up a bit of code, gets something that half works and is good enough, then 
tries to make it fit into the existing framework and realizes that something 
else needs to be tweaked to make it fit nicely with the rest of HPX. This 
requires a good deal of understanding of the HPX internals and how it all fits 
together.

So in summary, it's better for you to find a project that already interests you 
(part of your coursework?) and use HPX for that project - this gives you time to 
learn how HPX works and use it a bit - then extend it gradually as your 
knowledge grows. hwloc/topology is quite a low-level part of HPX and does 
require a bit of help. If you were here in the office next door to me, it'd be 
no problem - because you could get help any time, but working on your own is 
going to be tough.

I'm not saying don't do it (I did suggest it after all), but 

Re: [hpx-users] Segmentation fault with mpi4py

2018-10-23 Thread Simberg Mikael
Hi,

hopefully someone else can chime in on the MPI and Python side of things, but 
thought I'd comment shortly on the runtime suspension since I implemented it.

The reason for requiring only a single locality for runtime suspension is 
simply that I never tested it with multiple localities. It may very well 
already work with multiple localities, but I didn't want users to get the 
impression that it's a well-tested feature. So if this is indeed useful for you 
you could try removing the check (you probably already found it, let me know if 
that's not the case) and rebuilding HPX.

I suspect though that runtime suspension won't help you here since it doesn't 
actually disable MPI or anything else. All it does is put the HPX worker 
threads to sleep once all work is completed.
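
For context, runtime suspension on a single locality looks roughly like the
sketch below (the header names and the nullptr overload of hpx::start are
assumptions to check against your HPX version):

#include <hpx/hpx_start.hpp>
#include <hpx/hpx_suspend.hpp>
#include <hpx/hpx_finalize.hpp>
#include <hpx/include/apply.hpp>

int main(int argc, char* argv[])
{
    // Start the runtime without running an hpx_main.
    hpx::start(nullptr, argc, argv);

    // ... schedule and wait for HPX work ...

    hpx::suspend();    // worker threads go to sleep once all work is done
    // ... plain MPI / mpi4py communication happens here ...
    hpx::resume();     // worker threads wake up again

    // ... more HPX work ...

    // Shut the runtime down from an HPX thread and wait for it to stop.
    hpx::apply([]() { hpx::finalize(); });
    return hpx::stop();
}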

In this case there might be a problem with our MPI parcelport interfering with 
mpi4py. It's not entirely clear to me if you want to use the networking 
features of HPX in addition to MPI. If not you can also build HPX with 
HPX_WITH_NETWORKING=OFF which will... disable networking. This branch is also 
meant to disable some networking related features at runtime if you're only 
using one locality: https://github.com/STEllAR-GROUP/hpx/pull/3486.

Kind regards,
Mikael

From: hpx-users-boun...@stellar.cct.lsu.edu 
[hpx-users-boun...@stellar.cct.lsu.edu] on behalf of Vance, James 
[va...@uni-mainz.de]
Sent: Tuesday, October 23, 2018 4:38 PM
To: hpx-users@stellar.cct.lsu.edu
Subject: [hpx-users] Segmentation fault with mpi4py

Hi everyone,

I am trying to gradually port the molecular dynamics code Espresso++ from its 
current pure-MPI form to one that uses HPX for the critical parts of the code. 
It consists of a C++ and MPI-based shared library that can be imported in 
python using the boost.python library, a collection of python modules, and an 
mpi4py-based library for communication among the python processes.

I was able to properly initialize and terminate the HPX runtime environment 
from python using the methods in hpx/examples/quickstart/init_globally.cpp and 
phylanx/python/src/init_hpx.cpp. However, when I use mpi4py to perform 
MPI-based communication from within a python script that also runs HPX, I 
encounter a segmentation fault with the following trace:

-
{stack-trace}: 21 frames:
0x2abc616b08f2  : ??? + 0x2abc616b08f2 in 
/lustre/miifs01/project/m2_zdvresearch/vance/hpx/builds/gcc-openmpi-bench/install/lib/libhpx.so.1
0x2abc616ad06c  : hpx::termination_handler(int) + 0x15c in 
/lustre/miifs01/project/m2_zdvresearch/vance/hpx/builds/gcc-openmpi-bench/install/lib/libhpx.so.1
0x2abc5979b370  : ??? + 0x2abc5979b370 in /lib64/libpthread.so.0
0x2abc62755a76  : mca_pml_cm_recv_request_completion + 0xb6 in 
/cluster/easybuild/broadwell/software/mpi/OpenMPI/2.0.2-GCC-6.3.0/lib/libmpi.so.20
0x2abc626f4ac9  : ompi_mtl_psm2_progress + 0x59 in 
/cluster/easybuild/broadwell/software/mpi/OpenMPI/2.0.2-GCC-6.3.0/lib/libmpi.so.20
0x2abc63383eec  : opal_progress + 0x3c in 
/cluster/easybuild/broadwell/software/mpi/OpenMPI/2.0.2-GCC-6.3.0/lib/libopen-pal.so.20
0x2abc62630a75  : ompi_request_default_wait + 0x105 in 
/cluster/easybuild/broadwell/software/mpi/OpenMPI/2.0.2-GCC-6.3.0/lib/libmpi.so.20
0x2abc6267be92  : ompi_coll_base_bcast_intra_generic + 0x5b2 in 
/cluster/easybuild/broadwell/software/mpi/OpenMPI/2.0.2-GCC-6.3.0/lib/libmpi.so.20
0x2abc6267c262  : ompi_coll_base_bcast_intra_binomial + 0xb2 in 
/cluster/easybuild/broadwell/software/mpi/OpenMPI/2.0.2-GCC-6.3.0/lib/libmpi.so.20
0x2abc6268803b  : ompi_coll_tuned_bcast_intra_dec_fixed + 0xcb in 
/cluster/easybuild/broadwell/software/mpi/OpenMPI/2.0.2-GCC-6.3.0/lib/libmpi.so.20
0x2abc62642bc0  : PMPI_Bcast + 0x1a0 in 
/cluster/easybuild/broadwell/software/mpi/OpenMPI/2.0.2-GCC-6.3.0/lib/libmpi.so.20
0x2abc64cea17f  : ??? + 0x2abc64cea17f in 
/cluster/easybuild/broadwell/software/lang/Python/2.7.13-foss-2017a/lib/python2.7/site-packages/mpi4py/MPI.so
0x2abc59176f9b  : PyEval_EvalFrameEx + 0x923b in 
/cluster/easybuild/broadwell/software/lang/Python/2.7.13-foss-2017a/lib/libpython2.7.so.1.0
0x2abc5917879a  : PyEval_EvalCodeEx + 0x87a in 
/cluster/easybuild/broadwell/software/lang/Python/2.7.13-foss-2017a/lib/libpython2.7.so.1.0
0x2abc59178ba9  : PyEval_EvalCode + 0x19 in 
/cluster/easybuild/broadwell/software/lang/Python/2.7.13-foss-2017a/lib/libpython2.7.so.1.0
0x2abc5919cb4a  : PyRun_FileExFlags + 0x8a in 
/cluster/easybuild/broadwell/software/lang/Python/2.7.13-foss-2017a/lib/libpython2.7.so.1.0
0x2abc5919df25  : PyRun_SimpleFileExFlags + 0xd5 in 
/cluster/easybuild/broadwell/software/lang/Python/2.7.13-foss-2017a/lib/libpython2.7.so.1.0
0x2abc591b44e1  : Py_Main + 0xc61 in 
/cluster/easybuild/broadwell/software/lang/Python/2.7.13-foss-2017a/lib/libpython2.7.so.1.0
0x2abc59bccb35  : __libc_start_main + 0xf5 in /lib64/libc.so.6
0x40071e: ??? + 0x40071e in python
{what}: Segmentation fault
{config}:
  

[hpx-users] HPX 1.2.0 release candidate 1

2018-10-17 Thread Simberg Mikael
Dear HPX users,

The first release candidate of HPX 1.2.0 is now available! Get it by 
downloading it from 
https://github.com/STEllAR-GROUP/hpx/releases/tag/1.2.0-rc1, or by cloning the 
repository and checking out the tag:

git clone https://github.com/STEllAR-GROUP/hpx.git
git checkout 1.2.0-rc1

We've had a lot of important changes since the previous release. Please let us 
know about any problems that you find by opening an issue on GitHub: 
https://github.com/STEllAR-GROUP/hpx/issues.
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Gsoc 2019

2018-08-24 Thread Simberg Mikael
Hi,

I'm forwarding this to our mailing list so that others in our community can 
respond as well.

A good place to start is to have a look at our git repositories at 
https://github.com/STEllAR-GROUP, clone for example HPX and set up a 
development environment to get familiar with the codebase. We also have a page 
with project ideas for GSoC which you can find here: 
https://github.com/STEllAR-GROUP/hpx/wiki/GSoC-2018-Project-Ideas.

The mailing list archives may also contain some useful information given to and 
from previous GSoC students: http://envelope.cct.lsu.edu/pipermail/hpx-users/. 
You can also join the IRC channel (#ste||ar on freenode) and discuss with us 
over there.

Kind regards,
Mikael


From: Shubham Verma [shubhu...@gmail.com]
Sent: Friday, August 24, 2018 7:29 AM
To: Simberg Mikael
Subject: Gsoc 2019


 Hello sir,
I am a engineering student and I am interested in your organisation for doing 
some good project .so
I wanna take part in gsoc next year. What do you suggest I should do for doing 
better in project.
Thankyou
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


[hpx-users] Feedback on documentation (from GSoC students, new users, old users)

2018-03-28 Thread Simberg Mikael
Dear HPX users,

I'd like to ask you for feedback on our current documentation over on 
GitHub. We would really like 
to make it as easy as possible for you to get going with HPX but for that we 
need to hear your experiences.

I would be especially happy if GSoC students and new users who have recently 
been poking around HPX and who have their (possible) struggles or positive 
experiences fresh in mind would take a moment to comment on the linked issue.

Thank you in advance!

Kind regards,
Mikael Simberg
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


[hpx-users] HPX 1.1.0 Released!

2018-03-24 Thread Simberg Mikael
The STE||AR Group is proud to announce the release of HPX 1.1.0, 10 years after 
the first commit! This 
release contains 2300 commits since the previous release and has closed over 
150 issues. HPX 1.1.0 brings users full control over how HPX uses processing 
units, improvements to parallel algorithms and many other usability 
improvements. This release would not have been possible without the help of all 
the people who have contributed bug reports, testing, code and improvements to 
the documentation. Thank you!


Download the release from our download 
page or from our GitHub 
page.


The highlights of this release are:

  *   We have changed the way HPX manages the processing units on a node. We no 
longer implicitly bind all available cores to a single thread pool. The user 
has now full control over what processing units are bound to what thread pool, 
each with a separate scheduler. It is now also possible to create your own 
scheduler implementation and control what processing units this scheduler 
should use. We added the hpx::resource::partitioner that manages all available 
processing units and assigns resources to the used thread pools. The runtime, 
thread pools and individual threads can now be suspended and resumed 
independently. This functionality helps in running HPX concurrently with code 
that directly relies on OpenMP or MPI.
  *   We have continued to implement various parallel algorithms. HPX now 
almost completely implements all of the parallel algorithms as specified by the 
C++17 standard. We have also continued to implement these algorithms for the 
distributed use case (for segmented data structures, such as 
hpx::partitioned_vector).
  *   The parallel algorithms adopted for C++17 restrict the iterator 
categories usable with those to at least forward iterators. Our implementation 
of the parallel algorithms supported input iterators (and output iterators) as 
well by simply falling back to sequential execution. We have now made our 
implementations conforming by requiring at least forward iterators (see the 
sketch after this list).
  *   We have added a compatibility layer for std::thread, std::mutex, and 
std::condition_variable allowing for the code to use those facilities where 
available and to fall back to the corresponding Boost facilities otherwise.
  *   We have added a new launch policy, hpx::launch::lazy, which allows 
deferring the decision on what launch policy to use to the point of execution.
  *   We have added several improvements to how components can be constructed.
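
As a small sketch of the parallel algorithms mentioned above (1.1.0-era
namespaces assumed), here is a parallel for_each over a std::vector, whose
iterators satisfy the forward-iterator requirement:

#include <hpx/hpx_main.hpp>
#include <hpx/include/parallel_for_each.hpp>

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v(1000, 1);

    // Parallel execution policy; std::vector iterators are random access,
    // which satisfies the forward-iterator requirement.
    hpx::parallel::for_each(hpx::parallel::execution::par,
        v.begin(), v.end(), [](int& x) { x *= 2; });

    std::cout << v.front() << " ... " << v.back() << std::endl;
    return 0;
}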

For a complete list of new features and breaking changes please see our release 
notes.
 If you have any questions, comments, or exploits to report you can comment 
below, reach us on IRC (#stellar on Freenode), or email us at 
hpx-users@stellar.cct.lsu.edu. We value your input!
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


[hpx-users] HPX 1.1.0-rc1 available

2018-03-20 Thread Simberg Mikael
Dear HPX users,

The first release candidate HPX 1.1.0 is now available! Get it by downloading 
it from https://github.com/STEllAR-GROUP/hpx/releases/tag/1.1.0-rc1, or by 
cloning the repository and checking out the tag:

git clone https://github.com/STEllAR-GROUP/hpx.git
git checkout 1.1.0-rc1

Let us know about any issues that you find by opening an issue on GitHub: 
https://github.com/STEllAR-GROUP/hpx/issues.
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] [GSoC 2018] Histogram Performance Counter

2018-02-20 Thread Simberg Mikael
Hi Saurav,

I'd also be happy to hear more about HdrHistogram. It seems to me that the main 
feature is the HDR-ness. This means roughly the same as having exponentially 
spaced bins in the histogram?

More generally, if HdrHistogram offers compelling features over the one we 
already have, it is definitely a useful addition. Concurrency is in any case a 
must. If you haven't already seen the "More Arithmetic Performance Counters" 
and "Augment CSV Files" projects have a look at them as parts of those can 
(more likely should) be combined into one project. "More Arithmetic Performance 
Counters" has already been done by Hartmut in PR 2745, but more operations 
could potentially be useful (log, exp?). If you'd like to do more data analysis 
you should look at "Augment CSV Files". You could mix and match parts of these 
into a nice package.

Looking at the histogram implementation that Hartmut linked is a good place to 
start, as is the rest of the performance counter framework. That should give 
you a better idea of what we already have and what we might be lacking.
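
For reference, user code reads any such counter (a histogram counter included)
through the same client interface; a rough sketch, with the counter name and
headers as assumptions:

#include <hpx/hpx_main.hpp>
#include <hpx/include/performance_counters.hpp>

#include <cstdint>
#include <iostream>

int main()
{
    hpx::performance_counters::performance_counter counter(
        "/threads{locality#0/total}/count/cumulative");

    // get_value<T>() returns a future; .get() waits for the sampled value.
    std::cout << "cumulative HPX thread count: "
              << counter.get_value<std::int64_t>().get() << std::endl;
    return 0;
}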

Hop onto IRC (#ste||ar on freenode) if you have more detailed questions!

Kind regards,
Mikael

From: hpx-users-boun...@stellar.cct.lsu.edu 
[hpx-users-boun...@stellar.cct.lsu.edu] on behalf of Hartmut Kaiser 
[hartmut.kai...@gmail.com]
Sent: Monday, February 19, 2018 10:57 PM
To: hpx-users@stellar.cct.lsu.edu; 'Saurav Sachidanand'
Subject: Re: [hpx-users] [GSoC 2018] Histogram Performance Counter

Hey Saurav,

> My name is Saurav Sachidanand and I wish to participate in GSoC 2018.
> I'm intrigued by the Histogram Performance Counter project. I've
> previously worked with HdrHistogram [1], which is a histogram
> implementation that can record integer and float values with high range
> and precision, with fixed space and time costs. Implementations in
> several languages exist, but not in C++. The reference Java version [2]
> provides several more features, including a concurrent version of the
> histogram. Would implementing a generic C++ concurrent HdrHistogram
> performance counter, supporting all features from the Java version and
> utilizing HPX's APIs, be a useful addition?
>
> This idea came to mind because I participated in GSoC last year [3], where
> I built a Performance Co-Pilot instrumentation library in Rust, and I had
> to integrate HdrHistogram into the API [4].
>
> Any guidance and feedback will be greatly appreciated.

I don't know anything about the HdrHistogram you're referring to. Would you care 
to elaborate?

We have a histogram implementation in HPX 
(https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/util/histogram.hpp) which 
is currently used for a special performance counter in the parcel (message) 
coalescing layer. But this does not have to be used for a general purpose 
counter.

Regards Hartmut
---
http://boost-spirit.com
http://stellar.cct.lsu.edu

>
> Thanks,
> Saurav
>
> [1] - https://hdrhistogram.github.io/HdrHistogram/
> [2] - https://github.com/HdrHistogram/HdrHistogram
> [3] -
> https://summerofcode.withgoogle.com/archive/2017/projects/4793296782098432/
> [4] - https://github.com/performancecopilot/hornet#histogram

___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


[hpx-users] HPX 1.1.0 preparation

2018-01-09 Thread Simberg Mikael
Dear HPX users and developers,

We are preparing to release HPX 1.1.0 and are aiming for a release candidate by 
January 31st and a release two weeks later on February 14th. Changes still go 
into master as usual until the release candidate.

We still have quite a few open issues for the release: 
https://github.com/STEllAR-GROUP/hpx/issues?page=1&q=is%3Aopen+is%3Aissue+milestone%3A1.1.0.
Please have a look and comment if you can work on any of them, or if you think 
they are not relevant anymore or can be moved to future releases.

You can help us test HPX by running your applications with the latest master 
branch. If you encounter any issues report them here: 
https://github.com/STEllAR-GROUP/hpx/issues.

If there are examples or benchmarks that you'd like to see added to or removed 
from HPX for the release comment here: 
https://github.com/STEllAR-GROUP/hpx/issues/3049.

Finally, if you have been on the fence about contributing to HPX, I'd like to 
remind you that every little helps. Writing documentation is just as important 
as implementing new features!

Kind regards,
Mikael Simberg

___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users