FYI -- GitHub announced changes to their pricing plans today
(https://github.com/blog/2164-introducing-unlimited-private-repositories).
They have a new "per user" pricing model, which makes sense for some
organizations.
It does *not* make sense for us -- we are paying $300/year (i.e., $25/mo)
If there is a size_t vs PRIsize_t mismatch, then it is probably a result of
less-than-perfect integration in the Clang private headers (stddef.h in
particular).
I will make a point not to trust Clang for this class of warning in the
future.
What about the sharedfp warnings such as the following:
I took a look at this, and the problem isn’t in the print statements. The
problem is that PRIsize_t is being incorrectly set to “unsigned long” instead
of something correct for the -m32 directive in that environment.
> On May 6, 2016, at 9:48 AM, Paul Hargrove wrote:
:+1: This is helpful. Thanks!
On Wed, May 11, 2016 at 10:19 AM, Ralph Castain wrote:
Hi folks
For PRs on the master, I have added two labels:
Target: 2.x
Target: 1.10
These are intended to mark that this PR should be ported to the target branch
once it has been committed to the master. I’m hoping it helps us to avoid
missing PRs that address problems found on release
The parameters on the webpage are for ompio in 2.x. For 1.10 it's a bit
more complicated: you would have to set the number of aggregators for
each fcoll component separately, e.g.
--mca fcoll_two_phase_num_io_procs x
I would however start without setting the number of aggregators, since
we
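A command-line sketch of the per-component tuning described above. The parameter name follows Open MPI's framework_component_param naming convention and the application name is hypothetical; these are usage fragments, so verify the actual tunables with ompi_info before relying on them.

```shell
# 1.10: each fcoll component carries its own aggregator knob, e.g. for the
# two_phase component (name assumed from the framework_component_param
# convention; confirm with ompi_info):
mpirun --mca io ompio --mca fcoll_two_phase_num_io_procs 4 ./my_io_app

# Show the parameters a given fcoll component actually exposes:
ompi_info --param fcoll two_phase
```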
Hi,
I am looking at the online FAQ for ompio
which seems to show that the following parameters are defined:
io_ompio_num_aggregators
io_ompio_call_timing
But on OMPI version 1.10.1 or 1.8.3:
1: setting mpirun -mca io ompio -mca io_ompio_coll_timing_info
does not appear to produce a summary.
2:
To use ompio:
mpirun --mca io ompio ...
To use romio (v2.x):
mpirun --mca io romio314 ...
To use romio (v1.10):
mpirun --mca io romio ...
Cheers,
Gilles
On Wednesday, May 11, 2016, Michael Rezny wrote:
Hi Sreenidhi,
you need to specify --collective as an input parameter to mpi_tile_io
kindest regards
Mike
On 11 May 2016 at 12:01, Sreenidhi Bharathkar Ramesh <
sreenidhi-bharathkar.ram...@broadcom.com> wrote:
Thank you so much for the details.
1. while running the "Tile I/O" benchmark, I see the following message:
$ mpirun -np 28 ./mpi-tile-io --nr_tiles_x 7 --nr_tiles_y 4 --sz_tile_x 100
--sz_tile_y 100 --sz_element 32 --filename file1g
...
# collective I/O off
How do I enable collective I/O?
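Per Mike's reply earlier in the thread, mpi_tile_io takes --collective as an input parameter. A sketch of the same run with collective I/O switched on (paths and tile counts copied from the command above; the flag's exact behavior should be checked against the benchmark's own help output):

```shell
# Same run as above, with collective I/O enabled via --collective
# (flag name taken from Mike's reply in this thread):
mpirun -np 28 ./mpi-tile-io --collective \
    --nr_tiles_x 7 --nr_tiles_y 4 --sz_tile_x 100 --sz_tile_y 100 \
    --sz_element 32 --filename file1g
```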
2.