Dear Sir or Madam,

(We apologize if you receive multiple copies of this message)

============================================================

Recent Advances in the Message Passing Interface
20th European MPI Users' Group Meeting (EuroMPI 2013)

EuroMPI 2013 is being held in cooperation with SIGHPC 

Madrid, Spain, September 15-18, 2013 

www.eurompi2013.org

BACKGROUND AND TOPICS 
------------------------------- 

EuroMPI is the preeminent meeting for users, developers and researchers to 
interact and discuss new developments and applications of message-passing 
parallel computing, in particular in and related to the Message Passing 
Interface (MPI). The annual meeting has a long, rich tradition, and the 20th 
European MPI Users' Group Meeting will again be a lively forum for discussion 
of everything related to usage and implementation of MPI and other parallel 
programming interfaces. Traditionally, the meeting has focused on the efficient 
implementation of aspects of MPI, typically on high-performance computing 
platforms, benchmarking and tools for MPI, shortcomings and extensions of MPI, 
parallel I/O and fault tolerance, as well as parallel applications using MPI. 
The meeting is open to other topics, in particular application experience 
and alternative interfaces for high-performance heterogeneous, hybrid, 
distributed memory systems. 

The meeting will feature contributed talks on selected, peer-reviewed 
papers, invited expert talks covering upcoming and future issues, a vendor 
session where selected vendors will present their new developments in hybrid 
and heterogeneous cluster and high-performance architectures, a poster session, 
and a tutorial day. 

TUTORIALS
------------------------------- 

- Understanding applications performance with Paraver

Writing parallel applications that make good use of the available resources is 
not an easy task. Performance analysis tools support developers in evaluating, 
tuning, and optimizing their codes. In this tutorial we will describe the 
performance tools developed at BSC (www.bsc.es/performance_tools), an 
open-source project that aims not only to detect performance problems but also 
to understand an application's behavior. The key component is Paraver, a 
trace-based performance analyzer offering great flexibility to explore the 
collected data. The Dimemas simulator can be used to predict the application's 
behavior under different scenarios. Equipped with such tools, the analyst can 
generate and validate hypotheses, following the trails provided by the 
execution trace.
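
For illustration only (this is our sketch, not tutorial material), the toy MPI 
program below has a deliberate load imbalance: higher ranks do more work before 
a barrier. In a timeline view of a trace-based analyzer such as Paraver, the 
lower ranks would show up as idling at the barrier.

    /* Toy MPI program with a deliberate load imbalance (illustrative only). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        double x = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Work grows with the rank number: a classic imbalance pattern. */
        long iters = 1000000L * (rank + 1);
        for (long i = 0; i < iters; i++)
            x += 1.0 / (double)(i + 1);

        double t0 = MPI_Wtime();
        MPI_Barrier(MPI_COMM_WORLD);   /* low ranks wait here */
        double wait = MPI_Wtime() - t0;

        printf("rank %d of %d waited %.3f s at the barrier (x=%.2f)\n",
               rank, size, wait, x);

        MPI_Finalize();
        return 0;
    }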

- Using Coprocessors in MPI programs

Implementing a high-performance version of the Message Passing Interface 
(MPI) 2.2 specification on multiple fabrics, the Intel® MPI Library 4.1 
focuses on making applications perform better on IA-based clusters. The ease 
of use of Intel® MPI and of analysis tools like the Intel® Trace Analyzer and 
Collector is available for both the Intel® Xeon and Intel® Many Integrated 
Core (MIC) architectures. The Intel® MPI Library supports different 
programming models on Xeon Phi(TM) coprocessors, and we will demonstrate these 
models with examples, in addition to the basics of debugging Intel® MPI codes 
on Xeon Phi(TM) coprocessors.

In this tutorial, as a starting point, you will learn how a basic MPI program 
can utilize the card as a coprocessor by leveraging the offload model where all 
MPI ranks are run on the main Xeon host, and the application utilizes offload 
directives to run on the Intel Xeon Phi coprocessor card. Well-known threading 
technologies, such as OpenMP, can be used to implement the offload regions. 
We'll then focus on how natively compiled MPI applications can be executed 
directly on the card. The final method explored will emphasize how to run on a 
mix of architectures by distributing the ranks across both the coprocessor 
card and the host node. As part of the final segment, we'll also discuss how 
to analyze the load balance of an MPI application that runs in such a 
heterogeneous environment.
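
As a rough sketch of the offload model described above (ours, not the 
tutorial's; it assumes an Intel compiler with offload and OpenMP support, and 
the sizes are illustrative), every MPI rank runs on the Xeon host and offloads 
one OpenMP compute region to the coprocessor, while MPI communication stays on 
the host:

    /* Sketch of the offload model: MPI ranks on the host, compute offloaded
     * to the Xeon Phi via the Intel compiler's offload pragma (assumption:
     * built with an Intel compiler that supports offload and OpenMP). */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000000

    int main(int argc, char **argv)
    {
        int rank;
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int i = 0; i < N; i++)
            a[i] = rank + i * 1e-6;

        /* Offload to coprocessor 0; in/out clauses move the arrays. */
        #pragma offload target(mic:0) in(a : length(N)) out(b : length(N))
        {
            #pragma omp parallel for
            for (int i = 0; i < N; i++)
                b[i] = a[i] * a[i];
        }

        double local = 0.0, global = 0.0;
        for (int i = 0; i < N; i++)
            local += b[i];

        /* MPI communication happens on the host, as in the offload model. */
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("global sum = %f\n", global);

        free(a);
        free(b);
        MPI_Finalize();
        return 0;
    }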


KEYNOTE SPEAKERS
------------------------------- 

- Prof. Alok N. Choudhary
Exascale Systems, Big Data and Knowledge Discovery - Challenges and 
Opportunities

The primary goal of exascale systems is to accelerate scientific discoveries 
and engineering innovation, yet the impact of exascale will not only be 
measured by how fast a single simulation is performed, but rather by the speed 
and acceleration on overall knowledge discoveries. Modern experiments and 
simulations involving satellites, telescopes, high-throughput instruments, 
imaging devices, sensor networks, accelerators, and supercomputers yield 
massive amounts of data. Processing, mining, and analyzing this data 
effectively and efficiently will be a critical component of the knowledge 
discovery process, as we can no longer rely upon traditional ways of dealing 
with the data due to its scale and speed. Interestingly, an exascale system can 
be thought of as another instrument generating "big data". But, unlike most 
other instruments, such a system also presents an opportunity for big data 
analytics. Thus the fundamental question is: what are the challenges and 
opportunities in making exascale systems an effective platform not only for 
performing traditional simulations, but also for data-intensive and data-driven 
computing that accelerates time to insight? This has implications for how 
computing, communication, analytics, and I/O are performed. This talk will 
address these emerging challenges and opportunities.


- Prof. Alex Ramirez
The Mont-Blanc approach towards Exascale

The Mont-Blanc project is developing a European Exascale approach that 
leverages the strengths of the European embedded industry. We believe we can 
build on 
commodity components coming from the energy-efficient embedded market to deploy 
a massively parallel system capable of achieving Exaflop performance at a lower 
cost and power consumption than other alternatives. This talk will describe the 
Mont-Blanc philosophy, the architecture of our first prototype system, and the 
programming challenges involved in such massively parallel systems.


- Prof. Jesper Larsson Träff
Unique Features of MPI: Collective Operations on Structured Data

MPI, the Message-Passing Interface, owes a large part of its pervasiveness to 
the orthogonality and completeness of the specification; another large part to 
the existence of efficient implementations of the full standard. We examine key 
concepts of the collective communication model of MPI (including extensions in 
the MPI 3.0 specification) and their relations to the derived datatype 
mechanism for describing structured application data. Together with the 
mechanisms for creating process subsets, these models offer still-unreaped 
descriptive and performance advantages for applications and libraries, while 
at the same time posing significant implementation challenges.
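
As a small illustration of the combination the abstract refers to (our sketch, 
not the speaker's), a derived datatype describing strided data can be passed 
directly to a collective, so MPI communicates structured application data 
without a manual pack/unpack step:

    /* Broadcast one column of a row-major matrix using a derived datatype
     * in a collective (illustrative sketch). */
    #include <mpi.h>
    #include <stdio.h>

    #define ROWS 4
    #define COLS 4

    int main(int argc, char **argv)
    {
        int rank;
        double matrix[ROWS][COLS];
        MPI_Datatype column;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int i = 0; i < ROWS; i++)
            for (int j = 0; j < COLS; j++)
                matrix[i][j] = rank * 100 + i * COLS + j;

        /* One column of a row-major ROWS x COLS matrix: ROWS blocks of one
         * double, spaced COLS doubles apart. */
        MPI_Type_vector(ROWS, 1, COLS, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);

        /* The datatype describes the layout, so the strided column is
         * broadcast with no manual packing. */
        MPI_Bcast(&matrix[0][0], 1, column, 0, MPI_COMM_WORLD);

        if (rank == 1)
            printf("rank 1 first column now: %.0f %.0f %.0f %.0f\n",
                   matrix[0][0], matrix[1][0], matrix[2][0], matrix[3][0]);

        MPI_Type_free(&column);
        MPI_Finalize();
        return 0;
    }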


WORKSHOPS
------------------------------- 

IMUDI SPECIAL SESSION ON IMPROVING MPI USER AND DEVELOPER INTERACTION

The IMUDI special session, to be held as a full-day meeting at the EuroMPI 2013 
conference in Madrid, Spain, focuses on bringing together the MPI end-user and 
MPI implementor communities through discussions on MPI usage experiences, 
techniques, and optimizations. This meeting will focus on evaluating the MPI 
standard from the perspective of the MPI end-user (application and library 
developers) and address concerns and insights of MPI implementors and vendors. 
Unlike workshops associated with other conferences, the IMUDI session is 
considered part of the EuroMPI conference itself. Submissions will be reviewed 
separately to facilitate bringing together research publications falling into 
these "special focus" areas.

More info at: http://press.mcs.anl.gov/imudi/


PBIO 2013: INTERNATIONAL WORKSHOP ON PARALLELISM IN BIOINFORMATICS

In Bioinformatics, we find a variety of problems affected by huge processing 
times and memory consumption, due to the large size of biological data sets 
and the inherent complexity of biological problems. In fact, Bioinformatics is 
one of the most exciting research areas in which parallelism finds 
application; mpiBLAST and ClustalW-MPI are successful examples, among many 
others. In short, Bioinformatics allows and encourages the application of many 
different parallelism-based technologies. The focus of this workshop is on 
MPI-based approaches, but we welcome any technique based on multicore and 
cluster computing, supercomputing, grid computing, cloud computing, or 
hardware accelerators such as GPUs, FPGAs, and Cell processors.

More info at: http://arco.unex.es/mavega/pbio2013/



SCHEDULE, IMPORTANT DATES 
------------------------------- 

- Early registration: June 15th, 2013 
- Tutorial(s): September 15th, 2013 
- Conference: September 16th-18th, 2013 


CONFERENCE SPONSORS
-------------------------------

Platinum: CISCO
Gold:     Bull 
          NVidia


CONFERENCE FEES 
------------------------------- 

We hope the conference will be well supported by sponsors, which will help 
directly towards keeping the fees low. As far as possible, special student 
fees will be offered.
