Intel® MPI Library delivers a flexible, multi-fabric message-passing interface for developers and users of cluster applications, along with a high-performance implementation of the MPI-2 standard.
The Direct Access Programming Library (DAPL) architecture provides the communication interface that permits software developers to easily test and run their applications on a variety of network fabrics.
Features
Build a single version of your message-passing interface (MPI) application that runs on multiple network fabrics, maintaining high execution performance while lowering development and validation costs.
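As a minimal sketch of what such a fabric-agnostic application looks like, the program below uses only standard MPI calls and contains nothing interconnect-specific; the mpiicc wrapper and mpirun launcher named in the comments are common Intel MPI Library conventions assumed here for illustration, not prescriptions from this document.

```c
/* hello_mpi.c - a fabric-agnostic MPI program.
 * Illustrative build/run (assuming the Intel MPI wrappers are on PATH):
 *   mpiicc -o hello_mpi hello_mpi.c
 *   mpirun -n 4 ./hello_mpi
 * The same binary runs over shared memory, TCP, or a DAPL fabric;
 * no source change is needed when the interconnect changes.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &name_len);

    printf("Rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```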
10 Reasons to Choose Intel® MPI Library
- Just a single MPI library is needed to develop, test, and distribute MPI applications for all major cluster configurations.
- Get a high-performance implementation of the MPI-2 standard.
- Obtain portability through broad multi-fabric support for all major network configurations.
- A standard multi-fabric interface (DAPL) is in place to incorporate future network fabrics without changing applications.
- Flexible support for multiple interconnects, e.g., InfiniBand*, TCP/IP, and shared memory on advanced multi-core and symmetric multiprocessing (SMP) configurations (see the runtime fabric-inspection sketch after this list).
- Easy to install, with thorough documentation and reliable support.
- Supports most major Linux* platforms and a range of compilers.
- Easily invoke the parallel debugger of your choice.
- Benefit from excellent tuning through the enhanced performance-analysis interface of Intel® Trace Analyzer and Collector.
- Free runtime environment kit, available for pre-installation or redistribution.
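The interconnect a run uses is typically selected through Intel MPI environment variables rather than code changes. The sketch below simply reads a few such variables on rank 0 to show what a job was asked to use; the variable names (I_MPI_DEVICE, I_MPI_FABRICS, I_MPI_FALLBACK_DEVICE) are assumptions that vary across library releases, so treat them as illustrative and consult the reference manual for your version.

```c
/* report_fabric.c - print the fabric-related settings rank 0 sees.
 * Illustrative launch (variable names differ between library versions):
 *   mpirun -n 2 -env I_MPI_DEVICE rdssm ./report_fabric
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    /* Assumed Intel MPI variable names; check your release's documentation. */
    const char *vars[] = { "I_MPI_DEVICE", "I_MPI_FABRICS", "I_MPI_FALLBACK_DEVICE" };
    int rank, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        for (i = 0; i < 3; ++i) {
            const char *val = getenv(vars[i]);
            printf("%s = %s\n", vars[i], val ? val : "(not set)");
        }
    }

    MPI_Finalize();
    return 0;
}
```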
Intel® MPI Library Supports Multiple Hardware Fabrics
- Provides an accelerated multi-fabric layer for fast interconnects via the Direct Access Programming Library (DAPL) methodology (Figure 1).
- Supports TCP, shared memory, and many DAPL-based interconnects, including InfiniBand* and Myrinet*.
- Uses a fast shared-memory path and a sockets fallback when appropriate. Even if interconnect selection fails, execution failure can be avoided: the sockets interface is automatically selected as a backup (Figure 1). This is especially valuable for batch computing (a quick latency probe, sketched below, can confirm which path a run actually took).
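Since the sockets fallback keeps a job alive rather than fast, it can be useful to verify which path a run actually took. The ping-pong timing below is a generic MPI latency probe, not a feature of the library itself; a result far above the expected interconnect latency would suggest the run fell back to sockets.

```c
/* pingpong.c - rough two-rank latency probe (generic MPI, not library-specific).
 * Run with at least 2 ranks, e.g.: mpirun -n 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 1000;
    int rank, size, i;
    char byte = 0;
    double t0 = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < iters; ++i) {
            if (rank == 0) {
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        if (rank == 0)
            printf("Half round-trip latency: %.2f us\n",
                   (MPI_Wtime() - t0) / iters / 2.0 * 1e6);
    }

    MPI_Finalize();
    return 0;
}
```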