
Encountering State Initializing after installing Voltaire Drivers

First things first,

For more information on downloading the appropriate Voltaire drivers, see the blog entry Download Voltaire OFED Drivers for CentOS. After you have downloaded and installed the appropriate Voltaire drivers, you may encounter something like this:

CA 'mlx4_0'
CA type: MT26428
Number of ports: 1
Firmware version: 2.6.0
Hardware version: a0
Node GUID: 0x0008f14763280af0
System image GUID: 0x0008fd6478a5af3
Port 1:
State: Initializing
Physical state: LinkUp
Rate: 40
Base lid: 2
LMC: 0
SM lid: 14
Capability mask: 0x0251086a
Port GUID: 0x0008f103467a5af1

This is because the Subnet Manager (opensm) is not installed. The Voltaire OFED installation package installs only the OpenIB packages. Even if you installed opensm before installing the Voltaire OFED package, the Voltaire OFED installer will uninstall the existing opensm packages to ensure the OpenIB packages are fully compatible.

To install the opensm packages properly, do read the blog entry Installing Voltaire QDR Infiniband Drivers for CentOS 5.4.
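As a rough sketch (assuming the stock CentOS opensm package is acceptable for your setup; the Voltaire bundle may ship its own opensm RPM), the sequence looks something like this:

# yum install opensm
# chkconfig opensmd on
# service opensmd start
# ibstat mlx4_0 | grep -i state
        State: Active
        Physical state: LinkUp

Once a Subnet Manager is running somewhere on the fabric, the port should move from Initializing to Active within a few seconds.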

Other good reading materials:
  1. RHEL and Infiniband - basic diagnostics
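Before diving into those, the standard infiniband-diags tools (assuming the package is installed) already give a quick overview of the fabric and whether a Subnet Manager is present:

# ibstat      (local HCA, firmware and port state)
# ibstatus    (short link state and rate summary)
# sminfo      (reports the Subnet Manager the port can see)
# ibhosts     (lists the HCAs visible on the fabric)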

Installing Voltaire QDR Infiniband Drivers for CentOS 5.4

This blog writeup, Installing Voltaire QDR Infiniband Drivers for CentOS 5.4 (linuxcluster.wordpress.com), describes how you can:
  1. Download the drivers from the Voltaire Website
  2. Install the Infiniband Drivers
  3. Install the Subnet Manager - opensmd
  4. Test the State is "Active"
  5. Connectivity Test between Server and Client
For more information on Voltaire Infiniband installation, do read on:
  1. Voltaire OFED 1.5 User Manual
  2. Infiniband HOWTO by Guy Coates
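As a quick illustration of steps 4 and 5 above, a state check plus a verbs-level connectivity test could look like this (a sketch only; node1 and node2 are placeholder hostnames, and ibv_rc_pingpong comes with the libibverbs-utils package):

# ibstat | grep -i state
        State: Active
        Physical state: LinkUp

On the server (node1):
# ibv_rc_pingpong

On the client (node2):
# ibv_rc_pingpong node1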

Testing the Infiniband Interconnect Performance with Intel MPI Benchmark

This writeup focuses on verifying the performance of Infiniband or RDMA/iWARP interconnects with the Intel MPI Benchmark. For more information, do look at my Linux Cluster Blog:
  1. Testing the Infiniband Interconnect Performance with Intel MPI Benchmark (Part I)
  2. Testing the Infiniband Interconnect Performance with Intel MPI Benchmark (Part II) 
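For a flavour of what such a run looks like, a minimal PingPong test between two nodes might be launched as follows (hostnames and the path to the IMB-MPI1 binary are placeholders; IMB-MPI1 is built from the Intel MPI Benchmarks source):

# mpirun -np 2 -host node1,node2 ./IMB-MPI1 PingPong

The PingPong output reports latency and bandwidth for increasing message sizes, which is a quick sanity check that the interconnect is delivering the expected rates.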

Compiling Infiniband or OpenIB with OpenMPI and Intel Compilers on CentOS

Building on the blog entry "Useful Information to Compile Infiniband with OpenMPI", here is a more detailed writeup.

I'm assuming you have installed the Intel Compilers and unpacked the OpenMPI package. Some of this information can be found in the simpler Ethernet-based writeup Building OpenMPI with Intel Compiler (Ver 2).
Don't run configure and make for the OpenMPI package yet.


Firstly, to compile with OpenIB support:
# ./configure --prefix=/usr/mpi/intel/ \
CC=icc CXX=icpc F77=ifort FC=ifort \
--with-openib \
--with-openib-libdir=/usr/lib64/

# make all install 

To test whether you have compiled and installed it correctly, you can run the "ompi_info" command and look for the components for your network.
# ompi_info | grep openib
You should get something like this, depending on your component:
MCA btl: openib (MCA v2.0, API v2.0, Component v1.4.1)

You are in good shape.
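To go a step further and confirm that jobs actually run over the openib BTL, a small test run could look like this (a sketch; node1, node2 and hello.c are placeholders, and /usr/mpi/intel is the prefix from the configure line above):

# export PATH=/usr/mpi/intel/bin:$PATH
# export LD_LIBRARY_PATH=/usr/mpi/intel/lib:$LD_LIBRARY_PATH
# mpicc hello.c -o hello
# mpirun --mca btl openib,self,sm -np 2 -host node1,node2 ./hello

Restricting the btl MCA parameter to openib,self,sm makes the job fail loudly if the openib component cannot be used, rather than silently falling back to TCP.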

For more information, see the OpenMPI FAQ:
  1. 26. How do I build Open MPI with support for Open IB (Infiniband), mVAPI (Infiniband), GM (Myrinet), and/or MX (Myrinet)?

Useful Information to compile Infiniband with OpenMPI

Taken from the excellent OpenMPI FAQ. "How do I build Open MPI with support for Open IB (Infiniband), mVAPI (Infiniband), GM (Myrinet), and/or MX (Myrinet)?"

To compile OpenMPI against Infiniband, you only have to tell Open MPI's configure script the directory where the Infiniband support header files and libraries were installed.

--with-openib=
Build support for OpenFabrics (previously known as "Open IB", for Infiniband and iWARP networks -- note that iWARP support was added in the v1.3 series).
# ./configure --with-openib=/path/to/openib/installation

For more information on Myrinet and other interconnects, do go to the FAQ.