...

Our HPC cluster benchmarks at 10 teraflops, around 100 times the performance of a high-end workstation. It's designed for workloads that require parallel processing of distributed data sets.

Our HPC Cluster

  • Model: IBM nx360 M4

  • Number of compute nodes: 48

  • Node CPU: dual Intel Xeon E5-2620 v2 (6 cores per CPU)

  • Total cores per node: 6 cores per CPU x 2 CPUs = 12 cores

  • Hardware threads per core: 2

  • Hardware threads per node: 12 cores x 2 threads = 24 threads

  • Clock rate: 2.1GHz

  • RAM per node: 64GB (8x 8GB, 2Rx8, 1.35V, PC3L-12800 CL11 ECC DDR3 1600MHz LP RDIMMs)

  • Cache: 15MB per CPU

  • Node storage: 500GB

  • Internode network: 56 Gbit/s InfiniBand

  • Cluster storage: 108 TB

  • Cluster file system: GPFS (IBM Spectrum Scale)

  • Operating system: Red Hat Enterprise Linux [Liam, please add version]

...

Please work with a faculty member to sponsor your work.

Accessing the HPC Cluster

...

Shell and Data Access

Shell Access

  • To access the command shell, use SSH; to upload files, use SFTP or SCP (see the example after this list):

    • Host: login.hpc.lab

    • Port: 22

    • Credentials: your Florida Poly username and password
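
For example, from your local terminal (jdoe is a placeholder username; substitute your own):

    ssh jdoe@login.hpc.lab
    scp results.csv jdoe@login.hpc.lab:~/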

[Liam, I don’t understand the following.]

  • Submitting jobs to LSF (use bsub)

    1. Compile the hello_world example code provided with Platform MPI

      1. /opt/ibm/platform_mpi/bin/mpicc -o hello_world.exe /opt/ibm/platform_mpi/help/hello_world.c

    2. Submit a job through LSF to test the message passing

      1. bsub -n 10 -R "span[ptile=1]" -o %J.out "/opt/ibm/platform_mpi/bin/mpirun -lsf -vapi /home/(Username)/hello_world.exe; wait"

    3. Check the output in the <jobID>.out file (%J expands to the job ID) to verify the results
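
For reference, here is the same test submission with each flag annotated; these are standard LSF and Platform MPI options:

    # -n 10                request 10 job slots (one MPI rank each)
    # -R "span[ptile=1]"   place at most one rank per node
    # -o %J.out            write job output to <jobID>.out (%J expands to the job ID)
    # mpirun -lsf          take the host list from LSF; -vapi selects the InfiniBand interconnect
    bsub -n 10 -R "span[ptile=1]" -o %J.out "/opt/ibm/platform_mpi/bin/mpirun -lsf -vapi /home/(Username)/hello_world.exe; wait"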


File Upload

  • Use SFTP or SCP to upload files with the access parameters shown above.
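
An interactive SFTP session looks like this (the file name is a placeholder):

    sftp jdoe@login.hpc.lab
    sftp> put dataset.csv
    sftp> exit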

Applications and Packages

Libraries

MPI

The Message Passing Interface (MPI) is a library specification that lets programs running on the HPC pass data between processes on its various nodes.

Compiling Source

The following compilers are installed:

  • IBM PE Runtime (mpicc, et al.) v09.01.02.00u, including C, C++, and Fortran.

  • GNU Compiler Collection v4.4.7, including C, C++, Objective-C, Fortran, Java, and Ada.
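
As a quick sketch, compiling the same C source file (hello.c is a placeholder name) with each toolchain:

    /opt/ibm/platform_mpi/bin/mpicc -o hello_mpi hello.c    # Platform MPI wrapper, for MPI programs
    gcc -O2 -o hello hello.c                                # plain GCC, for serial code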

Packaged Binaries

You can install applications on our HPC using Spack: a Linux package manager that makes installing scientific software easy. With Spack, you can build a package with multiple versions, configurations, platforms, and compilers, and all of these builds can coexist on the same machine.

  • To list the packages already installed on the cluster (to search everything Spack can build, use spack list):

    • spack find

  • To load a package into your environment:

    • spack load <package>

    • You can specify a software version as part of the load:

    • spack load python@3.7.3 loads Python 3.7.3

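A typical Spack session might look like this (the Python version shown is only an example; run spack find to see what is actually installed):

    spack find                   # list installed packages
    spack load python@3.7.3      # add Python 3.7.3 to your environment
    python3 --version            # confirm the version now on your PATH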

Python and PIP

If you install Python using Spack, you can use pip to install other modules:

  • spack load python

  • python3 -m pip install matplotlib
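
To confirm the module was installed into the Spack-loaded Python:

    python3 -c "import matplotlib; print(matplotlib.__version__)"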

Apache Hadoop 2.6.0

[Liam, does more need to be said here?]

Apache Hadoop is a collection of open-source software utilities that facilitate using a network of many computers to solve problems involving massive amounts of data and computation. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model.

Apache Spark 1.3.1

[Liam, does more need to be said here?]

Apache Spark is an open-source, general-purpose distributed cluster-computing framework. Spark provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.

Other Applications

If you need an application that's not available through Spack, please contact the Helpdesk: helpdesk@floridapoly.edu or 863.874.8888.

Submitting a Job

Linux binaries

You'll submit jobs through bsub, part of IBM’s LSF workload management platform. IBM provides documentation for bsub.

You may wish to use IBM’s mpirun script to abstract the job from underlying hardware. IBM provides documentation for mpirun.

Some examples:

bsub -n 10 -R "span[ptile=1]" -o ~/hello_world.out "mpirun -lsf -vapi ~/hello_world"

bsub -n 20 -R "span[ptile=6]" -o ~/hello_world.out "python3 ~/hello_world.py"

The first example spreads 10 MPI ranks across 10 nodes (one per node); the second requests 20 slots packed at most six per node.

Python

  • Load Python
    spack load python

  • Install the bsub Python package
    python3 -m pip install bsub

  • Run your code with bsub
    bsub -n 10 -o my_job.out "python3 my_job.py"
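
After submitting, you can track the job with LSF's standard monitoring commands (12345 is a placeholder job ID):

    bjobs            # list your pending and running jobs
    bpeek 12345      # view a running job's output so far
    bkill 12345      # cancel the job if needed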

Other

For help with other jobs, please contact the Helpdesk: helpdesk@floridapoly.edu or 863.874.8888.

...