Overview
The term high-performance computing (HPC) refers to any computational activity that requires more than a single computer to execute a task. Supercomputers and computer clusters are used to solve advanced computational problems.
Our HPC cluster benchmarks at 10 teraflops, roughly 100 times the performance of a high-end workstation. It is designed for workloads that require parallel processing of distributed data sets.
Our HPC Cluster
Model | IBM nx360 M4 |
---|---|
Number of compute nodes | 48 nodes |
Node CPU | Dual Intel Xeon E5-2620 v2, 6 cores per CPU |
Total cores per node | 6 cores per CPU x 2 CPUs = 12 cores |
Hardware threads per core | 2 |
Hardware threads per node | 12 cores x 2 threads = 24 threads |
Clock rate | 2.1 GHz |
RAM per node | 64GB: 8x 8GB PC3L-12800 CL11 ECC DDR3 1600MHz LP RDIMM (2Rx8, 1.35V) |
Cache | 15MB per CPU |
Node storage | 500GB per node |
Internode network | 56 Gbit/s InfiniBand |
Cluster storage | 108 TB |
Cluster file system | GPFS (IBM Spectrum Scale) |
Operating system | Red Hat Enterprise Linux (version to be confirmed) |
Requesting Access
Faculty:
Please contact the Helpdesk to request access to our HPC cluster: helpdesk@floridapoly.edu or 863.874.8888.
Students:
Please work with a faculty member who can sponsor your access to the cluster.
Accessing the HPC Cluster
To access a command shell, use SSH; to upload files, use SFTP or SCP:
Host: login.hpc.lab
Port: 22
Credentials: your Florida Poly username and password
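For example, to open a shell on the cluster and copy a file into your home directory (the username and file name below are placeholders):
ssh username@login.hpc.lab
scp results.csv username@login.hpc.lab:~/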
Submitting a Job to LSF Using bsub
Compile the hello_world example code provided with Platform MPI:
/opt/ibm/platform_mpi/bin/mpicc -o hello_world.exe /opt/ibm/platform_mpi/help/hello_world.c
Submit a job through LSF to test message passing:
bsub -n 10 -R "span[ptile=1]" -o %J.out "/opt/ibm/platform_mpi/bin/mpirun -lsf -vapi /home/(Username)/hello_world.exe; wait"
Check the %J.out output file to verify the results (LSF replaces %J with the job ID).
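Once submitted, you can list your jobs with bjobs, review a job's history with bhist, or cancel a job with bkill; the job ID below is only an example:
bjobs
bhist -l 12345
bkill 12345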
Applications
Spack
You can install applications on our HPC cluster using Spack, a Linux package manager that makes installing scientific software easy. With Spack, you can build a package with multiple versions, configurations, platforms, and compilers, and all of these builds can coexist on the same machine.
To list the packages already installed on the cluster:
spack find
To load a package into your environment:
spack load <package>
You can specify a software version as part of the load:
spack load python@3.7.3
loads Python 3.7.3
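If a package you need has not been installed yet, Spack can also build it, assuming your account has write access to the Spack installation; the package name and version below are only examples:
spack install fftw@3.3.8
spack load fftw@3.3.8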
Python and PIP
If you load Python using Spack (above), you can use pip to install other modules:
python3 -m pip install matplotlib
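If the Spack-managed Python location is not writable from your account, pip's --user flag installs modules under your home directory instead, and a quick import confirms the install worked:
python3 -m pip install --user matplotlib
python3 -c "import matplotlib; print(matplotlib.__version__)"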
Apache Hadoop 2.6.0
Apache Hadoop is a collection of open-source software utilities that facilitate using a network of many computers to solve problems involving massive amounts of data and computation. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model.
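As a quick test, you can run the word-count example bundled with Hadoop; the HDFS paths and the jar location below are placeholders and depend on how Hadoop is laid out on the cluster:
hdfs dfs -mkdir -p /user/<username>/input
hdfs dfs -put words.txt /user/<username>/input/
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /user/<username>/input /user/<username>/output
hdfs dfs -cat /user/<username>/output/part-r-00000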
Apache Spark 1.3.1
Apache Spark is an open-source distributed general-purpose cluster-computing framework. Spark provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.
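To confirm Spark is working, you can submit the bundled SparkPi example or start an interactive shell; the path to the examples jar below is a placeholder and depends on where Spark 1.3.1 is installed:
spark-submit --class org.apache.spark.examples.SparkPi /path/to/spark-examples.jar 100
spark-shell
pyspark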
Other Applications
If you need an application that's not available through Spack, please contact the Helpdesk: helpdesk@floridapoly.edu or 863.874.8888.