
Overview

The term high-performance computing (HPC) refers to computational work that requires more than a single computer to execute a task. Supercomputers and computer clusters are used to solve advanced computation problems.

Our HPC cluster benchmarks at 10 teraflops, around 100 times the performance of a high-end workstation. It's designed for workloads that require parallel processing of distributed data sets.

Our HPC Cluster

Model: IBM nx360 M4

Number of compute nodes: 48

Node CPU: Dual Intel Xeon Processor E5-2620 v2 (6C)

Total cores per node: 6 cores per CPU x 2 CPUs = 12 cores

Hardware threads per core: 2

Hardware threads per node: 12 cores x 2 threads = 24 threads

Clock rate: 2.1 GHz

RAM: 64 GB per node (8x 8GB, 2Rx8, 1.35V, PC3L-12800 CL11 ECC DDR3 1600MHz LP RDIMM)

Cache: 15 MB per CPU (1600 MHz, 80 W)

Node storage: 500 GB per node

Internode network: 56 Gbit/s InfiniBand

Cluster storage: 108 TB of GPFS storage

Cluster file system: GPFS (IBM Spectrum Scale)

Operating System: Red Hat Enterprise Linux [Liam, please add version]

Requesting Access

Faculty:

Please contact the Helpdesk to request access to our HPC cluster: helpdesk@floridapoly.edu or 863.874.8888.

Students:

Please work with a faculty member to sponsor your work.

Shell and Data Access

Shell Access

  • Use SSH to access the command shell:

    • Host: login.hpc.lab

    • Port: 22

    • Credentials: your Florida Poly username and password

File Upload

  • Use SFTP or SCP to upload files with the access parameters shown above.

Applications and Packages

Compiling Source

The following compilers are installed:

  • mpicc : /opt/ibm/platform_mpi/bin/mpicc

  • gcc : includes front ends for C, C++, Objective-C, Fortran, Ada, Go, and D.

Packaged Binaries

You can install applications on our HPC using Spack: a Linux package manager that makes installing scientific software easy. With Spack, you can build a package with multiple versions, configurations, platforms, and compilers, and all of these builds can coexist on the same machine.

  • To list packages available to install:

    • spack list

  • To list installed packages:

    • spack find

  • To load a package into your environment:

    • spack load <package>

    • You can specify a software version as part of the load:

    • spack load python@3.7.3 loads Python 3.7.3

Python and PIP

If you install Python using Spack, you can use pip to install additional modules:

  • spack load python

  • python3 -m pip install matplotlib

Apache Hadoop 2.6.0

Apache Hadoop is a collection of open-source software utilities that facilitate using a network of many computers to solve problems involving massive amounts of data and computation. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model.
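The MapReduce model described above can be sketched in plain Python (a local illustration only; the function names here are our own, and a real Hadoop job distributes the map, shuffle, and reduce phases across the cluster's nodes):

```python
# Word count in the MapReduce style: map emits (key, value) pairs,
# shuffle groups them by key, reduce aggregates each group.
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in one input split.
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Shuffle: group intermediate values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts collected for each word.
    return {word: sum(counts) for word, counts in groups.items()}

documents = ["big data big compute", "big cluster"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
# counts == {"big": 3, "data": 1, "compute": 1, "cluster": 1}
```

On Hadoop, each phase runs on whichever nodes hold the relevant data splits; the model is the same.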

Apache Spark 1.3.1

Apache Spark is an open-source distributed general-purpose cluster-computing framework. Spark provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.
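Spark's RDD style of chaining transformations and then triggering an action can be loosely imitated with Python's built-in lazy iterators (a rough local sketch, not the PySpark API):

```python
# map/filter below stand in for RDD transformations (lazy),
# and reduce stands in for an action, which forces evaluation.
from functools import reduce

data = list(range(1, 11))                      # stand-in for an RDD
squares = map(lambda x: x * x, data)           # transformation (lazy)
evens = filter(lambda x: x % 2 == 0, squares)  # transformation (lazy)
total = reduce(lambda a, b: a + b, evens)      # action: computes the result
# total == 4 + 16 + 36 + 64 + 100 == 220
```

In real Spark, the chained transformations build a lineage graph and the cluster evaluates it in parallel across partitions when an action runs.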

Other Applications

If you need an application that’s not available through Spack please contact the Helpdesk: helpdesk@floridapoly.edu or 863.874.8888.

Submitting a Job

Linux binaries

You'll submit jobs through bsub (LSF) and run them with MPI.

  • MPI

    • The Message Passing Interface (MPI) is a library specification that lets the HPC pass messages between processes running across its nodes.

  • LSF

    • IBM® Platform™ LSF® is a powerful workload management platform for demanding, distributed HPC environments. It provides a comprehensive set of intelligent, policy-driven scheduling features that enable you to utilize all of your compute infrastructure resources and ensure optimal application performance.

  1. Submit a job through LSF to test message passing:

    1. bsub -n 10 -R "span[ptile=1]" -o %J.out "/opt/ibm/platform_mpi/bin/mpirun -lsf -vapi /home/(Username)/hello_world.exe; wait"
      bsub -n 10 -R "span[ptile=1]" -o ~/hello_world.out "mpirun -lsf -vapi ~/hello_world"
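To see what the message passing in the example above accomplishes, here is a rough local illustration of the point-to-point send/receive model, using Python threads and a queue in place of MPI_Send/MPI_Recv (this is not MPI itself; on the cluster, mpirun launches real processes on separate nodes):

```python
# Workers ("ranks" 1..3) each send one message to rank 0,
# which collects them, mimicking MPI point-to-point messaging.
import queue
import threading

inbox = queue.Queue()  # rank 0's receive buffer

def worker(rank):
    # Each worker rank sends one message, like MPI_Send.
    inbox.put(f"hello from rank {rank}")

threads = [threading.Thread(target=worker, args=(rank,)) for rank in range(1, 4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Rank 0 drains its inbox, like a loop of MPI_Recv calls.
messages = sorted(inbox.get() for _ in range(3))
```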

Python

  • Load Python
    spack load python

  • Install the bsub Python package
    python3 -mpip install bsub

  • Run your code with bsub
    bsub -n 10 -o my_job.out my_job.py
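The my_job.py in the last step can be any ordinary Python program; here is a minimal hypothetical sketch that prints its result to stdout, which the -o flag captures into my_job.out:

```python
# my_job.py -- hypothetical batch job sketch; any ordinary Python
# program can be submitted the same way.
import sys

def simulate(steps):
    # Stand-in for real work: sum the squares of 0..steps-1.
    return sum(i * i for i in range(steps))

if __name__ == "__main__":
    # Optional first argument overrides the default step count.
    steps = int(sys.argv[1]) if len(sys.argv) > 1 else 1000
    result = simulate(steps)
    # stdout is captured into the file passed to bsub -o.
    print(f"steps={steps} result={result}")
```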

Other

For help with other jobs please contact the Helpdesk: helpdesk@floridapoly.edu or 863.874.8888.
