
Overview

High performance computing (HPC) refers to computational work that requires more than a single computer to execute a task. Supercomputers and computer clusters are used to solve these advanced computation problems.

Our HPC cluster benchmarks at 10 teraflops, around 100 times the performance of a high-end workstation, and is designed for parallel processing.

Our HPC Cluster

Model: IBM nx360 M4

Number of compute nodes: 48

Node CPUs: dual Intel Xeon E5-2620 v2 (6 cores per CPU)

Total cores per node: 6 cores per CPU x 2 CPUs = 12 cores

Hardware threads per core: 2

Hardware threads per node: 12 cores x 2 threads = 24 threads

Clock rate: 2.1 GHz

RAM per node: 64 GB (8x 8GB 2Rx8 1.35V PC3L-12800 CL11 ECC DDR3 1600MHz LP RDIMM)

Cache: 15 MB per CPU

Node storage: 500 GB per node

Internode network: 56 Gbit/s InfiniBand

Cluster storage: 108 TB of GPFS storage

Cluster file system: GPFS / IBM Spectrum Scale

Operating System: Red Hat Enterprise Linux

Requesting Access

Faculty:

Please contact the Helpdesk to request access to our HPC cluster: helpdesk@floridapoly.edu or 863.874.8888.

Students:

Please work with a faculty member who can sponsor your work.

Accessing the HPC Cluster

  • To access the command shell, use SSH; to upload files, use SFTP or SCP (see the example below):

    • Host: login.hpc.lab

    • Port: 22

    • Credentials: your Florida Poly username and password
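
  • For example, you might connect and upload a file like this (jdoe and myscript.py are placeholders for your own username and file):

       # open an interactive shell on the login node
       ssh jdoe@login.hpc.lab

       # copy a local file into your home directory on the cluster
       scp myscript.py jdoe@login.hpc.lab:/home/jdoe/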

  • To submit jobs through LSF, use bsub. For example, to test message passing with the provided hello_world program:

    1. Compile the hello_world example code provided by default:

       /opt/ibm/platform_mpi/bin/mpicc -o hello_world.exe /opt/ibm/platform_mpi/help/hello_world.c

    2. Submit a job through LSF to test the message passing:

       bsub -n 10 -R "span[ptile=1]" -o %J.out "/opt/ibm/platform_mpi/bin/mpirun -lsf -vapi /home/(Username)/hello_world.exe; wait"

    3. Check the output in the %J.out file (LSF replaces %J with the job ID) to verify the results.
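
  • While a submitted job is queued or running, you can check its status with LSF's bjobs command; once it finishes, review the output file named after the job ID (a sketch, with 12345 standing in for the job ID that bsub reports):

       # list your pending and running jobs
       bjobs

       # review the output once the job completes (12345 is a placeholder for the real job ID)
       cat 12345.out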

Applications

Spack

You can install applications on our HPC cluster using Spack, a Linux package manager that makes installing scientific software easy. With Spack, you can build a package with multiple versions, configurations, platforms, and compilers, and all of these builds can coexist on the same machine.

  • To list the packages that are installed and available to load:

    • spack find

  • To load a package into your environment:

    • spack load <package>

    • You can specify a software version as part of the load:

    • spack load python@3.7.3 loads Python 3.7.3

  • Once you’ve loaded Python, you can use pip to install any modules you need:

    • python3 -m pip install matplotlib
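
  • For example, a typical session might look like this (python@3.7.3 and matplotlib are only examples; spack find shows what is actually installed):

       # see which packages are installed and available to load
       spack find

       # load a specific version of Python into your environment
       spack load python@3.7.3

       # install a Python module with pip
       python3 -m pip install matplotlib

       # confirm that the module imports
       python3 -c "import matplotlib"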

Apache Hadoop 2.6.0

Apache Hadoop 2.6.0 is available on the cluster.

Apache Spark 1.3.1

Apache Spark 1.3.1 is available on the cluster.

Other Applications

If you need an application that’s not available through Spack, please contact the Helpdesk: helpdesk@floridapoly.edu or 863.874.8888.
