Overview
What is HPC? - or - What is a Super Computer?
The term high performance computing (HPC) refers to any computational activity that requires more than a single computer to execute a task. Supercomputers and computer clusters are used to solve advanced computation problems.
How many cores does the Super Computer have? (Storage, RAM...)
48 compute nodes at 12 cores each = 576 cores
48 compute nodes at 64GB of RAM each = 3,072GB, or roughly 3 TB
48 compute nodes at 500GB local storage each = 24 TB
108 TB of GPFS storage. General Parallel File System, now called Spectrum Scale, is a high-performance clustered file system developed by IBM.
How fast is it?
10 TFlops as reported by LINPACK. LINPACK is the benchmarking software used to rate and rank super computers.
The network for internode communication is based on InfiniBand and is rated at 56Gbit/s.
Comparison: the Intel Core i7-3930K, a CPU currently found in some gaming PCs, normally runs in the 3GHz range (overclockable to about 5GHz) and nets about 100 GFlops.
Florida Poly's supercomputer gets 10 TFlops without overclocking, and with plenty of room to grow in capacity and speed.
Who made it?
This HPC platform was made by IBM, the same company that built Watson, the Jeopardy!-playing supercomputer.
What is it for? - or - What can it be used for?
The platform is very flexible and can be used for a number of things, from modeling and simulation in mathematics and chemistry to assisting in video rendering or "big data" analysis.
What Operating System (OS) does it use?
At this time, Florida Poly is using Red Hat Enterprise Linux.
What else is in the racks?
In addition to the HPC cluster, Florida Poly has a virtualization suite.
Florida Poly is currently running VMware's vSphere 5.5 Update 1
Storage for the suite consists of 21 TB of SAS and 2 TB of SSD
Hypervisors and storage are connected via 10Gb links
Requesting Access
Professors
Please contact the helpdesk to request access to the supercomputer.
Students
A professor must request access to the HPC for the student. (see above)
Useful Information and Commands
Most users will use PuTTY to SSH into the HPC with their Poly email address and password, usually connecting to Login.hpc.lab (unless a special environment is needed, in which case the user(s) will be given the address after it is created).
Submitting jobs to the LSF (Use bsub)
Compile the hello_world example code provided by default
/opt/ibm/platform_mpi/bin/mpicc -o hello_world.exe /opt/ibm/platform_mpi/help/hello_world.c
Submit a job through LSF to test the message passing (-n 10 requests 10 job slots, span[ptile=1] places one process per node, and -o %J.out writes output to a file named after the job ID)
bsub -n 10 -R "span[ptile=1]" -o %J.out "/opt/ibm/platform_mpi/bin/mpirun -lsf -vapi /home/(Username)/hello_world.exe; wait"
Check the output of the %J.out file to verify results
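The hello_world.c compiled above is the stock example shipped with IBM Platform MPI. For reference, a minimal MPI "hello world" of the same general shape looks like the sketch below (this is an illustrative program, not the exact IBM source):

```c
#include <stdio.h>
#include <unistd.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    char hostname[256];

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank (0..size-1) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks in the job */
    gethostname(hostname, sizeof(hostname));

    /* with bsub -n 10 -R "span[ptile=1]", expect 10 lines, one per node */
    printf("Hello from rank %d of %d on %s\n", rank, size, hostname);

    MPI_Finalize();                         /* shut down MPI cleanly */
    return 0;
}
```

Compile it with the mpicc wrapper shown above; each rank prints one line, so the %J.out file should contain one greeting per node requested.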
For the EmberDB cluster please ssh into ember.hpc.lab using your FLPoly username and password.
Once connected use
mysql -u (your username) -p -h ember-db
It will then ask for your FLPoly password.
HPC Components
IBM nx360 M4 | |
---|---|
Model: | Dual Intel Xeon Processor E5-2620 v2 6C |
Total cores per node: | 12 (2 sockets × 6 cores) with 64GB RAM |
Hardware threads per core: | 2 |
Hardware threads per node: | 12 × 2 = 24 |
Clock rate: | 2.1GHz |
RAM: | 8 × 8GB (1x8GB, 2Rx8, 1.35V) PC3L-12800 CL11 ECC DDR3 1600MHz LP RDIMM |
Cache: | 15MB per processor |
Local storage: | 500GB per node, plus shared access to 108 TB of GPFS storage |
Management & Login Nodes
IBM System x3550 M4
The x3550 M4 is a cost- and density-balanced 1U, 2-socket server.
Intel Xeon processor E5-2600 v2 product family.
Supports up to 1866 MHz memory speeds.
Supports up to 768 GB memory with 32 GB LRDIMMs.
Network
The RackSwitch G8124E supports 1G or 10G connections; it is designed for networks that leverage 10G Ethernet or plan to move to it in the future.