...
Our HPC cluster benchmarks at 10 teraflops, around 100 times the performance of a high-end workstation. It is designed for workloads that require parallel processing of distributed data.
Our HPC Cluster
Model | IBM nx360 M4 |
---|---|
Number of compute nodes | 48 |
Node CPUs | 2x Intel Xeon E5-2620 v2 (6 cores each) |
Cores per node | 6 cores per CPU x 2 CPUs = 12 cores |
Hardware threads per core | 2 |
Hardware threads per node | 12 cores x 2 threads = 24 threads |
Clock rate | 2.1 GHz |
RAM per node | 64 GB: 8x 8GB (2Rx8, 1.35V) PC3L-12800 CL11 ECC DDR3 1600MHz LP RDIMMs |
Cache | 15 MB per CPU |
Node storage | 500 GB |
Internode network | 56 Gbit/s InfiniBand |
Cluster storage | 108 TB |
Cluster file system | GPFS (IBM Spectrum Scale) |
Operating system | Red Hat Enterprise Linux [Liam, please add version] |
...
To list the packages installed on the cluster (and therefore available to load):

```
spack find
```

To load a package into your environment:

```
spack load <package>
```

You can specify a software version as part of the load. For example, the following loads Python 3.7.3:

```
spack load python@3.7.3
```

Once you've loaded Python, you can use pip to install any modules you need:

```
python3 -m pip install matplotlib
```
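On a shared cluster you usually can't write to the system-wide site-packages directory, so a per-user install is the safer pattern. Here's a minimal sketch, assuming the Spack-provided Python 3.7.3 above; the final import check is only an illustration:

```
# Load the Spack-provided Python, then install into your home directory.
spack load python@3.7.3
python3 -m pip install --user matplotlib

# Confirm the module is importable.
python3 -c "import matplotlib; print(matplotlib.__version__)"
```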
Apache Hadoop 2.6.0
Apache Hadoop is a collection of open-source software utilities that make it practical to solve problems involving massive amounts of data and computation across a network of many computers. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model.
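To illustrate the MapReduce model, here is one way to run the word-count example that ships with Hadoop 2.6.0. The jar path and HDFS directories below are assumptions and may differ on our cluster:

```
# Copy an input file into HDFS, run the bundled word-count example,
# and print the result.
hdfs dfs -mkdir -p /user/$USER/wordcount/input
hdfs dfs -put mybook.txt /user/$USER/wordcount/input
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar \
    wordcount /user/$USER/wordcount/input /user/$USER/wordcount/output
hdfs dfs -cat /user/$USER/wordcount/output/part-r-00000
```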
Apache Spark 1.3.1
Apache Spark is an open-source, general-purpose cluster-computing framework. Spark provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.
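As a quick check that Spark is working, you can submit one of the examples bundled with the distribution. The paths below assume a standard Spark 1.3.1 layout and are illustrative only:

```
# Estimate pi with the bundled example, running locally on 4 cores.
spark-submit --master local[4] \
    $SPARK_HOME/examples/src/main/python/pi.py 10
```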
Other Applications
If you need an application that isn't available through Spack, please contact the Helpdesk at helpdesk@floridapoly.edu or 863.874.8888.
...