2. Overview
The Esrum cluster is managed by the Data Analytics Platform (formerly the Phenomics Platform) at CBMR, with hosting and technical support handled by KU-IT.
In addition to the documentation provided here, KU-IT provides documentation for the UCPH computing/HPC Systems on KUNet.
2.1. Architecture
The cluster consists of a head node, 12 compute nodes, 1 GPU/Hi-mem node, 1 Rstudio server, and 1 server for running containers:
| # | Node | RAM | CPUs | GPUs | Name |
|---|---|---|---|---|---|
| 1 | Head | 2 TB | 2x24 core AMD EPYC 7413 | | esrumhead01fl |
| 12 | Compute | 2 TB | 2x32 core AMD EPYC 7543 | | esrumcmpn*fl |
| 1 | GPU / Hi-mem | 4 TB | 2x32 core AMD EPYC 75F3 | 2x NVIDIA A100 80GB | esrumgpun01fl |
| 1 | Rstudio | 2 TB | 2x32 core AMD EPYC 7543 | | esrumweb01fl |
| 1 | Container | 2 TB | 2x32 core AMD EPYC 7543 | | esrumcont01fl |
Users connect to the head node, from which jobs can be submitted to the individual compute nodes using the Slurm Workload Manager. An RStudio web server and a Shiny server (public and private), both managed by KU-IT, are also available.
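As a minimal sketch of how a job is submitted (the resource values and script contents below are placeholders, not Esrum-specific recommendations), a batch script can be handed to Slurm with `sbatch` from the head node:

```bash
#!/bin/bash
#SBATCH --job-name=example     # name shown in the queue
#SBATCH --cpus-per-task=4      # number of CPU cores to reserve
#SBATCH --mem=16G              # amount of RAM to reserve
#SBATCH --time=01:00:00        # wall-clock time limit (1 hour)

# The commands below run on whichever compute node Slurm assigns
echo "Running on $(hostname)"
```

Saving this as, for example, `example.sh` and running `sbatch example.sh` queues the job, and `squeue -u $USER` shows its status. See the Slurm documentation for the full set of options.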
2.2. Software
All nodes run Red Hat Enterprise Linux 8, and a range of scientific and other software is made available via environment modules. Missing software can be requested via KU-IT.
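For illustration, the commands below show typical environment-module usage; the module name and version are assumptions, and `module avail` lists what is actually installed on the cluster:

```bash
module avail                 # list the software modules that are available
module load example/1.2.3    # load a module (hypothetical name/version)
module list                  # show the modules currently loaded
module purge                 # unload all loaded modules
```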
2.4. Backup policies and quotas
Your /home folder and the apps, data, and people folders in each project are automatically backed up. The scratch folders are NOT backed up. The specific frequency and retention of backups differ for each type of folder and may also differ for individual projects.
As a rule, folders for projects involving GDPR-protected data (indicated by the project name ending with -AUDIT) are subject to more frequent backups. However, on-site backups are kept for a shorter time to prevent the unauthorized recovery of intentionally deleted data.
See Projects, data, and home folders for more information.
2.5. Additional resources
Official UCPH computing/HPC Systems documentation on KUNet.