Flywheel HPC Client

Introduction

The Flywheel HPC Client is a self-service solution designed to facilitate the execution of Flywheel jobs (created by running gears) in High Performance Computing (HPC) environments. By utilizing on-premise hardware, it enables efficient handling of highly concurrent scientific workloads.

The primary tasks of the HPC Client are to:

  • Check for queued HPC jobs on a Flywheel instance
  • Create and run executable scripts (.sh files) that submit jobs to the HPC job scheduler (e.g., Slurm); see the sketch after this list
  • Enable communication (i.e., logging and file transfers) between Flywheel and the jobs running on the HPC system
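The submission scripts themselves are generated by the hpc-client from its own templates. The following is only a rough sketch of what a Slurm batch script of this kind might look like; the job name, resource values, log path, and engine command are hypothetical placeholders, not the actual templates used by the hpc-client.

    #!/bin/bash
    # Illustrative sketch only; the real scripts are generated by the hpc-client.
    #SBATCH --job-name=fw-job-example          # hypothetical Flywheel job identifier
    #SBATCH --cpus-per-task=4                  # resources requested for the gear
    #SBATCH --mem=8G
    #SBATCH --output=/shared/logs/fw-%j.log    # log written to storage shared with compute nodes

    # Launch the engine for a single queued Flywheel job (placeholder command and variable).
    srun my-flywheel-engine --single-job "$FW_JOB_ID"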

The HPC Client should be installed on a system (e.g., a computer, VM, or login/head node) that can submit jobs to the HPC job scheduler.

Compute nodes should have access to the same directories as this system. This document provides a complete guide for configuring and deploying this system.
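As a quick way to confirm that a candidate host can fill this role, the standard Slurm client tools should respond from it, and a directory intended for job scripts and logs should be visible from a compute node. The shared path below is a placeholder for your site's own storage.

    # Confirm the Slurm client tools are available on this host.
    sbatch --version
    sinfo --summarize

    # Confirm that a shared directory is also visible from a compute node
    # (replace /shared/flywheel with your site's actual shared path).
    srun --ntasks=1 ls /shared/flywheel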

Instruction Steps

As mentioned in the Resources document, these instructions are for the Slurm scheduler; other schedulers may require additional support.

Minimum Requirements

The minimum system requirements are listed below:

  • RAM: 32 GB
  • Storage: 64 GB
  • CPUs: 4
  • Operating System: Linux
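As a quick sanity check against the values above, standard Linux utilities can report the available resources (output formats vary by distribution):

    free -g          # total and available RAM, in GB
    nproc            # number of available CPUs
    df -h /          # storage available on the root filesystem
    uname -srm       # kernel and architecture (should be Linux)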

For in-depth descriptions of the above and of the software requirements, see the Minimum System Requirements document.

Getting Started

A detailed Getting Started guide provides specific steps and considerations for successfully configuring and deploying the hpc-client.


Resources

To explore the hpc-client architecture and the supported scheduler types, see the Resources document.

FAQs

Frequently Asked Questions are answered to assist with troubleshooting and day-to-day operation of the hpc-client.