Managing Research Computing Clusters with Ansible

Our research computing cluster at work is slowly gathering more users, more storage, more applications, more physical machines, etc. Managing everything consistently and predictably was beginning to get complicated (or maybe I’m just getting old?). There’s lots of buzz in DevOps circles about tools for managing this kind of scenario: Chef, Salt, Puppet, and Ansible […]
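
As a rough sketch of the idea: with an inventory of cluster hosts and a playbook, the whole fleet can be checked and configured from one place. The inventory file name hosts and the playbook name site.yml below are hypothetical, not from the post:

    # Ad-hoc check that every host in the inventory is reachable
    ansible -i hosts all -m ping

    # Apply the playbook to the whole cluster
    ansible-playbook -i hosts site.yml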

“interactive” script for SLURM

I recently rolled out a new distributed model for our research computing cluster at work. We’re using GlusterFS for networked home directories and SLURM for job/resource scheduling. GlusterFS allows us to scale storage with minimal downtime or service disruption, and SLURM allows us to treat compute nodes as generic resources for running users’ jobs (i.e., […]
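
The excerpt doesn’t show the script itself, but a minimal sketch of what an “interactive” wrapper for SLURM might look like (the resource options here are assumptions, not the post’s actual script):

    #!/bin/bash
    # Hypothetical "interactive" wrapper: ask SLURM for one task on a
    # compute node and attach a pseudo-terminal running the user's shell.
    exec srun --ntasks=1 --pty "${SHELL:-/bin/bash}"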

Sysadmin is happy when users use SLURM

I recently rolled out a new distributed model for our research computing cluster at work. We’re using GlusterFS for networked home directories and SLURM for job/resource scheduling. GlusterFS allows us to scale storage with minimal downtime or service disruption, and SLURM allows us to treat compute nodes as generic resources for running users’ jobs (i.e., […]
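
For users, “using SLURM” mostly means wrapping their work in a batch script and handing it to the scheduler rather than running it directly on a node. A minimal sketch (the job name, limits, and program are hypothetical):

    #!/bin/bash
    # Minimal batch script; all values are hypothetical.
    #SBATCH --job-name=example
    #SBATCH --ntasks=1
    #SBATCH --time=01:00:00

    ./my_analysis   # hypothetical user program

Submitted with sbatch, after which SLURM queues the job and runs it on a node with free resources.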

SLURM “scaling CPU count by factor of 2”

I recently reworked our SLURM configuration to derive the number of logical CPUs from Sockets, Cores, and Threads. For example, a machine with four eight-core Xeons and HyperThreading would have 64 logical CPUs available to SLURM: 4 * 8 * 2 = 64. I thought I was being clever, but it turned out to bite […]
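
That arithmetic maps directly onto a node definition in slurm.conf. A sketch of the relevant line (the node name is hypothetical, and this shows the shape of the configuration rather than our actual file):

    # 4 sockets x 8 cores x 2 threads = 64 logical CPUs
    NodeName=node01 Sockets=4 CoresPerSocket=8 ThreadsPerCore=2 State=UNKNOWN

Presumably the quoted “scaling CPU count by factor of 2” log line appears when the count SLURM derives this way disagrees with another setting, such as an explicit CPUs= value.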