Zurada User Documentation
  • Policies
    • Zurada’s System Use Policies
      • User Account
      • Running Jobs
      • Data Storage (Disk Usage)
      • Large-Memory Node Utilization
      • GPU Resources Utilization
      • Installing packages system-wide
  • Accounts and Support
    • Request an account
      • Accounts for UofL individuals
    • UofL VPN Connection
    • Request Support (Tickets)
  • Getting Started
    • Usage Agreement
    • HPC system overview
      • About the cluster
      • About Scientific Software
      • About Jobs
    • Quickstart
      • Logging into the cluster
        • Using the command line
        • Using MobaXterm
      • Copying files to/from the cluster
        • Using the command line
        • Using MobaXterm
      • Using software installed in the cluster
        • List available software
        • Load software
        • List currently loaded software
        • Unloading software
      • Queues and jobs
    • Resource Restrictions
      • Running applications on the login nodes
      • Job runtime restrictions
  • Guides
    • HPC System Guide
      • Introduction
        • What is a Shell?
        • What is case-sensitivity?
      • Connecting to the Cluster
      • Understanding Filesystems
        • What is a Filesystem?
        • Filesystem Hierarchies
        • Navigating the filesystem
        • Using Modules
    • Slurm Queueing System Guide
      • Basic Slurm Terminology
      • Job Types and Job Submission
        • Interactive Jobs
        • Batch Jobs
        • Job Arrays
        • Job Dependencies
      • Job Environment
        • Slurm environmental variables
      • Strategies When Submitting Slurm Jobs
        • Single Job with Multiple Job Steps
        • Job Arrays
        • Requesting CPU Nodes with Multiple Parallel Tasks
        • Single GPU node with one task per GPU and CPU cores evenly distributed across tasks
      • Preemption
        • Preemptable Jobs
    • Storage Guide
      • Understanding Storage on Compute Nodes
        • Storage types based on node accessibility
        • Filesystem locations users should understand
      • Recommended Workflow
        • Copying Data Between Home and Scratch
  • Software
    • Conda (Anaconda/Miniconda/Miniforge)
      • Basics
        • About Anaconda, Miniconda and Miniforge
        • What is a Conda environment?
        • Why use multiple Conda environments?
      • Using Conda
        • Loading the miniforge3 module and the base environment
        • Creating and activating an environment
        • Installing packages with conda and mamba
        • Installing packages with pip
        • Cloning an environment
        • Miscellaneous
      • Conda in a batch job
      • Conda in an interactive job
    • Gaussian
      • Running Gaussian
        • Example Slurm Job Script
    • Jupyter
      • Launching Jupyter through a batch job
        • 1. (Optional) Create a jupyter environment
        • 2. Create the submission script
        • 3. Connect to Jupyter from your web browser
      • Launching Jupyter through an interactive job
        • 1. (Optional) Create a jupyter environment
        • 2. Submit an interactive job
        • 3. Manually launch Jupyter
        • 4. Access Jupyter from your workstation
      • Transitioning from Jupyter to Python Script
        • 1. Export the Notebook
        • 2. Clean Up the Script
        • 3. Replace Notebook-Specific Features
        • 4. Handle File Paths and Inputs
        • 5. Test the Script
    • MATLAB
      • Basics
        • Workers and pools
        • Parallel and distributed execution
        • Cluster profiles
      • Submitting a batch job
        • Submit jobs through a batch script
        • Submit jobs through MATLAB’s command prompt
        • Submit jobs through a batch script and a MATLAB submission script
      • Creating a cluster profile for Zurada
    • LAMMPS
      • Running LAMMPS
        • Example Slurm Job Script
      • Building LAMMPS
    • PyTorch
      • Verifying GPU Availability
      • Using GPUs in PyTorch
        • Moving Tensors to GPU
        • Model Training on GPU
        • Monitoring GPU Usage
      • Multi-GPU Usage in PyTorch
        • Single Node, Multi-GPU (DataParallel or DDP)
    • R
      • Using R
      • Installing R Packages
      • Installing R packages in custom locations
      • Installing R Packages with External Library Dependencies
        • Example: Installing the units Package
        • Example: Installing the sf Package
        • Simplifying with Conda
      • Using the pak Package Manager
    • RStudio
      • Pre-launch
      • Launch RStudio Server
    • TensorFlow
      • Verifying GPU Availability
      • Single Node, Multi-GPU Training
    • VASP
      • License Restrictions
      • Running VASP
        • VASP on GPU nodes
        • VASP on CPU-only nodes
        • Example Slurm Job Script
  • AI Use Cases
    • Pneumonia detection based on Chest X-Ray
      • 1. Ingesting the data
      • 2. Training the models
        • CNN Training and Validation
        • Transfer Learning Training and Validation
        • Fine Tuning Training and Validation
      • 3. Visualize Metrics
      • 4. Save your results for further analyses
    • Med-BERT


© Copyright 2025, ITS - Research Computing.
