To do a full install on a local machine or HPC, see the Installation page.


For a faster way to get started, consider using a Singularity image for RiboPipe, hosted here.

Singularity offers a fast, reproducible way to run RiboPipe: all dependencies and the operating system are bundled into a single disk image. These Singularity "containers" are widely used in cloud computing, and cloud computing environments often come with the software pre-installed. If yours does not, download Singularity first (it must be run on a Linux OS).
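Before pulling an image, it can help to confirm that Singularity is actually on your PATH. A minimal sketch of such a pre-flight check (the wording of the messages is ours, not RiboPipe's):

```shell
# Pre-flight check: confirm the singularity binary is available before pulling images.
if command -v singularity >/dev/null 2>&1; then
    status="found ($(singularity --version))"
else
    status="not found -- install Singularity first (Linux only)"
fi
echo "singularity: $status"
```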

  1. Download the RiboPipe Singularity container:
$ singularity pull library://sylabsed/linux/ribopipe
  2. Run RiboPipe:
$ raw_data=/path/to/raw/data
$ output_data=/path/to/output/data
$ singularity exec ribopipe.sif riboseq -i $raw_data -o $output_data ...
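The container run above can be wrapped in a small guard so it fails early on a bad path. This is a sketch under our own assumptions (the function name `run_ribopipe` is hypothetical; the image name and riboseq flags are those shown above):

```shell
# Guarded launcher sketch for the container run above.
run_ribopipe() {
    raw_data=$1
    output_data=$2
    # Fail early if the raw-data directory is missing.
    if [ ! -d "$raw_data" ]; then
        echo "raw data directory not found: $raw_data" >&2
        return 1
    fi
    # Create the output directory if needed, then run the pipeline in the container.
    mkdir -p "$output_data"
    singularity exec ribopipe.sif riboseq -i "$raw_data" -o "$output_data"
}
```

Call it as `run_ribopipe /path/to/raw/data /path/to/output/data`; any further riboseq options can be appended inside the function.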


To run RiboPipe after a local install:

  1. Move your raw data to a directory of your choice
  2. Create an empty output directory
  3. Run the riboseq module:
$ raw_data=/path/to/raw/data
$ output_data=/path/to/output/data
$ ribopipe riboseq -i $raw_data -o $output_data ...
  4. Collect the raw_counts.csv output in $output_data/assembly/counts and edit sample_info.csv in the RiboPipe resources folder to describe your samples
  5. Run the diffex module:
$ ribopipe_path=/path/to/ribopipe
$ ribopipe diffex -i $output_data/assembly/counts/raw_counts.csv -d $ribopipe_path/resources/sample_info.csv -o output_name --type riboseq
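Since diffex depends on the counts table produced by the riboseq step, the call above can be guarded by a file-existence check. A minimal sketch, using the placeholder paths from the steps above:

```shell
# Sketch: run the diffex stage only once the counts table from riboseq exists.
output_data=/path/to/output/data
ribopipe_path=/path/to/ribopipe
counts="$output_data/assembly/counts/raw_counts.csv"

if [ -f "$counts" ]; then
    ribopipe diffex -i "$counts" -d "$ribopipe_path/resources/sample_info.csv" \
        -o output_name --type riboseq
else
    echo "counts table not found yet: $counts (run the riboseq step first)" >&2
fi
```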


1) Modify hpc_run_template.sh in the resources folder, an example script for submitting the pipeline job to the HPC, and make sure the dependencies listed in the script are available on the HPC system; otherwise they need to be installed locally.
2) Run the script by executing the following:

$ sbatch hpc_run_template.sh

If you want the SLURM output file to be written to your SLURM scratch directory, to avoid storage space issues on your interactive node, replace the #SBATCH -o slurmjob-%j line with the path to your SLURM directory:

#SBATCH -o /scratch/general/lustre/INPUT_USER_ID_HERE/slurmjob-%j
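For context, the top of a modified hpc_run_template.sh might then look like the following sketch. Only the -o line comes from the instructions above; the time, node, and account directives are placeholder assumptions, not RiboPipe defaults:

```shell
#!/bin/bash
#SBATCH --time=24:00:00               # placeholder wall-time limit
#SBATCH --nodes=1                     # placeholder node count
#SBATCH --account=INPUT_ACCOUNT_HERE  # placeholder account name
#SBATCH -o /scratch/general/lustre/INPUT_USER_ID_HERE/slurmjob-%j

# ...pipeline commands from hpc_run_template.sh follow here...
```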