Pipeline Management

A pipeline is a set of computational tools that run either sequentially or in parallel to achieve a specific data-analysis objective. Each tool or command is designated as a step in the pipeline.

Favorite Pipelines

Pipelines from any project can be added to or removed from Favorites by clicking the icon next to the pipeline name in a list view (e.g., My Pipelines). Stanome-owned pipelines can't be added to Favorites. Favorite pipelines are visible in the Pipeline Library and in My Pipelines (filtered by Owner, Hub, and Category).

My Pipelines

My Pipelines shows the list of pipelines within a project. Pipelines can be created, viewed, edited, and deleted within the project scope. My Pipelines is empty by default; users can create new pipelines de novo or copy pre-configured pipelines from the Pipeline Library. Follow the instructions in the next section to create a pipeline.

Pipeline Creation

The new pipeline creation window can be accessed from two locations: click on the project window or on the My Pipelines window to create a new pipeline (Fig. 1).

Fig. 1. New pipeline creation window.

 

 

The new pipeline creation window displays three icons in the upper right corner.

  • Exit the pipeline creation window without saving.
  • Save the pipeline.
  • Copy a pipeline.

 

New pipelines can be added to a project in three ways.

Pipeline Filters

Pipelines on the platform are tightly coupled to projects. The Data Type field in a project is tied to the Hub field in pipelines: based on the Data Type defined in the project setup, only the relevant pipelines are shown. The following table shows the corresponding terms between projects and pipelines.

 

Data Type in Project     Pipeline Hub
Whole Genome             Genome
Microbiome               Micro
Transcriptome            Transcript
Targeted Genome          Variant

Table 4: Terms associated between projects and pipelines
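The mapping in Table 4 can be sketched as a simple lookup table (illustrative only; the platform applies this filter automatically based on the project setup):

```python
# Lookup-table sketch of Table 4: project Data Type -> pipeline Hub.
# Illustrative only; the platform applies this filter automatically.
DATA_TYPE_TO_HUB = {
    "Whole Genome": "Genome",
    "Microbiome": "Micro",
    "Transcriptome": "Transcript",
    "Targeted Genome": "Variant",
}

print(DATA_TYPE_TO_HUB["Transcriptome"])  # Transcript
```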

Copy Pipeline

A new pipeline can be created by copying an existing pipeline from another project or a pre-configured pipeline from the Pipeline Library.

Click the copy icon in the upper right corner of the pipeline creation window to see the existing pipelines via the Select pipeline dialog box (Fig.). Based on the project setup information, a prefiltered list of pipelines is shown; users can combine multiple fields to filter it further. Pipelines from the Pipeline Library can be viewed by selecting "Stanome" as the owner. Select a pipeline and click the copy button to copy it into the current project. The pipeline steps, tools, parameters, and other details are auto-populated (except the pipeline name). Give the new pipeline a unique name (duplicate names are not allowed) and verify the tools and commands before saving the pipeline.

HINT: The pipeline name must be less than 50 characters long; only alphanumeric characters and spaces are allowed.
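This naming rule can be sketched as a simple check (the helper function below is ours for illustration; the platform enforces the rule in the pipeline creation window):

```python
import re

# Sketch of the stated rule: the name must be under 50 characters and
# contain only alphanumeric characters and spaces. (Illustrative helper;
# the platform enforces this in the UI.)
NAME_RE = re.compile(r"[A-Za-z0-9 ]+")

def is_valid_pipeline_name(name: str) -> bool:
    return len(name) < 50 and NAME_RE.fullmatch(name) is not None

print(is_valid_pipeline_name("RNASeq QC v2"))  # True
print(is_valid_pipeline_name("bad/name!"))     # False
```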

 

Fig. 1. The Select pipeline dialog box.

 

De Novo Pipeline

De novo pipeline building requires bioinformatics expertise. Please contact the technical support team for assistance.

Creating a brand-new pipeline is more involved than copying an existing one. Fill in the following details in the pipeline creation window (Fig. 1) to create a new pipeline. Mandatory fields are marked with asterisks (*).

At least one step is required for a functional pipeline.

Click the icon to add a new step (or tool) to the pipeline (Fig. 1). There are eight fields in each step:

HINT: Only positive integers are allowed

HINT: The name should be unique.

HINT: Only positive integers are allowed

HINT: The first step can’t be a merge step

HINT: Input sources from multiple steps are allowed. (Example: BAM and BAI files created in different steps required for Variant calling).

HINT: Currently, Data Store is allowed for the first step only

1. Command Builder

Commands are preconfigured by the platform admin; users can only edit commands, not create new ones.

This is a generic command-building process. You are NOT making the actual file selections required for the analysis; the platform does that automatically based on your definitions.

Fig. 1. The Command Builder dialog box

 

The first tab of the Command Builder describes the generic details (summary) of a command.

Default pattern: #command #options #arguments #input #output

The pattern must ALWAYS start with #command, which can't be edited.

Allowed characters: parameter words, #, space, and >

">" is allowed immediately preceding #output ONLY.
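As a sketch of how the pattern works, each # token can be thought of as a placeholder that the platform substitutes at run time; the tool name and values below are hypothetical, not taken from the platform:

```python
# Hypothetical sketch of how the default pattern might expand at run time.
# The token values are invented for illustration; the platform fills them
# in automatically from the step's Options, Arguments, Inputs, and Outputs.
PATTERN = "#command #options #arguments #input #output"

def expand(pattern: str, values: dict) -> str:
    out = pattern
    for token, value in values.items():
        out = out.replace(token, value)
    return out

cmd = expand(PATTERN, {
    "#command": "mytool",
    "#options": "--single",
    "#arguments": "-t 4",
    "#input": "sample.fastq",
    "#output": "> sample.out",  # '>' may precede #output only
})
print(cmd)  # mytool --single -t 4 sample.fastq > sample.out
```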

The second tab of the Command Builder (Fig. 2) describes the Options parameter. Details of the Options tab are described below: 

Fig. 2. The options tab

 

Single-word parameters should be defined as options (Examples: --ignore, --1, PE, SE). All options are listed in a table. New rows can be added using the '+' sign at the bottom of the table. Six fields are available under each option.

CAUTION - Verify the usage of each option before use.

Field Type and corresponding Values:

Annotation
  • Variant annotation files (Mills1000G_INDELS, DBSNP, 1000G_HC, 1000G_OMNI, and HAPMAP), GATK
  • Pathway or GO
  • VEP Cache and VEP Cache Version
  • GTF
  • ABR

Constant
  • Any constant value (alphanumeric) (Examples: -o, --i, and --single)

Metadata
  • Experimental Design
  • Targets
  • Genelist
  • Amplicon ranges

Reference (define references to select)
  • References: Genome/Transcriptome
  • Indexed references: BWA, Bowtie2, etc.

Threshold (define threshold values to use)
  • qvalue
  • pvalue

Variable (native variables of the platform)
  • JobID
  • Organism
  • Ploidy
  • Sample Name
  • Reference Version
  • Sequencing Platform

Table. Available Field Types and their corresponding Values.

CAUTION - Please refer to the Arguments section for defining the parameters with key-value pairing

Input and output files are defined under INPUTS and OUTPUTS tabs, respectively. Eight fields are available under each of these parameters (Fig. 3).

CAUTION - Allowed delimiters are =, -, :, and ;

CAUTION - The file extensions must be precise; even FASTQ and FQ are treated as distinct.

 

            Input file name             Regular expression
Example 1   castor1_R1.fastq            R1
Example 2   castor1_R1_trimmed.fastq    R1_trimmed
Example 3   abcd_1.fastq                _1
Example 4   abcd_1_R.fastq              _1_R

(Example: ${sampleName}_trim.fastq for a Trimmomatic step.) This helps track the files across the entire pipeline execution.
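A sketch of how such a suffix pattern could identify a file (the platform's exact matching logic is not documented here; this helper and its behavior are assumptions for illustration):

```python
import re

# Hypothetical illustration: treat the "Regular expression" column from the
# table above as a pattern searched within the file-name stem (extension
# removed). The platform's actual matching logic may differ.
def matches(filename: str, pattern: str) -> bool:
    stem = filename.rsplit(".", 1)[0]  # drop the extension, e.g. ".fastq"
    return re.search(pattern, stem) is not None

print(matches("castor1_R1.fastq", "R1"))  # True
print(matches("abcd_1.fastq", "_1"))      # True
print(matches("castor1_R2.fastq", "R1"))  # False
```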

Fig. 3A. The Command Builder Inputs view.

 

Fig. 3B. The Command Builder Outputs view

 

Parameters defined as key-value pairs should be defined as arguments (Fig. 4). Arguments can be used for any parameters supported by the tools and for other required files (reference files, GTF or annotation files, target or hotspot files). They are defined by the following eight features:

CAUTION - Please refer to the Options section for defining the singleton parameters

Arguments are grouped into categories to support diverse tools and commands. In arguments, two fields (Type and Value) work together to define an argument.

CAUTION - Allowed delimiters are =, -, :, %, and
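As an illustrative sketch of the key-value pairing, an argument joins its Type/Value fields with one of the allowed delimiters; the argument names and values below are invented, and only the delimiter set comes from the text above:

```python
# Sketch: an argument is a key-value pair joined by an allowed delimiter
# (=, -, :, % are among those listed above). Keys and values are invented.
ALLOWED_DELIMITERS = ("=", "-", ":", "%")

def format_argument(key: str, value: str, delimiter: str = "=") -> str:
    if delimiter not in ALLOWED_DELIMITERS:
        raise ValueError(f"delimiter {delimiter!r} is not allowed")
    return f"{key}{delimiter}{value}"

print(format_argument("--qvalue", "0.05"))   # --qvalue=0.05
print(format_argument("threads", "4", ":"))  # threads:4
```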

Fig. 4. The Command Builder arguments view.

 

Click the button in the bottom right corner to save the changes to the command. This completes the first step of the pipeline. Continue adding steps until the pipeline is complete. Steps can be dragged and dropped to any position using the icon; the step number, predecessor, and input source are automatically readjusted for all steps. Click the save icon to save the pipeline.

 

Pipeline Execution

Once a pipeline is successfully created and validated within a project, it's ready for execution. A newly created pipeline is shown in Fig. 1.

Fig. 1. Pipeline creation window

 

HINT: The contents of the metadata files can be viewed by clicking the icon

CAUTION - Differential Suite pipelines need at least two conditions with two replicates each. Variant Suite pipelines need properly formatted target files.

 

Fig. 2. The Run Pipeline dialog box. Files displayed in this dialog box are the actual files used in the pipeline execution.

 

 

Computing resources are initialized upon pipeline execution. The pipeline window automatically refreshes and redirects to the Jobs window, where executed jobs appear in the jobs table. Refresh the window if the job is not visible. Jobs wait in the queue until computing resources are available; the status appears as Pending and then changes to Running. An email is sent when the job starts and again upon completion.

Click the STOP button in the Jobs window to cancel an active pipeline execution; this aborts the run.

Click the delete icon to delete a pipeline from the Pipeline window (Fig. 1). This action removes the pipeline records entirely from the project, and they can't be retrieved.