PROJECT MANAGEMENT

Projects are self-contained mini-workspaces where sample sets can be analyzed independently, using multiple pipelines, without interference from other data.

SAMPLE SET MANAGEMENT

A Sample Set is a group of sequence files that belong to an experiment and can be analyzed together. The Sample Set window allows new Sample Set creation and provides a list view of the existing Sample Sets within a project (Fig. 1).

 

Fig. 1. The Sample Set window shows the list of sample sets.

 

 

Sample Set Creation

A Sample Set can be created by selecting the samples available in Sequence Data. Click the icon on the Project Main window or the icon on the Sample Set window to create a new Sample Set (Fig. 2).

Fig. 2. New Sample Set creation window 

 

 

Fill in the following details on the Sample Set creation window:

Only alphanumeric characters and spaces are allowed.

Sample Set names are helpful when executing pipelines.

CAUTION - Names shouldn’t be reused.

Samples from different organisms can be selected to form a single sample set.

CAUTION - Unless intended, combining samples from multiple organisms in one sample set is not recommended.

Samples with different tags can be selected to create one sample set.

CAUTION - Select either SE or PE samples only. The platform doesn’t allow mixed selection.
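The SE/PE caution above amounts to a simple validation check. The sketch below is illustrative, not platform code; the sample records and the `layout` field name are assumptions.

```python
# Illustrative sketch (not platform code): reject sample selections that
# mix single-end (SE) and paired-end (PE) samples, per the caution above.
# The "layout" field name is an assumption.

def validate_layouts(samples):
    """True if all selected samples share one layout (all SE or all PE)."""
    layouts = {s["layout"] for s in samples}
    return len(layouts) <= 1 and layouts <= {"SE", "PE"}
```

A mixed selection, such as one SE sample and one PE sample, fails this check, mirroring the platform's behavior.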

Click the filter icon to filter samples based on sample quality. Six filters are supported.

          1. Total number of reads: shows the samples whose total read count is greater than the number entered.
          2. Number of poor-quality reads: shows the samples with fewer poor-quality reads than the number entered. Read quality is determined by the FastQC tool.
          3. Sequence length: shows the samples whose read lengths fall within the range entered.
          4. Per base sequence quality: shows the samples whose per-base sequence quality is tagged Pass/Fail/Warn.
          5. Sequence length distribution: shows the samples whose read length distribution is tagged Pass/Fail/Warn.
          6. Adapter content: shows the samples whose read adapter content is tagged Pass/Fail/Warn.
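The six filters can be pictured as predicates over FastQC-style metrics. The sketch below is illustrative only; the record fields and the function name are assumptions, not the platform's API.

```python
# Illustrative sketch of the six sample-quality filters. The metric names
# mirror FastQC output, but the record structure is an assumption.

def passes_filters(sample, min_total_reads=0, max_poor_reads=None,
                   length_range=None, per_base_quality=None,
                   length_distribution=None, adapter_content=None):
    if sample["total_reads"] <= min_total_reads:           # filter 1
        return False
    if max_poor_reads is not None and sample["poor_reads"] >= max_poor_reads:
        return False                                       # filter 2
    if length_range is not None:                           # filter 3
        lo, hi = length_range
        if not (lo <= sample["seq_length"] <= hi):
            return False
    # Filters 4-6: Pass/Fail/Warn flags from the corresponding FastQC modules
    for key, wanted in (("per_base_quality", per_base_quality),
                        ("length_distribution", length_distribution),
                        ("adapter_content", adapter_content)):
        if wanted is not None and sample[key] != wanted:
            return False
    return True
```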

 

Click the icon in the upper right corner of the window to save the Sample Set. Multiple sample sets can be created under one project with different sequence files.

A sample set can be deleted by clicking the icon on the Sample Set window. This action deletes the sample set only; the samples remain available in Sequence Data for future use.

 

 

Project Setup

A new project can be created in two ways. 

  1. From the Dashboard window - click the icon.
  2. From the Projects window - click the icon in the upper right corner.

Fig. 1. New Project setup window. All fields are mandatory.

A new project can be created from the Project Setup window (Fig. 1). Fill in the seven mandatory fields with appropriate details and descriptions. The data in this form is used for automatic reference file(s) selection and is exported to the final report.

CAUTION - Organism enables appropriate reference file(s) selection.

HINT - Project names cannot be longer than 30 characters; only alphanumeric characters and spaces are allowed.

CAUTION - Project name should be unique.

HINT - Select Other if the organism is not listed, and contact the Stanome technical support team to add a new organism to the list.

HINT - Carefully choose the Data type to get the relevant pipeline suggestions.

Multiple options can be selected.
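The naming hints above amount to a simple validity check. The sketch below assumes the rules as stated (at most 30 characters; alphanumeric characters and spaces only; unique); the function itself is illustrative, not platform code.

```python
import re

# Illustrative check of the project-name rules stated above:
# at most 30 characters, alphanumerics and spaces only, and unique.
NAME_RE = re.compile(r"[A-Za-z0-9 ]{1,30}")

def valid_project_name(name, existing_names):
    return bool(NAME_RE.fullmatch(name)) and name not in existing_names
```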

Click the icon to save the project. The new project is saved and the Project Main window opens (Fig. 2), with the project-specific menu available on the left. The project can be navigated through Sample Set, My Pipelines, Jobs, and Reports; these four features are described in the following sections.

Fig. 2. Project Main window.

 

Projects can be deleted using the icon on the Projects window. All components within a project (sample sets, pipelines, jobs, and reports) are deleted permanently.

 

 

Pipeline Management

A pipeline is a set of computational tools that run either sequentially or in parallel to achieve a specific data analysis objective. Tools/commands are designated as steps in a pipeline.


Favorite Pipelines

Pipelines from any project can be added to or removed from the favorites by clicking the icon next to the pipeline name in the My Pipelines list view. Stanome-owned pipelines can’t be added to the favorites. Favorite pipelines are visible in the Pipeline Library and My Pipelines (filtered by Owner, Hub, and Category).


My Pipelines

My Pipelines shows the list of pipelines within a project. Pipelines can be created, viewed, edited, and deleted within the project scope. My Pipelines is empty by default; users can create new pipelines de novo or copy pre-configured pipelines from the Pipeline Library. Follow the instructions in the next section to create a pipeline.


Pipeline Creation

The new pipeline creation window can be accessed from two locations: click the icon on the Project Main window or the icon on the My Pipelines window to create a new pipeline (Fig. 1).

Fig. 1. New pipeline creation window.

 

 

The new pipeline creation window displays three icons in the upper right corner.

Exit out of the pipeline creation without saving.

Save the pipeline. 

Copy pipeline.

 

New pipelines can be added to a project in three ways.


Pipeline Filters

Pipelines on the platform are tightly connected with projects. The Data type field in a project is tied to the Hub field in pipelines. Based on the Data type defined at project setup, only relevant pipelines are shown. The following table shows the corresponding terms between projects and pipelines.

 

Data Type in Project     Pipeline Hub
Whole Genome             Genome
Microbiome               Micro
Transcriptome            Transcript
Targeted Genome          Variant

Table 4: Terms associated between projects and pipelines
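Table 4 can be read as a lookup from a project's Data type to the pipeline Hub it unlocks. The dictionary below simply restates the table for illustration.

```python
# The Data type → Hub correspondence from Table 4 as a lookup table.
HUB_FOR_DATA_TYPE = {
    "Whole Genome": "Genome",
    "Microbiome": "Micro",
    "Transcriptome": "Transcript",
    "Targeted Genome": "Variant",
}

# Example: a Transcriptome project surfaces Transcript-hub pipelines.
```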


Copy Pipeline

A new pipeline can be created by copying an existing pipeline from another project or a pre-configured pipeline from the Pipeline Library.

Click the icon in the upper right corner of the pipeline creation window to see the existing pipelines via the Select Pipeline dialog box (Fig. 1). Using the project setup information, a prefiltered list of pipelines is shown. Users can combine multiple fields to filter the pipelines. Pipelines from the Pipeline Library can be viewed by selecting the owner as “Stanome”. Select a pipeline and click the icon to copy it into the current project. The pipeline steps, tools, parameters, and other details are auto-populated (except the pipeline name). Give the new pipeline a unique name (duplicate names are not allowed) and verify the tools and commands before saving the pipeline.

HINT: Pipeline names should be less than 50 characters long; only alphanumeric characters and spaces are allowed.

 

Fig. 1. The Select Pipeline dialog box.

 


De Novo Pipeline

De novo pipeline building requires bioinformatics expertise. Please contact the technical support team for assistance.

Creating a brand-new pipeline is more involved than copying an existing one. Fill in the following details on the pipeline creation window (Fig. 1) to create a new pipeline. Mandatory fields are indicated with asterisks (*).

At least one step is required for a functional pipeline.

Click the icon to add a new step (or tool) to the pipeline (Fig. 1). There are eight fields in each step:

HINT: Only positive integers are allowed.

HINT: The name should be unique.

HINT: Only positive integers are allowed.

HINT: The first step can’t be a merge step.

HINT: Input sources from multiple steps are allowed. (Example: BAM and BAI files created in different steps are required for variant calling.)

HINT: Currently, Data Store is allowed for the first step only.

  1.   Command Builder

Commands are preconfigured by the platform admin. Users can only edit the commands.

This is a generic command building process. You are NOT making the actual file selections required for the analysis. The platform does it automatically based on your definitions.

Fig. 1. The Command Builder dialog box

 

The first tab of the Command Builder describes the generic details (summary) of a command.

Default pattern: #command #options #arguments #input #output

The pattern should ALWAYS start with #command and can’t be edited.

Allowed character: Parameter words, #, space, and >

“>” is allowed preceding the #output ONLY
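The pattern rules above can be checked mechanically. The validator below is a sketch under the stated rules (must start with #command; only parameter words, #, space, and >; > only directly before #output) and is not the platform's own parser.

```python
# Sketch of a validator for the command-pattern rules described above.
def valid_pattern(pattern):
    tokens = pattern.split()
    if not tokens or tokens[0] != "#command":
        return False  # the pattern must start with #command
    for i, tok in enumerate(tokens):
        if tok == ">":
            # ">" is allowed only immediately preceding #output
            if i + 1 >= len(tokens) or tokens[i + 1] != "#output":
                return False
        elif not tok.replace("#", "").isalnum():
            return False  # only parameter words and '#' otherwise
    return True
```

The default pattern `#command #options #arguments #input #output` passes this check; a pattern placing `>` anywhere other than before `#output` does not.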

The second tab of the Command Builder (Fig. 2) describes the Options parameter. Details of the Options tab are described below: 

Fig. 2. The options tab

 

Single-word parameters should be defined as Options (Examples: --ignore, --1, PE, SE). All the options are listed in a table format. New rows can be added using the ‘+’ sign at the bottom of the table. Six fields are available under each option.

CAUTION - Verify the usage of each option before using it.

Each Field Type accepts the corresponding Values listed below:

Annotation

  • Variant annotation files (Mills1000G_INDELS, DBSNP, 1000G_HC, 1000G_OMNI, and HAPMAP), GATK
  • Pathway or GO
  • VEP Cache and VEP Cache Version
  • GTF
  • ABR

Constant

  • Any constant value (alphanumerics) (Examples: -o, --i, and --single)

Metadata

  • Experimental Design
  • Targets
  • Genelist
  • Amplicon ranges

Reference

Define references to select

  • References: Genome/Transcriptome
  • Indexed references: BWA, Bowtie2, etc

Threshold

Define threshold values to use

  • qvalue 
  • pvalue

Variable

Native variables of the platform 

  • JobID
  • Organism
  • Ploidy
  • Sample Name
  • Reference Version
  • Sequencing Platform

Table. Available Field Types and their corresponding Values.

CAUTION - Refer to the Arguments section for defining parameters with key-value pairing.

Input and output files are defined under the INPUTS and OUTPUTS tabs, respectively. Eight fields are available under each of these parameters (Fig. 3).

CAUTION - Allowed delimiters are =, -, :, and ;

CAUTION - The file extensions should be precise; even FASTQ and FQ are treated as distinct.

 

            Input file name              Regular expression
Example 1   castor1_R1.fastq             R1
Example 2   castor1_R1_trimmed.fastq     R1_trimmed
Example 3   abcd_1.fastq                 _1
Example 4   abcd_1_R.fastq               _1_R

Example: ${sampleName}_trim.fastq for a Trimmomatic step. This naming helps track files across the entire pipeline execution.
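The four examples above can be verified with ordinary regular-expression matching. The snippet below simply confirms that each expression occurs in its file name; Python's `re` module is used for illustration.

```python
import re

# The filename / regular-expression pairs from the examples above.
examples = {
    "castor1_R1.fastq": "R1",
    "castor1_R1_trimmed.fastq": "R1_trimmed",
    "abcd_1.fastq": "_1",
    "abcd_1_R.fastq": "_1_R",
}

# Each expression should match somewhere inside its file name.
matches = {name: bool(re.search(expr, name)) for name, expr in examples.items()}
```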

Fig. 3A. The Command Builder Inputs view.

 

Fig. 3B. The Command Builder Outputs view

 

Parameters defined as key-value pairs should be defined as Arguments (Fig. 4). Arguments can be used for any parameters supported by the tools and for other required files (reference files, GTF or annotation files, target or hotspot files). They are defined by the following eight features:

CAUTION - Refer to the Options section for defining singleton parameters.

Arguments are grouped into categories to support diverse tools and commands. In arguments, two fields (Type and Value) work together to define an argument.

CAUTION - Allowed delimiters are =, -, :, %, and

Fig. 4. The Command Builder arguments view.
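An argument ultimately renders as a key joined to a value by one of the allowed delimiters. A minimal sketch follows; the delimiter set reflects the ones legible in the caution above (the original list is truncated), and the function name is illustrative.

```python
# Illustrative sketch: render a key-value argument with an allowed delimiter.
# Delimiter set taken from the caution above (the original list is truncated).
ALLOWED_DELIMITERS = {"=", "-", ":", "%"}

def format_argument(key, value, delimiter="="):
    if delimiter not in ALLOWED_DELIMITERS:
        raise ValueError(f"delimiter {delimiter!r} is not allowed")
    return f"{key}{delimiter}{value}"
```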

 

Click the icon in the bottom right corner to save the changes to the command. This completes the first step in the pipeline. Continue adding steps until the pipeline is complete. Steps can be dragged and dropped to any position using the icon; the step number, predecessor, and input source are automatically readjusted for all steps. Click the icon to save the pipeline.

 


Pipeline Execution

Once a pipeline is successfully created and validated within a project, it's ready for execution. A newly created pipeline is shown in Fig. 1.

Fig. 1. Pipeline creation window

 

HINT: The contents of the metadata files can be viewed by clicking the icon

CAUTION - Differential Suite pipelines need at least two conditions with two replicates each. Variant Suite pipelines need properly formatted target files.

 

Fig. 2. The Run Pipeline dialog box. Files displayed in this dialog box are the actual files used in the pipeline execution.

 

 

Computing resources are initialized upon pipeline execution. The pipeline window automatically refreshes and redirects to the Jobs window. Executed jobs appear in the jobs table; refresh the window if a job is not visible. Jobs wait in the queue until computing resources are available; the status appears as Pending and then changes to Running. An email is sent when the job starts and again upon completion.

Click the STOP button on the Jobs window to abort an active pipeline execution.

Click the icon to delete a pipeline from the Pipeline window (Fig. 1). This action deletes the pipeline records entirely from a project; they can’t be retrieved.

 

Reports

Results of a pipeline execution are aggregated into easily understandable formats for quick viewing.


Overview

Each pipeline execution generates an HTML report. The final report and other files can be accessed through the Reports window (Fig. 1). Reports can also be accessed through the ReportID on the job details page.

Fig. 1. Reports Window

 

Reports are generated dynamically based on the analysis type, and each report is divided into sections based on the tools used in the pipeline. The first two sections, analysis summary and sample quality, are generated for all jobs to provide a job overview.

This section consolidates the information related to the job (project, samples, and the experiment) in four sub-sections:

Details of sample (sequencing) quality are provided in this section. It has two sub-sections:

 

The remaining sections are dynamically generated based on the pipeline type and the tools used. Two sample reports are provided below to understand the features of each report.

 


Variant Report

Variant calling pipelines contain two exclusive sections.

  Mapping

This section provides details of the abundance/mapping statistics in four sub-sections:

      1. Data Table: This (Fig. 1) shows several alignment statistics such as the total number of processed reads, the number of mapped or multi-mapped reads, and the number of uniquely mapped reads.
      2. Sample Coverage: Allows users to explore the alignment quality through a series of plots: sample depth of coverage in total read counts, sample depth of coverage in percentages, and on-target mapping quality in a 96-well plate format.
      3. Read Lengths: Histograms and 96-well plate plots show the read length distributions for mapped and unmapped reads.
      4. Genome Browser: Aligned reads against the reference genome can be viewed for each sample. The genome browser is interactive and allows exploratory analysis.

Fig. 1. Alignment statistics in the report.
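The percentages behind such a Data Table follow directly from the raw counts. The sketch below uses assumed field names and is not the report's internal code.

```python
# Illustrative derivation of alignment percentages from raw counts.
def mapping_summary(total_reads, mapped, multi_mapped):
    uniquely = mapped - multi_mapped
    def pct(n):
        return round(100.0 * n / total_reads, 2) if total_reads else 0.0
    return {
        "mapped_pct": pct(mapped),
        "multi_mapped_pct": pct(multi_mapped),
        "uniquely_mapped_pct": pct(uniquely),
        "unmapped_pct": pct(total_reads - mapped),
    }
```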

 

 Variant Calling

This section provides details of the variant calling statistics in five sub-sections:

        1. Call Rate Summary: Provides a summary of the genotype calls.
        2. Target Call Rates: Genotype calling metrics for the top 100 targets are shown in table format. The complete list can be downloaded from the Reports section. The position field is cross-linked to the Variant Browser. This section also shows:
          1. Genotype call distribution
          2. Genotype Heatmap
        3. Sample Call Rates: Histograms and 96-well plate plots show the call rate distributions across all the samples.
        4. Genotypes: Shows a table of the genotypes obtained for each marker across all the samples.
        5. Variant Browser: Sequencing read support for each variant can be viewed for each sample (Fig. 2). The variant browser is interactive and allows exploratory analysis.

 

Fig. 2. The variant browser in the report.
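A call rate like the ones summarized above is the fraction of samples with a called genotype at a target. This is a sketch only; the "./." no-call notation is a common VCF convention, not necessarily the platform's.

```python
# Illustrative per-target call rate; "./." denotes a no-call (VCF convention).
def call_rate(genotypes):
    if not genotypes:
        return 0.0
    called = sum(1 for g in genotypes if g != "./.")
    return called / len(genotypes)
```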

 


Transcript Report

This report contains three exclusive sections.

This section provides details of the abundance/mapping statistics in two sub-sections:

      1. Data Table: This shows several quasi-alignment statistics such as the number of total processed reads, the number of mapped or multi-mapped reads, and the uniquely mapped reads.
      2. Plots: Show the sample depth of coverage plots (mapped and unmapped reads) in raw numbers and percentages.

Details of the differential expression analysis are dynamically displayed for pipelines that contain a DE step. It has the following four sub-sections:

      1. Data Table: The top 200 most significant Differentially Expressed Genes (DEGs) are listed in tabular format. The table also provides fold change values, confidence levels, and various other parameters. The tool and the important parameters used for identifying the DEGs are described briefly below the table.
      2. Heatmap: The heatmap (Fig. 1) of the top 200 DEGs visualizes the comparison of DEG expression across samples and within a sample. A brief description below the map helps users understand and interpret the heatmap. A red rectangle in the heatmap indicates upregulation of a gene and a blue rectangle indicates downregulation.
      3. PCA & Volcano Plots: A PCA plot enables users to visualize the variability in the replicates of the two experimental conditions compared. All replicates of a condition are depicted with the same color; grouping of samples indicates whether replicates are more similar within a condition than between conditions. The volcano plot shows significantly differentially expressed genes: a scatter plot of the log fold change of expression between biological conditions against the significance of the change determined from the p-value. Volcano plots enable visual inspection of expression change across all the genes.
      4. Density: Shows normalization plots of samples, normalized to overcome bias due to read size and mRNA content.

 

Fig. 1. Heatmap of DE genes in the report.
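Volcano plots like the one described above conventionally place each gene at (log2 fold change, -log10 p-value) and flag genes passing both cutoffs. The sketch below uses illustrative thresholds, not the platform's defaults.

```python
import math

# Illustrative volcano-plot helpers; cutoffs are examples, not defaults.
def volcano_point(log2_fc, p_value):
    return (log2_fc, -math.log10(p_value))

def classify_deg(log2_fc, p_value, fc_cutoff=1.0, p_cutoff=0.05):
    if p_value < p_cutoff and abs(log2_fc) >= fc_cutoff:
        return "up" if log2_fc > 0 else "down"
    return "not significant"
```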

 

This section describes pathway analysis for DEGs (Fig. 2). The top 100 enriched pathways are shown in a table along with their p-values and the total number of DEGs belonging to each pathway. This section also shows a bubble plot of the enriched pathways, where the location of each bubble is determined by the percentage of DE genes in the enriched pathway relative to the total number of genes in the pathway.

 

Fig. 2. Pathway annotation of DE genes in the report.
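The bubble coordinate described above is the share of a pathway's genes that are differentially expressed. A minimal sketch, with illustrative names:

```python
# Illustrative: percent of a pathway's genes that are differentially
# expressed, the quantity that positions a bubble in the enrichment plot.
def de_percentage(de_genes_in_pathway, pathway_size):
    if pathway_size == 0:
        return 0.0
    return 100.0 * de_genes_in_pathway / pathway_size
```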

 

Jobs

Completed or unfinished pipeline runs are listed in the jobs table on the Jobs window (Fig. 1).

 

Fig. 1. The Jobs list window.

 

 

Fig. 2. Sample Deep-dive.