Multimodal overview

After the main.nf pipeline has been run successfully, WebAtlas can optionally process a group of multimodal datasets that share common features. This step prepares the unified multimodal visualisation for the web app.

The data outputs generated by running the main.nf conversion pipeline serve as inputs for this multimodal integration pipeline.

Tasks completed by the pipeline

The multimodal integration pipeline performs several tasks:

  1. Reindex each dataset by a user-provided offset so IDs do not clash between modalities.

  2. Optionally, concatenate additional observation-by-feature matrices or categorical values (for example, a cell type prediction matrix and/or cell type categories) to the expression matrix so they can be visualised as continuous values.

  3. Find the intersection of features across all datasets and subset each dataset to that intersection, since including features not present in all datasets can produce misleading visualisations. Note that features are intersected using their index in the AnnData objects (the var table), so all datasets must use the same type of identifier as the index (for example, all names or all IDs) for the intersection to be computed correctly. A conceptual sketch of these steps follows this list.
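
To make these steps concrete, the following is a minimal sketch of the equivalent operations on AnnData objects. It is not the pipeline's own code: the file names, the offset value and the celltype column are assumptions made for illustration only.

    # Conceptual sketch of the three tasks above using anndata.
    # Not the pipeline's own code: file names, the offset value and the
    # "celltype" column are assumptions for illustration.
    import anndata as ad
    import numpy as np
    import pandas as pd
    from scipy.sparse import issparse

    # Load two modalities previously converted by main.nf (hypothetical paths)
    rna = ad.read_h5ad("rna.h5ad")
    atac = ad.read_h5ad("atac.h5ad")

    # 1. Reindex observations with a per-dataset offset so IDs do not clash
    offset = 100_000  # assumed user-provided offset for the second modality
    rna.obs_names = np.arange(rna.n_obs).astype(str)
    atac.obs_names = (np.arange(atac.n_obs) + offset).astype(str)

    # 2. Optionally append a cell type prediction matrix (obs x labels) to the
    #    expression matrix so predictions can be shown as continuous values
    preds = pd.get_dummies(rna.obs["celltype"]).astype(float)
    X = rna.X.toarray() if issparse(rna.X) else rna.X
    rna = ad.AnnData(
        X=np.hstack([X, preds.values]),
        obs=rna.obs,
        var=pd.DataFrame(index=list(rna.var_names) + list(preds.columns)),
    )

    # 3. Subset every dataset to the intersection of features. The var indices
    #    must hold the same type of identifier (all names, or all IDs) in every
    #    dataset for the intersection to be meaningful.
    genes = rna.var_names.intersection(atac.var_names)
    atac = atac[:, genes].copy()
    rna = rna[:, list(genes) + list(preds.columns)].copy()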

Running the multimodal pipeline

Follow the instructions below to run the multimodal pipeline.

  1. Configure the parameters file for the multimodal.nf pipeline.

  2. Run the multimodal.nf pipeline (see the example invocation after this list).

  3. Visualise the multimodal data in a web browser.
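
As a rough illustration of step 2, a Nextflow pipeline of this kind is typically launched with a parameters file. The parameters file name below is an assumption; substitute the configuration you prepared in step 1.

    # Launch the multimodal pipeline with a prepared parameters file
    # (the file name is an assumption; use your own configuration)
    nextflow run multimodal.nf -params-file multimodal-params.yaml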