Reference: Input Parameters/Flags#

The documentation below is automatically generated from the input schema and contains additional technical detail. Parameters in bold are required and must be set by the user.

Setting parameters#

Parameters can be set in two ways:

  1. Storing parameters in a configuration file using the -params-file option. See How to set parameters in a file for more details.

  2. In a terminal using two dashes, e.g.:

$ nextflow run pgscatalog/pgsc_calc \
    -profile test,docker \
    --liftover \
    --target_build GRCh38
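For comparison, the double-dash parameters above could instead live in a params file (a hypothetical params.yaml; Nextflow accepts YAML or JSON for -params-file):

```yaml
# params.yaml — hypothetical example; keys mirror the --liftover and
# --target_build flags from the command above
liftover: true
target_build: "GRCh38"
```

which would then be passed with:

$ nextflow run pgscatalog/pgsc_calc -profile test,docker -params-file params.yaml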

Parameters with a single dash (e.g. -profile) configure Nextflow directly.

Setting parameters with a configuration file is the recommended way of working with the pipeline, because it helps you keep track of your analysis.

For examples of setting max job request options, see How do I run pgsc_calc on larger datasets and more powerful computers?.

Advanced parameters#

Some parameters have been hidden below to improve the readability of this page. You can view the entire list by running:

$ nextflow run pgscatalog/pgsc_calc --help

Or by downloading the schema and opening it in a text editor.

Parameter schema#

pgscatalog/pgsc_calc pipeline parameters#

This pipeline applies scoring files from the PGS Catalog to target set(s) of genotyped samples

https://raw.githubusercontent.com/pgscatalog/pgsc_calc/master/nextflow_schema.json

type: object

Input/output options#

Define where the pipeline should find input data and save output data.

type: object

  • input — Path to input samplesheet (type: string; default: None)

  • format — Format of input samplesheet (type: string; enum: csv, json; default: csv)

  • scorefile — Path to a scoring file in PGS Catalog format. Multiple scoring files can be specified using wildcards (e.g., --scorefile "path/to/scores/*.txt") (type: string)

  • pgs_id — A comma-separated list of PGS score IDs, e.g. PGS000802 (type: string; default: None)

  • pgp_id — A comma-separated list of PGS Catalog publications, e.g. PGP000001 (type: string; default: None)

  • trait_efo — A comma-separated list of PGS Catalog EFO traits, e.g. EFO_0004214 (type: string; default: None)

  • target_build — Genome build of input data (type: string; enum: GRCh37, GRCh38)

  • ref — Path to reference database (type: string; default: https://gitlab.ebi.ac.uk/nebfield/test-datasets/-/raw/master/pgsc_calc/reference_data/pgsc_calc_ref.sqlar)

  • copy_genomes — Copy harmonised genomes (plink2 pgen/pvar/psam files) to outdir (type: boolean)

  • outdir — Path to the output directory where the results will be saved (type: string; default: ./results)

  • email — Email address for completion summary (type: string; pattern: ^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$)
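The email parameters are validated against the regular expression shown in the schema. As a quick sketch, that pattern can be tried locally with Python's re module (valid_email is a hypothetical helper, not part of the pipeline):

```python
import re

# Pattern copied verbatim from the schema's email validation rule
EMAIL_RE = re.compile(r"^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$")

def valid_email(addr: str) -> bool:
    """Return True if addr matches the schema's --email pattern."""
    return EMAIL_RE.match(addr) is not None

print(valid_email("user@example.org"))  # True
print(valid_email("not-an-email"))      # False
```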

Variant Matching QC & Filtering#

type: object

  • liftover — Lift scoring files to match your target genomes. Requires build information in the header of the scoring files. (type: boolean)

  • min_lift — Minimum proportion of variants required to successfully remap a scoring file to a different genome build (type: number; minimum: 0; maximum: 1; default: 0.95)

  • keep_multiallelic — Allow matches of scoring file variants to multiallelic variants in the target dataset (type: boolean)

  • keep_ambiguous — Keep matches of scoring file variants to strand ambiguous variants (e.g. A/T and C/G SNPs) in the target dataset. This assumes the scoring file and target dataset report variants on the same strand. (type: boolean)

  • fast_match — Enable fast matching, which significantly increases RAM usage (32GB minimum recommended) (type: boolean)

  • min_overlap — Minimum proportion of variants present in both the score file and input target genomic data (type: number; minimum: 0; maximum: 1; default: 0.75)
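Thresholds like min_lift and min_overlap are plain proportion checks. A hypothetical sketch of the min_overlap logic (passes_min_overlap is an illustrative name, not the pipeline's actual function):

```python
def passes_min_overlap(n_matched: int, n_score_variants: int,
                       min_overlap: float = 0.75) -> bool:
    """True when the proportion of scoring-file variants found in the
    target genomes meets the threshold (schema default: 0.75)."""
    return (n_matched / n_score_variants) >= min_overlap

print(passes_min_overlap(80, 100))  # True: 0.80 >= 0.75
print(passes_min_overlap(50, 100))  # False: 0.50 < 0.75
```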

Max job request options#

Set the top limit for requested resources for any single job.

type: object

  • max_cpus — Maximum number of CPUs that can be requested for any single job (type: integer; default: 16)

  • max_memory — Maximum amount of memory that can be requested for any single job (type: string; pattern: ^\d+(\.\d+)?\.?\s*(K|M|G|T)?B$; default: 128.GB)

  • max_time — Maximum amount of time that can be requested for any single job (type: string; pattern: ^(\d+\.?\s*(s|m|h|day)\s*)+$; default: 240.h)
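max_memory and max_time are strings validated against the regex patterns shown above. A quick check in Python, using the schema's own patterns (copied verbatim; the defaults 128.GB and 240.h both pass):

```python
import re

# Patterns copied from the schema entries for max_memory and max_time
MEMORY_RE = re.compile(r"^\d+(\.\d+)?\.?\s*(K|M|G|T)?B$")
TIME_RE = re.compile(r"^(\d+\.?\s*(s|m|h|day)\s*)+$")

print(bool(MEMORY_RE.match("128.GB")))  # True: the default value
print(bool(TIME_RE.match("240.h")))     # True: the default value
print(bool(MEMORY_RE.match("lots")))    # False: no leading digits
```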

Generic options#

Less common options for the pipeline, typically set in a config file.

type: object

  • help — Display help text (type: boolean)

  • publish_dir_mode — Method used to save pipeline results to output directory (type: string; enum: symlink, rellink, link, copy, copyNoFollow, move; default: copy)

  • email_on_fail — Email address for completion summary, only when pipeline fails (type: string; pattern: ^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$)

  • plaintext_email — Send plain-text email instead of HTML (type: boolean)

  • monochrome_logs — Do not use coloured log outputs (type: boolean)

  • tracedir — Directory to keep pipeline Nextflow logs and reports (type: string; default: ${params.outdir}/pipeline_info)

  • validate_params — Whether to validate parameters against the schema at runtime (type: boolean; default: true)

  • show_hidden_params — Show all params when using --help (type: boolean)

  • enable_conda — Run this workflow with Conda. You can also use '-profile conda' instead of providing this parameter. (type: boolean)

  • singularity_pull_docker_container — Instead of directly downloading Singularity images, force the workflow to pull and convert Docker containers instead (type: boolean)

  • platform — What platform is the pipeline executing on? (type: string; enum: amd64, arm64; default: amd64)

  • parallel — Enable parallel calculation of scores. This is I/O and RAM intensive. (type: boolean)