Benchmark annotation engines on the same input
Source: R/benchmark.R
benchmark_annotation_engines.Rd

Runs one or more annotation engines on the same expression and metadata inputs, then summarizes coverage, ambiguity, and provenance for comparison.
Usage
benchmark_annotation_engines(
expr,
meta = NULL,
species = c("auto", "human", "mouse"),
annotation_preset = NULL,
engines = c("none", "biomart", "orgdb", "ensdb", "hybrid"),
fields = NULL,
strip_version = TRUE,
biomart_version = 102,
biomart_host = NULL,
biomart_mirror = NULL,
assay_name = NULL,
gene_id_col = NULL,
sample_col = "sample",
output_file = NULL,
coverage_file = NULL,
verbose = TRUE
)

Arguments
- expr
  Expression table with gene_id first, or a SummarizedExperiment-like object.

- meta
  Metadata table with sample first. Leave NULL when expr is a SummarizedExperiment.

- species
  Either "auto", "human", or "mouse".

- annotation_preset
  Optional preset that fixes a reproducible annotation configuration. Supported values are "human_v102", "mouse_v102", "human_tpm_v102", "mouse_tpm_v102", "human_count_v102", and "mouse_count_v102".

- engines
  Character vector of annotation engines to compare.

- fields
  Optional annotation fields to benchmark.

- strip_version
  Whether to remove Ensembl version suffixes.

- biomart_version
  Fixed Ensembl release used by the biomaRt backend.

- biomart_host
  Optional explicit Ensembl host.

- biomart_mirror
  Optional Ensembl mirror name.

- assay_name
  Optional assay name when expr is a SummarizedExperiment.

- gene_id_col
  Optional row-data column containing Ensembl IDs when expr is a SummarizedExperiment.

- sample_col
  Metadata column to use as sample when expr is a SummarizedExperiment.

- output_file
  Optional CSV path for the benchmark summary.

- coverage_file
  Optional CSV path for long-format coverage results.

- verbose
  Whether to emit progress messages.
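Examples

A minimal sketch of a benchmark run on a small in-memory table, assuming the package providing benchmark_annotation_engines is attached. The gene IDs, sample names, and output paths are placeholders, and the "orgdb" engine is assumed to require the matching Bioconductor annotation package (org.Hs.eg.db for human) to be installed.

```r
# Toy expression table: gene_id first, then one column per sample.
expr <- data.frame(
  gene_id = c("ENSG00000141510.16", "ENSG00000012048.23"),
  s1 = c(10, 0),
  s2 = c(5, 2)
)

# Matching metadata: sample column first.
meta <- data.frame(
  sample = c("s1", "s2"),
  group = c("control", "treated")
)

# Compare the offline baseline ("none") against the orgdb engine,
# stripping the ".16"/".23" version suffixes before lookup and
# writing both summary and long-format coverage tables to CSV.
res <- benchmark_annotation_engines(
  expr,
  meta = meta,
  species = "human",
  engines = c("none", "orgdb"),
  strip_version = TRUE,
  output_file = "engine_summary.csv",
  coverage_file = "engine_coverage.csv"
)
```

When expr is a SummarizedExperiment, meta stays NULL and the assay_name, gene_id_col, and sample_col arguments select the assay, the row-data column holding Ensembl IDs, and the colData column to treat as sample.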