Simulation Experiment Recipe

Objectives

NOTE: The experiments presented herein are a subset of those described in “A Flexible Approach for Predictive Biomarker Discovery” (Boileau et al., 2022). This subset was adapted by the original author for a case study of the R package simChef. To reproduce this case study and its documentation, please see the simChef-case-study repository.

Introduction

An endeavor central to precision medicine is predictive biomarker discovery; predictive biomarkers define the patient sub-populations that stand to benefit most, or least, from a given treatment. The identification of these biomarkers is often the byproduct of the related but fundamentally different task of treatment rule estimation. When the number of potentially predictive biomarkers is commensurate with, or much larger than, the number of observations, the set of biomarkers designated as predictive by these methods will generally contain many false positives.

In synthetic and real-world data-inspired simulation settings emulating randomized control trials with high-dimensional covariate vectors, we study a proposed predictive biomarker discovery method, uniCATE. The objectives of this simulation study are:

  1. Evaluate the proposed method as an estimator of a parameter for directly assessing the importance of potentially predictive biomarkers.

  2. Examine the empirical behavior of the proposed method, such as its type-I error rate control, as the sample size \(n\) grows, to determine its correspondence with asymptotic results.

  3. Study the statistical robustness of the proposed method under a number of misspecified nuisance parameter estimator scenarios.

This work is motivated by the need to identify predictive biomarkers in clinical trials. Of particular interest is their detection for drug target discovery and diagnostic assay development. The former requires the identification of biomarkers causally related to the outcome of interest, whereas the latter seeks a set of strongly predictive biomarkers. This simulation study is therefore representative of these applications.

Notation

Consider \(n\) identically and independently distributed random vectors \(X_i = (W_i, A_i, Y^{(1)}_i, Y^{(0)}_i) \sim P_X\), \(i = 1, \ldots, n\), corresponding to complete but unobserved data generated by participants in an idealized randomized control trial or observational study, where:

  • \(V\): pre-treatment covariates, such as location and income, of dimension \(p\).
  • \(B\): pre-treatment biomarkers, such as gene expression data, of dimension \(q\).
  • \(W = (V, B)\): a \((q+p)\)-length random vector.
  • \(A\): a binary random variable representing a treatment assignment.
  • \(Y^{(1)}\) and \(Y^{(0)}\): partially observed random variables corresponding to the potential outcomes of clinical interest under the treatment and control conditions, respectively, for each patient.
  • \(P_X\): The unknown data-generating distribution of the full data \((W, A, Y^{(0)}, Y^{(1)})\).
  • \(Y = AY^{(1)} + (1-A)Y^{(0)}\): the observed outcome, so that \(Y_i = Y_i^{(1)}\) if patient \(i\) received treatment and \(Y_i = Y_i^{(0)}\) otherwise.
  • \(P_0\): The unknown data-generating distribution of the observed data \((W, A, Y)\).
  • \(P_n\): The observed empirical distribution of a sample of size \(n\) drawn from \(P_0\).

Clinically predictive biomarker importance parameter

Clinically relevant predictive biomarkers are often those that have a strong influence on the outcome of interest on the absolute scale. As such, an ideal target of inference when these outcomes are continuous and the number of covariates is small is the conditional average treatment effect (CATE), conditioning on the set of biomarkers:

\[\begin{equation*} \mathbb{E}_{P_X}\left[ Y^{(1)} - Y^{(0)} \big| B = b \right].\end{equation*}\]

Accurate and interpretable estimation of this parameter is generally challenging in high-dimensional settings, preventing the accurate recovery of predictive biomarkers.

Indexing the biomarkers by \(j = 1, \ldots, q\), such that \(B = (B_{1}, \ldots, B_{q})\), centering them such that \(\mathbb{E}_{P_X}[B_{j}] = 0\), and assuming that \(\mathbb{E}_{P_X}[B^2_{j}] > 0\), we instead target the full-data variable importance parameter \(\Psi^F(P_X) = (\Psi^F_1(P_X), \ldots, \Psi^F_q(P_X))\), where

\[\begin{equation*} \Psi_{j}^F(P_X) \equiv \frac{\mathbb{E}_{P_X}\left[\left(Y^{(1)} - Y^{(0)}\right)B_{j}\right]} {\mathbb{E}_{P_X}\left[B_{j}^2\right]}.\end{equation*}\]

Under the assumption that the expected difference in potential outcomes admits a linear form when conditioning on any given \(B_j\), \(\Psi^F(P_X)\) is the vector of expected simple linear regression coefficients produced by regressing the difference in potential outcomes against each biomarker. While the true relationship between the difference of potential outcomes and a predictive biomarker is almost surely nonlinear, \(\Psi^F(P_X)\) is a generally informative target of inference. Biomarkers with the largest absolute values in \(\Psi^F(P_X)\) generally modify the effect of treatment the most.
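This regression-slope interpretation can be checked with a minimal R sketch in a hypothetical full-data setting where both potential outcomes are observed (the data and slope here are made up for illustration):

```r
# Hypothetical full-data illustration: when Y(1) - Y(0) = 3 * B_j + noise,
# the moment ratio defining Psi_j^F should recover the slope of 3.
set.seed(1)
n <- 1e5
b_j <- rnorm(n)              # a centered biomarker: E[B_j] = 0, E[B_j^2] > 0
tau <- 3 * b_j + rnorm(n)    # difference in potential outcomes, Y(1) - Y(0)
psi_j <- mean(tau * b_j) / mean(b_j^2)
psi_j                        # close to the true slope of 3
```

In practice the difference \(Y^{(1)} - Y^{(0)}\) is never observed for any unit, which is the identification problem addressed next.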

Unfortunately, we generally do not observe both \(Y^{(0)}\) and \(Y^{(1)}\) for any given random unit; we only observe \(Y\). \(P_X\) is censored by a treatment assignment mechanism, so we can only interrogate parameters of the observed data-generating distribution \(P_0\). Fortunately, under the assumptions of no unmeasured confounding and overlapping treatment support, in which observations may be assigned to either treatment condition regardless of covariates, we have \[\begin{equation} \begin{split} \Psi_j(P_0) & \equiv \frac{\mathbb{E}_{P_0}\left[\left(\bar{Q}_0(1, W) - \bar{Q}_0(0, W)\right)B_{j}\right]} {\mathbb{E}_{P_0}\left[B_{j}^2\right]} \\ & = \Psi_j^F(P_X), \end{split}\end{equation}\] where \(\bar{Q}_0(a, w) \equiv \mathbb{E}_{P_0}[Y \mid A = a, W = w]\) is the conditional expected outcome given treatment and covariates.

It follows that \(\Psi(P_0) = \Psi^F(P_X)\) in a randomized control trial. \(\Psi_j(P_0)\) therefore measures the full-data predictive biomarker importance. We propose a method, uniCATE, to perform inference about this parameter.

Data Generation

LM with TEM

“LM with TEM” stands for “Linear Model with Treatment Effect Modification”. Its generative model is as follows: \[\begin{equation} \begin{split} W = B & \sim N(0, I_{100 \times 100}) \\ A \mid W & \sim \text{Bernoulli}(1/2) \\ Y \mid A, W & \sim N\left(W^\top \left(\beta + I(A = 1)\gamma^{(1)} + I(A = 0)\gamma^{(0)}\right), 1/2\right). \end{split}\end{equation}\] Here, \(\beta = (\beta_1, \ldots, \beta_{100})^\top\) such that \(\beta_1 = \ldots = \beta_{20} = 2\) and \(\beta_{21} = \ldots = \beta_{100} = 0\); \(\gamma^{(1)} = (\gamma_1^{(1)}, \ldots, \gamma_{100}^{(1)})^\top\) such that \(\gamma_1^{(1)} = \ldots = \gamma_{50}^{(1)} = 5\) and \(\gamma_{51}^{(1)} = \ldots = \gamma_{100}^{(1)} = 0\); and \(\gamma^{(0)} = (0, \ldots, 0)^\top\), matching `beta_0` in the generating function below.

Two hundred datasets each of 125, 250 and 500 observations are generated from this model.

Function

## function (n = 125, cov_mat = diag(1, nrow = 100)) 
## {
##     beta_main <- c(rep(2, 20), rep(0, 80))
##     beta_0 <- rep(0, 100)
##     beta_1 <- c(rep(5, 50), rep(0, 50))
##     W <- MASS::mvrnorm(n = n, mu = rep(0, 100), Sigma = cov_mat)
##     colnames(W) <- paste0("W", seq_len(100))
##     A <- ifelse(runif(n) < 0.5, 0, 1)
##     epsilon_0 <- rnorm(n = n, mean = 0, sd = 0.5)
##     epsilon_1 <- rnorm(n = n, mean = 0, sd = 0.5)
##     W_t <- t(W)
##     main_effects <- crossprod(W_t, beta_main)
##     Y_0 <- as.vector(main_effects + crossprod(W_t, beta_0) + 
##         epsilon_0)
##     Y_1 <- as.vector(main_effects + crossprod(W_t, beta_1) + 
##         epsilon_1)
##     Y <- ifelse(A == 0, Y_0, Y_1)
##     sample_list <- list(Y = Y, A = A, W = W)
##     return(sample_list)
## }

Input Parameters

## list()

Kinked with TEM

“Kinked with TEM” stands for “Kinked Model with Treatment Effect Modification”. Its generative model is as follows: \[\begin{equation} \begin{split} W = B & \sim N(0, I_{100 \times 100}) \\ A \mid W & \sim \text{Bernoulli}(1/2) \\ Y \mid A, W & \sim N\left( W^\top \left(I(A = 1)\gamma + I(A = 0)\;\text{diag}(I(W > 0))\;\gamma\right), 1/2 \right), \end{split}\end{equation}\] where \(\gamma = (\gamma_1, \ldots, \gamma_{100})^\top\), \(\gamma_{1} = \ldots = \gamma_{50} = 10\), \(\gamma_{51} = \ldots = \gamma_{100} = 0\), and \(\text{diag}(\cdot)\) denotes a diagonal matrix whose diagonal equals the input vector.

As with the LM with TEM model, 200 datasets of 125, 250 and 500 observations are generated.

Function

## function (n = 125, cov_mat = diag(1, nrow = 100)) 
## {
##     beta_tem <- c(rep(10, 50), rep(0, 50))
##     W <- MASS::mvrnorm(n = n, mu = rep(0, 100), Sigma = cov_mat)
##     colnames(W) <- paste0("W", seq_len(100))
##     A <- ifelse(runif(n) < 0.5, 0, 1)
##     epsilon_1 <- rnorm(n = n, mean = 0, sd = 0.5)
##     W_t <- t(W)
##     Y_1 <- as.vector(crossprod(W_t, beta_tem) + epsilon_1)
##     epsilon_0 <- rnorm(n = n, mean = 0, sd = 0.5)
##     W_mod <- W[, 1:50]
##     W_mod[W_mod < 0] <- 0
##     W_mod_t <- t(cbind(W_mod, W[, 51:100]))
##     Y_0 <- as.vector(crossprod(W_mod_t, beta_tem) + epsilon_0)
##     Y <- ifelse(A == 0, Y_0, Y_1)
##     sample_list <- list(Y = Y, A = A, W = W)
##     return(sample_list)
## }

Input Parameters

## list()

NLM with TEM

“NLM with TEM” stands for “Nonlinear Model with Treatment Effect Modification”. Its generative model is as follows: \[\begin{equation} \begin{split} W = B & \sim N(0, I_{100 \times 100}) \\ A \mid W & \sim \text{Bernoulli}(1/2) \\ Y \mid A, W & \sim N\left( \sum_{j=1}^{100} \exp\left(\lvert \beta_j W_j \rvert\right) + I(A = 1)\, W^\top \gamma, \; 1/2 \right), \end{split}\end{equation}\] where \(\beta_1 = \ldots = \beta_{20} = 1\) and \(\beta_{21} = \ldots = \beta_{100} = 0\), and where \(\gamma_1 = \ldots = \gamma_{50} = 5\) and \(\gamma_{51} = \ldots = \gamma_{100} = 0\). The exponential main-effect term is shared across treatment arms, as in the generating function below.

Again, 200 datasets are generated for each sample size: 125, 250 and 500.

Function

## function (n = 125, cov_mat = diag(1, nrow = 100)) 
## {
##     beta_main <- c(rep(1, 20), rep(0, 80))
##     beta_0 <- rep(0, 100)
##     beta_1 <- c(rep(5, 50), rep(0, 50))
##     W <- MASS::mvrnorm(n = n, mu = rep(0, 100), Sigma = cov_mat)
##     colnames(W) <- paste0("W", seq_len(100))
##     A <- ifelse(runif(n) < 0.5, 0, 1)
##     epsilon_0 <- rnorm(n = n, mean = 0, sd = 0.5)
##     epsilon_1 <- rnorm(n = n, mean = 0, sd = 0.5)
##     W_t <- t(W)
##     main_effects <- rowSums(exp(abs(t((W_t * beta_main)))))
##     Y_0 <- as.vector(main_effects + crossprod(W_t, beta_0) + 
##         epsilon_0)
##     Y_1 <- as.vector(main_effects + crossprod(W_t, beta_1) + 
##         epsilon_1)
##     Y <- ifelse(A == 0, Y_0, Y_1)
##     sample_list <- list(Y = Y, A = A, W = W)
##     return(sample_list)
## }

Input Parameters

## list()

Methods and Evaluation

Methods

uniCATE (LASSO)

We propose a cross-validated estimator that uses all available data. Begin by randomly partitioning the \(n\) observations of \(P_n\) into \(K\) independent validation sets \(P_{n, 1}^1, \ldots, P_{n, K}^1\) of approximately equal size. For \(k = 1, \ldots, K\), define the training set as, in a slight abuse of notation, \(P_{n, k}^0 = P_n \setminus P_{n,k}^1\). Then the cross-validated estimator of \(\Psi_j(P_0)\) is defined as:

\[\begin{equation} \Psi_j^{(\text{CV})}(P_n) = \frac{1}{K} \sum_{k=1}^K \frac{\sum_{i=1}^n I(O_i \in P_{n, k}^1)\tilde{T}(O_i; P_{n,k}^0)B_{ij}} {\sum_{i=1}^n I(O_i \in P_{n, k}^1) B_{ij}^2},\end{equation}\] where \(\tilde{T}(O; P)\) is the difference in augmented inverse probability weighted (AIPW) transformed outcomes. That is, \[\begin{equation} \tilde{T}(O; P) = \left(\frac{I(A=1)}{g(1, W)} - \frac{I(A=0)}{g(0, W)}\right) \left(Y - \bar{Q}(A, W)\right) + \bar{Q}(1, W) - \bar{Q}(0, W),\end{equation}\] where \(g(a, W) = \mathbb{P}_{P}[A = a \mid W]\). Here, \(P\) is omitted from the subscript of \(g(a, W)\) and \(\bar{Q}(A, W)\) to simplify notation.
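A minimal sketch of the AIPW transformation \(\tilde{T}\), assuming a randomized trial with known propensity score \(g(1, W) = g(0, W) = 1/2\) and outcome-regression estimates fit on a separate training fold (the function name and arguments are illustrative, not uniCATE's API):

```r
# Illustrative AIPW-transformed outcome difference; Q1 and Q0 are training-fold
# estimates of Qbar(1, W) and Qbar(0, W) evaluated on the validation fold.
aipw_transform <- function(Y, A, Q1, Q0, g1 = 0.5) {
  QA  <- ifelse(A == 1, Q1, Q0)       # Qbar(A, W)
  ipw <- A / g1 - (1 - A) / (1 - g1)  # signed inverse probability weights
  ipw * (Y - QA) + Q1 - Q0
}

# With a perfect outcome regression, the transform reduces to Q1 - Q0:
aipw_transform(Y = c(1, 0), A = c(1, 0), Q1 = 1, Q0 = 0)  # both equal 1
```

On each validation fold, the \(j\)th slope estimate is then the moment ratio of \(\tilde{T}\) against \(B_j\), as in the display above.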

This cross-validated estimator estimates \(\bar{Q}_0(A, W)\) on the training set using a LASSO regression that includes main and treatment-biomarker interaction terms. The simple linear regression coefficient obtained by regressing the estimated difference in potential outcomes against the \(j\text{th}\) biomarker is then computed on the validation set using the estimate of \(\bar{Q}_0(A, W)\). This procedure is repeated \(K\) times, and the estimate is defined as the mean of the estimated slopes across the validation sets.

In a randomized control trial with a known treatment assignment mechanism, this estimator is asymptotically normal about the true parameter, with variance given by that of its efficient influence curve. Hypothesis testing is therefore possible through the use of Wald-type confidence intervals. We define the null hypothesis as \(\Psi_j(P_0) = 0\) for each biomarker, and use a false discovery rate (FDR) cutoff of 5% throughout the simulation study. For more information, see Boileau et al. (2022).
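The testing step can be sketched as follows, using made-up estimates and standard errors (the numbers are purely illustrative; in uniCATE the standard errors come from the estimated efficient influence curve):

```r
# Wald-type tests of H_0: Psi_j(P_0) = 0, adjusted with Benjamini-Hochberg.
psi_hat <- c(2.1, 0.1, -1.8, 0.05)  # hypothetical estimates for 4 biomarkers
se_hat  <- c(0.5, 0.4, 0.5, 0.4)    # hypothetical standard errors
p_raw   <- 2 * pnorm(-abs(psi_hat / se_hat))  # two-sided Wald p-values
p_bh    <- p.adjust(p_raw, method = "BH")     # FDR-controlling adjustment
which(p_bh <= 0.05)                           # biomarkers declared predictive
```

This mirrors the `p_value_bh <= 0.05` filter applied in the method function below.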

Function

## function (Y, A, W, use_sl = FALSE) 
## {
##     sample_tbl <- tibble::tibble(Y = Y, A = A)
##     sample_tbl <- dplyr::bind_cols(sample_tbl, W)
##     biomarker_names <- colnames(W)
##     propensity_score_ls <- list(`1` = 0.5, `0` = 0.5)
##     if (use_sl) {
##         interactions <- lapply(biomarker_names, function(b) c("A", 
##             b))
##         lrnr_interactions <- sl3::Lrnr_define_interactions$new(interactions)
##         lrnr_lasso <- sl3::make_learner(sl3::Pipeline, lrnr_interactions, 
##             sl3::Lrnr_glmnet$new())
##         lrnr_enet <- sl3::make_learner(sl3::Pipeline, lrnr_interactions, 
##             sl3::Lrnr_glmnet$new(alpha = 0.5))
##         lrnr_ridge <- sl3::make_learner(sl3::Pipeline, lrnr_interactions, 
##             sl3::Lrnr_glmnet$new(alpha = 0))
##         lrnr_spline <- sl3::make_learner(sl3::Pipeline, lrnr_interactions, 
##             sl3::Lrnr_polspline$new())
##         lrnr_rf <- sl3::make_learner(sl3::Pipeline, lrnr_interactions, 
##             sl3::Lrnr_ranger$new())
##         lrnr_mean <- sl3::Lrnr_mean$new()
##         learner_library <- sl3::make_learner(sl3::Stack, lrnr_spline, 
##             lrnr_lasso, lrnr_enet, lrnr_ridge, lrnr_rf, lrnr_mean)
##         super_learner <- sl3::Lrnr_sl$new(learners = learner_library, 
##             metalearner = sl3::make_learner(sl3::Lrnr_nnls))
##     }
##     else {
##         super_learner <- NULL
##     }
##     unicate_tbl <- uniCATE::unicate(data = sample_tbl, outcome = "Y", 
##         treatment = "A", covariates = biomarker_names, biomarkers = biomarker_names, 
##         propensity_score_ls = propensity_score_ls, v_folds = 5L, 
##         super_learner = super_learner)
##     tems <- unicate_tbl %>% dplyr::filter(p_value_bh <= 0.05) %>% 
##         dplyr::pull(biomarker)
##     tems <- list(tems = tems)
##     return(tems)
## }

Input Parameters

## $use_sl
## [1] FALSE

Modified Covariates

uniCATE’s capacity to identify predictive biomarkers was compared to that of popular CATE estimation methods: the modified covariates approach and its augmented counterpart of Tian et al. (2012). Briefly, the former directly estimates the linear model coefficients of the treatment-biomarker interactions, using a linear working model for these terms, without having to model or estimate the main effects. While Tian et al.’s method is flexible, since it avoids making any assumptions about the functional form of the main biomarker effects, it can lack finite-sample precision in small-sample, high-dimensional settings. When using a penalized method like the LASSO, biomarkers are designated as predictive when their estimated coefficients are non-zero.

Function

## function (Y, A, W) 
## {
##     propensity_func <- function(x, trt) 0.5
##     mod_cov <- personalized::fit.subgroup(x = W, y = Y, trt = A, 
##         propensity.func = propensity_func, loss = "sq_loss_lasso", 
##         nfolds = 10)
##     coefs <- mod_cov$coefficients[paste0("W", seq_len(100)), 
##         ]
##     tems <- names(coefs[which(coefs != 0)])
##     tems <- list(tems = tems)
##     return(tems)
## }

Input Parameters

## list()

Augmented Modified Covariates

Tian et al. (2012) proposed an “augmented” version of the modified covariates method that explicitly accounts for the variation due to the main effects of the biomarkers. When the outcome is continuous, it is equivalent to fitting a (penalized) multivariate linear regression with biomarker and treatment-biomarker interaction terms. Again, biomarkers are classified as predictive when their estimated treatment-biomarker coefficients are non-zero. For more information on these methods, please see “A Simple Method for Estimating Interactions Between a Treatment and a Large Number of Covariates” (Tian et al., 2012).

Function

## function (Y, A, W) 
## {
##     propensity_func <- function(x, trt) 0.5
##     augment_func <- function(x, y) {
##         df.x <- data.frame(x)
##         form <- paste(" ~ 1 + ", paste(colnames(df.x), collapse = " + "))
##         mm <- model.matrix(as.formula(form), data = df.x)
##         cvmod <- glmnet::cv.glmnet(y = y, x = mm, nfolds = 10)
##         predictions <- predict(cvmod, newx = mm, s = "lambda.min")
##         predictions
##     }
##     aug_mod_cov <- personalized::fit.subgroup(x = W, y = Y, trt = A, 
##         propensity.func = propensity_func, loss = "sq_loss_lasso", 
##         augment.func = augment_func, nfolds = 10)
##     coefs <- aug_mod_cov$coefficients[paste0("W", seq_len(100)), 
##         ]
##     tems <- names(coefs[which(coefs != 0)])
##     tems <- list(tems = tems)
##     return(tems)
## }

Input Parameters

## list()

Evaluation

Empirical FDR

The empirical false discovery rate (FDR) for each method is computed as the mean of the false discovery proportions (FDP), stratified over each data-generating process and sample size. For any given simulated dataset, the FDP is computed as follows: \[\begin{equation} \text{FDP} = \frac{\text{# of biomarkers erroneously classified as predictive}} {\text{# of biomarkers classified as predictive}}.\end{equation}\] When no biomarkers are classified as predictive, \(\text{FDP} = 0\).
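The FDP computation can be sketched with a hypothetical helper (the evaluator below relies on an `fdp_fun` whose definition is not shown; this version merely mirrors the definition above, including the zero-discovery convention):

```r
# selected: biomarkers a method classified as predictive;
# truth: the truly predictive biomarkers.
fdp_sketch <- function(selected, truth) {
  if (length(selected) == 0) return(0)  # FDP = 0 when nothing is selected
  sum(!(selected %in% truth)) / length(selected)
}

fdp_sketch(c("W1", "W2", "W60"), paste0("W", 1:50))  # 1 false positive of 3
```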

Function

## function (fit_results) 
## {
##     group_vars <- c(".dgp_name", ".method_name", "n")
##     eval_out <- fit_results %>% mutate(fdp = purrr::map_dbl(tems, 
##         fdp_fun)) %>% dplyr::group_by(dplyr::across({
##         {
##             group_vars
##         }
##     })) %>% summarize(fdr = mean(fdp), .groups = "drop")
##     return(eval_out)
## }

Input Parameters

## list()

Empirical TPR

The empirical true positive rate (TPR) is the mean of the true positive proportions (TPP), stratified over each data-generating process and sample size. For any given simulated dataset, the TPP is computed as \[\begin{equation} \text{TPP} = \frac{\text{# of predictive biomarkers classified as predictive}} {\text{# of predictive biomarkers}}.\end{equation}\]

Function

## function (fit_results) 
## {
##     group_vars <- c(".dgp_name", ".method_name", "n")
##     eval_out <- fit_results %>% mutate(tpp = purrr::map_dbl(tems, 
##         tpp_fun)) %>% dplyr::group_by(dplyr::across({
##         {
##             group_vars
##         }
##     })) %>% summarize(tpr = mean(tpp), .groups = "drop")
##     return(eval_out)
## }

Input Parameters

## list()

Empirical TNR

The empirical true negative rate (TNR) is the mean of the true negative proportions (TNP), stratified over each data-generating process and sample size. For any given simulated dataset, the TNP is computed as \[\begin{equation} \text{TNP} = \frac{\text{# of non-predictive biomarkers not classified as predictive}} {\text{# of non-predictive biomarkers}}.\end{equation}\]
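The TPP and TNP computations admit analogous sketches (again hypothetical helpers mirroring the definitions, not the unshown `tpp_fun` and `tnp_fun`):

```r
# truth: truly predictive biomarkers; all_bios: the full biomarker set.
tpp_sketch <- function(selected, truth) {
  sum(truth %in% selected) / length(truth)
}
tnp_sketch <- function(selected, truth, all_bios) {
  negatives <- setdiff(all_bios, truth)
  sum(!(negatives %in% selected)) / length(negatives)
}

# In the DGPs above, W1, ..., W50 are the truly predictive biomarkers:
all_bios <- paste0("W", 1:100)
truth    <- paste0("W", 1:50)
selected <- c(paste0("W", 1:40), "W60")
tpp_sketch(selected, truth)            # 40 / 50 = 0.8
tnp_sketch(selected, truth, all_bios)  # 49 / 50 = 0.98
```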

Function

## function (fit_results) 
## {
##     group_vars <- c(".dgp_name", ".method_name", "n")
##     eval_out <- fit_results %>% mutate(tnp = purrr::map_dbl(tems, 
##         tnp_fun)) %>% dplyr::group_by(dplyr::across({
##         {
##             group_vars
##         }
##     })) %>% summarize(tnr = mean(tnp), .groups = "drop")
##     return(eval_out)
## }

Input Parameters

## list()

Visualizations

Summary Plot

The simulation study results are summarized in a multi-panel plot comparing the empirical FDR, TPR and TNR of all methods across a range of sample sizes.

Function

## function (eval_results) 
## {
##     emp_fdr_tbl <- eval_results$`Empirical FDR` %>% dplyr::mutate(metric = "FDR", 
##         value = fdr, yintercept = 0.05) %>% dplyr::select(-fdr)
##     emp_tpr_tbl <- eval_results$`Empirical TPR` %>% dplyr::mutate(metric = "TPR", 
##         value = tpr, yintercept = NA) %>% dplyr::select(-tpr)
##     emp_tnr_tbl <- eval_results$`Empirical TNR` %>% dplyr::mutate(metric = "TNR", 
##         value = tnr, yintercept = NA) %>% dplyr::select(-tnr)
##     comb_tbl <- bind_rows(emp_fdr_tbl, emp_tpr_tbl, emp_tnr_tbl)
##     plt <- ggplot2::ggplot(comb_tbl) + ggplot2::aes(x = n, y = value, 
##         colour = as.factor(.method_name)) + ggplot2::geom_point(alpha = 0.7) + 
##         ggplot2::geom_line(alpha = 0.7) + ggplot2::geom_hline(aes(yintercept = yintercept), 
##         colour = "red", linetype = 2, alpha = 0.5) + ggplot2::facet_grid(cols = ggplot2::vars(.dgp_name), 
##         rows = ggplot2::vars(metric)) + ggplot2::xlab("Sample Size") + 
##         ggplot2::ylab("Value") + ggplot2::scale_colour_viridis_d(name = "Method", 
##         option = "E", end = 0.8) + ggplot2::theme_bw()
##     return(plt)
## }

Input Parameters

## list()

LM with TEM-Kinked with TEM-NLM with TEM

Varying n-n-n

Empirical FDR

Empirical TPR

Empirical TNR

Summary Plot

Parameter Values

## $dgp
## $dgp$`LM with TEM`
## $dgp$`LM with TEM`$n
## [1] 125 250 500
## 
## 
## $dgp$`Kinked with TEM`
## $dgp$`Kinked with TEM`$n
## [1] 125 250 500
## 
## 
## $dgp$`NLM with TEM`
## $dgp$`NLM with TEM`$n
## [1] 125 250 500
## 
## 
## 
## $method
## list()
---
title: "`r params$sim_name`"
author: "`r params$author`"
date: "`r format(Sys.time(), '%B %d, %Y')`"
header-includes:
    - \usepackage{float}
    - \usepackage{amsmath}
    - \usepackage{gensymb}
output:
  vthemes::vmodern:
    css: css/simchef.css
params:
  author: 
    label: "Author:"
    value: ""
  sim_name:
    label: "Simulation Experiment Name:"
    value: ""
  sim_path:
    label: "Path to Simulation Experiment Folder:"
    value: ""
  eval_order:
    label: "Order of Evaluators:"
    value: NULL
  viz_order:
    label: "Order of Visualizers:"
    value: NULL
  verbose:
    label: "Verbose Level:"
    value: 2
---

<script src="js/simchefNavClass.js"></script>

```{r setup, include=FALSE}
options(width = 10000)
knitr::opts_chunk$set(
  echo = FALSE,
  warning = FALSE,
  message = FALSE,
  cache = FALSE,
  fig.align = "center",
  fig.pos = "H",
  fig.height = 12,
  fig.width = 10
)

options(knitr.kable.NA = 'NA',
        dplyr.summarise.inform = FALSE)

# scrollable text output
local({
  hook_output <- knitr::knit_hooks$get('output')
  knitr::knit_hooks$set(output = function(x, options) {
    if (!is.null(options$max.height)) options$attr.output <- c(
      options$attr.output,
      sprintf('style="max-height: %s;"', options$max.height)
    )
    hook_output(x, options)
  })
})

chunk_idx <- 1
doc_dir <- file.path(params$sim_path, "docs")
```

```{r helper-funs}

#' Get order of objects to display
#'
#' @param obj_names Vector of all object names that need to be displayed.
#' @param obj_order Vector of object names in the desired appearance order.
#' @return Vector of object names in the order in which they will be displayed.
getObjOrder <- function(obj_names, obj_order = NULL) {
  if (is.null(obj_order)) {
    return(obj_names)
  } else {
    return(intersect(obj_order, obj_names))
  }
}

#' Get all experiments under a given directory name
#'
#' @param dir_name name of directory
#' @return list of named experiments
getDescendants <- function(dir_name) {
  experiments <- list()
  for (d in list.dirs(dir_name)) {
    if (file.exists(file.path(d, "experiment.rds"))) {
      if (identical(d, params$sim_path)) {
        exp_name <- "Base"
      } else {
        exp_name <- stringr::str_replace_all(
          stringr::str_remove(d, paste0(params$sim_path, "/")),
          "/", " - "
        )
      }
      experiments[[exp_name]] <- readRDS(file.path(d, "experiment.rds"))
    }
  }
  return(experiments)
}

#' Check if experiment exists
#'
#' @param dir_name name of directory or vector thereof
#' @param recursive logical; if TRUE, checks if experiment exists under the
#'   given directory(s); if FALSE, checks if any experiment exists under the
#'   directory(s) and its descendants
#' @return TRUE if experiment exists and FALSE otherwise
experimentExists <- function(dir_name, recursive = FALSE) {
  res <- purrr::map_lgl(dir_name,
                        function(d) {
                          if (!recursive) {
                            exp_fname <- file.path(d, "experiment.rds")
                            return(file.exists(exp_fname))
                          } else {
                            descendants <- getDescendants(d)
                            return(length(descendants) > 0)
                          }
                        })
  return(any(res))
}

#' Displays content for specified part of recipe
#'
#' @param field_name part of recipe to show; must be one of "dgp", "method",
#'   "evaluator", or "visualizer"
#' @return content for recipe
showRecipePart <- function(field_name = c("dgp", "method",
                                          "evaluator", "visualizer")) {

  field_name <- match.arg(field_name)
  func_name <- dplyr::case_when(field_name == "evaluator" ~ "eval",
                                field_name == "visualizer" ~ "viz",
                                TRUE ~ field_name)
  descendants <- getDescendants(dir_name = params$sim_path)
  objs <- purrr::map(descendants, ~.x[[paste0("get_", field_name, "s")]]())
  obj_names <- unique(purrr::reduce(sapply(objs, names), c))

  obj_header <- "<p style='font-weight: bold; font-size: 20px'> %s </p>"
  invis_header <- "\n\n### %s {.tabset .tabset-pills .tabset-recipe .tabset-circle}\n\n"
  showtype_header <- "\n\n#### %s {.tabset .tabset-pills}\n\n"
  exp_header <- "\n\n##### %s \n\n"

  if (all(sapply(objs, length) == 0)) {
    return(cat("N/A"))
  }

  for (idx in 1:length(obj_names)) {
    cat(sprintf(invis_header, ""))
    obj_name <- obj_names[idx]

    cat("<div class='panel panel-default padded-panel'>")
    cat(sprintf(obj_header, obj_name))

    cat(sprintf(showtype_header, fontawesome::fa("readme", fill = "white")))
    pasteMd(file.path(doc_dir, paste0(field_name, "s"),
                      paste0(obj_name, ".md")))

    cat(sprintf(showtype_header, fontawesome::fa("code", fill = "white")))
    keep_objs <- purrr::map(objs, obj_name)
    keep_objs[sapply(keep_objs, is.null)] <- NULL
    if (all(purrr::map_lgl(keep_objs,
                           ~isTRUE(check_equal(.x, keep_objs[[1]]))))) {
      obj <- keep_objs[[1]]
      cat("<b>Function</b>")
      vthemes::subchunkify(obj[[paste0(func_name, "_fun")]],
                  chunk_idx, other_args = "max.height='200px'")
      chunk_idx <<- chunk_idx + 1
      cat("<b>Input Parameters</b>")
      vthemes::subchunkify(obj[[paste0(func_name, "_params")]],
                  chunk_idx, other_args = "max.height='200px'")
      chunk_idx <<- chunk_idx + 1
    } else {
      for (exp in names(objs)) {
        obj <- objs[[exp]][[obj_name]]
        if (is.null(obj)) {
          next
        }
        cat(sprintf(exp_header, exp))
        cat("<b>Function</b>")
        vthemes::subchunkify(obj[[paste0(func_name, "_fun")]],
                    chunk_idx, other_args = "max.height='200px'")
        chunk_idx <<- chunk_idx + 1
        cat("<b>Input Parameters</b>")
        vthemes::subchunkify(obj[[paste0(func_name, "_params")]],
                    chunk_idx, other_args = "max.height='200px'")
        chunk_idx <<- chunk_idx + 1
      }
    }
    cat("</div>")
  }
}

#' Reads in file if it exists and returns NULL if the file does not exist
#'
#' @param filename name of .rds file to try reading in
#' @return output of filename.rds if the file exists and NULL otherwise
getResults <- function(filename) {
  if (file.exists(filename)) {
    results <- readRDS(filename)
  } else {
    results <- NULL
  }
  return(results)
}

#' Displays output (both from evaluate() and visualize()) from saved results under
#' a specified directory
#'
#' @param dir_name name of directory
#' @param depth integer; depth of directory from parent/base experiment's folder
#' @param base logical; whether or not this is a base experiment
#' @param show_header logical; whether or not to show section header
#' @param verbose integer; 0 = no messages; 1 = print out directory name only;
#'   2 = print out directory name and name of evaluators/visualizers
#' @return content results from evaluate() and visualize() from the experiment
showResults <- function(dir_name, depth, base = FALSE, show_header = TRUE,
                        verbose = 1) {
  if (verbose >= 1) {
    message(rep("*", depth), basename(dir_name))
  }

  if (depth == 1) {
    header_template <- "\n\n%s %s {.tabset .tabset-pills .tabset-vmodern}\n\n"
  } else {
    if (base | !experimentExists(dir_name)) {
      header_template <- "\n\n%s %s {.tabset .tabset-pills}"
    } else {
      header_template <- "\n\n%s %s {.tabset .tabset-pills .tabset-circle}"
    }
  }

  if (show_header) {
    cat(sprintf(header_template,
                paste(rep("#", depth), collapse = ""),
                basename(dir_name)))
  }

  if (base) {
    cat(paste0("\n\n",
               paste(rep("#", depth + 1), collapse = ""),
               " Base - ", basename(dir_name),
               " {.tabset .tabset-pills .tabset-circle}\n\n"))
    depth <- depth + 1
  }

  showtype_template <- paste0(
    "\n\n", paste(rep("#", depth + 1), collapse = ""), " %s\n\n"
  )
  figname_template <- "<h3 style='font-weight: bold'> %s </h3>"
  invisible_header <- paste0(
    "\n\n", paste(rep("#", depth + 2), collapse = ""),
    " {.tabset .tabset-pills}\n\n"
  )
  plt_template <- paste0(
    "\n\n", paste(rep("#", depth + 3), collapse = ""), " %s\n\n"
  )

  exp_fname <- file.path(dir_name, "experiment.rds")
  # fit_fname <- file.path(dir_name, "fit_results.rds")
  eval_fname <- file.path(dir_name, "eval_results.rds")
  viz_fname <- file.path(dir_name, "viz_results.rds")

  exp <- getResults(exp_fname)
  # fit_results <- getResults(fit_fname)
  eval_results <- getResults(eval_fname)
  viz_results <- getResults(viz_fname)

  if (!is.null(eval_results)) {
    cat(sprintf(showtype_template, fontawesome::fa("table", fill = "white")))
    eval_names <- getObjOrder(names(eval_results), params$eval_order)
    for (eval_name in eval_names) {
      if (verbose >= 2) {
        message(rep(" ", depth + 1), eval_name)
      }
      evaluator <- exp$get_evaluators()[[eval_name]]
      if (evaluator$rmd_show) {
        cat(sprintf(figname_template, eval_name))
        do.call(vthemes::pretty_DT,
                c(list(eval_results[[eval_name]]), evaluator$rmd_options)) %>%
          vthemes::subchunkify(i = chunk_idx)
        chunk_idx <<- chunk_idx + 1
      }
    }
  }

  if (!is.null(viz_results)) {
    cat(sprintf(showtype_template,
                fontawesome::fa("chart-bar", fill = "white")))
    viz_names <- getObjOrder(names(viz_results), params$viz_order)
    for (viz_name in viz_names) {
      if (verbose >= 2) {
        message(rep(" ", depth + 1), viz_name)
      }
      visualizer <- exp$get_visualizers()[[viz_name]]
      if (visualizer$rmd_show) {
        cat(invisible_header)
        cat(sprintf(figname_template, viz_name))
        plts <- viz_results[[viz_name]]
        if (!inherits(plts, "list")) {
          plts <- list(plt = plts)
        }
        if (is.null(names(plts))) {
          names(plts) <- 1:length(plts)
        }
        for (plt_name in names(plts)) {
          if (length(plts) != 1) {
            cat(sprintf(plt_template, plt_name))
          }
          plt <- plts[[plt_name]]
          if (inherits(plt, "plotly")) {
            add_class <- c("panel panel-default padded-panel")
          } else {
            add_class <- NULL
          }
          vthemes::subchunkify(plt, i = chunk_idx,
                      fig_height = visualizer$rmd_options$height,
                      fig_width = visualizer$rmd_options$width,
                      other_args = "out.width = '100%'",
                      add_class = add_class)
          chunk_idx <<- chunk_idx + 1
        }
      }
    }
  }

  if (!is.null(exp)) {
    if ((length(exp$get_vary_across()$dgp) != 0) |
        (length(exp$get_vary_across()$method) != 0)) {
      cat(sprintf(showtype_template, fontawesome::fa("code", fill = "white")))
      cat("<b>Parameter Values</b>")
      vthemes::subchunkify(exp$get_vary_across(),
                  chunk_idx, other_args = "max.height='200px'")
      chunk_idx <<- chunk_idx + 1
    }
  }
}

#' Displays output of experiment for all of its (saved) descendants
#'
#' @param dir_name name of parent experiment directory
#' @param depth placeholder for recursion; should not be messed with
#' @param ... other arguments to pass into showResults()
showDescendantResults <- function(dir_name, depth = 1, ...) {
  children <- list.dirs(dir_name, recursive = FALSE)
  if (length(children) == 0) {
    return()
  }
  for (child_idx in 1:length(children)) {
    child <- children[child_idx]
    if (!experimentExists(child, recursive = TRUE)) {
      next
    }
    if (experimentExists(child, recursive = FALSE) &
        (experimentExists(list.dirs(child, recursive = TRUE)[-1]) |
         (depth == 1))) {
      base <- TRUE
    } else {
      base <- FALSE
    }
    showResults(child, depth, base = base, ...)
    showDescendantResults(child, depth + 1, ...)
  }
}


```

# Simulation Experiment Recipe {.tabset .tabset-vmodern}

## Objectives {.panel .panel-default .padded-panel}

```{r objectives, results = "asis"}
pasteMd(file.path(doc_dir, "objectives.md"))
```

## Data Generation

```{r dgps, results = "asis"}
showRecipePart(field_name = "dgp")
```

## Methods and Evaluation

### Methods

```{r methods, results = "asis"}
showRecipePart(field_name = "method")
```

### Evaluation

```{r evaluators, results = "asis"}
showRecipePart(field_name = "evaluator")
```

## Visualizations

```{r visualizers, results = "asis"}
showRecipePart(field_name = "visualizer")
```



```{r res, results = "asis"}

# show results
if (experimentExists(params$sim_path)) {
  cat(sprintf("\n\n# Base %s \n\n", params$sim_name))
  cat("\n\n## {.tabset .tabset-pills .tabset-circle}\n\n")
  message(sprintf("Creating R Markdown report for %s...", params$sim_name))
  showResults(params$sim_path, depth = 2, base = FALSE, show_header = FALSE,
              verbose = 0)
}

showDescendantResults(params$sim_path, verbose = params$verbose)

```
