The drake R Package User Manual
Chapter 1 Introduction
1.1 The drake R package
drake, short for Data Frames in R for Make, is a general-purpose workflow manager for data-driven tasks. It rebuilds intermediate data objects when their dependencies change, and it skips work when the results are already up to date. As a result, not every runthrough starts from scratch, and a completed workflow gives tangible evidence of reproducibility.
Drake is more scalable than knitr, more thorough than memoization, and more R-focused than other pipeline toolkits such as GNU Make, remake, and snakemake.
1.2 Installation
You can choose among different versions of drake. The latest CRAN release may be more convenient to install, but this manual is kept up to date with the GitHub version, so some features described here may not yet be available on CRAN.
# Install the latest stable release from CRAN.
install.packages("drake")

# Alternatively, install the development version from GitHub.
install.packages("devtools")
library(devtools)
install_github("ropensci/drake")
1.3 Why drake?
1.3.1 What gets done stays done.
Too many data science projects follow a Sisyphean loop:
- Launch the code.
- Wait while it runs.
- Discover an issue.
- Restart from scratch.
Have you ever tried to manually salvage old results for a new runthrough? With drake, you can automatically
- Launch the parts that changed since last time.
- Skip the rest.
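As a sketch of what that looks like in practice, consider a minimal, hypothetical plan (the targets `raw` and `summary` are stand-ins for real project steps):

```r
library(drake)

# A tiny hypothetical plan: 'summary' depends on 'raw'.
plan <- drake_plan(
  raw = rnorm(1000),
  summary = mean(raw)
)

make(plan) # First run: drake builds both targets.
make(plan) # Second run: nothing changed, so drake skips all the work.
```

If you later change only the command for `summary`, a third `make(plan)` rebuilds `summary` and leaves `raw` alone.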
1.3.2 Reproducibility with confidence
The R community emphasizes reproducibility. Traditional themes include scientific replicability, literate programming with knitr, and version control with git. But internal consistency is important too. Reproducibility carries the promise that your output matches the code and data you say you used.
Suppose you are reviewing someone else’s data analysis project for reproducibility. You scrutinize it carefully, checking that the datasets are available and the documentation is thorough. But could you re-create the results without the help of the original author? With
drake, it is quick and easy to find out.
make(plan)
config <- drake_config(plan)
outdated(config)
With everything already up to date, you have tangible evidence of reproducibility. Even though you did not re-create the results, you know the results are re-creatable. They faithfully show what the code is producing. Given the right package environment and system configuration, you have everything you need to reproduce all the output by yourself.
When it comes time to actually rerun the entire project, you have much more confidence. Starting over from scratch is trivially easy.
clean()    # Remove the original author's results.
make(plan) # Independently re-create the results from the code and input data.
1.3.2.1 Independent replication
With even more evidence and confidence, you can invest the time to independently replicate the original code base if necessary. Up until this point, you relied on basic
drake functions such as
make(), so you may not have needed to peek at any substantive author-defined code in advance. In that case, you can stay usefully ignorant as you reimplement the original author’s methodology. In other words,
drake could potentially improve the integrity of independent replication.
1.3.2.2 Readability and transparency
Ideally, independent observers should be able to read your code and understand it.
Drake helps in several ways.
- The workflow plan data frame explicitly outlines the steps of the analysis, and vis_drake_graph() visualizes how those steps depend on each other.
- drake takes care of the parallel scheduling and high-performance computing (HPC) for you. That means the HPC code is no longer tangled up with the code that actually expresses your ideas.
- You can generate large collections of targets without necessarily changing your code base of imported functions, another nice separation between the concepts and the execution of your workflow.
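To illustrate the first point, here is a hypothetical plan whose steps and dependencies are visible at a glance (`data.csv` and the `y ~ x` model are stand-ins for real project code):

```r
library(drake)

# Hypothetical three-step analysis. drake detects the dependencies
# among targets from the commands themselves.
plan <- drake_plan(
  data = read.csv(file_in("data.csv")),
  model = lm(y ~ x, data = data),
  report = summary(model)
)

plan                    # The plan is an ordinary data frame of steps.
config <- drake_config(plan)
vis_drake_graph(config) # Interactive dependency graph of those steps.
```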
1.3.3 Aggressively scale up.
Not every project can complete in a single R session on your laptop. Some projects need more speed or computing power. Some require a few local processor cores, and some need large high-performance computing systems. But parallel computing is hard. Your tables and figures depend on your analysis results, and your analyses depend on your datasets, so some tasks must finish before others even begin.
Drake knows what to do. Parallelism is implicit and automatic. See the high-performance computing guide for all the details.
# Use the spare cores on your local machine.
make(plan, jobs = 4)

# Or scale up to a supercomputer.
drake_batchtools_tmpl_file("slurm") # https://slurm.schedmd.com/
library(future.batchtools)
future::plan(batchtools_slurm, template = "batchtools.slurm.tmpl", workers = 100)
make(plan, parallelism = "future_lapply")
1.4 Documentation
The main resources to learn drake are:
- The user manual, which contains a friendly introduction and several long-form tutorials.
- The documentation website, which serves as a quicker reference.
- Kirill Müller’s drake workshop from March 5, 2018.
1.4.2 Frequently asked questions
1.4.3 Function reference
The reference section lists all the available functions. Here are the most important ones.
- drake_plan(): create a workflow plan data frame.
- make(): build your project.
- loadd(): load one or more built targets into your R session.
- readd(): read and return a built target.
- drake_config(): create a master configuration list for other user-side functions.
- vis_drake_graph(): show an interactive visual network representation of your workflow.
- outdated(): see which targets will be built in the next make().
- deps(): check the dependencies of a command or function.
- failed(): list the targets that failed to build in the last make().
- diagnose(): return the full context of a build, including errors, warnings, and messages.
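The functions above fit together in a short, typical session. The plan below is a hypothetical stand-in for a real project:

```r
library(drake)

plan <- drake_plan(          # drake_plan(): define the workflow.
  dataset = rnorm(100),
  result = mean(dataset)
)
make(plan)                   # make(): build the project.
readd(result)                # readd(): return the built value of 'result'.
loadd(dataset)               # loadd(): load 'dataset' into your session.
config <- drake_config(plan)
outdated(config)             # outdated(): character(0) once everything is current.
```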
Drake also has built-in example projects with code files available here. You can generate the files for a project with drake_example() (e.g. drake_example("gsp")), and you can list the available projects with drake_examples(). The beginner-oriented examples are listed below. They help you learn drake’s main features, and they show one way to organize the files of a drake project.
- main: drake’s main example, based on Kirill Müller’s drake pitch. This is the most accessible example for beginners.
- gsp: A concrete example using real econometrics data. It explores the relationships between gross state product and other quantities, and it shows off drake’s ability to generate lots of reproducibly-tracked tasks with ease.
- packages: A concrete example using data on R package downloads. It demonstrates how drake can refresh a project based on new incoming data without restarting everything from scratch.
- mtcars: An old example that demonstrates how to generate large workflow plan data frames using wildcard templating. Use load_mtcars_example() to set up the project in your workspace.
1.4.7 Real example projects
Here are some real-world applications of drake in the wild.
If you have a project of your own, we would love to add it. Click here to edit the list.
1.5 Help and troubleshooting
The following resources document many known issues and challenges.
- Frequently-asked questions.
- Cautionary notes and edge cases.
- Debugging and testing drake projects.
- Other known issues (please search both open and closed ones).
If you are still having trouble, please submit a new issue with a bug report or feature request, along with a minimal reproducible example where appropriate.
The GitHub issue tracker is mainly intended for bug reports and feature requests. While questions about usage etc. are also highly encouraged, you may alternatively wish to post to Stack Overflow and use the drake-r-package tag.
1.6 Similar work
1.6.1 GNU Make
The original idea of a time-saving reproducible build system extends back at least as far as GNU Make, which still aids the work of data scientists as well as its original user base of compiled-language programmers. In fact, the name “drake” stands for “Data Frames in R for Make”. Make is used widely in reproducible research. Below are some examples from Karl Broman’s website.
- Bostock, Mike (2013). “A map of flowlines from NHDPlus.” https://github.com/mbostock/us-rivers. Powered by the Makefile at https://github.com/mbostock/us-rivers/blob/master/Makefile.
- Broman, Karl W (2012). “Haplotype Probabilities in Advanced Intercross Populations.” G3 2(2), 199-202. Powered by a Makefile.
- Broman, Karl W (2012). “Genotype Probabilities at Intermediate Generations in the Construction of Recombinant Inbred Lines.” Genetics 190(2), 403-412. Powered by the Makefile at https://github.com/kbroman/preCCProbPaper/blob/master/Makefile.
- Broman, Karl W and Kim, Sungjin and Sen, Saunak and Ane, Cecile and Payseur, Bret A (2012). “Mapping Quantitative Trait Loci onto a Phylogenetic Tree.” Genetics 192(2), 267-279. Powered by a Makefile.
There are several reasons for R users to prefer drake instead.
- Drake already has a Make-powered parallel backend. Just run make(..., parallelism = "Makefile", jobs = 2) to enjoy most of the original benefits of Make itself.
- Improved scalability. With Make, you must write a potentially large and cumbersome Makefile by hand. But with drake, you can use wildcard templating to automatically generate massive collections of targets with minimal code.
- Lower overhead for light-weight tasks. For each Make target that uses R, a brand new R session must spawn. For projects with thousands of small targets, that means more time may be spent loading R sessions than doing the actual work. With make(..., parallelism = "mclapply", jobs = 4), drake launches 4 persistent workers up front and efficiently processes the targets in R.
- Convenient organization of output. With Make, the user must save each target as a file. Drake saves all the results for you automatically in a storr cache so you do not have to micromanage the results.
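The wildcard templating mentioned above can be sketched as follows. Here `simulate_data()` is a hypothetical user-defined function, and evaluate_plan() expands one templated row into one target per wildcard value:

```r
library(drake)

# One templated row: 'n__' is a wildcard, and simulate_data() is a
# hypothetical user-defined function.
template <- drake_plan(data = simulate_data(n__))

# Expand the template into one target per sample size:
# data_100, data_1000, and data_10000.
plan <- evaluate_plan(template, wildcard = "n__", values = c(100, 1000, 10000))
plan
```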
1.6.2 Remake
Drake overlaps with its direct predecessor, remake. In fact, drake owes its core ideas to remake and Rich FitzJohn. Remake’s development repository lists several real-world applications. Drake surpasses remake in several important ways, including but not limited to the following.
- High-performance computing. Remake has no native parallel computing support. Drake, on the other hand, has a thorough selection of parallel computing technologies and scheduling algorithms. Thanks to future, future.batchtools, and batchtools, it is straightforward to configure a drake project for most popular job schedulers, such as SLURM, TORQUE, and the Sun/Univa Grid Engine, as well as systems contained in Docker images.
- A friendly interface. In remake, the user must manually write a YAML configuration file to arrange the steps of a workflow, which leads to some of the same scalability problems as Make. Drake’s data-frame-based interface and wildcard templating functionality easily generate workflows at scale.
- Thorough documentation. Drake contains a thorough user manual, a reference website, a comprehensive README, examples in the help files of user-side functions, and accessible example code that users can generate with drake_example().
- Active maintenance. Drake is actively developed and maintained, and issues are usually addressed promptly.
- Presence on CRAN. At the time of writing, drake is available on CRAN, but remake is not.
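To illustrate the interface difference with remake, note that a drake plan is an ordinary data frame built with regular R code, with no separate configuration file to maintain (`get_data()` and `analyze()` are hypothetical stand-ins):

```r
library(drake)

# The whole workflow specification is a data frame with
# 'target' and 'command' columns; no YAML file is required.
plan <- drake_plan(
  data = get_data(),      # hypothetical user-defined function
  result = analyze(data)  # hypothetical user-defined function
)
plan
```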
1.6.3 Memoization
Memoization is the strategic caching of the return values of functions. Every time a memoized function is called with a new set of arguments, the return value is saved for future use. Later, whenever the same function is called with the same arguments, the previous return value is salvaged, and the function call is skipped to save time. The memoise package is an excellent implementation of memoization in R.
However, memoization does not go far enough. In reality, the return value of a function depends not only on the function body and the arguments, but also on any nested functions and global variables, the dependencies of those dependencies, and so on upstream.
Drake surpasses memoise because it uses the entire dependency network graph of a project to decide which pieces need to be rebuilt and which ones can be skipped.
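The limitation is easy to demonstrate with a minimal hand-rolled memoizer in base R (a sketch, not the memoise package itself):

```r
# Cache return values keyed only on the function's argument.
memoize <- function(f) {
  cache <- new.env()
  function(x) {
    key <- as.character(x)
    if (!exists(key, envir = cache)) {
      assign(key, f(x), envir = cache)
    }
    get(key, envir = cache)
  }
}

multiplier <- 2
scale_up <- memoize(function(x) x * multiplier)
scale_up(10) # 20: computed and cached.

multiplier <- 3
scale_up(10) # Still 20: the cache only sees the argument, not the changed
             # global variable. drake's dependency graph catches such changes.
```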
1.6.4 Knitr and R Markdown
Much of the R community uses knitr and R Markdown for reproducible research. The idea is to intersperse code chunks in an R Markdown or *.Rnw file and then generate a dynamic report that weaves together code, output, and prose. Knitr is not designed to be a serious pipeline toolkit, and it should not be the primary computational engine for medium to large data analysis projects.
- Knitr scales far worse than Make or remake. The whole point is to consolidate output and prose, so it deliberately lacks the essential modularity.
- There is no obvious high-performance computing support.
- While there is a way to skip chunks that are already up to date (with code chunk options cache and autodep), this functionality is not the focus of knitr. It is deactivated by default, and remake and drake are more dependable ways to skip work that is already up to date.
1.6.5 Factual’s Drake
Factual’s Drake is a workflow tool for data processing written in Clojure. Apart from the shared name and similar goals, its development is completely unrelated to the drake R package.
1.6.6 Other pipeline toolkits
There are countless other successful pipeline toolkits. The drake package distinguishes itself with its R-focused approach, Tidyverse-friendly interface, and a thorough selection of parallel computing technologies and scheduling algorithms.
1.7 Acknowledgements
Many thanks to Julia Lowndes, Ben Marwick, and Peter Slaughter for reviewing drake for rOpenSci, and to Maëlle Salmon for such active involvement as the editor. Thanks also to the following people for contributing early in development.
- Alex Axthelm
- Chan-Yub Park
- Daniel Falster
- Eric Nantz
- Henrik Bengtsson
- Ian Watson
- Jasper Clarkberg
- Kendon Bell
- Kirill Müller
Credit for images is attributed here.