Chapter 7 How to organize the files of drake projects

Unlike most workflow managers, drake focuses on your R session, and it does not care how you organize your files. This flexibility is great in the long run, but it leaves many new users wondering how to structure their projects. This chapter provides guidance, advice, and recommendations on structure and organization.

7.1 Examples

For examples of how to structure your code files, see the beginner-oriented example projects.

Write the code of an example directly to your file system with the drake_example() function, e.g. drake_example("main").


In practice, you do not need to organize your files the way the examples do, but they demonstrate one reasonable way of doing things.

7.2 Where do you put your code?

It is best to write your code as a bunch of functions. You can save those functions in R scripts and then source() them before doing anything else.
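For instance, such a script of functions might look like the following sketch. The file name and function bodies here are hypothetical placeholders, not part of any drake example:

```r
# functions.R (hypothetical): each step of the workflow is a function.
get_data <- function(path) {
  read.csv(path, stringsAsFactors = FALSE) # read the raw data from disk
}

analyze_data <- function(data) {
  lm(y ~ x, data = data) # fit a simple model to the data
}

summarize_results <- function(data, analysis) {
  summary(analysis) # summarize the fitted model
}
```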

# Load functions get_data(), analyze_data(), and summarize_results()
# from wherever you keep them, e.g.:
source("functions.R")

Then, set up your drake plan.

good_plan <- drake_plan(
  my_data = get_data(file_in("data.csv")), # External files need to be in commands explicitly. # nolint
  my_analysis = analyze_data(my_data),
  my_summaries = summarize_results(my_data, my_analysis)
)

good_plan
#> # A tibble: 3 x 2
#>   target       command                                
#>   <chr>        <expr>                                 
#> 1 my_data      get_data(file_in("data.csv"))          
#> 2 my_analysis  analyze_data(my_data)                  
#> 3 my_summaries summarize_results(my_data, my_analysis)

drake knows that my_analysis depends on my_data because my_data is an argument to analyze_data(), which is part of the command for my_analysis.
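drake's static code analysis is its own implementation, but base R can sketch the idea of extracting candidate dependencies from an unevaluated command:

```r
# Capture a plan-style command without evaluating it.
command <- quote(summarize_results(my_data, my_analysis))

# The symbols in the expression are candidate dependencies.
all.vars(command)  # variables only: "my_data", "my_analysis"
all.names(command) # also includes the function being called
```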

config <- drake_config(good_plan)
#> Unloading targets from environment:
#>   my_summaries

Now, you can call make(good_plan) to build the targets.


If your commands are really long, just put them in larger functions. drake analyzes imported functions for non-file dependencies.
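For example, a sprawling multi-step command can move into a single function, so the plan stays short. This refactoring is a hypothetical sketch, not code from any drake example:

```r
# A long multi-step command, refactored into one function.
analyze_data <- function(data) {
  data <- data[complete.cases(data), ] # drop incomplete rows
  fit <- lm(y ~ x, data = data)        # fit the model
  list(fit = fit, r2 = summary(fit)$r.squared)
}
# The plan command then shrinks to: my_analysis = analyze_data(my_data)
```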

7.3 Your commands are code chunks, not R scripts

Some people are accustomed to dividing their work into R scripts and then calling source() to run each step of the analysis. For example, you might have the following files.

  • get_data.R
  • analyze_data.R
  • summarize_results.R

If you migrate to drake, you may be tempted to set up a drake plan like this.

bad_plan <- drake_plan(
  my_data = source(file_in("get_data.R")),
  my_analysis = source(file_in("analyze_data.R")),
  my_summaries = source(file_in("summarize_results.R"))
)

bad_plan
#> # A tibble: 3 x 2
#>   target       command                               
#>   <chr>        <expr>                                
#> 1 my_data      source(file_in("get_data.R"))         
#> 2 my_analysis  source(file_in("analyze_data.R"))     
#> 3 my_summaries source(file_in("summarize_results.R"))

But now, the dependency structure of your work is broken. Your R script files are dependencies, but since my_data is not mentioned in any command or function body, drake does not know that my_analysis depends on it.

config <- drake_config(bad_plan)

This approach causes several problems.
  1. In the first make(bad_plan, jobs = 2), drake will try to build my_data and my_analysis at the same time even though my_data must finish before my_analysis begins.
  2. drake is oblivious to data.csv since it is not explicitly mentioned in a drake plan command. So when data.csv changes, make(bad_plan) will not rebuild my_data.
  3. my_analysis will not update when my_data changes.
  4. The return value of source() is formatted counter-intuitively. If source(file_in("get_data.R")) is the command for my_data, then my_data will always be a list with elements "value" and "visible". In other words, source(file_in("get_data.R"))$value is really what you would want.
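You can verify the shape of source()'s return value with a throwaway script:

```r
# source() wraps the last evaluated value in a list
# with elements "value" and "visible".
script <- tempfile(fileext = ".R")
writeLines("21 * 2", script)
out <- source(script)
names(out)
#> [1] "value"   "visible"
out$value
#> [1] 42
```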

In addition, this source()-based approach is simply inconvenient. drake rebuilds my_data every time get_data.R changes, even when those changes are just extra comments or blank lines. On the other hand, in the previous plan that uses my_data = get_data(), drake does not trigger rebuilds when comments or whitespace in get_data() are modified. drake is R-focused, not file-focused. If you embrace this viewpoint, your work will be easier.
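Base R can illustrate why: the parser discards comments and extra whitespace, so two functions that differ only cosmetically deparse to the same code. drake's actual standardization differs in detail, but the idea is similar:

```r
options(keep.source = FALSE) # drop source references; compare parsed code only
f1 <- function(x) {
  x + 1
}
f2 <- function(x) {
  # extra comments and spacing do not change the parsed code
  x +     1
}
identical(deparse(f1), deparse(f2))
#> [1] TRUE
```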

7.4 Workflows as R packages

The R package structure is a great way to organize the files of your project. Writing your own package to contain your data science workflow is a good idea, but you will need to

  1. Use expose_imports() to properly account for all your nested function dependencies, and
  2. If you load the package with devtools::load_all(), set the prework argument of make(): e.g. make(prework = "devtools::load_all()").

Thanks to Jasper Clarkberg for the workaround behind expose_imports().

7.4.1 Advantages of putting workflows in R packages

The R package structure gives you an established convention for organizing and documenting your functions, plus tooling support from packages such as devtools and testthat.

7.4.2 The problem

For drake, there is one problem: nested functions. drake recursively searches imported functions for other imported functions, but only in your environment. When it sees a function from a package, it does not inspect that function's body for further imports.

To see this, consider the digest() function from the digest package. The digest package is a utility for computing hashes, not a data science workflow, but I will use it to demonstrate how drake treats imports from packages.

library(digest)
g <- function(x) digest(x)
f <- function(x) g(x)
plan <- drake_plan(x = f(1))

# Here are the reproducibly tracked objects in the workflow.
config <- drake_config(plan)
tracked(config)
#> [1] "f" "g" "x"

# But the `digest()` function has dependencies too.
# Because `drake` knows `digest()` is from a package,
# it ignores these dependencies by default.
head(deps_code(digest), 10)
#> # A tibble: 10 x 2
#>    name        type   
#>    <chr>       <chr>  
#>  1 digest_impl globals
#>  2 stop        globals
#>  3 pmatch      globals
#>  4 which       globals
#>  5 as.integer  globals
#>  6 .Call       globals
#>  7 path.expand globals
#>  8 isTRUE      globals
#>  9 inherits    globals
#> 10 file.access globals

7.4.3 The solution

To force drake to dive deeper into the nested functions in a package, you must use expose_imports(). Again, I demonstrate with the digest package, but you should really only do this with a package you write yourself to contain your workflow. For external packages, packrat is a much better solution for package reproducibility.

expose_imports(digest)
#> <environment: R_GlobalEnv>
config <- drake_config(plan)
new_objects <- tracked(config)
head(new_objects, 10)
#> [1] ".getCRC32PreferOldOutput" ".getSerializeVersion"    
#> [3] "digest"                   "digest_impl"             
#> [5] "f"                        "g"                       
#> [7] "base::serialize"          "x"
length(new_objects)
#> [1] 8

# Now when you call `make()`, `drake` will dive into `digest`
# to import dependencies.

cache <- storr::storr_environment() # just for examples
make(plan, cache = cache)
#> target x
head(cached(cache = cache), 10)
#> [1] "x"
length(cached(cache = cache))
#> [1] 1
Copyright Eli Lilly and Company