RStudio and GitHub go together like two wheels of a bicycle. Together, they form a low-overhead yet powerful open source workbench – a lean machine that can help take your data to far places.
In a recent informal survey of coral reef research articles that published their code and data alongside the paper, R was by far the most popular language, and RStudio is the most popular interface for working with R.
At the CRESCYNT Data Science for Coral Reefs: Data Integration and Team Science workshop held earlier this year at NCEAS, the most powerful skill introduced was using RStudio with GitHub: committing data and code to GitHub directly from RStudio.
Once the link is set up, work continues in RStudio in the way people are already familiar with; periodic commits to GitHub then save the work and pave the way for collaboration.
Start by connecting a repository in your GitHub account to RStudio. This takes several steps; here are some good options for working through the process.
You can use sections of NCEAS’s long tutorial, Introduction to Open Data Science, initially developed by their Ocean Health Index group: the overview of R and RStudio, Markdown, and the introduction to GitHub, then skip down to collaborating through GitHub.
A number of other tutorials show how to set up and use these tools together; a beautifully clean and clear step-by-step tutorial comes from Resources at GitHub, and another excellent one from Support at RStudio.
If you later want to go further, come back for these tutorials hosted at R-Bloggers: R Blogdown Setup in GitHub and Migrating from GitHub to GitLab with RStudio. And good news: you can now archive a snapshot of a GitHub repository at Zenodo to preserve, and even publish, a particular version of your RStudio work, and get a DOI to share.
Summary: Many research scientists use RStudio as their primary analytical and visualization tool. RStudio can now connect to a GitHub repository and make commits to it directly. This enables the core functions of a simplified workbench: documenting workflows (R Markdown), preserving code and provenance, producing repeatable results, creating flexible pipelines, sharing data and code, and allowing collaboration among members of a team. Versioning and teamwork are simplified by committing frequently and always pulling fresh changes before each commit (rather than focusing on branch development). The process is valuable for individual researchers, for documenting project work, and for collaborating in teams.
Thanks to Jan Vincente and Ingrid Knapp for this pair of Data Rescue images.
Written while embedded in our CRESCYNT Data Science for Coral Reefs workshops. Amazingly, everyone who participated in workshop 1 – Data Science for Coral Reefs: Data Rescue – learned even more than they thought they would. We’ve had wonderful NCEAS trainers, spectacular participants with amazing datasets, and a lot of hard work over 4 days (March 7-10, 2018).
In the second intensive workshop – Data Science for Coral Reefs: Data Integration and Team Science – people will be introduced to RStudio and GitHub if they have not used them before, and then we will work on techniques for integrating disparate datasets. We’ll start with one pair of datasets at a time; efforts may involve extracting data from one dataset based on observations from another, or upscaling, downscaling, resampling, or summarizing to make intervals and scales mesh – exactly the kind of process that coral reef researchers have said is a recurring challenge in asking bigger science questions.
Each workshop group is writing a paper to summarize and share lessons learned, so please stay tuned for those!
We experimented with an unusual process for these workshops: two days of training followed by two days of workathon. We’re liking it! Tell us what you think about these topics and training materials. What other workshop outputs would you like to see?
We’re extremely pleased to be able to offer two workshops in March 2018 at NCEAS. The second is CRESCYNT Data Science for Coral Reefs Workshop 2: Data Modeling, Data Integration and Team Science. Apply here.
When: March 12-15, 2018
Where: NCEAS, Santa Barbara, CA
Workshop description:
This workshop is recommended for early- to mid-career and senior scientists with an interest in applying technical skills to collaborative research questions and a commitment to subsequently sharing what they learn. Participants will learn how to structure and combine heterogeneous datasets relevant to coral reef scientists in a collaborative way. Days 1 and 2 of the workshop will cover reproducible workflows using R/RStudio and R Markdown, collaborative coding with GitHub, strategies for team research, data modeling and data wrangling, and advanced data integration and visualization tools. Participants will then spend 2 days working in small teams to integrate various coral reef datasets, practicing the skills learned and developing workflows for data tidying and integration.
The workshop is limited to 20 participants. We encourage you to apply via this form. Workshop costs will be covered with support from NSF EarthCube – CRESCYNT RCN. We anticipate widely sharing workshop outcomes, including workflows and recommendations. Anticipate some significant pre-workshop prep effort.
We’re extremely pleased to be able to offer two workshops in March 2018 at NCEAS. The first is CRESCYNT Data Science for Coral Reefs Workshop 1: Data Rescue. Apply here.
When: March 7-10, 2018
Where: NCEAS, Santa Barbara, California, USA
Workshop description:
Recommended for senior scientists with rich “dark” data on coral reefs that needs to be harvested and made accessible in an open repository. Students or staff working with senior scientists are also encouraged to apply. Days 1 and 2 of the workshop will cover the basic principles of data archiving and data repositories, including Darwin Core and EML metadata formats, how to write good metadata, how to archive data on the KNB data repository and elsewhere, data preservation workflows and best practices, and how to improve data discoverability and reusability. Participants will then spend approximately 2 days working in pairs to archive their own data using these principles, so applying with a team member from your research group is highly recommended.
The workshop is limited to 20 participants. We encourage you to apply via this form. Workshop costs will be covered with support from NSF EarthCube – CRESCYNT RCN. Participants will publish data during the workshop process, and we anticipate widely sharing workshop outcomes, including workflows and recommendations. Because coral reef science embodies a wide range of data types (spreadsheets, images, videos, field notes, large ‘omics text files, etc.), anticipate some significant pre-workshop prep effort.
Scientific workflows have become the lingua franca of scientific research for orchestrating the execution of processes and tasks. Workflows are commonly used to document analyses in publications, automate tedious and repetitive tasks, share data analysis protocols, abstract analysis complexity away from users, and provide training. Despite their increasing popularity, workflows remain difficult to create, given the complexity of the processes they implement and the intricacies of the underlying platforms on which they run.
Workflows often require combining tools that were designed with rigid input and output formats and requirements. For example, some programs require that the input file be named using a predetermined literal (e.g., “input.txt”), while others only accept input from the standard input stream (stdin). Combining programs therefore requires not only sufficient familiarity with their dependencies, but also an understanding of the underlying operating system and the commands needed to satisfy each program’s input, output and execution requirements.
Workflows are most commonly written using general-purpose programming languages (GPPLs); in bioinformatics, for example, Python and Bash are particularly popular languages for implementing workflows. While GPPLs allow for great flexibility, their lack of support for workflow-specific constructs requires that the programmer (or domain scientist) manually implement ancillary functionality such as task synchronization, error control, and logging. Furthermore, the lack of portability across platforms and architectures means that a workflow written to run on a server can rarely be run on a High Performance Computing (HPC) cluster without substantial modifications to the code. These limitations have fueled a plethora of new projects that either extend the functionality of GPPLs or provide new domain-specific languages (DSLs) designed specifically for implementing workflows.
Implementing a Pipeline in Pseudocode
In what follows, we will see how a simple workflow using two programs can be implemented in pseudocode: a human-readable notation resembling a GPPL. We will then build on this simple program by iteratively adding pseudocode to expand the workflow’s scope. We will conclude this blogpost by recognizing (1) the complexity associated with using GPPLs to implement workflows, and (2) the benefits of using a domain-specific language such as NextFlow. A more advanced video tutorial on NextFlow will be provided in a follow-up to this blog.
The workflow we will implement uses two made-up programs: estimateDistribution and combineDistributionEstimates. The first program, estimateDistribution, takes as input a text file of raw data for a geographical location, e.g. HAWAII.TXT, and produces a text file of processed data, e.g. HAWAII_PROCESSED.TXT. An example call of estimateDistribution could look like the following:
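One plausible invocation, assuming the program simply takes the input and output file names as positional arguments, would be:

    estimateDistribution HAWAII.TXT HAWAII_PROCESSED.TXT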
The second program, combineDistributionEstimates, takes any number of processed data files, e.g. HAWAII_PROCESSED.TXT, TAIWAN_PROCESSED.TXT and AUSTRALIA_PROCESSED.TXT, and generates a single final output, FINAL_OUTPUT.TXT. An example call of combineDistributionEstimates could look like the following:
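Assuming the same positional style, with the last argument naming the combined output, a call might be:

    combineDistributionEstimates HAWAII_PROCESSED.TXT TAIWAN_PROCESSED.TXT AUSTRALIA_PROCESSED.TXT FINAL_OUTPUT.TXT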
The pseudocode describing a workflow that runs estimateDistribution and combineDistributionEstimates on four input files (HAWAII.TXT, TAIWAN.TXT, AUSTRALIA.TXT and JAPAN.TXT) is given in Example 1. A graphical representation of the workflow is given in Figure 1.
Figure 1. Graphical representation of the workflow described in the pseudocode of Example 1. Programs are represented with rectangles and inputs with curved-border rectangles. While our example pertains to only four input files representing four geographical regions, the dashed lines depict additional input(s) that could be processed by estimateDistribution and added to the combineDistributionEstimates computation.
Example 1. Workflow for running estimateDistribution and combineDistributionEstimates on four input files.
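In rough pseudocode (a sketch, with line numbers laid out so the references in the discussion below roughly apply), such a workflow might read:

    1   Input_names_list  = ["HAWAII.TXT", "TAIWAN.TXT", "AUSTRALIA.TXT", "JAPAN.TXT"]
    2   Output_names_list = ["HAWAII_PROCESSED.TXT", "TAIWAN_PROCESSED.TXT",
    3                        "AUSTRALIA_PROCESSED.TXT", "JAPAN_PROCESSED.TXT"]
    4
    5   # process each raw input file in turn
    6   for i in 1 .. length(Input_names_list):
    7       run("estimateDistribution " + Input_names_list[i] + " " + Output_names_list[i])
    8   end for
    9
    10  run("combineDistributionEstimates " + join(Output_names_list, " ") + " FINAL_OUTPUT.TXT")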
In lines 1, 2 and 3, we declare the lists of input and output file names for estimateDistribution. In lines 6, 7 and 8, we run estimateDistribution over each of the input files in Input_names_list and generate an output named according to the scheme in Output_names_list. Line 10 runs the second program, combineDistributionEstimates, on all the files described in Output_names_list.
Implementing Workflow-Specific Functionality
While the workflow described in Example 1 is viable as written, several enhancements are needed to avoid runtime errors, improve efficiency and enhance cross-platform portability.
These enhancements are addressed as follows:
1- First, we need to verify that the input files actually exist; if any are missing, execution of the program should stop. This check can be implemented in pseudocode as shown in lines 5-7 of Example 2.
2- Workflows written in traditional programming languages cannot, by default, resume after errors. For instance, in Example 1, an error during the execution of combineDistributionEstimates would require re-running the pipeline from the beginning, even if estimateDistribution completed successfully. In a large workflow comprising dozens of steps and hundreds of input files, a complete rerun caused by an error in the last step would be an utter waste of time and resources. Luckily, a check that skips programs that already completed successfully, and reruns only those that did not, can be implemented in pseudocode, as shown in line 19 of Example 2.
3- When using a traditional programming language, any assumptions about the underlying execution environment need to be made explicit. For instance, additional pseudocode is required to run the workflow on an HPC cluster, where the inputs to estimateDistribution can be processed in parallel on different nodes. Furthermore, to achieve a high level of portability, one needs to explicitly account for different HPC job schedulers, such as SLURM, PBS, SGE and LSF. This additional functionality is illustrated in lines 8-13 of Example 2.
4- We need to manually synchronize execution to ensure that combineDistributionEstimates does not start before all four executions of estimateDistribution have completed. We can achieve this by adding pseudocode that pauses the workflow until estimateDistribution has finished running on all inputs, as shown in line 23 of Example 2.
Example 2. Final extended workflow for running estimateDistribution and combineDistributionEstimates on four input files.
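A rough pseudocode sketch incorporating the four enhancements, with line numbers laid out so the references above roughly apply, might look like this:

    1   Input_names_list  = ["HAWAII.TXT", "TAIWAN.TXT", "AUSTRALIA.TXT", "JAPAN.TXT"]
    2   Output_names_list = ["HAWAII_PROCESSED.TXT", "TAIWAN_PROCESSED.TXT",
    3                        "AUSTRALIA_PROCESSED.TXT", "JAPAN_PROCESSED.TXT"]
    4
    5   for each input in Input_names_list:                      # (1) verify that inputs exist
    6       if not file_exists(input):
    7           stop("missing input file: " + input)
    8   scheduler = detect_job_scheduler()                       # (3) SLURM, PBS, SGE, LSF or none
    9   function submit(command):
    10      if scheduler is not none:
    11          return submit_to_scheduler(scheduler, command)   # run on an HPC node
    12      else:
    13          return run_locally(command)
    14
    15  job_ids = []
    16  for i in 1 .. length(Input_names_list):
    17      command = "estimateDistribution " + Input_names_list[i] + " " + Output_names_list[i]
    18      # (2) skip steps whose output already exists from an earlier successful run
    19      if not file_exists(Output_names_list[i]):
    20          job_ids.append(submit(command))
    21  end for
    22  # (4) wait for every estimateDistribution job before combining
    23  wait_for_completion(job_ids)
    24
    25  submit("combineDistributionEstimates " + join(Output_names_list, " ") + " FINAL_OUTPUT.TXT")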
Workflow Frameworks: Facilitating the Implementation and Execution of Workflows
These four simple enhancements make it abundantly clear that, despite the simplicity of the original workflow, the complexity associated with ancillary tasks such as synchronization, fault tolerance, cross-platform portability and parallelization can quickly bloat workflows with technical, extraneous functionality that renders them difficult for third-party users to understand, extend or maintain. For these reasons, numerous projects now provide libraries to facilitate the implementation of workflows. There are in fact dozens, if not hundreds, of such libraries; see for instance https://github.com/common-workflow-language/common-workflow-language/wiki/Existing-Workflow-systems for a non-exhaustive list. Workflow libraries differ principally in: 1- the programming languages used, 2- philosophy, and 3- the comprehensiveness of the provided functionality. We tackle these three points in what follows.
1- Languages supported.
Although workflow libraries exist for a plethora of programming languages, they are particularly popular for interpreted languages, such as Python, Ruby and Groovy. Preferences for frameworks are often biased by one’s programming expertise.
To make workflow implementation accessible to non-programmers who do not wish to learn a programming language for the sole purpose of writing a workflow, an increasing number of frameworks instead use domain-specific languages (DSLs). In short, DSL-based frameworks define their own languages: easy-to-use constructs built on a small syntax and specifically tailored to describing common workflow tasks. A particularly interesting DSL-based framework is BigDataScript which, in my opinion, elegantly combines expressiveness and power in an easy-to-master DSL.
2- Philosophy.
The term philosophy is used here as shorthand for various design characteristics. For example: Are the inputs implicitly or explicitly determined? Is the workflow declaration coupled with its execution (as in Ruffus), or is the workflow described in configuration files (as in NSF’s Pegasus framework)? For more on these philosophy-related issues, see Jeremy Leipzig’s review of bioinformatic pipeline frameworks.
3- Comprehensiveness.
Another key difference between workflow frameworks is the extent of the functionality supported “out of the box.” For instance, does the framework support easy parallelization of independent tasks? Is it portable across environments such as Docker, cloud and HPC platforms? Does it handle errors gracefully, and does it keep track of which tasks ran successfully?
As mentioned above, there are over a hundred workflow frameworks, with over a dozen for Python alone. Identifying the perfect framework for a project or an organization is no easy feat and depends on various parameters, not the least of which is the user’s (implementer’s) programming or scripting expertise.
A Sample Workflow Using NextFlow
In what follows I introduce NextFlow (https://www.nextflow.io/), an expressive, versatile and particularly comprehensive DSL framework for composing and executing workflows. I chose to focus on NextFlow because its DSL also supports the full syntax and semantics of Groovy, a dynamic language that runs on the Java platform. This support for a full-fledged programming language gives NextFlow an edge that other DSL frameworks are missing. From a software engineering perspective, NextFlow seems to be very well thought out and very well supported. Furthermore, NextFlow uses the UNIX pipe concept, which simplifies writing parallel, scalable and portable pipelines.
The workflow in Example 2 can be described in NextFlow as follows:
Example 3. NextFlow script for running the workflow described using the pseudocode in Example 2.
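A sketch of such a NextFlow script (in the DSL1 style current at the time, and formatted so the line references in the discussion below roughly correspond) might read:

    inputFilesChan = Channel.from(          // queue ("channel") of raw input files
        file('HAWAII.TXT'),
        file('TAIWAN.TXT'),
        file('AUSTRALIA.TXT'),
        file('JAPAN.TXT')
    )

    process estimateDistribution {
        input:
        file inputFile from inputFilesChan
        output:
        file "${inputFile.baseName}_PROCESSED.TXT" into processedChan   // e.g. HAWAII -> HAWAII_PROCESSED.TXT

        """
        estimateDistribution ${inputFile} ${inputFile.baseName}_PROCESSED.TXT
        """
    }

    process combineDistributionEstimates {
        input:
        file processedFiles from processedChan.toList()   // all processed files as one list

        """
        combineDistributionEstimates ${processedFiles} FINAL_OUTPUT.TXT
        """
    }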
In lines 1-6, we declare our input files, which are loaded into a Channel – think of it as a queue – that we call inputFilesChan. Note that each item in the queue is of type file. This lets NextFlow “automagically” stage files as necessary under different architectures.
A less verbose declaration can be achieved using the following syntax:
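One compact possibility, though not necessarily the exact syntax intended here, is to load the inputs with a glob pattern, which yields the same queue of file objects:

    // pick up every .TXT file in the launch directory as a file object
    inputFilesChan = Channel.fromPath( '*.TXT' )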
In lines 8-17, we declare a NextFlow process: a workflow unit. Line 8 declares the process name. Lines 9 and 10 indicate that the list of inputs to this process will be read from the Channel named inputFilesChan.
Just as we can build Channels, or queues, to store input files, we can do the same for output files. Here, lines 11 and 12 create a channel of outputs that stores the list of files generated by the process. Note that since the inputs are of type file, we can use Groovy to request the baseName of each file (the base name of “HAWAII.TXT” is “HAWAII”), as is done in line 12. We can then build an output name on the fly by appending “_PROCESSED.TXT” to the base name.
For each file, we run the estimateDistribution program. The command line to execute is defined in line 15, enclosed in triple quotes (""") as the syntax of a NextFlow process requires.
Lines 19 to 26 declare the second process of our pipeline, which handles the execution of combineDistributionEstimates. Since the latter requires all inputs to be passed as a list, we convert the output Channel from the previous process using the toList() operator.
The output of the NextFlow script in Example 3 is given below.
Output 1. Output generated by the execution of the NextFlow script from Example 3.
NextFlow submits four runs of the estimateDistribution process and one run of the combineDistributionEstimates process. Although we did not explicitly implement any of the ancillary functionality (the additional pseudocode from Example 2), error handling and resumption, synchronization, and running on different environments are all handled automatically by NextFlow. For instance, let’s misspell the input file name HAWAII.TXT as KAWAII.TXT. After rerunning NextFlow on our workflow, we obtain the output below.
Output 2. Error message produced by the execution of the NextFlow script from Example 3 after introduction of an erroneous input file name.
As in the first run, NextFlow schedules and submits four runs of the estimateDistribution process, one for each of the four input files. Immediately after submitting the jobs, NextFlow reports “ERROR ~ Error executing process > ‘estimateDistribution (1)’”. The “Command error” section of the output shows that the program estimateDistribution could not locate the file KAWAII.TXT. Note, however, that although the error occurred on the first element of the list, NextFlow continued the execution for the remaining inputs of estimateDistribution. After correcting the erroneous file name from KAWAII.TXT to HAWAII.TXT and rerunning the workflow with the -resume option, NextFlow’s output is:
Output 3. Successfully completed jobs are cached by NextFlow across runs.
We see from the output that estimateDistribution (1) – the (1) refers to the first element of the input files’ Channel – was resubmitted, whereas the other inputs (2, 3 and 4), which had already been cached, were not. We did not need to add any code to check for previously completed tasks; NextFlow handles that for us automatically.
A comprehensive workflow DSL can also facilitate portability of a workflow across platforms. For instance, when using NextFlow, only a few lines of configuration are required to run a pipeline on an HPC cluster. The following lines allow us to configure our workflow for execution on the University of Hawaii’s HPC cluster.
Example 4. Configuration file for running the NextFlow script from Example 3 on a compute cluster which uses the SLURM job scheduler.
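A nextflow.config along the following lines, assuming the process names from the sketch above, would achieve this:

    // nextflow.config: run every process through SLURM on the community.q queue
    process {
        executor = 'slurm'
        queue    = 'community.q'

        withName: estimateDistribution {
            cpus = 4
        }
        withName: combineDistributionEstimates {
            cpus = 1
        }
    }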
This configuration file instructs NextFlow to run both estimateDistribution and combineDistributionEstimates through SLURM on the cluster’s “community.q” queue, with estimateDistribution using 4 CPUs and combineDistributionEstimates using 1 CPU. It’s that simple!
The output looks identical to that obtained when running locally. However, retrieving the job history on the cluster shows two jobs, “nf-estima+” and “nf-combin+”, both executed by the SLURM job scheduler.
Output 4. Processes described in the NextFlow script from Example 3 are executed by SLURM as two independent jobs.
Conclusion
The goal here is not to publicize NextFlow, but rather to show how a workflow DSL can streamline the implementation of complex functionality. The choice of infrastructure and tools plays a critical role in helping scientists follow best practices for research transparency and reproducibility, particularly under the new paradigm of data-intensive science. An appropriate scientific workflow framework can shoulder much of that burden, allowing researchers to abstract, manage and share complex scientific analyses with minimal effort.
The next installment of this two-part blogpost will give a brief tutorial on using NextFlow for the implementation of workflows, with a specific emphasis on bioinformatics and the Docker platform.
Thanks enormously to Dr. Mahdi Belcaid for this careful exposition and insight into how new tools can be used to streamline workflow production. It’s a major long-term goal of EarthCube, CRESCYNT, and many researchers to be able to assemble computational workflows more easily and efficiently, and we’re grateful for the vision Mahdi has shared here. Look for his follow-up screencast tutorial in a subsequent blogpost.
Scientists need better ways to analyze and integrate their data and to collaborate with other scientists, and new computing technologies and tools can help. However, disparate perspectives and the absence of a common vocabulary are hard to overcome: this is true within multidisciplinary science teams, and true when scientists try to talk with computer scientists. Workflows, as a way to help design and implement a workbench, are needed both as a collaboration space and as a blueprint for implementation.
Take a look at a recent presentation to the EarthCube science committee (video) or an earlier presentation offered at ASLO 2017 (slides and voice) to see a flexible and low-tech way to simultaneously (1) facilitate the necessary sci-tech interactions for your own lab and (2) begin to sketch out a blueprint for the work that needs to be done. Subsequent technical implementation is possible with new tools, including Common Workflow Language (CWL) as a set of specifications, Docker as modular and sharable containers for either fully developed tools or small pieces of code, and Nextflow as an efficient and highly scalable workflow language to make the computational work happen. Look for a post in the near future by Mahdi Belcaid describing the technical implementation of these workflows.
OPPORTUNITY! We will be hosting one or two in-person skills-training workshops in the coming months, with expenses covered by our NSF EarthCube CRESCYNT grant. The workshops will focus particularly on training early-career professionals and will work through some challenging coral reef use cases and their cyberinfrastructure needs. We collected some great use cases at ICRS but would like additional cases to consider, so we invite you to describe your own research challenges through this Google form. Please contact us for more on this or other issues. Thanks!