The Mastery Rubric for Bioinformatics: A tool to support design and evaluation of career-spanning education and training
Authors:
Rochelle E. Tractenberg aff001; Jessica M. Lindvall aff002; Teresa K. Attwood aff003; Allegra Via aff004
Authors place of work:
aff001: Collaborative for Research on Outcomes and -Metrics, and Departments of Neurology, Biostatistics, Biomathematics and Bioinformatics, and Rehabilitation Medicine, Georgetown University, Washington, DC, United States of America
aff002: National Bioinformatics Infrastructure Sweden (NBIS)/ELIXIR-SE, Science for Life Laboratory (SciLifeLab), Department of Biochemistry and Biophysics, Stockholm University, Stockholm, Sweden
aff003: Department of Computer Science, The University of Manchester, Manchester, England, United Kingdom; The GOBLET Foundation, Radboud University, Nijmegen Medical Centre, Nijmegen, The Netherlands
aff004: ELIXIR Italy, National Research Council of Italy, Institute of Molecular Biology and Pathology, Rome, Italy
Published in the journal:
PLoS ONE 14(11)
Category:
Research Article
doi:
https://doi.org/10.1371/journal.pone.0225256
Summary
As the life sciences have become more data intensive, the pressure to incorporate the requisite training into life-science education and training programs has increased. To facilitate curriculum development, various sets of (bio)informatics competencies have been articulated; however, these have proved difficult to implement in practice. Addressing this issue, we have created a curriculum-design and -evaluation tool to support the development of specific Knowledge, Skills and Abilities (KSAs) that reflect the scientific method and promote both bioinformatics practice and the achievement of competencies. Twelve KSAs were extracted via formal analysis, and stages along a developmental trajectory, from uninitiated student to independent practitioner, were identified. Demonstration of each KSA by a performer at each stage was initially described (Performance Level Descriptors, PLDs), evaluated, and revised at an international workshop. This work was subsequently extended and further refined to yield the Mastery Rubric for Bioinformatics (MR-Bi). The MR-Bi was validated by demonstrating alignment between the KSAs and competencies, and its consistency with principles of adult learning. The MR-Bi tool provides a formal framework to support curriculum building, training, and self-directed learning. It prioritizes the development of independence and scientific reasoning, and is structured to allow individuals (regardless of career stage, disciplinary background, or skill level) to locate themselves within the framework. The KSAs and their PLDs promote scientific problem formulation and problem solving, lending the MR-Bi durability and flexibility. With its explicit developmental trajectory, the tool can be used by developing or practicing scientists to direct the acquisition of new bioinformatics KSAs, or to deepen existing ones, for themselves and their teams. The MR-Bi is a tool that can contribute to the cultivation of a next generation of bioinformaticians who are able to design reproducible and rigorous research, and to critically analyze results from their own, and others’, work.
Keywords:
Bioinformatics – Learning – Human learning – Health informatics – Cognition – Reproducibility – Experimental design – Instructors
Introduction
During the past two decades, many commentators [1]; [2]; [3]; [4]; [5]; [6]; [7]; [8]; [9]; [10] have drawn attention to the wide gap between the amount of life-science data being generated and stored, and the level of computational, data-management and analytical skills needed by researchers to be able to process the data and use them to make new discoveries. Bioinformatics, the discipline that evolved to harness computational approaches to manage and analyze life-science data, is inherently multi-disciplinary, and those trained in it therefore need to achieve an integrated understanding of both factual and procedural aspects of its diverse component fields. Education and training programs require purposeful integration of discipline-specific knowledge, perspectives, and habits of mind that can be radically different. This is true whether the instruction is intended to support the use of tools, techniques and methods, or to help develop the next generation of bioinformaticians; such integration can thus be difficult to achieve, particularly in limited time-frames [11]; [12].
Several international surveys have been conducted to better understand the specific challenges for bioinformatics training (e.g., [13]; [14]; [12]; [15]). While all agree on the necessity of integrating computational skills and analytical thinking into life-science educational programs, they nevertheless acknowledge that the difficulties of achieving this in a systematic and formal way remain. To try to address these issues, various groups around the world began developing curriculum guidelines, defining core (bio)informatics competencies necessary to underpin life-science and biomedical degree programs (e.g., [16]; [17]; [18]; [19]; [20]; [21]). Competencies represent what individuals can do when they bring their Knowledge, Skills and Abilities (KSAs) together appropriately [22], and at the right level(s), for the right application, to achieve a given task. Competencies are thus multi-dimensional, highly complex, task-specific behaviors.
Using competencies to guide curriculum development has proved problematic (e.g., [23]; [24]; [25]; [26]; [27]). In consequence, the Accreditation Council for Graduate Medical Education in the United States has shifted its definition of “competency-based medical education” towards a specifically developmental approach that specifies and implements “milestones” [28]; [29]. This is a shift of focus for curriculum design (whether recognized or not) from the end-state of education (i.e., acquired competencies) to how learners need to change–to develop–en route to achieving those desired end-states. In other words, curriculum design also needs to include “the route” or developmental trajectory [30]. This has been acknowledged in diverse contexts: for medical education, Holmboe et al. (2016) [28] discuss the need to structure “teaching and learning experiences…[in order] to facilitate an explicitly defined progression of ability in stages”; for bioinformatics, Welch et al. (2016) [20] call for community-based efforts to “[i]dentify different levels or phases of competency” and “provide guidance on the evidence required to assess whether someone has acquired each competency.” In other words, both the steps along the way and the way itself are requisite to a curriculum that supports the achievement of milestones and competencies (see [31]).
In developing curricula that promote the acquisition, integration, and retention of skills while also incorporating developmental considerations, part of the challenge is the time-frame available for instruction: formal education can follow structured, long-term programs, but training is generally delivered within limited time-frames [11]; [12]. In such circumstances, the time available to align instruction with learners’ levels of complexity of thinking (and experience) is limited, and other considerations, including the prior experience and preparation of the learners, must also be accommodated. Nevertheless, it is possible to address developmental considerations for both short- and long-form learning experiences [32] by appeal to Bloom’s Taxonomy of the Cognitive Domain [33], which specifies a six-level hierarchy of cognitive skills or functioning (modeled in the sketch after this list):
Remember/Reiterate—performance is based on recognition of a seen example(s);
Understand/Summarize—performance summarizes information already known/given;
Apply/Illustrate—performance extrapolates from seen examples to new ones by applying rules;
Analyze/Predict—performance requires analysis and prediction, using rules;
Create/Synthesize—performance yields something novel, creating, describing and justifying something new from existing things/ideas;
Evaluate/Compare/Judge [34]—performance involves applying guidelines, not rules, and can involve subtle differences arising from comparison or evaluation of abstract, theoretical or otherwise not-rule-based decisions, ideas or materials. (This representation, with “evaluate/judge” at the pinnacle, is from the original Bloom taxonomy; the 2001 revision characterized “create/synthesize” as the most cognitively complex ([34]; see [30] for discussion of how the original formulation suits the higher/graduate/postgraduate context).)
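Because the hierarchy is ordered, it can be modeled as an ordered structure. The following sketch is ours, purely illustrative and not part of the taxonomy itself; it encodes the six levels in the original 1956 ordering (Evaluate at the pinnacle), so that the cognitive complexity demanded by two learning outcomes can be compared directly.

```python
# A minimal sketch of Bloom's taxonomy as an ordered hierarchy, using the
# original ordering adopted in this paper (Evaluate at the pinnacle).
# The enum and its use below are illustrative assumptions.
from enum import IntEnum

class Bloom(IntEnum):
    REMEMBER = 1    # reiterate recognized, seen examples
    UNDERSTAND = 2  # summarize information already known/given
    APPLY = 3       # extrapolate to new examples by applying rules
    ANALYZE = 4     # analyze and predict, using rules
    CREATE = 5      # synthesize something new from existing things/ideas
    EVALUATE = 6    # judge using guidelines, not rules

# Integer ordering makes "requires more complex cognition" a comparison:
assert Bloom.EVALUATE > Bloom.APPLY
print(max(Bloom.APPLY, Bloom.CREATE).name)  # -> CREATE
```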
The hierarchy in Bloom’s taxonomy is developmental [30]; this highlights a problem for some of the proposed competencies for practicing scientists (e.g., [17]; [19]). Specifically, most of the articulated competencies require very high Bloom’s levels, and nearly all require the seamless, sometimes iterative, use of several Bloom’s cognitive processes. Without a specific built-in developmental trajectory that can lead a learner from lower to higher Bloom’s levels (and thence to the integration of the multiple cognitive processes across the spectrum of activities that bio-/medical informatics practice requires), the competencies may simply be too cognitively complex to serve as achievable end-states.
The Mastery Rubric: A curriculum-development and -evaluation tool
Given the challenges, it is perhaps not surprising that efforts to incorporate competencies into teaching and training have been problematic. Motivated by these issues, we have created a new curriculum-development and -evaluation tool: the Mastery Rubric for Bioinformatics (MR-Bi). Rubrics are typically used to provide a flexible but rigorous structure for evaluating student work [35]; a Mastery Rubric is similar but describes the entire curriculum rather than individual assignments [30]. Creating a Mastery Rubric requires three key steps: 1) identifying the KSAs that a curriculum should deliver, or that are the targets of learning; 2) identifying recognizable stages for the KSAs in a clear developmental trajectory that learners and instructors can identify, and that instructors can target in their teaching and assessment; and 3) drafting observable Performance Level Descriptors (PLDs; [36]) for each KSA at each stage, describing evaluable changes in performance from less to more expert. Within a Mastery Rubric, PLDs clarify what instructors need to teach and assess at each stage, and articulate to students what they need to demonstrate, in order for the KSAs to be characterized as ‘achieved’ for that stage. To date, the Mastery Rubric construct has been used to design and evaluate graduate and postgraduate/professional curricula in clinical research, ethical reasoning, evidence-based medicine, and statistical literacy (see [30]).
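To make these three steps concrete, a Mastery Rubric can be pictured as a grid of PLDs indexed by KSA and developmental stage. The sketch below is our own simplified model: the class name `MasteryRubric` is hypothetical, and the PLD strings are placeholders, not quotations from Table 1.

```python
# Hypothetical model of a Mastery Rubric: PLDs indexed by (KSA, stage).
# KSA and stage names follow the MR-Bi; the PLD texts are placeholders.
from dataclasses import dataclass

@dataclass
class MasteryRubric:
    ksas: list    # step 1: the KSAs the curriculum should deliver
    stages: list  # step 2: recognizable developmental stages
    plds: dict    # step 3: (KSA, stage) -> observable PLD

    def pld(self, ksa, stage):
        """Observable performance expected of a minimally competent
        performer of `ksa` at `stage`."""
        return self.plds[(ksa, stage)]

mr = MasteryRubric(
    ksas=["Hypothesis generation"],  # 1 of the 12 MR-Bi KSAs
    stages=["Novice", "Beginner", "Apprentice",
            "J1 Journeyman", "J2 Journeyman"],
    plds={("Hypothesis generation", "Novice"):
              "Restates hypotheses supplied by others.",  # placeholder
          ("Hypothesis generation", "Apprentice"):
              "Generates testable hypotheses for jointly defined problems."},
)
print(mr.pld("Hypothesis generation", "Novice"))
```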
Like competencies, KSAs can be highly complex; by contrast, however, KSAs are general–i.e., the same KSA can be deployed differently to support different task-specific competencies. Hence, this paper focuses on KSA-based teaching and learning, to promote the likelihood of learners’ adaptability to future new competencies. We emphasize fostering the development of KSAs within a structured framework that explicitly supports continuing growth and achievement. Specifically, we present the MR-Bi, a tool created to support the articulation and demonstration of actionable teaching and learning goals. The focus of this paper is on how the tool was developed; its uses and applications are the topics of our ongoing work.
Methods
Construction of the MR-Bi followed formal methods. The KSAs were derived via cognitive task analysis [37]; the stages were derived using Bloom’s taxonomy and the European guild structure ([38]; p.182), which maps how individuals can expect/be expected to grow and develop; and the PLDs evolved through a formal standard-setting procedure [39], considering key aspects of the kind of critical thinking necessary in a given discipline. This application of standard-setting methodology (for a review, see [40]) focused on qualitatively, rather than quantitatively, characterizing the performance (“body of work”) of the minimally competent performer of each KSA at each stage. The interested reader is encouraged to review the workflow presented in the Supplemental Materials (Figure A in S1 File). This comprises a detailed description of the integration of theory, the cognitive task analysis, and the drafting of the KSAs using Bloom’s taxonomy and the Messick criteria, with examples.
Face and content validity of the MR-Bi were also determined systematically. A degrees of freedom analysis [41]; [42] was used to formally explore the alignment of the KSAs with existing biomedical informatics and bioinformatics competencies ([17], [19]). Alignment of the MR-Bi with principles of andragogy ([43]) was also investigated. These alignment efforts serve to demonstrate whether and how the MR-Bi is consistent both with the goal of supporting achievement of competencies, and with andragogical objectives. The KSAs must be teachable to adults, and the PLDs must describe observable behaviors, if the MR-Bi is to be an effective and valid tool for instructors and learners.
KSA derivation via cognitive task analysis
A cognitive task analysis generated KSAs following the procedure described in the Supplemental Materials (Table A, Figure A, Text A in S1 File). KSAs that characterize the scientific method, reflecting what is requisite in scientific work (derived from [44] and [45], and adapted by [46]), were assumed to be essential to bioinformatics education and training [47]. These KSAs were refined with respect to community-derived competencies. The Welch et al. [19] competencies focus on the development of the bioinformatics workforce, while those of Kulikowski et al. [17] focus explicitly on curriculum development for doctoral training in health and medical informatics. Despite considerable overlap between them, we reviewed both sets to ensure that our cognitive task analysis, and the resulting KSAs, were comprehensive.
Identification of stages
In a Mastery Rubric, KSAs are described across a developmental continuum of increasing complexity. The specific developmental stages, derived from the European guild structure, are Novice, Beginner, Apprentice and Journeyman ([30]; [48]; see also [49] for a recent similar strategy). In the MR-Bi, these stages use Bloom’s taxonomy explicitly to characterize the interactions of the individual with scientific knowledge (and its falsifiability). Generally, someone who deals with facts (remembering them, but not questioning them or creating novel situations in which to discover them) is a Novice. A Beginner has a growing understanding of the experimental origins of facts they memorized as a Novice. The individual who can participate in experiments that are designed for them, making predictions according to rules they have learned, but not interpreting or evaluating, is an Apprentice. Although they are not explicit about this, undergraduate life-science programs generally support the transitions from Novice to Apprentice. These stages characterize the preparatory phases of anyone new to bioinformatics, irrespective of age or prior experience/training.
The guild structure characterizes the independent practitioner as a Journeyman: postgraduate education, or equivalent work experience, supports the transformation from Apprentice (learning the craft) to Journeyman (practitioner). Journeyman-level individuals are prepared for independent practice in the field, although newly independent practitioners generally still require some level of mentorship (e.g., in a post-doctoral context); such individuals are designated J1 Journeyman. The scientist whose doctoral program, or background and experience, has prepared him/her for fully independent scientific work is the J2 Journeyman. Crucially, an individual with a PhD (or equivalent) in biology, computer science or other scientific field may be a novice in bioinformatics: e.g., someone who deals with facts about programming or data resources, but who is unable to apply them to novel situations to discover new biological knowledge (which would require the higher-order cognitive functioning characteristic of the J1 Journeyman or J2 Journeyman performer). These “high level” descriptions guided the PLD-drafting process.
The developmental trajectory in a Mastery Rubric is evidence-based, not time-based: the performance of a KSA at any stage should lead to concrete and observable output or work products that can be assessed for their consistency with the PLDs by objective evaluators. Claims of achievement cannot be based on age, time-since-degree, job title, time-in-position, or other time-based indicators.
Standard setting for PLDs of each KSA at each stage
We followed the Body of Work approach [50] to writing the PLDs, refining descriptions of how a “minimally competent” individual [51] would carry out the KSAs at each stage to demonstrate that they were capable of performing them at that level. This standard-setting exercise (which commenced at a 2-day international workshop in Stockholm in September 2017) was intended to describe performance at the “conceptual boundary between acceptable and unacceptable levels of achievement” ([52], p. 433). Participants were bioinformatics experts working in the National Bioinformatics Infrastructure Sweden (https://nbis.se/). All had documented expertise within the field, ranging from programming to applied bioinformatics with a biological and/or medical focus; all were also highly engaged in training activities within the life science community across Sweden. Prior to the workshop, a white paper on the Mastery Rubric, containing draft KSAs and PLDs, and the Kulikowski et al. and Welch et al. competencies were sent to participants for review and preparation. These materials, and the background and methods outlined here, were reviewed during the first half of day 1.
The workshop aimed to accomplish two facets of PLD development: range-finding and pin-pointing ([50], pp. 202–203). Range-finding involved writing relatively broad descriptions of performance for each KSA at each developmental stage. To orient the participants, pin-pointing initially involved whole-group evaluation and revision of the PLDs across all stages for just three of the KSAs. Afterwards, participants were divided into three small, facilitated groups (n = 4 per group); each undertook pin-pointing of the draft PLDs for four KSAs during the remainder of the two days. Participants iteratively evaluated the KSAs and PLDs to ensure that they, and the stages they represent, made sense, and that the PLDs were plausible, consistent within each stage, and not redundant across KSAs. A particular focus was on the performance levels that characterize the independent bioinformatician; specifically, to determine whether the Journeyman level for each KSA was realistically achieved at one point (J1 Journeyman), or whether a second stage of development (J2 Journeyman) was required to achieve “full” independence. Thus, all PLD drafting and review/revisions were systematic and formal, following Egan et al. (2012 [36], pp. 91–92) and Kingston & Tiemann (2012 [50], pp. 202–203), grounded both on expectations for earlier achievement and performance, and on how an individual could be expected to function once any given stage had been achieved.
Integrating KSAs, stages, and standards into the MR-Bi
The tasks performed by subject experts during the Stockholm workshop resulted in a completed first draft of the MR-Bi. Afterwards, final refinements (further pin-pointing and range-finding) were made during weekly online meetings (2017–2019). Here, our effort was directed at revising the PLDs to build consistently across stages within a KSA, and to describe performance within stages across KSAs. The intention was that the PLDs should support conclusions about “what is needed” as evidence that a person has achieved a given stage for any KSA. To this end, the PLDs were aligned with Bloom’s taxonomy, and written to reflect the core aspects of assessment validity outlined by Messick (1994 [53]):
What KSA(s) should learners possess at the end of the curriculum?
What actions/behaviors by the learners will reveal these KSAs?
What tasks will elicit these specific actions or behaviors?
Application of the Messick criteria ensured that the PLDs would represent concrete and observable behaviors that can develop over time and with practice. In particular, we focused on Messick questions 2 and 3 so that the MR-Bi would describe specific actions/behaviors learners or instructors could recognize as demonstrating that the KSA had been acquired to minimally qualify for a given stage. Overall, the creation of the MR-Bi followed methods intended to yield a psychometrically valid tool (e.g., [54]).
Results
KSA derivation via cognitive task analysis
There are eight KSAs that characterize the scientific method (define a problem based on a critical review of existing knowledge; hypothesis generation; experimental design; identify data that are relevant to the problem; identify and use appropriate analytical methods; interpretation of results/output; draw and contextualize conclusions; communication). These were customized and enriched based on consideration of the 45 competencies articulated by Kulikowski et al. (2012) [17] and Welch et al. (2014) [19], and ultimately, 11 distinct KSAs were derived.
Both sets of competencies include at least one item about ethical practice or its constituents, but neither item was sufficiently concrete to support a separate KSA. The PLD-writing process therefore began with the intention of integrating features of ethical science (e.g., promoting reproducibility, emphasizing rigorous science and a positivist scientific approach, and specifying attention to transparency) wherever these are relevant. However, it became clear that respectful practice, and awareness of the features of ethical conduct and misconduct, were missing. These considerations led to the identification of a 12th KSA, “ethical practice” (and the range-finding and pin-pointing exercises were carried out again–see Supplemental Materials, Figure A in S1 File):
Prerequisite knowledge—biology
Prerequisite knowledge—computational methods
Interdisciplinary integration
Define a problem based on a critical review of existing knowledge
Hypothesis generation
Experimental design
Identify data that are relevant to the problem
Identify and use appropriate analytical methods
Interpretation of results/output
Draw and contextualize conclusions
Communication
Ethical Practice
The KSAs are easily recognizable as key features of bioinformatics practice. They are not intended to be restrictively factual (i.e., literally just knowledge), nor are they content-specific, because content is apt to change quickly and often: this makes the Mastery Rubric both durable and flexible with respect to discipline, obviating the need to develop a different Mastery Rubric for every sub-discipline. Hence, only two types of Prerequisite knowledge were identified: in biology and computational methods. A distinct KSA for Interdisciplinary integration was identified as a separate need, because developing the ability to integrate across those domains, and understanding the potential need for inclusion of other domains (biomedicine, statistics, engineering, etc.), are essential in bioinformatics.
Defining a problem based on a critical review of existing knowledge underpins the application of critical evaluation skills and judgment, based on what is already known, to determine what is not yet known. This KSA supports the definition of a bioinformatician as a scientist who uses computational resources to address fundamental questions in biology [55]; [56]–i.e., it is intended to promote the bioinformatician’s ability to solve biological problems. It also derives implicitly from the competencies, and explicitly from the Wild & Pfannkuch (1999) [44] model of scientific reasoning (as highlighted in [46]). Hypothesis generation also emerged from the cognitive task analysis by appeal to theoretical and empirical scientific-reasoning models.
Experimental design, a crucial aspect of the scientific method, was included as a separate KSA in order to cover formal statistics, hypothesis testing, methodological considerations and pilot/sensitivity testing; this allowed the Prerequisite knowledge KSAs for biology and computational methods to focus on the background knowledge, and basic skills and abilities, that are foundational in each area, and the role of experiments and troubleshooting in each.
Given some overlap between them, we considered whether certain KSAs needed to be separate. For example, we discussed whether statistical and engineering methods required their own KSA. The decision to include aspects of statistical inference and experimental design in each of the Prerequisite knowledge KSAs was based on a (perhaps aspirational) objective to define these basic features as “prerequisite”. This formulation left the more complex and interdisciplinary characteristics of experimental design to its own KSA. Engineering was deemed to be an optional domain, not essential to the specific KSAs required for solving biological problems.
Identify data that are relevant to the problem and Identify and use appropriate analytical methods were deemed sufficiently distinct to warrant separate KSAs. Similarly, Communication was included separately to emphasize the importance of being able to transparently write about and present scientific work, even though there are requirements for communication in several other KSAs, including Interpretation of results/output and Draw and contextualize conclusions. Finally, we recognized Ethical practice as a separate KSA, in spite of the inclusion of key attributes of ethical science in all of the PLDs relating to transparency, rigor, and reproducibility.
Identification of stages
Given these KSAs, and Bloom’s taxonomy, stages on the developmental trajectory were articulated as follows:
Novice (e.g., early undergraduate/new to bioinformatics), Bloom’s 1: remember, understand. Novices can engage with well-defined problems, with known solutions.
Beginner (e.g., late undergraduate, early Master’s), Bloom’s 2–3: understand and apply. Beginners may use, but not choose, tools; they can engage with well-defined problems and apply what they are told to apply; the answers may not be known, and the Beginner would stop once this was apparent.
Apprentice (e.g., Master’s, early doctoral student), Bloom’s 3–4, early 5: choose and apply techniques to problems that have been defined (either jointly or by others). The Apprentice can analyze and interpret appropriate data, identify basic limitations, conceptualize a need for next steps, and contextualize results with extant literature.
Early Journeyman (J1) (e.g., late doctoral student or just after graduation), Bloom’s 5, early 6: begin to evaluate (review) and synthesize novel life-science knowledge, and to develop abilities to integrate bioinformatics into research practice, with some mentorship. The J1 Journeyman can contribute to problem formulation, shows earliest establishment of independent expertise in the specific life-science area, and can confidently integrate current bioinformatics technology into that area.
Late/advanced Journeyman (J2) (e.g., doctorate holder), Bloom’s 5, late 6: expertly evaluate (review) and synthesize novel life-science knowledge, and integrate bioinformatics into research practice. The J2 Journeyman is independent and expert in a specific life-science area, and can select, apply and develop new methods. The J2 Journeyman formulates problems and considers the relevance of “what works” within this area to other life-science domains, so as to be an adaptable and creative scientific innovator without having to reinvent every wheel.
As already noted, PLD reviewers were instructed to determine whether two different developmental stages should be described for the independent bioinformatician on each of the KSAs. All KSAs were judged to require the two Journeyman levels.
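The stage definitions above pair each stage with a band of Bloom’s levels. The following encoding is ours and purely illustrative: the “early”/“late” qualifiers are collapsed into simple ranges, so J1 and J2 share a range and differ in their depth at level 6.

```python
# Hypothetical encoding of the stage definitions above as Bloom's-level
# ranges (1 = remember ... 6 = evaluate); "early"/"late" qualifiers are
# collapsed for simplicity.
STAGE_BLOOM = {
    "Novice":        (1, 2),  # remember, understand
    "Beginner":      (2, 3),  # understand and apply
    "Apprentice":    (3, 5),  # apply, analyze, early create
    "J1 Journeyman": (5, 6),  # create, early evaluate
    "J2 Journeyman": (5, 6),  # create, late evaluate
}

def earliest_stage(required_level):
    """Earliest stage whose Bloom's range covers a required level."""
    for stage, (lo, hi) in STAGE_BLOOM.items():  # insertion order preserved
        if lo <= required_level <= hi:
            return stage

print(earliest_stage(4))  # -> Apprentice
```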
Synthesizing KSAs, stages, and PLDs into the MR-Bi
As described earlier, 12 KSAs were defined, and PLDs were drafted and iteratively refined for each of the stages on the developmental trajectory. Refinements included, for example, ensuring that neither KSAs nor PLDs included specific tools, programs, or tasks (e.g., BLAST, Hadoop, creating GitHub repositories), because these may change over time but the KSAs will change less quickly, if at all. The results are shown in the MR-Bi in Table 1.
In addition to the KSAs and PLDs, considerations for “evidence of performance” were incorporated into the MR-Bi (Table 1). As noted, the Mastery Rubric is a true rubric in the sense that learners can demonstrate their achievement of any given level on any KSA using a wide variety of evidence. The development of independence as a learner, and as a practitioner, requires a level of self-assessment that is not often considered in higher education and training.
Validity evidence
The first step in validating the MR-Bi was to investigate the alignment of the KSAs with the competencies. While the competencies informed the refinement of the KSAs (see Supplemental Materials, Figure A in S1 File), our emphasis was nevertheless on the scientific method. To support the claim that the MR-Bi is valid for the domain, the final KSAs should support the competencies such that, if a curriculum develops the KSAs in learners to a sufficient level, then performance of the relevant KSA(s) should lead to the demonstration of target competencies. This alignment is presented in Table 2, and assumes that each competency would be assessed as “present”/“absent”: i.e., that all of its constituent parts are required for a person to be declared “competent” for that item. To perform the alignment, we took the highest Bloom’s level of complexity described in the competency as the minimum required to successfully exhibit that competency.
The essential features of Table 2 for purposes of validating the MR-Bi are:
almost every competency is supported by at least one KSA; and
each KSA supports the achievement of at least one competency.
Table 2 suggests that the KSAs can support both sets of competencies, providing convergent as well as content validity. However, the table shows that several of the competencies were insufficiently articulated for alignment with any KSA. Two items (labeled with a * in Table 2) could not be aligned because they lacked actionable verbs: Recognition of the need for and an ability to engage in continuing professional development [19] and Human and social context [17]. Without actionable verbs, competencies cannot be taught or assessed in any systematic way. Three other items (labeled ** in Table 2) had potentially actionable verbs, but were insufficiently specified to align with any KSA: An understanding of professional, ethical, legal, security and social issues and responsibilities [19]; Evaluate the impact of biomedical informatics applications and interventions in terms of people, organizations and socio-technical systems; and Analyze complex biomedical informatics problems in terms of people, organizations and socio-technical systems [17]. In these cases, what constitutes the sufficient demonstration of the competencies is not discernable. By contrast, the PLDs in the MR-Bi contain actionable verbs describing the learner performing each KSA across stages, providing guidance to the learner about evidence that can demonstrate their achievements, and to the instructor about how to elicit such evidence.
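The alignment rule just described reduces to two checks: the competency must contain an actionable verb (otherwise it cannot be taught or assessed, as for the items marked * in Table 2), and the supporting KSA must reach the highest Bloom’s level the competency demands. The sketch below is ours; the verb list and example strings are illustrative assumptions, not taken from the competency documents.

```python
# Hypothetical sketch of the alignment logic: a competency can only be
# aligned if it contains an actionable verb, and a KSA supports it only
# if the KSA reaches the competency's highest required Bloom's level.
# The verb list is an illustrative assumption.
ACTIONABLE_VERBS = {"define", "design", "apply", "analyze", "evaluate",
                    "identify", "interpret", "communicate"}

def has_actionable_verb(competency_text):
    """Screen for at least one actionable verb (cf. the * items in Table 2)."""
    return any(v in competency_text.lower().split() for v in ACTIONABLE_VERBS)

def ksa_supports(ksa_max_bloom, competency_max_bloom):
    """Alignment rule: the highest Bloom's level described in the
    competency is the minimum the supporting KSA must reach."""
    return ksa_max_bloom >= competency_max_bloom

print(has_actionable_verb("Human and social context"))        # -> False
print(ksa_supports(ksa_max_bloom=6, competency_max_bloom=4))  # -> True
```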
Alignment of the MR-Bi with the principles of andragogy
The second aspect of validation was to consider alignment of the MR-Bi with principles of andragogy [43]. This alignment is explored in Table 3. The results in Table 3 are derived from the design features of any Mastery Rubric but are also based on curriculum development, evaluation, or revision experiences in other Mastery Rubric development projects (i.e., [30]; [48]; [57], respectively).
The Mastery Rubric construct itself was created to facilitate the development of higher education curricula; the MR-Bi is thus similarly aligned with these core principles.
Discussion
Rubrics are familiar tools–often, scoring tables–that help instructors evaluate the quality of, and hence grade, individual pieces of student work in a consistent way. They typically contain quality descriptors for various evaluative criteria at specific levels of achievement [35]. When shared, so that instructors can use them for marking, and students can use them to plan/monitor/evaluate their own work, they have a positive impact on learning outcomes (e.g., [58]; [59]; [60]).
The Mastery Rubric builds on this concept, but with a focus on entire curricula or training programs rather than on discrete pieces of work. As such, a Mastery Rubric provides an organising framework in which KSAs are clearly articulated, and their performance levels described and staged in such a way that they can be achieved progressively. Specifically, the framework has three components: i) a set of domain-relevant, and transferable, KSAs; ii) a defined set of developmental stages denoting progression along a path of increasing cognitive complexity, towards independence (if that is desired); and iii) descriptions of the range of expected performance levels of those KSAs. The interplay between these ‘dimensions’ affords the Mastery Rubric significant flexibility. In particular, it recognizes that individuals may be at different levels in different KSAs, and hence may have different speeds of traversal through them—thus, importantly, the measure of progression is not time, as in traditional educational systems [61], but rather the demonstrable acquisition of specific KSAs. This allows individuals (students, technicians or PIs) who wish to acquire bioinformatics skills to locate themselves within the matrix regardless of their current skill level or disciplinary background: for example, a person may be an Early Journeyman (J1) in biology (i.e., has earned a doctorate), yet be a Novice in bioinformatics; or a person may be an Apprentice-level computer scientist or ‘engineer’ yet a bioinformatics Beginner. For all learners, the MR-Bi not only identifies where they are, but also makes explicit the route from their current level of performance on any given KSA to a higher level, without the need to articulate bespoke personal traits for individuals from every conceivable scientific background.
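As a concrete illustration of “locating oneself in the matrix”, consider a hypothetical profile that records, per KSA, a learner’s current stage; the next learning target on each KSA is simply the next stage along the trajectory. The profile contents below are invented for illustration.

```python
# Hypothetical learner profile: the current stage can differ across KSAs
# (e.g., a doctorate-holding biologist who is new to computational work).
STAGES = ["Novice", "Beginner", "Apprentice",
          "J1 Journeyman", "J2 Journeyman"]

profile = {
    "Prerequisite knowledge - biology": "J1 Journeyman",
    "Prerequisite knowledge - computational methods": "Novice",
    "Interdisciplinary integration": "Beginner",
}

def next_target(current_stage):
    """Next stage along the developmental trajectory (None if at the top)."""
    i = STAGES.index(current_stage)
    return STAGES[i + 1] if i + 1 < len(STAGES) else None

for ksa, stage in profile.items():
    print(f"{ksa}: {stage} -> aim for {next_target(stage)}")
```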
A strength of the MR-Bi is that it was developed and refined specifically to support decisions that are made by instructors and learners, and to promote and optimize educational outcomes. In this way, the MR-Bi can bring validity—a formal construct relating to the decisions that are supported by any test, score, or performance [62]; [63]—to bioinformatics curriculum or training program development. Although it does not focus on subject-specific content, the MR-Bi should lead to curricula that produce similarly-performing graduates (or course completers) across institutions; that is, a curriculum or training program that uses the MR-Bi can go beyond a transcript of what courses a learner completed, to represent what a learner can do and the level at which they perform. This is consistent with the European Qualifications Framework (see https://ec.europa.eu/ploteus/content/how-does-eqf-work), which defines eight hierarchical levels describing what learners know, understand, and are able to do, aiming to make it easier to compare national qualifications and render learning transferable (https://ec.europa.eu/ploteus/content/descriptors-page). The European Qualifications Framework enables mapping of the disparate characterizations of high school graduates, university graduates, and doctorate awardees into a single, coherent set of general descriptors. The MR-Bi makes this type of mapping specifically developmental.
The orientation of the MR-Bi towards a particular definition of bioinformatics education [55] and, by extension, bioinformatics practice, should be recognized. The context on which MR-Bi users principally focus is assumed to be the life sciences; more specifically, their ultimate learning goals are assumed to be oriented towards solving biological problems using computational technology/techniques. Moreover, the developmental trajectories outlined by the PLDs, and the choice of focal KSAs, support bioinformatics education and training that seek to move individuals towards independence in their practice of–or contributions to–science. Accordingly, the KSA-extraction process was heavily influenced by models of the scientific method and scientific reasoning (following [44] and [45]). It is therefore important to emphasize that, using the methods described here, other investigators might generate different KSAs if neither independence nor the scientific method is essential to their objectives.
Further, although we considered competencies for both bioinformatics and medical/health informatics, the PLDs for the MR-Bi were devised by bioinformaticians–for bioinformaticians. If the goal were to derive a Mastery Rubric specific for medical/health informatics, and scientific independence was similarly important, then the same process of PLD development described here could be used; the resulting Mastery Rubric would have virtually the same KSAs, but with Prerequisite knowledge—biology, replaced by Prerequisite knowledge–medical informatics/health informatics/health systems (as appropriate). The PLDs in that Mastery Rubric would then be tailored to describe achievement and development of the health informatics practitioner.
It is also worth noting that the high-level stage descriptions (top of Table 1), which informed the PLD-drafting process, broadly track the typical development of an undergraduate who progresses to graduate school. It could be argued that these definitions are too rigid. After all, individuals differ in their motivation, and in their capacities and attitudes towards learning and growth; thus, in the ‘real world’, an undergraduate class may include students with behaviours and attitudes characteristic of Novice and Beginner levels of cognitive complexity (or, exceptionally, Apprentice level). Hence, if challenged with advanced training methods (e.g., introducing novel research projects into undergraduate biology curricula), some students in the class will be receptive to being pushed beyond their intellectual ‘comfort zones’, while many will resist, being more comfortable with lectures and “canned” exercises with known results (e.g., [64]). Nevertheless, the broad descriptions in Table 1 were ultimately those that resonated with our own practical experiences of bioinformatics education and training at all stages, and for pragmatic purposes the mapping was therefore necessarily general, like any true rubric.
It should be stressed that creating any kind of ‘framework’ to support the development of competencies involves numerous stakeholders and is hugely time-consuming, not least because marrying multiple stakeholder views is hard. For example, the current version of the European e-Competence Framework for Information and Computing Technology skills (http://www.ecompetences.eu) has taken more than 8 years to develop; similarly, the bioinformatics competencies have been evolving over at least the last 6 years–inevitably, few aspects escape dissent when stakeholders from very different backgrounds, with disparate student populations and different educational goals attempt to achieve consensus [65]. Likewise, formulation of the current version of the MR-Bi has taken more than 2 years, and further refinements are likely to be required with future additional stakeholder input.
Amongst the many challenges for initiatives developing competency frameworks is the lack of a standard vocabulary. In consequence, while the MR-Bi describes knowledge, skills and abilities, the European e-Competence Framework refers to knowledge, skills and attitudes, and the BioExcel competency framework to knowledge, skills and behaviours [66]. We use abilities because they are observable and connote more purposeful engagement than do attitudes or behaviours, and this is the terminology used in discussions of validity in education/educational assessment (e.g., see [53]).
Mastery Rubrics have strengths that can be leveraged by institutions, instructors, and scientists. They support curriculum development in any education program, making concrete and explicit the roles–and contributions–of learner and instructor. Since the Mastery Rubric was developed as a tool for curriculum development, it may be difficult to conceptualize how the MR-Bi can be used to support the development of short training programs or courses. Nevertheless, even in the absence of a formal curriculum, for stand-alone and/or linked courses, the Mastery Rubric can help i) instructors to focus on prerequisite knowledge and learning objectives that are time-delimited; and, ii) learners to identify targeted training opportunities and thence to track their personal/professional development with the PLDs and KSAs.
For example, a possible application of the MR-Bi could be the classification (or development) of teaching/training materials and courses according to which KSAs they support, where learners should start (at which stage of each KSA) in order to benefit optimally from the training, and to which stage a given training opportunity proposes to bring them. Another important application of this tool could be the revision of existing courses and materials so that they explicitly support learning/demonstration of targeted KSAs at a given level. The integration of MR-Bi features into teaching materials would both provide guidance to the self-directed learner, and support the standardization of teaching and learning goals across instructional materials, and across formal and informal bioinformatics training programs worldwide. The MR-Bi is therefore a timely contribution to current global conversations and initiatives (including ELIXIR’s Training e-Support System [67], [68], which is championing the uptake of Bioschemas.org specifications for sharing training materials [69]; the training portal of the Global Organization for Learning, Education, and Training [70]; the Educational Resource Discovery Index [71]; and The Carpentries [72]) about standardizing and sharing training resources, and best practices for personalizing and customizing learning experiences.
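One way to realize this classification is sketched below, under our own assumed annotation scheme (course names and stage assignments are invented): each training offering is tagged with the KSA it supports, the entry stage a learner should already hold to benefit optimally, and the stage the training proposes to reach; matching learners to courses then becomes a simple lookup.

```python
# Hypothetical course annotations in MR-Bi terms: supported KSA, entry
# stage, and the stage the training proposes to bring learners to.
from dataclasses import dataclass

@dataclass
class TrainingAnnotation:
    course: str
    ksa: str
    entry_stage: str   # where learners should start to benefit optimally
    target_stage: str  # stage the training aims to bring them to

catalog = [
    TrainingAnnotation("Intro sequence analysis",      # invented course
                       "Identify and use appropriate analytical methods",
                       entry_stage="Beginner", target_stage="Apprentice"),
    TrainingAnnotation("Scientific writing workshop",  # invented course
                       "Communication",
                       entry_stage="Apprentice", target_stage="J1 Journeyman"),
]

def suitable_courses(learner_stages, catalog):
    """Courses whose entry stage matches the learner's current stage on
    the supported KSA."""
    return [a.course for a in catalog
            if learner_stages.get(a.ksa) == a.entry_stage]

print(suitable_courses({"Communication": "Apprentice"}, catalog))
```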
Conclusions
In recent years, concern about the growing computational skills gap amongst life scientists has prompted the articulation of core bioinformatics competencies, aiming to facilitate development of curricula able to deliver appropriate skills to learners. However, implementing competencies in curricula has proved problematic: this is partly because there are still disparate views on what it means to be a trained bioinformatician, and partly also because competencies are actually complex, multi-dimensional educational end-points, making it difficult to achieve a common understanding of how to deliver requisite training in practice (e.g., [65]; [56]). These problems have led some researchers to suggest refinements to bioinformatics competencies, including the identification of phases of competency, and the provision of guidance on the evidence required to assess whether a given competency has been acquired [20]; others have already revised their competency-based framework to include learning trajectories, charting the progression of individuals’ abilities through defined stages via milestones [28].
Cognisant of these issues, we have devised a Mastery Rubric—a formal framework that supports the development of specific KSAs, and provides a structured trajectory for achieving bioinformatics competencies. Importantly, it prioritizes the development of independent scientific reasoning and practice; it can therefore contribute to the cultivation of a next generation of bioinformaticians who are able to design rigorous, reproducible research, and critically analyze their and others’ work. The framework is inherently robust to new research or technology, because it is broadly content agnostic. It can be used to strengthen teaching and learning, and to guide both curriculum building and evaluation, and self-directed learning; any scientist, irrespective of prior experience or disciplinary background, can therefore use it to document their accomplishments and plan further professional development. Moreover, the MR-Bi can be used to support short training courses by helping instructors to focus on prerequisite knowledge and on learning objectives that are time-delimited. Specifics on how to accomplish these are the topics of our ongoing work.
Supporting information
S1 File [docx]
Cognitive Task Analysis Methodology (Table A) and Figure (Figure A) with examples (Text A), and full listing of all competencies (Table B).
References
1. MacLean M, Miles C. Swift action needed to close the skills gap in bioinformatics. Nature. 1999;401: 10. doi: 10.1038/43269 10485694
2. Brass A. Bioinformatics education—A UK perspective. Bioinformatics. 2000;16: 77–78. doi: 10.1093/bioinformatics/16.2.77 10842726
3. Pevzner P, Shamir R. Computing has changed biology–biology education must catch up. Science. 2009;325: 541–542. doi: 10.1126/science.1173876 19644094
4. Abeln S, Molenaar D, Feenstra KA, Hoefsloot HCJ, Teusink B, Heringa J. Bioinformatics and Systems Biology: bridging the gap between heterogeneous student backgrounds. Brief Bioinform. 2013;14: 589–598. doi: 10.1093/bib/bbt023 23603092
5. Libeskind-Hadas R, Bush E. A first course in computing with applications to biology. Brief Bioinform. 2013;14: 610–617. doi: 10.1093/bib/bbt005 23449003
6. Schneider MV, Jungck JR. Editorial: International, interdisciplinary, multi-level bioinformatics training and education. Brief Bioinform. 2013;14: 527. doi: 10.1093/bib/bbt064 24030777
7. Goodman AL, Dekhtyar A. Teaching Bioinformatics in Concert. PLoS Comput Biol. 2014;10: e1003896. doi: 10.1371/journal.pcbi.1003896 25411792
8. Rubinstein A, Chor B. Computational Thinking in Life Science Education. PLoS Comput Biol. 2014;10: e1003897. doi: 10.1371/journal.pcbi.1003897 25411839
9. Chang J. Core services: Reward bioinformaticians. Nature. 2015;520: 151–152. doi: 10.1038/520151a 25855439
10. Brazas MD, Blackford S, Attwood TK. Training: Plug gap in essential bioinformatics skills. Nature. 2017;544: 161. doi: 10.1038/544161c 28406196
11. Feldon DF, Jeong S, Peugh J, Roksa J, Maahs-Fladung C, Shenoy A, et al. Null effects of boot camps and short-format training for PhD students in life sciences. Proc Natl Acad Sci. 2017;114: 9854–9858. doi: 10.1073/pnas.1705783114 28847929
12. Attwood TK, Blackford S, Brazas MD, Davies A, Schneider MV. A global perspective on evolving bioinformatics and data science training needs. Brief Bioinform. 2019;20: 398–404. doi: 10.1093/bib/bbx100 28968751
13. Barone L, Williams J, Micklos D. Unmet needs for analyzing biological big data: A survey of 704 NSF principal investigators. PLoS Comput Biol. 2017;13: e1005858. doi: 10.1371/journal.pcbi.1005858 29131819
14. Brazas MD, Brooksbank C, Jimenez RC, Blackford S, Palagi PM, De Las Rivas J, et al. A global perspective on bioinformatics training needs. bioRxiv. 2017; doi: 10.1101/098996
15. Schneider MV, Madison G, Flannery P. Survey of Bioinformatics and Computational Needs in Australia 2016 [Internet]. figshare; 2016.
16. Tan TW, Lim SJ, Khan AM, Ranganathan S. A proposed minimum skill set for university graduates to meet the informatics needs and challenges of the “-omics” era. BMC Genomics. 2009;10(Suppl 3): S36. doi: 10.1186/1471-2164-10-S3-S36 19958501
17. Kulikowski CA, Shortliffe EH, Currie LM, Elkin PL, Hunter LE, Johnson TR, et al. AMIA Board white paper: Definition of biomedical informatics and specification of core competencies for graduate education in the discipline. J Am Med Informatics Assoc. 2012;19: 931–938. doi: 10.1136/amiajnl-2012-001053 22683918
18. Dinsdale E, Elgin SCR, Grandgenett N, Morgan W, Rosenwald A, Tapprich W, et al. NIBLSE: A Network for Integrating Bioinformatics into Life Sciences Education. CBE—Life Sci Educ. 2015;14: 1–4. doi: 10.1187/cbe.15-06-0123 26466989
19. Welch L, Lewitter F, Schwartz R, Brooksbank C, Radivojac P, Gaeta B, et al. Bioinformatics Curriculum Guidelines: Toward a Definition of Core Competencies. PLoS Comput Biol. 2014;10: e1003496. doi: 10.1371/journal.pcbi.1003496 24603430
20. Welch L, Brooksbank C, Schwartz R, Morgan SL, Gaeta B, Kilpatrick AM, et al. Applying, Evaluating and Refining Bioinformatics Core Competencies (An Update from the Curriculum Task Force of ISCB’s Education Committee). PLOS Comput Biol. 2016;12: e1004943. doi: 10.1371/journal.pcbi.1004943 27175996
21. Wilson Sayres MA, Hauser C, Sierk M, Robic S, Rosenwald AG, Smith TM, et al. Bioinformatics core competencies for undergraduate life sciences education. PLoS One. 2018;13: e0196878. doi: 10.1371/journal.pone.0196878 29870542
22. Centers for Disease Control and Prevention and University of Washington’s Center for Public Health Informatics. In: Competencies for Public Health Informaticians [Internet]. [cited 11 Jan 2016]. Available: http://www.cdc.gov/InformaticsCompetencies
23. Miner KR, Childers WK, Alperin M, Cioffi J, Hunt N. The MACH Model: From Competencies to Instruction and Performance of the Public Health Workforce. Public Health Rep. 2005;120(Suppl 1): 9–15. doi: 10.1177/00333549051200s104 16025702
24. Carter KF, Kaiser KL, O’Hare PA, Callister LC. Use of PHN competencies and ACHNE essentials to develop teaching-learning strategies for generalist C/PHN curricula. Public Health Nurs. 2006;23: 146–160. doi: 10.1111/j.1525-1446.2006.230206.x 16684189
25. Fernandez N, Dory V, Ste-Marie LG, Chaput M, Charlin B, Boucher A. Varying conceptions of competence: An analysis of how health sciences educators define competence. Med Educ. 2012;46: 357–365. doi: 10.1111/j.1365-2923.2011.04183.x 22429171
26. Bennett CJ, Walston SL. Improving the use of competencies in public health education. Am J Public Health. 2015;105: S65–S67. doi: 10.2105/AJPH.2014.302329 25706022
27. Caverzagie KJ, Nousiainen MT, Ferguson PC, ten Cate O, Ross S, Harris KA, et al. Overarching challenges to the implementation of competency-based medical education. Med Teach. 2017;39: 588–593. doi: 10.1080/0142159X.2017.1315075 28598747
28. Holmboe ES, Edgar L, Hamstra S. The Milestones Guidebook [Internet]. 2016.
29. Englander R, Frank JR, Carraccio C, Sherbino J, Ross S, Snell L. Toward a shared language for competency-based medical education. Med Teach. 2017;39: 582–587. doi: 10.1080/0142159X.2017.1315066 28598739
30. Tractenberg RE. The Mastery Rubric: A tool for curriculum development and evaluation in higher, graduate/post-graduate, and professional education. SocArXiv. 2017; doi: 10.31235/osf.io/qd2ae
31. Tractenberg RE, Lindvall JM, Attwood TK, Via A. Guidelines for curriculum development in higher education: How learning outcomes drive all decision-making (In preparation). 2019;
32. Tractenberg RE. Achieving sustainability and transfer with short term learning experiences. SocArXiv. 2018; doi: 10.31235/osf.io/jsfe9
33. Bloom BS, Englehard MD, Furst EJ, Hill WH. Taxonomy of educational objectives: The classification of educational goals: Handbook I, cognitive domain. 2nd ed. New York: David McKay Co Inc.; 1956.
34. Anderson LW, Krathwohl DR, Airasian PW, Cruikshank KA, Mayer RE, Pintrich PR, et al. A Taxonomy for Learning, Teaching and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. White Plains, NY: Addison Wesley Longman; 2001.
35. Stevens DD, Levi AJ. Introduction to Rubrics. 2nd ed. Stylus Publishing; 2013.
36. Egan K, Schneider C, Ferrara S. Performance Level Descriptors. In: Cizek GJ, editor. Setting Performance Standards: Foundations, Methods, and Innovations. 2nd ed. Routledge; 2012. pp. 79–106.
37. Clark R, Feldon D, van Merriënboer J, Yates K, Early S. Cognitive Task Analysis. In: Spector JM, Merrill MD, Elen J, Bishop MJ, editors. Handbook of research on educational communications and technology. 3rd ed. Mahwah, NJ: Lawrence Earlbaum Associates; 2008. pp. 577–593.
38. Ogilvie S. The Economics of Guilds. J Econ Perspect. 2014;28: 169–192. doi: 10.1257/jep.28.4.169
39. Cizek GJ. An introduction to contemporary standard setting: concepts, characteristics, and contexts. In: Cizek GJ, editor. Setting Performance Standards. 2nd ed. New York: Routledge; 2012. pp. 3–14.
40. Norcini JJ. Setting standards on educational tests. Med Educ. 2003;37: 464–469. doi: 10.1046/j.1365-2923.2003.01495.x 12709190
41. Tractenberg RE. Degrees of freedom analysis in educational research and decision-making: Leveraging qualitative data to promote excellence in bioinformatics training and education. Brief Bioinform. 2019;20: 416–42. doi: 10.1093/bib/bbx106 30908585
42. Campbell DT. III. “Degrees of Freedom” and the Case Study. Comp Polit Stud. 1975;8: 178–193. doi: 10.1177/001041407500800204
43. Knowles MS, Holton EF III, Swanson RA. The Adult Learner. 6th ed. New York: Elsevier/Butterworth-Heinemann; 2005.
44. Wild CJ, Pfannkuch M. Statistical thinking in empirical enquiry. Int Stat Rev. 1999;67: 223–248. doi: 10.1111/j.1751-5823.1999.tb00442.x
45. Bishop G, Talbot M. Statistical thinking for novice researchers in the biological sciences. In: Batanero C, editor. Training researchers in the use of statistics. Granada, Spain: International Association for Statistical Education International Statistical Institute; 2001. pp. 215–226.
46. Tractenberg R. How the Mastery Rubric for Statistical Literacy Can Generate Actionable Evidence about Statistical and Quantitative Learning Outcomes. Educ Sci. 2016;7: 3. doi: 10.3390/educsci7010003
47. Pearson WR. Training for bioinformatics and computational biology. Bioinformatics. 2001;17: 761–762. doi: 10.1093/bioinformatics/17.9.761 11590093
48. Tractenberg RE, Gushta MM, Weinfeld JM. The Mastery Rubric for Evidence-Based Medicine: Institutional Validation via Multidimensional Scaling. Teach Learn Med. 2016;28: 152–165. doi: 10.1080/10401334.2016.1146599 27064718
49. Chan TM, Baw B, McConnell M, Kulasegaram K. Making the McMOST out of Milestones and Objectives: Reimagining Standard Setting Using the McMaster Milestones and Objectives Stratification Technique. AEM Educ Train. 2016;1: 48–54. doi: 10.1002/aet2.10008 30051009
50. Kingston N, Tiemann G. Setting Performance Standards on Complex Assessments: The Body of Work method. In: Cizek GJ, editor. Setting Performance Standards: Foundations, Methods, and Innovations. 2nd ed. New York: Routledge; 2012. pp. 201–223.
51. Plake B, Cizek G. Variations on a theme: The Modified Angoff, Extended Angoff, and Yes/No standard setting methods. In: Cizek GJ, editor. Setting Performance Standards: Foundations, Methods, and Innovations. New York: Routledge; 2012. pp. 181–199.
52. Kane M. Validating the performance standards associated with passing scores. Rev Educ Res. 1994;64: 425–461. doi: 10.3102/00346543064003425
53. Messick S. The interplay of evidence and consequences in the validation of performance assessments. Educ Res. 1994;23: 13–23.
54. Kane MT. Validating the Interpretations and Uses of Test Scores. J Educ Meas. 2013;50: 1–73. doi: 10.1111/jedm.12000
55. Magana AJ, Taleyarkhan M, Alvarado DR, Kane M, Springer J, Clase K. A survey of scholarly literature describing the field of bioinformatics education and bioinformatics educational research. CBE Life Sci Educ. 2014;13: 573–738.
56. Smith DR. Bringing bioinformatics to the scientific masses. EMBO Rep. 2018;19: e46262. doi: 10.15252/embr.201846262 29724753
57. Tractenberg RE, Wilkinson M, Bull A, Pellathy T, Riley J. Designing a developmental trajectory supporting the evaluation and achievement of competencies: a case study with a Mastery Rubric for the advanced practice nursing curriculum. PLoS One. 2019;14(11): e0224593. doi: 10.1371/journal.pone.0224593 31697730
58. Andrade HG. Using Rubrics to Promote Thinking and Learning. Educ Leadersh. 2000;57: 13–18.
59. Jonsson A, Svingby G. The use of scoring rubrics: Reliability, validity and educational consequences. Educ Res Rev. 2007;2: 130–144. doi: 10.1016/j.edurev.2007.05.002
60. Lipnevich AA, McCallen LN, Miles KP, Smith JK. Mind the gap! Students’ use of exemplars and detailed rubrics as formative assessment. Instr Sci. 2014;42: 539–559. doi: 10.1007/s11251-013-9299-9
61. Sullivan RS. The Competency-Based Approach to Training. Strategy Paper No 1. Baltimore, Maryland; 1995.
62. Messick S. Validity of psychological assessment: Validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. Am Psychol. 1995;50: 741–749. doi: 10.1037/0003-066X.50.9.741
63. Kane M. Certification Testing as an Illustration of Argument-Based Validation. Meas Interdiscip Res Perspect. 2004;2: 135–170. doi: 10.1207/s15366359mea0203_1
64. Shaffer CD, Alvarez C, Bailey C, Barnard D, Bhalla S, Chandrasekaran C, et al. The genomics education partnership: successful integration of research into laboratory classes at a diverse group of undergraduate institutions. CBE Life Sci Educ. 2010;9: 55–69. doi: 10.1187/09-11-0087 20194808
65. Mulder N, Schwartz R, Brazas MD, Brooksbank C, Gaeta B, Morgan SL, et al. The development and application of bioinformatics core competencies to improve bioinformatics training and education. PLoS Comput Biol. 2018;14: e1005772. doi: 10.1371/journal.pcbi.1005772 29390004
66. Matser V. BioExcel Deliverable 4.2—Competency framework, mapping to current training & initial training plan [Internet]. 2016. doi: 10.5281/zenodo.264231
67. Attwood T, Beard N, Nenadic A, Bacall F, Thurston M. TeSS–The life science training portal. F1000Research. 2018;7: 250 (poster) [version 1; not peer reviewed]. doi: 10.7490/f1000research.1115282.1
68. Larcombe L, Hendricusdottir R, Attwood TK, Bacall F, Beard N, Bellis LJ, et al. ELIXIR-UK role in bioinformatics training at the national level and across ELIXIR [version 1; peer review: 4 approved, 1 approved with reservations]. F1000Research. 2017;6: 952. doi: 10.12688/f1000research.11837.1 28781748
69. Profiti G, Jimenez RC, Zambelli F, Mičetić I, Licciulli VF, Chiara M, et al. Using community events to increase quality and adoption of standards: the case of Bioschemas [version 1; not peer reviewed]. F1000Research. 2018;7: 1696. doi: 10.7490/f1000research.1116233.1
70. Corpas M, Jimenez RC, Bongcam-Rudloff E, Budd A, Brazas MD, Fernandes PL, et al. The GOBLET training portal: A global repository of bioinformatics training materials, courses and trainers. Bioinformatics. 2015;31: 140–142. doi: 10.1093/bioinformatics/btu601 25189782
71. Van Horn JD, Fierro L, Kamdar J, Gordon J, Stewart C, Bhattrai A, et al. Democratizing data science through data science training. Pac Symp Biocomput. 2018;23: 292–303. doi: 10.1142/9789813235533_0027 29218890
72. Teal TK, Cranston KA, Lapp H, White E, Wilson G, Ram K, et al. Data Carpentry: Workshops to Increase Data Literacy for Researchers. Int J Digit Curation. 2015;10: 292–303. doi: 10.2218/ijdc.v10i1.351