This page is part of the Evidence Based Medicine on FHIR Implementation Guide (v1.0.0-ballot: STU1 Ballot 1) based on FHIR (HL7® FHIR® Standard) v5.0.0. For a full list of available versions, see the Directory of published versions.
Active as of 2022-11-04
<CodeSystem xmlns="http://hl7.org/fhir">
<id value="181513"/>
<meta>
<versionId value="25"/>
<lastUpdated value="2023-12-08T13:35:47.940Z"/>
</meta>
<text>
<status value="generated"/>
<div xmlns="http://www.w3.org/1999/xhtml"><p><b>Properties</b></p><p><b>This code system defines the following properties for its concepts</b></p><table class="grid"><tr><td><b>Code</b></td><td><b>Type</b></td><td><b>Description</b></td></tr><tr><td>comment</td><td>string</td><td>Comment for application</td></tr><tr><td>editors</td><td>string</td><td>Term/Definition Editors</td></tr><tr><td>approval</td><td>string</td><td>Expert Working Group Agreement</td></tr><tr><td>negative-vote</td><td>string</td><td>Expert Working Group Disagreement</td></tr><tr><td>expert-comments</td><td>string</td><td>Expert Working Group Comments</td></tr><tr><td>external-definitions</td><td>string</td><td>Externally Mapped Definitions</td></tr><tr><td>open-for-voting</td><td>dateTime</td><td>Open for Voting</td></tr><tr><td>change-for-vote</td><td>string</td><td>Proposed Change for Future Vote</td></tr><tr><td>multiple-parents</td><td>string</td><td>Has more than one parent term (IS-A relationship)</td></tr><tr><td>statistical-purpose</td><td>string</td><td>Statistical purpose</td></tr></table><p><b>Concepts</b></p><p>This case-insensitive code system <code>https://fevir.net/resources/CodeSystem/181513</code> defines the following codes in an Is-A hierarchy:</p><table class="codes"><tr><td><b>Lvl</b></td><td style="white-space:nowrap"><b>Code</b></td><td><b>Display</b></td><td><b>Definition</b></td><td><b>comment</b></td><td><b>editors</b></td><td><b>approval</b></td><td><b>negative-vote</b></td><td><b>expert-comments</b></td><td><b>external-definitions</b></td><td><b>open-for-voting</b></td><td><b>change-for-vote</b></td><td><b>multiple-parents</b></td><td><b>statistical-purpose</b></td><td><b>Finnish (fi)</b></td></tr><tr><td>1</td><td style="white-space:nowrap">SEVCO:01000<a name="181513-SEVCO.5801000"> </a></td><td>study design</td><td>A plan specification for how and what kinds of data will be gathered as part of an investigation which may produce testable explanations, conclusions and predictions or test a hypothesis.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Asiyah Lin, Mario Tristan, Neeraj Ojha</td><td>9/9 as of 4/26/2021: Eric Harvey, Bhagvan Kommadi, KM Saif-Ur-Rahman, Paola Rosati, Jesús López-Alcalde, Tatyana Shamliyan, Sorana D. Bolboacă, Asiyah Lin, Eric Au</td><td>2021-04-12 Vote 9-2 on "Study design=A plan specification for how and what kinds of data are gathered or used to generate or test a hypothesis", Bhagvan Kommadi, Jesús López-Alcalde, Sorana D. Bolboacă, Tatyana Shamliyan, Asiyah Lin, Philippe Rocca-Serra, Eric Au, Alejandro Piscoya, Harold Lehmann, KM Saif-Ur-Rahman, Eric Harvey
2021-04-06 vote 8-1 on "Study Design = A plan specification for how and what kinds of data will be gathered as part of an investigation to generate or test a hypothesis" by Tatyana Shamliyan, Paola Rosati, Mario Tristan, Bhagvan Kommadi, Jesús López-Alcalde, Eric Harvey, KM Saif-Ur-Rahman, Asiyah Lin, Brian S. Alper</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01001<a name="181513-SEVCO.5801001"> </a></td><td>interventional research</td><td>A study design in which an independent variable (an exposure or intervention) is prospectively assigned or modified by the investigator to evaluate a response in the dependent variable (an effect or outcome).</td><td>We acknowledge that interventional study design and interventional study may not be exact synonyms of interventional research, but interventional research could be used to encompass both design and implementation of the design</td><td>Mario Tristan, Joanne Dehnbostel, Harold Lehmann, Khalid Shahin, Brian S. Alper</td><td>12/12 as of 5/31/2021: Eric Harvey, Bhagvan Kommadi, Brian Alper, Sebastien Bailly, Alejandro Piscoya, Harold Lehmann, KM Saif-Ur-Rahman, Paola Rosati, Sorana D. Bolboacă, Asiyah Lin, Leo Orozco, Erfan Shamsoddin</td><td>2021-05-17 vote 6-2 on "Interventional research = In a prospective study, an independent variable is manipulated or assigned by the investigator to evaluate a response or outcome (the dependent variable)." by Eric Harvey, Bhagvan Kommadi, Paola Rosati, KM Saif-Ur-Rahman, Ahmad Sofi-Mahmudi, Jesus Lopez-Alcalde, Sorana D. Bolboacă, Harold Lehmann, 2021-05-24 vote 10-1 on Interventional research="A study design in which an independent variable (an exposure or intervention) is prospectively assigned or modified by the investigator to evaluate a response in the dependent variable (an effect or outcome)." by Alejandro Piscoya, Philippe Rocca-Serra, KM Saif-Ur-Rahman, Eric Harvey, Harold Lehmann, Bhagvan Kommadi, Sorana D. Bolboacă, Jesús López-Alcalde, Paola Rosati, Tatyana Shamliyan, Brian Alper</td><td>I would avoid the term prospective study, as this term is ambiguous. Suggested change to "A study in whichi the independent variable is prospectively manipulated or assigned by the invesigator…" Manipulate = to control, manipulate or influence suggestion to delete "the dependent variable" which mixes language of analysis vs. design with "response" 5-24-2021 No major disagreement with the definition but uneasy to have 'intervention study' as (unspecified) synonym as doing so convey that a plan (the study design) is the same as the execution of the plan (the study). The same applies to 'Primary research...) I think that we need to clarify the goals: Experiments examine cause-and-effect relationship by measuring outcomes when a particular factor (exposure, intervention, independent variable) is manipulated and controlled during and after experiment (inference). I think that we should clarify the subjects of experiments: consent people or animals</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01003<a name="181513-SEVCO.5801003"> </a></td><td>randomized assignment</td><td>An interventional study design in which an independent variable (an exposure or intervention) is prospectively assigned or modified by random chance to separate groups.</td><td/><td>Brian S. 
Alper, Joanne Dehnbostel, Mario Tristan, Kenneth Wilkins, Erfan Shamsoddin, Ellen Jepson</td><td>8/8 as of 7/19/2021: Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana d. Bolboaca, Janice Tufte</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01006<a name="181513-SEVCO.5801006"> </a></td><td>simple randomization</td><td>A randomized assignment in which each participant has the same prespecified likelihood of being assigned to a group as all other participants, independent of the assignment of any other participant.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Mario Tristan, Kenneth Wilkins, Erfan Shamsoddin, Ellen Jepson</td><td>8/8 as of 7/19/2021: Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana d. Bolboaca, Janice Tufte</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01007<a name="181513-SEVCO.5801007"> </a></td><td>stratified randomization</td><td>A randomized assignment in which participants are stratified into groups based on prognostic variables and then randomized into balanced treatment groups</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Khalid Shahin</td><td>8/8 as of 7/19/2021: Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana d. Bolboaca, Janice Tufte</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01008<a name="181513-SEVCO.5801008"> </a></td><td>block randomization</td><td>A randomized assignment in which a pre-specified number of subjects is assigned to a block containing the same pre-specified ratio of group assignments in random order.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Khalid Shahin</td><td>7/7 as of 7/26/2021: Mario Tristan, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Leo Orozco, Janice Tufte</td><td>2021-07-19 vote 7-1 on "A randomized assignment in which a pre-specified number of subjects is assigned to a block containing the same pre-specified number of balanced group assignments in random order" by Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana d. Bolboaca, Janice Tufte</td><td>I think I'm hung up on the word "balanced". Does allocation in block design need to be balanced? Couldn't a block design allocate subjects to treatment arms in a 2:1, or other "unbalanced" ratio?</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01009<a name="181513-SEVCO.5801009"> </a></td><td>adaptive randomization</td><td>A randomized assignment in which a participant’s group assignment probability is adjusted based on any factor such that the likelihood of assignment is not the same for all participants.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Mario Tristan, Kenneth Wilkins, Erfan Shamsoddin, Ellen Jepson</td><td>9/9 as of 8/9/2021: Erfan Shamsoddin, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Joanne Dehnbostel, Alejandro Piscoya</td><td>2021-07-19 vote 7-1 on "A randomized assignment in which a participant’s group assignment probability is adjusted based on any factor such that the likelihood of assignment is not the same for all participants." 
by Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana d. Bolboaca, Janice Tufte, 2021-07-26 vote 6-1 by Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Janice Tufte, Mario Tristan</td><td>I deem this kind of adaptation could determine conflict of interests or a new kind of bias. I disagree with adding an adaptive randomization as a new term 7-26-21 comment: Again, why and for what you wish to maintain this term? I think the term adaptive randomization risks a severe selection bias. In ethical terms, I deem there is no justification to proceed with such a methodology in clinical trials.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01005<a name="181513-SEVCO.5801005"> </a></td><td>non-randomized assignment</td><td>An interventional study design in which an independent variable (an exposure or intervention) is prospectively assigned or modified by methods other than random chance to separate groups.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Michael Panzer, Janice Tufte, Erfan Shamsoddin, Ellen Jepson, Khalid Shahin</td><td>9/9 as of 8/9/2021: Erfan Shamsoddin, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Joanne Dehnbostel, Alejandro Piscoya</td><td>2021-07-19 vote 6-2 on "An interventional study design in which an independent variable (an exposure or intervention) is prospectively assigned or modified by methods other than random chance to separate groups." by Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana D. Bolboacă, Janice Tufte, 2021-07-26 vote 6-1 by Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Janice Tufte, Mario Tristan</td><td>In this case, if the patients choose which is the arm they want to be in it would be ok to insert this term. I presume therefore that if the choice is made by the researchers they offer a clear justification for it in the protocol As written, this category would include all quasi-randomized designs. If this is the intent, fine. If this was not the intent, perhaps we could change "..randomized.." to "..randomized or quasi-randomized.." 7-26-21 comment: We usually have started the definitions by saying "A xxx assignment that..." (see previous ones in this page). That is, we define the assignment. However, for "Non-Randomized Assignment" we start by saying "An interventional study design..." I propose to describe the "assignment" (avoid starting by defining the study design itself)</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01004<a name="181513-SEVCO.5801004"> </a></td><td>quasi-randomized assignment</td><td>An interventional study design with a method of allocation that is not limited to random chance but is intended to produce similar baseline groups for experimentation.</td><td>Quasi-random methods of allocation include allocation by alternate order of entry, date of birth, day of the week, month of the year, or medical record number</td><td>Brian S. 
Alper, Joanne Dehnbostel, Michael Panzer, Janice Tufte, Erfan Shamsoddin, Ellen Jepson, Khalid Shahin</td><td>7/7 as of 7/26/2021: Mario Tristan, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Leo Orozco, Janice Tufte</td><td>2021-07-19 vote 6-2 on "An interventional study design with a method of allocation that is not limited to random chance but is intended to produce similar baseline groups for experimentation." by Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana d. Bolboaca, Janice Tufte</td><td>Which is the difference between this quasi-randomized assignment and the adaptive randomization? It is unclear why we should insert these two terms in the glossary I would specify in the definition that quasi-randomisation is a non-random method of allocation</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01029<a name="181513-SEVCO.5801029"> </a></td><td>clinical trial</td><td>Interventional research in which one or more healthcare-related actions (i.e., a diagnostic, prognostic, therapeutic, preventive or screening method or intervention) is evaluated for effects on health-related biomedical or behavioral processes and/or outcomes.</td><td>Some definitions for "clinical trial" include human subject research for effects on human health outcomes. The term "human" was not added to this definition because a study design with animal subjects for effects on animal health outcomes to inform veterinary care would be considered a clinical trial. However, a study design with animal subjects to inform human health outcomes would not be considered a clinical trial.</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte, Harold Lehmann, Paul Whaley</td><td>2021-12-14 vote 6-0 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Robin Ann Yurk, Janice Tufte, Paul Whaley, Brian S. Alper</td><td>2021-11-30 vote 7-1 by Alejandro Piscoya, Mario Tristan, Robin Ann Yurk, Muhammad Afzal, Paola Rosati, Paul Whaley, Janice Tufte, Jesus Lopez-Alcalde
2021-12-07 vote 4-1 by Mario Tristan, Robin Ann Yurk, Janice Tufte, Joanne Dehnbostel, CP Ooi</td><td>2021-11-30 comments: (We should include the classical definition for Phase IV Field Trials of Health Interventions: A Toolbox. 3rd edition.
Smith PG, Morrow RH, Ross DA, editors.
Oxford (UK): OUP Oxford; 2015 Jun 1. https://www.ncbi.nlm.nih.gov/books/NBK305508/), Instead of "methods" I would use the term "interventions". I also miss the term "prognostic" as they are not diagnostic or screening. Besides, it would be important to highlight that the clinical trial is done in humans
2021-12-07 comment: A clinical trial is a type of research that studies new tests and treatments and evaluates their effects on human health outcomes. The medical intervention can be drugs, cells and other biological products, surgical procedures, radiological procedures, devices, behavioural treatments and preventive care.</td><td>NIH Clinical Trial Definition = A research study[1] in which one or more human subjects[2] are prospectively assigned[3] to one or more interventions[4] (which may include placebo or other control) to evaluate the effects of those interventions on health-related biomedical or behavioral outcomes.[5]
[4]An intervention is defined as a manipulation of the subject or subject’s environment for the purpose of modifying one or more health-related biomedical or behavioral processes and/or endpoints. Examples include: drugs/small molecules/compounds; biologics; devices; procedures (e.g., surgical techniques); delivery systems (e.g., telemedicine, face-to-face interviews); strategies to change health-related behavior (e.g., diet, cognitive therapy, exercise, development of new habits); treatment strategies; prevention strategies; and, diagnostic strategies.
from https://grants.nih.gov/grants/guide/notice-files/NOT-OD-15-015.html</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01041<a name="181513-SEVCO.5801041"> </a></td><td>pragmatic clinical trial</td><td>A clinical trial conducted under conditions of routine clinical practice.</td><td>"Pragmatic trials are designed to evaluate the effectiveness of interventions in real-life routine practice conditions, whereas explanatory trials aim to test whether an intervention works under optimal situations. The pragmatic trial, on the other hand, is designed to test interventions in the full spectrum of everyday clinical settings in order to maximize applicability and generalizability. The research question under investigation is whether an intervention actually works in real life." (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3181997/)</td><td>Brian S. Alper, Joanne Dehnbostel, Paul Whaley, Janice Tufte, Kenneth Wilkins, Harold Lehmann</td><td>2021-12-07 vote 5-0 by Mario Tristan, Robin Ann Yurk, Janice Tufte, CP Ooi, Joanne Dehnbostel</td><td>2021-11-30 vote 5-1 by Alejandro Piscoya, Robin Ann Yurk, Muhammad Afzal, Paul Whaley, Janice Tufte, Jesus Lopez-Alcalde</td><td>2021-11-30 comments: (The definition in the current form is fine however the last part may be thought like;
where "everyday" means day-to-day clinical practice wherein the conditions are not modified for the conduct of the research.), Suggested alternative: = A clinical trial designed to test the effects of an intervention under everyday conditions, where "everyday conditions" means clinical conditions are not modified for the conduct of the research</td><td>NCIt: Pragmatic Trial = A study designed to test the effectiveness of an intervention in a broad routine clinical practice. Term used to describe a clinical study designed to examine the benefits of a product under real world conditions.
UMLS: Works about randomized clinical trials that compare interventions in clinical settings and which look at a range of effectiveness outcomes and impacts.
CDISC Glossary: pragmatic trial = Term used to describe a clinical study designed to examine the benefits of a product under real world conditions.
EDDA: pragmatic clinical trial = Randomized clinical trials that compare interventions in clinical settings and which look at a range of effectiveness outcomes and impacts. [MeSH_2015]
SCO: pragmatic trial = A study designed to test the effectiveness of an intervention in a broad routine clinical practice.
"Pragmatic trials are designed to evaluate the effectiveness of interventions in real-life routine practice conditions, whereas explanatory trials aim to test whether an intervention works under optimal situations. The pragmatic trial, on the other hand, is designed to test interventions in the full spectrum of everyday clinical settings in order to maximize applicability and generalizability. The research question under investigation is whether an intervention actually works in real life." (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3181997/)</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01038<a name="181513-SEVCO.5801038"> </a></td><td>expanded access study</td><td>A clinical trial that provides a means for obtaining an experimental drug or device for patients who are not adequately treated by existing therapy, who do not meet the eligibility criteria for enrollment, or who are otherwise unable to participate in another clinical study.</td><td>Expanded Access studies include individual-patient investigational new drug (IND), treatment IND, compassionate use, emergency use or continued access.</td><td>Brian S. Alper, Joanne Dehnbostel, Harold Lehmann</td><td>2022-02-15 vote 10-0 by Paul Whaley, Andrew Beck, Brian S. Alper, Paola Rosati, Robin Ann Yurk, Janice Tufte, Jesus Lopez-Alcalde, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya</td><td/><td>2022-02-15 comment: Define IND acronym under comment for application under individual patient IND, treatment IND</td><td>from CTO:
Expanded Access Study
Studies that provide a means for obtaining an experimental drug or device for patients who are not adequately treated by existing therapy, who do not meet the eligibility criteria for enrollment, or who are otherwise unable to participate in another clinical study. Expanded Access studies include individual-patient IND, treatment IND, compassionate use, emergency use or continued access. An investigational drug product (including biological product) available through expanded access for patients who do not qualify for enrollment in a clinical trial. Expanded Access includes all expanded access types under section 561 of the Federal Food, Drug, and Cosmetic Act: (1) for individual patients, including emergency use; (2) for intermediate-size patient populations; and (3) under a treatment IND or treatment protocol.
from NCIt:
Expanded Access Study
Studies that provide a means for obtaining an experimental drug or device for patients who are not adequately treated by existing therapy, who do not meet the eligibility criteria for enrollment, or who are otherwise unable to participate in another clinical study. Expanded Access studies include individual-patient IND, treatment IND, compassionate use, emergency use or continued access.
also Compassionate Treatment (compassionate use trial, expanded access trial, pre-approval access)
Providing experimental therapies to very sick individuals even though they don't meet the criteria for inclusion in a trial.
A way to provide an investigational therapy to a patient who is not eligible to receive that therapy in a clinical trial, but who has a serious or life-threatening illness for which other treatments are not available. Compassionate use trials allow patients to receive promising but not yet fully studied or approved cancer therapies when no other treatment option exists.
A potential pathway for a patient with an immediately life-threatening condition or serious disease or condition to gain access to an investigational medical product (drug, biologic, or medical device) for treatment outside of clinical trials when no comparable or satisfactory alternative therapy options are available. NOTE: The intent is treatment, as opposed to research. Individual, Intermediate-size, and Widespread Use Expanded Access, also Emergency IND, are all programs administered under FDA guidelines. Additionally, the US Right-to-Try Act, which is independent of FDA, expands access. [FDA Expanded Access: Information for Physicians]
from EDDA:
compassionate use trial (expanded access trial, compassionate treatment)
Providing experimental therapies to very sick individuals even though they don't meet the criteria for inclusion in a trial. [NCI 2014_12E]
Providing an investigational therapy to a patient who is not eligible to receive that therapy in a clinical trial, but who has a serious or life-threatening illness for which other treatments are not available. Compassionate use trials allow patients to receive promising but not yet fully studied or approved therapies when no other treatment option exists. Also called expanded access trial. [MeSH 2014_2014_02_10]
shared as a comment: Expanded access is the use of an investigational new drug, biologics, and medical devices used to diagnose, monitor, or treat patients with serious diseases or conditions for which there are no comparable or satisfactory therapy options available outside of clinical trials. (USA FDA)</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01030<a name="181513-SEVCO.5801030"> </a></td><td>phase 1 trial</td><td>A clinical trial to gather initial evidence in humans to support further investigation of an intervention.</td><td>Phase 1 trials are often the first step in testing a new treatment in humans and may include safety assessment, measurement of metabolism and pharmacologic actions of a drug in humans, or the side effects associated with increasing doses. Phase 1 studies often include between 20 and 80 subjects, and often involve healthy subjects.</td><td>Brian S. Alper, Paul Whaley, Harold Lehmann, Joanne Dehnbostel</td><td>2022-01-11 vote 7-0 by Harold Lehmann, Jesus Lopez-Alcalde, Mario Tristan, janice tufte, Paul Whaley, Andrew Beck, Robin Ann Yurk</td><td>2022-01-04 vote 5-2 by Robin Ann Yurk, Harold Lehmann, janice tufte, Paola Rosati, C P Ooi, Paul Whaley, Joanne Dehnbostel</td><td>2022-01-04 comments: Perhaps adding the following may improve the clarity "It may include testing the best way to give a new treatment (for example, by mouth, infusion into a vein, or injection)".
"providing the initial investigation" sounds a bit vague compared to the other trial phase definitions. Also, can a trial really "provide an investigation"? Maybe suggest changing to "in which xxx is investigated", where "xxx" is a tighter definition of what "the initial" is referring to.
2022-01-11 comment: I would suggest not adding how many subjects are typically involved, maybe state that these usually have very small sample sizes. Unfortunately, sample sizes have decreased over time. https://bmjopen.bmj.com/content/11/12/e053377</td><td>https://www.ecfr.gov/current/title-21/chapter-I/subchapter-D/part-312/subpart-B/section-312.21 is the US Code of Federal Regulations Title 21 (Food and Drugs) Chapter I Subchapter D Part 312 Subpart B § 312.21 and includes:
§ 312.21 Phases of an investigation.
An IND may be submitted for one or more phases of an investigation. The clinical investigation of a previously untested drug is generally divided into three phases. Although in general the phases are conducted sequentially, they may overlap. These three phases of an investigation are a[sic] follows:
....
Phase 1.
(1) Phase 1 includes the initial introduction of an investigational new drug into humans. Phase 1 studies are typically closely monitored and may be conducted in patients or normal volunteer subjects. These studies are designed to determine the metabolism and pharmacologic actions of the drug in humans, the side effects associated with increasing doses, and, if possible, to gain early evidence on effectiveness. During Phase 1, sufficient information about the drug's pharmacokinetics and pharmacological effects should be obtained to permit the design of well-controlled, scientifically valid, Phase 2 studies. The total number of subjects and patients included in Phase 1 studies varies with the drug, but is generally in the range of 20 to 80.
(2) Phase 1 studies also include studies of drug metabolism, structure-activity relationships, and mechanism of action in humans, as well as studies in which investigational drugs are used as research tools to explore biological phenomena or disease processes.
from CTO:
Phase I trial (phase I study, early-stage clinical trial, phase I protocol, phase I clinical trial, trial phase 1)
A clinical research protocol designed to test a new biomedical intervention in a small group of people for the first time. A Phase I trial can be to establish the toxicity of a new treatment with escalating intensity of the treatment administered and/or to determine the side effects of a new treatment for a particular indication in subjects. Includes initial studies to determine the metabolism and pharmacologic actions of drugs in humans, the side effects associated with increasing doses, and to gain early evidence of effectiveness; may include healthy participants and/or patients.
The initial introduction of an investigational new drug into humans. Phase 1 studies are typically closely monitored and may be conducted in patients or normal volunteer subjects. NOTE: These studies are designed to determine the metabolism and pharmacologic actions of the drug in humans, the side effects associated with increasing doses, and, if possible, to gain early evidence on effectiveness. During Phase 1, sufficient information about the drug's pharmacokinetics and pharmacological effects should be obtained to permit the design of well-controlled, scientifically valid, Phase 2 studies. The total number of subjects and patients included in Phase I studies varies with the drug, but is generally in the range of 20 to 80. Phase 1 studies also include studies of drug metabolism, structure-activity relationships, and mechanism of action in humans, as well as studies in which investigational drugs are used as research tools to explore biological phenomena or disease processes. [After FDA CDER Handbook, ICH E8] (CDISC glossary)
The first step in testing a new treatment in humans. These studies test the best way to give a new treatment (for example, by mouth, intravenous infusion, or injection) and the best dose. The dose is usually increased a little at a time in order to find the highest dose that does not cause harmful side effects. Because little is known about the possible risks and benefits of the treatments being tested, phase I trials usually include only a small number of patients who have not been helped by other treatments.
The initial introduction of an investigational new drug into humans. Phase 1 studies are typically closely monitored and may be conducted in patients or normal volunteer subjects. NOTE: These studies are designed to determine the metabolism and pharmacologic actions of the drug in humans, the side effects associated with increasing doses, and, if possible, to gain early evidence on effectiveness. During Phase 1, sufficient information about the drug's pharmacokinetics and pharmacological effects should be obtained to permit the design of well-controlled, scientifically valid Phase 2 studies. The total number of subjects and patients included in Phase 1 studies varies with the drug, but is generally in the range of 20 to 80. Phase 1 studies also include studies of drug metabolism, structure-activity relationships, and mechanism of action in humans, as well as studies in which investigational drugs are used as research tools to explore biological phenomena or disease processes. [after FDA CDER handbook, ICH E8]
from SCO:
phase I trial not independently defined
from NCIt:
same as CTO
from OCRe:
A Phase 1 trial includes initial studies to determine the metabolism and pharmacologic actions of drugs in humans, the side effects associated with increasing doses, and to gain early evidence of effectiveness; may include healthy participants and/or patients.
from EDDA:
A clinical research protocol designed to test a new biomedical intervention in a small group of people for the first time. A Phase I trial can be to establish the toxicity of a new treatment with escalating intensity of the treatment administered and/or to determine the side effects of a new treatment for a particular indication in subjects. [NCI 2014_12E]
Studies performed to evaluate the safety of diagnostic, therapeutic, or prophylactic drugs, devices, or techniques in healthy subjects and to determine the safe dosage range (if appropriate). These tests also are used to determine pharmacologic and pharmacokinetic properties (toxicity, metabolism, absorption, elimination, and preferred route of administration). They involve a small number of persons and usually last about 1 year. This concept includes phase I studies conducted both in the U.S. and in other countries. [MeSH 2014_2014_02_10]
from INTERNATIONAL COUNCIL FOR HARMONISATION OF TECHNICAL REQUIREMENTS FOR PHARMACEUTICALS FOR HUMAN USE (ICH HARMONISED GUIDELINE) GENERAL CONSIDERATIONS FOR CLINICAL STUDIES E8(R1) https://database.ich.org/sites/default/files/E8-R1_Guideline_Step4_2021_1006.pdf Adopted on 6 October 2021
4.3.1 Human Pharmacology
The protection of study participants should always be the first priority when designing early
clinical studies, especially for the initial administration of an investigational product to humans
(usually referred to as phase 1). These studies may be conducted in healthy volunteer
participants or in a selected population of patients who have the condition or the disease,
depending on drug properties and the objectives of the development programme.
These studies typically address one or a combination of the following aspects:
4.3.1.1 Estimation of Initial Safety and Tolerability
The initial and subsequent administration of a drug to humans is usually intended to determine
the tolerability of the dose range expected to be evaluated in later clinical studies and to
determine the nature of adverse reactions that can be expected. These studies typically include
both single and multiple dose administration.
4.3.1.2 Pharmacokinetics
Characterisation of a drug's absorption, distribution, metabolism, and excretion continues
throughout the development programme, but the preliminary characterisation is an essential
early goal. Pharmacokinetic studies are particularly important to assess the clearance of the
drug and to anticipate possible accumulation of parent drug or metabolites, interactions with
metabolic enzymes and transporters, and potential drug-drug interactions. Some
pharmacokinetic studies are commonly conducted in later phases to answer more specialised
questions. For orally administered drugs, the study of food effects on bioavailability is
important to inform the dosing instructions in relation to food. Obtaining pharmacokinetic
information in sub-populations with potentially different metabolism or excretion, such as
patients with renal or hepatic impairment, geriatric patients, children, and ethnic subgroups
should be considered (ICH E4 Dose-Response Studies, E7 Clinical Trials in Geriatric
Population, E11, and E5, respectively).
4.3.1.3 Pharmacodynamics & Early Measurement of Drug Activity
Depending on the drug and the endpoint of interest, pharmacodynamic studies and studies
relating drug levels to response (PK/PD studies) may be conducted in healthy volunteer
participants or in patients with the condition or disease. If there is an appropriate measure,
pharmacodynamic data can provide early estimates of activity and efficacy and may guide the
dosage and dose regimen in later studies.
from March 1998 https://www.ema.europa.eu/en/documents/scientific-guideline/ich-e-8-general-considerations-clinical-trials-step-5_en.pdf
3.1.3.1 Phase I (Most typical kind of study: Human Pharmacology)
Phase I starts with the initial administration of an investigational new drug into humans.
Although human pharmacology studies are typically identified with Phase I, they may also be
indicated at other points in the development sequence. Studies in this phase of development
usually have non-therapeutic objectives and may be conducted in healthy volunteer subjects
or certain types of patients, e.g. patients with mild hypertension. Drugs with significant
potential toxicity, e.g. cytotoxic drugs, are usually studied in patients. Studies in this phase
can be open, baseline controlled or may use randomisation and blinding, to improve the
validity of observations.
Studies conducted in Phase I typically involve one or a combination of the following aspects:
a) Estimation of Initial Safety and Tolerability
The initial and subsequent administration of an investigational new drug into humans is
usually intended to determine the tolerability of the dose range expected to be needed for later
clinical studies and to determine the nature of adverse reactions that can be expected. These
studies typically include both single and multiple dose administration.
b) Pharmacokinetics
Characterisation of a drug's absorption, distribution, metabolism, and excretion continues
throughout the development plan. Their preliminary characterisation is an important goal of
Phase I. Pharmacokinetics may be assessed via separate studies or as a part of efficacy, safety
and tolerance studies. Pharmacokinetic studies are particularly important to assess the
clearance of the drug and to anticipate possible accumulation of parent drug or metabolites
and potential drug-drug interactions. Some pharmacokinetic studies are commonly conducted
in later phases to answer more specialised questions. For many orally administered drugs,
especially modified release products, the study of food effects on bioavailability is important.
Obtaining pharmacokinetic information in sub-populations such as patients with impaired
elimination (renal or hepatic failure), the elderly, children, women and ethnic subgroups
should be considered. Drug-drug interaction studies are important for many drugs; these are
generally performed in phases beyond Phase I but studies in animals and in vitro studies of
metabolism and potential interactions may lead to doing such studies earlier.
c) Assessment of Pharmacodynamics
Depending on the drug and the endpoint studied, pharmacodynamic studies and studies
relating drug blood levels to response (PK/PD studies) may be conducted in healthy volunteer
subjects or in patients with the target disease. In patients, if there is an appropriate measure,
pharmacodynamic data can provide early estimates of activity and potential efficacy and may
guide the dosage and dose regimen in later studies.
d) Early Measurement of Drug Activity
Preliminary studies of activity or potential therapeutic benefit may be conducted in Phase I as
a secondary objective. Such studies are generally performed in later phases but may be
appropriate when drug activity is readily measurable with a short duration of drug exposure in
patients at this early stage.</td><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:01031<a name="181513-SEVCO.5801031"> </a></td><td>exploratory investigational new drug study</td><td>A clinical trial that is conducted early in phase 1, involves very limited human exposure, and has no therapeutic or diagnostic intent (e.g., screening studies, microdose studies).</td><td>According to the original FDA guidance, such exploratory IND studies are conducted prior to the traditional dose escalation, safety, and tolerance studies that ordinarily initiate a clinical drug development program. The duration of dosing in an exploratory IND study is expected to be limited (e.g., 7 days).
A type of clinical trial that involves low dosage and short duration of drug exposure for a limited number of study participants with the intent of gathering preliminary data on the mechanism of action, pharmacodynamics, pharmacokinetics, or bioavailability of promising therapeutic candidate agents in human subjects.
Less official terms (phase 0 trial, pre-clinical trial) have been used to describe a clinical trial that uses an investigational agent that has never previously been given to humans or for which there is extremely limited human experience. A Phase 0 study might not include any drug delivery but may be an exploration of human material from a study (e.g., tissue samples or biomarker determinations).
For a later version of SEVCO, we probably should put citations ("original FDA guidance") into the documentation.
concern over the use of the term 'phase 1' in the definition and the presence of an alternative term 'phase 0 study'.
FDA GUIDANCE DOCUMENT
Exploratory IND Studies
Guidance for Industry, Investigators, and Reviewers
JANUARY 2006
investigational new drug (IND)
For the purposes of this guidance the phrase exploratory IND study is intended to describe a clinical trial that
is conducted early in phase 1,
involves very limited human exposure, and
has no therapeutic or diagnostic intent (e.g., screening studies, microdose studies).
Such exploratory IND studies are conducted prior to the traditional dose escalation, safety, and tolerance studies that ordinarily initiate a clinical drug development program. The duration of dosing in an exploratory IND study is expected to be limited (e.g., 7 days).
from CTO:
Early Phase I clinical trial (Phase 0 trial, Phase 0 clinical trial, Pre-Clinical Trial)
A clinical trial that is at an Early Phase i or Phase 0, which is designed to use an investigational agent that is available only in very limited quantities and which has never previously given to humans or for which there is extremely limited human experience. Phase 0 clinical trials are intended to enable researchers to understand the path of the drug in the body and its efficacy. Adverse event reporting in Phase 0 trials is expedited. [def-source: NCI] Exploratory trials, involving very limited human exposure, with no therapeutic or diagnostic intent (e.g., screening studies, microdose studies). (Formerly listed as "Phase 0") A clinical trial that is at Early Phase 1 or Phase 0
from SCO:
not included
from NCIt:
Preferred Name: Exploratory Investigational New Drug Study
Definition: A type of clinical trial that involves low dosage and short duration of drug exposure for a limited number of study participants with the intent of gathering preliminary data on the mechanism of action, pharmacodynamics, pharmacokinetics, or bioavailability of promising therapeutic candidate agents in human subjects.
CDISC-GLOSS Definition: A clinical study that is conducted early in Phase 1; involves very limited human exposure and has no therapeutic or diagnostic intent (e.g., screening studies, microdose studies) [FDA Guidance for industry, investigators, and Reviewers: exploratory IND studies, January 2006] See also Phase 0.
First-in-Human Study = A type of phase 1 clinical trial in which the test product is administered to human beings for the first time.
Phase 0 Trial = Pre-Clinical Trial = A clinical trial that uses an investigational agent that is available only in very limited quantities and which has never previously given to humans or for which there is extremely limited human experience. Phase 0 clinical trials are intended to enable researchers to understand the path of the drug in the body and its efficacy. Adverse event reporting in Phase 0 trials is expedited.
First-in-human trials, in a small number of subjects, that are conducted before Phase 1 trials and are intended to assess new candidate therapeutic and imaging agents. The study agent is administered at a low dose for a limited time, and there is no therapeutic or diagnostic intent. NOTE: FDA Guidance for Industry, Investigators, and Reviewers: Exploratory IND Studies, January 2006 classifies such studies as Phase 1. NOTE: A Phase 0 study might not include any drug delivery but may be an exploration of human material from a study (e.g., tissue samples or biomarker determinations). [Improving the Quality of Cancer Clinical Trials: Workshop summary-Proceedings of the National Cancer Policy Forum Workshop, improving the Quality of Cancer Clinical Trials (Washington, DC, Oct 2007)] (CDISC glossary)
First-in-human trials, in a small number of subjects, that are conducted before Phase 1 trials and are intended to assess new candidate therapeutic and imaging agents. The study agent is administered at a low dose for a limited time, and there is no therapeutic or diagnostic intent. NOTE: FDA Guidance for Industry, Investigators, and Reviewers: Exploratory IND Studies, January 2006 classifies such studies as Phase 1. NOTE: A Phase 0 study might not include any drug delivery but may be an exploration of human material from a study (e.g., tissue samples or biomarker determinations). [Improving the Quality of Cancer Clinical Trials: Workshop summary-Proceedings of the National Cancer Policy Forum Workshop, improving the Quality of Cancer Clinical Trials (Washington, DC, Oct 2007)] )
from OCRe:
Phase 0 = A Phase 0 trial is an exploratory trial involving very limited human exposure, with no therapeutic or diagnostic intent (e.g., screening study, microdose study).
from EDDA:
pre-clinical trial = phase 0 trial = A clinical trial that uses an investigational agent that is available only in very limited quantities and which has never previously given to humans or for which there is extremely limited human experience. Phase 0 clinical trials are intended to enable researchers to understand the path of the drug in the body and its efficacy. Adverse event reporting in Phase 0 trials is expedited. [NCI 2014_12E]</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01032<a name="181513-SEVCO.5801032"> </a></td><td>phase 1/phase 2 trial</td><td>A clinical trial with a component meeting the definition of phase 1 trial and a component meeting the definition of phase 2 trial.</td><td>A phase 1 trial is a clinical trial to gather initial evidence in humans to support further investigation of an intervention.
A phase 2 trial is a clinical trial to gather evidence of effectiveness and safety for an intervention in patients with the disease or condition under study, but not intended to provide an adequate basis for regulatory approval for clinical use.</td><td>Brian S. Alper, Paul Whaley, Joanne Dehnbostel, Janice Tufte</td><td>2022-01-25 vote 9-0 by Harold Lehmann, Alejandro Piscoya, Janice Tufte, Paola Rosati, Robin Ann Yurk, Philippe Rocca-Serra, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde</td><td>2022-01-18 vote 3-2 by Harold Lehmann, Paul Harris, Robin Ann Yurk, raradhikaag@gmail.com, Paul Whaley</td><td>2022-01-18 comments: Does it matter that the Term has Arabic numerals and the Definition, Roman?
Consider adding a comment for application to improve definition interpretation with individual term definitions for Phase I/Phase 2 trial
Not sure I quite understand what the "separate sets of design parameters with" phrase means here?</td><td>from CTO:
phase I/II trial (trial phase 1/2, trial phase 1-2)
Trials that are a combination of phases 1 and 2. A clinical research protocol designed to study the safety, dosage levels and response to new treatment. Phase I/II trials combine a Phase I and a Phase II trial of the same treatment into a single protocol.
A class of clinical study that combines elements characteristic of traditional Phase I and Phase II trials. See also Phase I, Phase II.
A trial to study the safety, dosage levels, and response to a new treatment.
from SCO:
phase I/II trial (trial phase 1/2, trial phase 1-2)
A clinical research protocol designed to study the safety, dosage levels and response to new treatment. Phase I/II trials combine a Phase I and a Phase II trial of the same treatment into a single protocol.
from NCIt:
same as CTO
from OCRe:
not included
from EDDA:
phase I/II trial (trial phase 1/2, trial phase 1-2)
A class of clinical study that combines elements characteristic of traditional Phase I and Phase II trials. See also Phase I, Phase II. [NCIT_14.08d] [Contributing_Source_CDISC]
A clinical research protocol designed to study the safety, dosage levels and response to new treatment. Phase I/II trials combine a Phase I and a Phase II trial of the same treatment into a single protocol. [NCIT_14.08d]
A trial to study the safety, dosage levels, and response to a new treatment. [NCIT_14.08d]</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01033<a name="181513-SEVCO.5801033"> </a></td><td>phase 2 trial</td><td>A clinical trial to gather evidence of effectiveness and safety for an intervention in patients with the disease or condition under study, but not intended to provide an adequate basis for regulatory approval for clinical use.</td><td>Phase 2 trials are typically controlled clinical studies conducted to evaluate the effectiveness of the intervention for a particular indication and to determine the common short-term side effects and risks associated with the intervention. Phase 2 trials may have a goal of determining the dose(s) or regimen(s) for Phase 3 trials. Phase 2 studies usually include no more than several hundred subjects.</td><td>Brian S. Alper, Paul Whaley, Joanne Dehnbostel, Harold Lehmann</td><td>2022-01-11 vote 7-0 by Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, janice tufte, Paul Whaley, Andrew Beck, Robin Ann Yurk</td><td>2021-01-04 vote 5-2 by Robin Ann Yurk, Harold Lehmann, janice tufte, Paola Rosati, C P Ooi, Joanne Dehnbostel, Paul Whaley</td><td>2021-01-04 comments: The first part of the definition is ok. In the second part, I would suggest to change with "An insufficient evidence for the intervention tested or the desired patients' number failure could occur thus impeding regulatory approval for clinical use"
Comment Suggestion to add to comment for term from extracted from notes-3.1.3.2: An important goal for this phase is to determine the dose(s) and regimen for Phase III trials.
Early studies in this phase often utilize dose escalation designs (see ICH E4) to give an early estimate of dose response and later studies may confirm the dose response relationship for the indication in question by using recognized parallel dose-response designs (could also be deferred to phase III)
Minor change - the phrasing is a little awkward, suggest "gather evidence about the effectiveness and safety of an intervention in patients with the disease or condition under study, but not sufficient...".
I am not sure the comment for application is fully consistent with the definitions (what about safety?).
2022-01-11 comment: I would suggest not adding how many subjects are typically involved, maybe state that these usually have small sample sizes. Unfortunately, sample sizes have decreased over time. https://bmjopen.bmj.com/content/11/12/e053377</td><td>https://www.ecfr.gov/current/title-21/chapter-I/subchapter-D/part-312/subpart-B/section-312.21 is the US Code of Federal Regulations Title 21 (Food and Drugs) Chapter I Subchapter D Part 312 Subpart B § 312.21 and includes:
§ 312.21 Phases of an investigation.
An IND may be submitted for one or more phases of an investigation. The clinical investigation of a previously untested drug is generally divided into three phases. Although in general the phases are conducted sequentially, they may overlap. These three phases of an investigation are a[sic] follows:
....
Phase 2. Phase 2 includes the controlled clinical studies conducted to evaluate the effectiveness of the drug for a particular indication or indications in patients with the disease or condition under study and to determine the common short-term side effects and risks associated with the drug. Phase 2 studies are typically well controlled, closely monitored, and conducted in a relatively small number of patients, usually involving no more than several hundred subjects.
from CTO:
Phase II trial
A clinical research protocol designed to study a biomedical or behavioral intervention in a larger group of people (several hundred), to evaluate the drug's effectiveness for a particular indication in patients with the disease or condition under study, and to determine the common short-term side effects and risks associated with the intervention. Includes controlled clinical studies conducted to evaluate the effectiveness of the drug for a particular indication or indications in participants with the disease or condition under study and to determine the common short-term side effects and risks.
Phase 2. Controlled clinical studies conducted to evaluate the effectiveness of the drug for a particular indication or indications in patients with the disease or condition under study and to determine the common short-term side effects and risks associated with the drug. NOTE: Phase 2 studies are typically well controlled, closely monitored, and conducted in a relatively small number of patients, usually involving no more than several hundred subjects. [After FDA CDER Handbook, ICH E8] (CDISC glossary)
A study to test whether a new treatment has an anticancer effect (for example, whether it shrinks a tumor or improves blood test results) and whether it works against a certain type of cancer.
Controlled clinical studies conducted to evaluate the effectiveness of the drug for a particular indication or indications in patients with the disease or condition under study and to determine the common short-term side effects and risks associated with the drug. NOTE: Phase 2 studies are typically well controlled, closely monitored, and conducted in a relatively small number of patients, usually involving no more than several hundred subjects. [after FDA CDER handbook, ICH E8]
from SCO:
phase II trial not independently defined
from NCIt:
same as CTO
from OCRe:
A Phase 2 trial includes controlled clinical studies conducted to evaluate the effectiveness of the drug for a particular indication or indications in patients with the disease or condition under study and to determine the common short-term side effects and risks.
from EDDA:
A clinical research protocol designed to study a biomedical or behavioral intervention in a larger group of people (several hundred), to evaluate the drug's effectiveness for a particular indication in patients with the disease or condition under study, and to determine the common short-term side effects and risks associated with the intervention. [NCI 2014_12E]
Studies that are usually controlled to assess the effectiveness and dosage (if appropriate) of diagnostic, therapeutic, or prophylactic drugs, devices, or techniques. These studies are performed on several hundred volunteers, including a limited number of patients with the target disease or disorder, and last about two years. This concept includes phase II studies conducted in both the U.S. and in other countries. [MeSH 2014_2014_02_10]
from INTERNATIONAL COUNCIL FOR HARMONISATION OF TECHNICAL REQUIREMENTS FOR PHARMACEUTICALS FOR HUMAN USE (ICH HARMONISED GUIDELINE) GENERAL CONSIDERATIONS FOR CLINICAL STUDIES E8(R1) https://database.ich.org/sites/default/files/E8-R1_Guideline_Step4_2021_1006.pdf Adopted on 6 October 2021
After initial clinical studies provide sufficient information on safety, clinical pharmacology and dose, exploratory and confirmatory studies (usually referred to as phases 2 and 3, respectively) are conducted to further evaluate both the safety and efficacy of the drug.
Exploratory studies are designed to investigate safety and efficacy in a selected population of patients for whom the drug is intended. Additionally, these studies aim to refine the effective dose(s) and regimen, refine the definition of the targeted population, provide a more robust safety profile for the drug, and include evaluation of potential study endpoints for subsequent studies. Exploratory studies may provide information on the identification and determination of factors that affect the treatment effect and, possibly combined with modelling and simulation, serve to support the design of later confirmatory studies.
from ICH E8 General Considerations for Clinical Trials (March 1998): https://www.ema.europa.eu/en/documents/scientific-guideline/ich-e-8-general-considerations-clinical-trials-step-5_en.pdf
3.1.3.2 Phase II (Most typical kind of study: Therapeutic Exploratory)
Phase II is usually considered to start with the initiation of studies in which the primary objective is to explore therapeutic efficacy in patients.
Initial therapeutic exploratory studies may use a variety of study designs, including concurrent controls and comparisons with baseline status. Subsequent trials are usually randomised and concurrently controlled to evaluate the efficacy of the drug and its safety for a particular therapeutic indication. Studies in Phase II are typically conducted in a group of patients who are selected by relatively narrow criteria, leading to a relatively homogeneous population and are closely monitored.
An important goal for this phase is to determine the dose(s) and regimen for Phase III trials. Early studies in this phase often utilise dose escalation designs (see ICH E4) to give an early estimate of dose response and later studies may confirm the dose response relationship for the indication in question by using recognised parallel dose-response designs (could also be deferred to phase III). Confirmatory dose response studies may be conducted in Phase II or left for Phase III. Doses used in Phase II are usually but not always less than the highest doses used in Phase I.
Additional objectives of clinical trials conducted in Phase II may include evaluation of potential study endpoints, therapeutic regimens (including concomitant medications) and target populations (e.g. mild versus severe disease) for further study in Phase II or III. These objectives may be served by exploratory analyses, examining subsets of data and by including
multiple endpoints in trials.</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01034<a name="181513-SEVCO.5801034"> </a></td><td>phase 2/phase 3 trial</td><td>A clinical trial with a component meeting the definition of phase 2 trial and a component meeting the definition of phase 3 trial.</td><td>A phase 2 trial is a clinical trial to gather evidence of effectiveness and safety for an intervention in patients with the disease or condition under study, but not intended to provide an adequate basis for regulatory approval for clinical use.
A phase 3 trial is a clinical trial to gather the evidence of effectiveness and safety of an intervention, intended to provide an adequate basis for regulatory approval for clinical use.</td><td>Brian S. Alper, Paul Whaley, Joanne Dehnbostel, Janice Tufte</td><td>2022-02-08 vote 7-0 by Paola Rosati, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper, Paul Whaley, Sunu Alice Cherian</td><td>2022-01-18 vote 2-3 by Harold Lehmann, Paul Harris, Robin Ann Yurk, raradhikaag@gmail.com, Paul Whaley
2022-01-25 vote 9-1 by Harold Lehmann, Alejandro Piscoya, Janice Tufte, Paola Rosati, Robin Ann Yurk, Philippe Rocca-Serra, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde, Paul Whaley
2022-02-01 vote 4-1 by Paola Rosati, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper</td><td>2022-01-18 comments: Phase 2/3 trials determine efficacy of a new biomedical intervention i.e. whether it works as intended in a larger group of study participants, and monitor adverse effects so that the intervention may be used safely.
Consider adding a comment for application to improve definition interpretation with individual term definitions for Phase 2/Phase 3 trial
Not sure I quite understand what the "separate sets of design parameters with" phrase means here?
2022-01-25 comment: As already pointed out, to me these definitions seem incongruent and lack specification of the outcomes used, namely core clinical outcomes relevant for patients. Are phase 2 and phase 3 trials designed to gather evidence of 'effectiveness' and safety or 'efficacy' and monitor adverse effects of a new biomedical intervention? For what outcome? The three sentences proposed in the comment for application of this code seem to overlap the two terms (i.e. is efficacy still the right term used for trials, or is it effectiveness, commonly used for prospective observational studies?). I think it is important to justify why the two terms are used for clinical trial designs.
2022-02-01 comment: To me this definition has no clear meaning. As you are working and struggling so hard to define and clarify the scientific evidence code system, I wish to participate in the meeting to discuss with you this tricky definition. If you agree, please let me know.</td><td>from CTO:
phase II/III trial (trial phase 2/3, trial phase 2-3)
Trials that are a combination of phases 2 and 3. A type of clinical study that combines elements characteristic of traditional Phase II and Phase III trials.
A trial to study response to a new treatment and the effectiveness of the treatment compared with the standard treatment regimen.
A class of clinical study that combines elements characteristic of traditional Phase II and Phase III trials.
from SCO:
not included
from NCIt:
phase II/III trial (trial phase 2/3, trial phase 2-3)
A type of clinical study that combines elements characteristic of traditional Phase II and Phase III trials.
A trial to study response to a new treatment and the effectiveness of the treatment compared with the standard treatment regimen.
A class of clinical study that combines elements characteristic of traditional Phase II and Phase III trials.
from OCRe:
not included
from EDDA:
phase II/III trial (trial phase 2/3, trial phase 2-3)
A type of clinical study that combines elements characteristic of traditional Phase II and Phase III trials. [NCIT_14.08d]
A class of clinical study that combines elements characteristic of traditional Phase II and Phase III trials. [NCIT_14.08d] [Contributing_Source_CDISC]
A trial to study response to a new treatment and the effectiveness of the treatment compared with the standard treatment regimen. [NCIT_14.08d]
"Designs that combine phase II and III functions (ie, phase II/III designs) have separate sets of design parameters that correspond to their phase II and III components." -- Korn EL et al. Design Issues in Randomized Phase II/III Trials. J Clin Oncol 2012 https://ascopubs.org/doi/full/10.1200/JCO.2011.38.5732. https://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC3295562&blobtype=pdf</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01035<a name="181513-SEVCO.5801035"> </a></td><td>phase 3 trial</td><td>A clinical trial to gather the evidence of effectiveness and safety of an intervention, intended to provide an adequate basis for regulatory approval for clinical use.</td><td>Phase 3 trials are typically conducted after preliminary evidence suggests effectiveness and usually have the primary objective to demonstrate or confirm therapeutic benefit compared to placebo or a standard treatment. Phase 3 studies usually include from several hundred to several thousand subjects. Study endpoints for phase 3 trials should be clinically relevant or of adequate surrogacy for predicting clinical effects.</td><td>Brian S. Alper, Paul Whaley, Joanne Dehnbostel, Janice Tufte, Kenneth Wilkins, Harold Lehmann</td><td>2022-01-18 vote 6-0 by Harold Lehmann, Paul Harris, Robin Ann Yurk, Paola Rosati, raradhikaag@gmail.com, Paul Whaley</td><td>2021-12-21 vote 2-2 by Robin Ann Yurk, C P Ooi, Janice Tufte, Paul Whaley
2022-01-04 vote 5-2 by Robin Ann Yurk, Harold Lehmann, Janice Tufte, Paola Rosati, C P Ooi, Joanne Dehnbostel, Paul Whaley
2022-01-11 vote 6-1 by Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, janice tufte, Paul Whaley, Andrew Beck, Robin Ann Yurk</td><td>2021-12-21 comments: Note: consider adding the following comments from previous reviewers to improve interpretation.
3.1.3.3 “Phase III usually is considered to begin with the initiation of studies in which the primary objective is to demonstrate, or confirm therapeutic benefit.”
EDDA: “Comparative studies to verify the effectiveness of diagnostic, therapeutic, or prophylactic drugs, devices, or techniques determined in phase II studies…. A clinical research protocol designed to investigate the efficacy of the biomedical or behavioral intervention in large groups of human subjects (from several hundred to several thousand), to confirm efficacy, to monitor adverse reactions to the new medication or treatment regimen with respect to long-term use and by comparing the intervention to other standard or experimental interventions as well as to a placebo”
..................
I think the pieces are there but the phrasing is difficult to parse.
2022-01-04 comments: Perhaps adding "compared with a standard treatment" may improve the clarity.
Minor change - the phrasing is a little awkward, suggest "gather evidence about the effectiveness and safety of an intervention that is needed...".
2022-01-11 comments: I would suggest not adding how many subjects are typically involved. Unfortunately, sample sizes have decreased over time. https://bmjopen.bmj.com/content/11/12/e053377
Minor change for consistency with other trial definitions: "A clinical trial to gather evidence of effectiveness and safety of an intervention, that is intended to provide an adequate basis for regulatory approval for clinical use."</td><td>https://www.ecfr.gov/current/title-21/chapter-I/subchapter-D/part-312/subpart-B/section-312.21 is the US Code of Federal Regulations Title 21 (Food and Drugs) Chapter I Subchapter D Part 312 Subpart B § 312.21 and includes:
§ 312.21 Phases of an investigation.
An IND may be submitted for one or more phases of an investigation. The clinical investigation of a previously untested drug is generally divided into three phases. Although in general the phases are conducted sequentially, they may overlap. These three phases of an investigation are a[sic] follows:
....
Phase 3. Phase 3 studies are expanded controlled and uncontrolled trials. They are performed after preliminary evidence suggesting effectiveness of the drug has been obtained, and are intended to gather the additional information about effectiveness and safety that is needed to evaluate the overall benefit-risk relationship of the drug and to provide an adequate basis for physician labeling. Phase 3 studies usually include from several hundred to several thousand subjects.
from CTO:
Phase III trial
Includes trials conducted after preliminary evidence suggesting effectiveness of the drug has been obtained, and are intended to gather additional information to evaluate the overall benefit-risk relationship of the drug. A clinical research protocol designed to investigate the efficacy of the biomedical or behavioral intervention in large groups of human subjects (from several hundred to several thousand), to confirm efficacy, to monitor adverse reactions to the new medication or treatment regimen with respect to long-term use and by comparing the intervention to other standard or experimental interventions as well as to a placebo.
Phase 3. Studies are expanded controlled and uncontrolled trials. They are performed after preliminary evidence suggesting effectiveness of the drug has been obtained, and are intended to gather the additional information about effectiveness and safety that is needed to confirm efficacy and evaluate the overall benefit-risk relationship of the drug and to provide an adequate basis for physician labeling. NOTE: Phase 3 studies usually include from several hundred to several thousand subjects. [After FDA CDER Handbook, ICH E8] (CDISC glossary)
A study to compare the results of people taking a new treatment with the results of people taking the standard treatment (for example, which group has better survival rates or fewer side effects). In most cases, studies move into phase III only after a treatment seems to work in phases I and II. Phase III trials may include hundreds of people.
Studies are expanded controlled and uncontrolled trials. They are performed after preliminary evidence suggesting effectiveness of the drug has been obtained and are intended to gather the additional information about effectiveness and safety that is needed to confirm efficacy and evaluate the overall benefit-risk relationship of the drug and to provide an adequate basis for physician labeling. NOTE: Phase 3 studies usually include from several hundred to several thousand subjects. [after FDA CDER handbook, ICH E8]
from SCO:
A clinical research protocol designed to investigate the efficacy of the biomedical or behavioral intervention in large groups of human subjects (from several hundred to several thousand), to confirm efficacy, to monitor adverse reactions to the new medication or treatment regimen with respect to long-term use and by comparing the intervention to other standard or experimental interventions as well as to a placebo.
from NCIt:
Phase III trial (Phase III Clinical Trial; Phase III Trial; phase 3; Trial Phase 3; PHASE III TRIAL; phase III trial; Phase III Trials; 3; Phase 3 Study; Clinical Trials, Phase III; Phase III Study; Phase III Protocol)
A clinical research protocol designed to investigate the efficacy of the biomedical or behavioral intervention in large groups of human subjects (from several hundred to several thousand), to confirm efficacy, to monitor adverse reactions to the new medication or treatment regimen with respect to long-term use and by comparing the intervention to other standard or experimental interventions as well as to a placebo.
from OCRe:
A Phase 3 trial includes expanded controlled and uncontrolled trials after preliminary evidence suggesting effectiveness of the drug has been obtained, and are intended to gather additional information to evaluate the overall benefit-risk relationship of the drug and provide an adequate basis for physician labeling.
from EDDA:
Comparative studies to verify the effectiveness of diagnostic, therapeutic, or prophylactic drugs, devices, or techniques determined in phase II studies. During these trials, patients are monitored closely by physicians to identify any adverse reactions from long-term use. These studies are performed on groups of patients large enough to identify clinically significant responses and usually last about three years. This concept includes phase III studies conducted in both the U.S. and in other countries. [MeSH 2014_2014_02_10]
A clinical research protocol designed to investigate the efficacy of the biomedical or behavioral intervention in large groups of human subjects (from several hundred to several thousand), to confirm efficacy, to monitor adverse reactions to the new medication or treatment regimen with respect to long-term use and by comparing the intervention to other standard or experimental interventions as well as to a placebo. [NCI 2014_12E]
from International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use, ICH Harmonised Guideline: General Considerations for Clinical Studies E8(R1), adopted on 6 October 2021: https://database.ich.org/sites/default/files/E8-R1_Guideline_Step4_2021_1006.pdf
After initial clinical studies provide sufficient information on safety, clinical pharmacology and dose, exploratory and confirmatory studies (usually referred to as phases 2 and 3, respectively) are conducted to further evaluate both the safety and efficacy of the drug.
Confirmatory studies are designed to confirm the preliminary evidence accumulated in earlier clinical studies that a drug is safe and effective for use for the intended indication and recipient population. These studies are often intended to provide an adequate basis for marketing approval, and to support adequate instructions for use of the drug and official product information. They aim to evaluate the drug in participants with or at risk of the condition or disease who represent those who will receive the drug once approved. This may include investigating subgroups of patients with frequently occurring or potentially relevant comorbidities (e.g., cardiovascular disease, diabetes, hepatic and renal impairment) to characterise the safe and effective use of the drug in patients with these conditions.
Confirmatory studies may evaluate the efficacy and safety of more than one dose or the use of the drug in different stages of disease or in combination with one or more other drugs. If the intent is to administer a drug for a long period of time, then studies involving extended exposure to the drug should be conducted (ICH E1 Clinical Safety for Drugs used in Long-Term Treatment). Irrespective of the intended duration of administration, the duration of effect of the drug will also inform the duration of follow-up.
Study endpoints selected for confirmatory studies should be clinically relevant and reflect disease burden or be of adequate surrogacy for predicting disease burden or sequelae.
from ICH E8 General Considerations for Clinical Trials (March 1998): https://www.ema.europa.eu/en/documents/scientific-guideline/ich-e-8-general-considerations-clinical-trials-step-5_en.pdf
3.1.3.3 Phase III (Most typical kind of study: Therapeutic Confirmatory)
Phase III usually is considered to begin with the initiation of studies in which the primary objective is to demonstrate, or confirm therapeutic benefit.
Studies in Phase III are designed to confirm the preliminary evidence accumulated in Phase II that a drug is safe and effective for use in the intended indication and recipient population. These studies are intended to provide an adequate basis for marketing approval. Studies in Phase III may also further explore the dose-response relationship, or explore the drug's use in wider populations, in different stages of disease, or in combination with another drug. For drugs intended to be administered for long periods, trials involving extended exposure to the drug are ordinarily conducted in Phase III, although they may be started in Phase II (see ICH E1). ICH E1 and ICH E7 describe the overall clinical safety database considerations for chronically administered drugs and drugs used in the elderly. These studies carried out in Phase III complete the information needed to support adequate instructions for use of the drug (official product information).</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01036<a name="181513-SEVCO.5801036"> </a></td><td>post-marketing study</td><td>A clinical trial to gather additional evidence of effectiveness and safety of an intervention for an already approved clinical use.</td><td>Post-marketing studies (phase IV trials) are often used to evaluate adverse effects that were not apparent in phase III trials, and may involve thousands of patients. Postmarketing (Phase 4) studies to delineate additional information about the drug's risks, benefits, and optimal use that may be requested by regulatory authorities in conjunction with marketing approval.</td><td>Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Paul Whaley</td><td>2022-02-15 vote 10-0 by Paul Whaley, Andrew Beck, Paola Rosati, Robin Ann Yurk, Janice Tufte, Jesus Lopez-Alcalde, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Joanne Dehnbostel</td><td/><td>2022-02-15 comment: Maybe add hyphen between "already" and "approved"</td><td>from CTO:
Phase IV Trial (Phase IV Study, Phase IV clinical trial, phase 4 study, phase 4 trial, trial phase 4)
Studies of FDA-approved drugs to delineate additional information including the drug's risks, benefits, and optimal use. A randomized, controlled trial that is designed to evaluate the long-term safety and efficacy of a drug for a given indication. Often they are designed to study side effects that may have become apparent after the phase III study was completed.
After a treatment has been approved and is being marketed, it is studied in a phase IV trial to evaluate side effects that were not apparent in the phase III trial. Thousands of people are involved in a phase IV trial.
Post approval studies to delineate additional information about the drug's risks, benefits, and optimal use that may be requested by regulatory authorities in conjunction with marketing approval. NOTE: These studies could include, but would not be limited to, studying different doses or schedules of administration than were used in Phase 2 studies, use of the drug in other patient populations or other stages of the disease, or use of the drug over a longer period of time. [after FDA CDER handbook, ICH E8]
Phase 4. Postmarketing (Phase 4) studies to delineate additional information about the drug's risks, benefits, and optimal use that may be requested by regulatory authorities in conjunction with marketing approval. NOTE: These studies could include, but would not be limited to, studying different doses or schedules of administration than were used in Phase 2 studies, use of the drug in other patient populations or other stages of the disease, or use of the drug over a longer period of time. [After FDA CDER Handbook, ICH E8] (CDISC glossary)
from SCO:
not included
from NCIt:
Phase IV Trial (Phase IV Study, Phase IV clinical trial, phase 4 study, phase 4 trial, trial phase 4)
A randomized, controlled trial that is designed to evaluate the long-term safety and efficacy of a drug for a given indication. Often they are designed to study side effects that may have become apparent after the phase III study was completed.
After a treatment has been approved and is being marketed, it is studied in a phase IV trial to evaluate side effects that were not apparent in the phase III trial. Thousands of people are involved in a phase IV trial.
Post approval studies to delineate additional information about the drug's risks, benefits, and optimal use that may be requested by regulatory authorities in conjunction with marketing approval. NOTE: These studies could include, but would not be limited to, studying different doses or schedules of administration than were used in Phase 2 studies, use of the drug in other patient populations or other stages of the disease, or use of the drug over a longer period of time. [after FDA CDER handbook, ICH E8]
Phase 4. Postmarketing (Phase 4) studies to delineate additional information about the drug's risks, benefits, and optimal use that may be requested by regulatory authorities in conjunction with marketing approval. NOTE: These studies could include, but would not be limited to, studying different doses or schedules of administration than were used in Phase 2 studies, use of the drug in other patient populations or other stages of the disease, or use of the drug over a longer period of time. [After FDA CDER Handbook, ICH E8] (CDISC glossary)
from OCRe:
A Phase 4 study monitors FDA-approved drug to delineate additional information including the drug's risks, benefits, and optimal use.
from EDDA:
A randomized, controlled trial that is designed to evaluate the long-term safety and efficacy of a drug for a given indication. Often they are designed to study side effects that may have become apparent after the phase III study was completed. [NCIT_14.08d]
After a treatment has been approved and is being marketed, it is studied in a phase IV trial to evaluate side effects that were not apparent in the phase III trial. Thousands of people are involved in a phase IV trial. [NCIT_14.08d]
Phase 4. Postmarketing (Phase 4) studies to delineate additional information about the drug's risks, benefits, and optimal use that may be requested by regulatory authorities in conjunction with marketing approval. NOTE: These studies could include, but would not be limited to, studying different doses or schedules of administration than were used in Phase 2 studies, use of the drug in other patient populations or other stages of the disease, or use of the drug over a longer period of time. [After FDA CDER Handbook, ICH E8] [Contributing Source_CDISC] [NCIT_14.08d]</td><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01002<a name="181513-SEVCO.5801002"> </a></td><td>observational research</td><td>A study design in which the independent variables (exposures or interventions) are not prospectively assigned or modified by the investigator.</td><td>We acknowledge that observational study design and observational study may not be exact synonyms of observational research, but observational research could be used to encompass both design and implementation of the design. In the context of coding study design factors, observational research is commonly used to denote non-interventional research.</td><td>Mario Tristan, Joanne Dehnbostel, Harold Lehmann, Khalid Shahin, Brian S. Alper</td><td>8/8 as of 6/7/2021: Asiyah Lin, KM Saif-Ur-Rahman, Harold Lehmann, Sebastien Bailly, Bhagvan Kommadi, Mario Tristan, Leo Orozco, Ahmad Sofi-Mahmudi</td><td>2021-05-17 vote 5-3 on "Observational research = In a prospective or retrospective study, an independent variable is measured but not manipulated by the investigator to evaluate a response or outcome (the dependent variable)." by Eric Harvey, Bhagvan Kommadi, Paola Rosati, KM Saif-Ur-Rahman, Ahmad Sofi-Mahmudi, Jesus Lopez-Alcalde, Sorana D. Bolboacă, Harold Lehmann, 2021-05-24 vote 8-3 on Observational research="A study design in which the variables (exposures, interventions, and outcomes) are not prospectively assigned or modified by the investigator." by Alejandro Piscoya, Philippe Rocca-Serra, KM Saif-Ur-Rahman, Eric Harvey, Harold Lehmann, Bhagvan Kommadi, Sorana D. Bolboacă, Jesús López-Alcalde, Paola Rosati, Tatyana Shamliyan, Brian Alper, , 2021-05-31 vote 11-1 on Observational research="A study design in which the independent variables (exposures or interventions) are not prospectively assigned or modified by the investigator." by Eric Harvey, Bhagvan Kommadi, Brian Alper, Sebastien Bailly, Alejandro Piscoya, Harold Lehmann, KM Saif-Ur-Rahman, Paola Rosati, Sorana D. Bolboacă, Asiyah Lin, Leo Orozco, Erfan Shamsoddin</td><td>I dislike the term "manipulated" in the definition -- suggest change to: In a prospective or retrospective study, without any specific intervention assigned to participants, an investigator observes and measures an intervention or procedure (the independent variable) to assess or learn more about an effect or outcome (the dependent variable). "In a prospective or retrospective study, an independent variable (a predictor) is obeserved or measured by the investigator to evaluate a response or an outcome (the dependent variable)." I would delete "in a prospective or retrospective study" as it could be ambispective 5-24-2021 similar comment about the synonyms assigned the class (conflating plan/design) with the object realised by executing a plan I think that the outcomes are never assigned or modified by the investigator (they are measured). 
Thus, to be consistent with the definition of interve…suggest to remove "outcomes" from ( ) (is there a semantic difference between "are not" and "none is"?) I suggest to clarify the goal as drawing causal inferences from the observed association between exposure and outcomes 5-31-2021 comment The suggested definition is a non-interventional study definition. Not sure if a non-interventional is fully equivalent to observational studies</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01037<a name="181513-SEVCO.5801037"> </a></td><td>post-marketing surveillance study</td><td>An observational study to identify adverse events related to the use of an approved clinical intervention.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Paul Whaley, Harold Lehmann</td><td>2022-02-15 vote 10-0 by Paul Whaley, Andrew Beck, Brian S. Alper, Paola Rosati, Janice Tufte, Jesus Lopez-Alcalde, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Joanne Dehnbostel</td><td/><td>2022-02-15 comments: Alternative terms could be: Post-marketing evaluation study,
(Do we need to connect the "approval" to an indication?)</td><td>from CTO:
not included
from SCO:
not included
from NCIt:
Postmarketing Surveillance
Programs to identify adverse events that did not appear during the drug approval process.
Ongoing safety monitoring of marketed drugs. See also Phase 4 studies, Phase 5 studies.
also Phase V Trial (phase 5, trial phase 5)
Postmarketing surveillance is sometimes referred to as Phase V. See outcomes research.
from OCRe:
not included
from EDDA:
postmarketing evaluation study (post-marketing product surveillance)
Surveillance of drugs, devices, appliances, etc., for efficacy or adverse effects, after they have been released for general sale. [MeSH 2014_2014_02_10]</td><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01010<a name="181513-SEVCO.5801010"> </a></td><td>Comparative study design</td><td>A study design in which two or more groups are compared.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Ellen Jepson, Kenneth Wilkins, Mario Tristan</td><td>9/9 as of 8/9/2021: Erfan Shamsoddin, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Joanne Dehnbostel, Alejandro Piscoya</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01011<a name="181513-SEVCO.5801011"> </a></td><td>Parallel cohort design</td><td>A comparative study design in which the groups are compared concurrently and participants are expected to remain in the groups being compared for the entire duration of participation in the study.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Ellen Jepson, Kenneth Wilkins, Mario Tristan, Harold Lehmann</td><td>9/9 as of 8/9/2021: Erfan Shamsoddin, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Joanne Dehnbostel, Alejandro Piscoya</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01012<a name="181513-SEVCO.5801012"> </a></td><td>Crossover cohort design</td><td>A comparative study design in which participants receive two or more alternative exposures during separate periods of time.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Harold Lehmann, Janice Tufte, Michael Panzer</td><td>7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya</td><td>8/9 as of 8/9/2021: voting on "A comparative study design in which participants receive two or more alternative exposures during separate periods of time." by Erfan Shamsoddin, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Joanne Dehnbostel, Alejandro Piscoya</td><td>8/9/21 comment: It's not clear from this definition that each group of participants receives the same 2 or more exposures, but not in the same time sequence</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01024<a name="181513-SEVCO.5801024"> </a></td><td>Controlled crossover cohort design</td><td>A crossover cohort design in which two or more cohorts have different orders of exposures.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Harold Lehmann, Janice Tufte, Michael Panzer</td><td>7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01025<a name="181513-SEVCO.5801025"> </a></td><td>Single-arm crossover design</td><td>A crossover cohort design in which all participants are in a single cohort with the same order of exposures.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Harold Lehmann, Janice Tufte, Michael Panzer</td><td>7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. 
Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01013<a name="181513-SEVCO.5801013"> </a></td><td>Case control design</td><td>A comparative study design in which the groups being compared are defined by outcome presence (case) or absence (control).</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte, Michael Panzer</td><td>7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01014<a name="181513-SEVCO.5801014"> </a></td><td>Matching for comparison</td><td>A comparative study design in which individual participants in different groups being compared are paired or matched into sets based on selected attributes for within-set analysis.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte, Michael Panzer</td><td>7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01020<a name="181513-SEVCO.5801020"> </a></td><td>Family study design</td><td>A matched study design in which related or non-related family members are compared.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins</td><td>8/8 as of 9/12/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Paola Rosati, Robin Ann Yurk, MD, MPH, Mario Tristan</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:01021<a name="181513-SEVCO.5801021"> </a></td><td>Twin study design</td><td>A family study design in which twin siblings are compared.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins</td><td>8/8 as of 9/12/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Paola Rosati, Robin Ann Yurk, MD, MPH, Mario Tristan</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01015<a name="181513-SEVCO.5801015"> </a></td><td>Cluster as unit of allocation</td><td>A comparative study design in which participants are allocated to exposures (interventions) by their membership in groups (called clusters) rather than by individualized assignments.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte, Michael Panzer</td><td>7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01023<a name="181513-SEVCO.5801023"> </a></td><td>Non-comparative study design</td><td>A study design with no comparisons between groups with different exposures and no comparisons between groups with different outcomes.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Michael Panzer</td><td>7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. 
Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01016<a name="181513-SEVCO.5801016"> </a></td><td>Uncontrolled cohort design</td><td>A non-comparative study design in which two or more participants are evaluated in a single group (or cohort).</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Michael Panzer</td><td>7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01017<a name="181513-SEVCO.5801017"> </a></td><td>Case report</td><td>A non-comparative study design in which a single participant is evaluated.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Michael Panzer</td><td>7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01022<a name="181513-SEVCO.5801022"> </a></td><td>Population-based design</td><td>A study design in which the unit of observation is a population or community.</td><td>The term ‘population-based study’ is generally used for an observational comparative study design in which populations are compared.</td><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Harold Lehmann</td><td>5/5 as of 10/18/2021: Cheow Peng Ooi, Janice Tufte, Robin Ann Yurk, Eric Harvey, Joanne Dehnbostel</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01044<a name="181513-SEVCO.5801044"> </a></td><td>Ecological design</td><td>A study design in which the unit of observation is a population or community defined by social relationships or physical surroundings.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Harold Lehmann</td><td>5/5 as of 10/18/2021: Cheow Peng Ooi, Janice Tufte, Robin Ann Yurk, Eric Harvey, Joanne Dehnbostel</td><td>6 to 1 in 2021-09-20 vote with 7 participants (Ecological design = A comparative study design in which populations are compared. An ecologic study is a non individual-human study in which the unit of observation is a population or community.) - Robin Ann Yurk, Janice Tufte, Eric Harvey, Jesus Lopez-Alcalde, Mario Tristan, Sorana D Bolboaca, Paola Rosati, 8 to 1 vote on 2021-09-27 with 9 participants (Ecological design [Population-based design, Ecologic study, Population study] = A comparative study design in which populations are compared. An ecologic study is a non-individual study in which the unit of observation is a population or community.) - Jesus Lopez-Alcalde, Asiyah Lin, Eric Harvey, Bhagvan Kommadi, Alejandro Piscoya, Robin Ann Yurk, Mario Tristan, Paola Rosati, Janice Tufte</td><td>2021-09-20 comment: I miss here the explicit declaration that ecological studies are observational. A cluster trial can randomise communities and is not an ecological study. Besides, and I may be worng, but an ecological study may include non-humans, for example, ecological study of air contamination levels in Spain compared to Italy. 2021-09-27 comment: The differences of ecologic studies and other population based studies are not reflected. 
consider adding "Variables in an ecologic analysis may be aggregate measures, environmental measures, or global measures."</td><td/><td/><td/><td/><td/><td/></tr><tr><td>1</td><td style="white-space:nowrap">SEVCO:00998<a name="181513-SEVCO.5800998"> </a></td><td>Study Design Process</td><td>A specification of a sequence of actions for a component or part of a study design.</td><td>Study design is defined as a plan specification for how and what kinds of data will be gathered as part of an investigation which may produce testable explanations, conclusions and predictions or test a hypothesis.</td><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel</td><td>2022-03-22 vote 5-0 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Robin Ann Yurk</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01027<a name="181513-SEVCO.5801027"> </a></td><td>Cross sectional data collection</td><td>A study design process in which data is collected at a single point in time.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins</td><td>8/8 as of 9/12/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Paola Rosati, Robin Ann Yurk, MD, MPH, Mario Tristan</td><td/><td>The word "feature" was added to the definition on March 7, 2022 to match the change in hierarchical terms.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01028<a name="181513-SEVCO.5801028"> </a></td><td>Longitudinal data collection</td><td>A study design process in which data is collected at two or more points in time.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins</td><td>8/8 as of 9/12/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Paola Rosati, Robin Ann Yurk, MD, MPH, Mario Tristan</td><td/><td>The word "feature" was added to the definition on March 7, 2022 to match the change in hierarchical terms.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01018<a name="181513-SEVCO.5801018"> </a></td><td>Time series design</td><td>A longitudinal data collection which includes a set of time-ordered observations.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins</td><td>8/8 as of 9/12/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Paola Rosati, Robin Ann Yurk, MD, MPH, Mario Tristan</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01019<a name="181513-SEVCO.5801019"> </a></td><td>Before and after comparison</td><td>A time series design which includes comparisons of observations before and after an event or exposure.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins</td><td>8/8 as of 9/12/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Paola Rosati, Robin Ann Yurk, MD, MPH, Mario Tristan</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01045<a name="181513-SEVCO.5801045"> </a></td><td>Primary data collection</td><td>A study design process in which the data are recorded and collected during the study for the purpose of the same study.</td><td>The study design process includes the source and method for data collection. When the data are collected for original research to answer the original research questions, this is called primary data collection.</td><td>Brian S. 
Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Paul Whaley, Mario Tristan</td><td>2022-03-29 vote 6-0 by Paul Whaley, Robin Ann Yurk, Mario Tristan, Jesus Lopez-Alcalde, Brian S. Alper, Cauê Monaco</td><td>2022-02-22 vote 7-1 by Paola Rosati, Sunu Alice Cherian, Joanne Dehnbostel, Sumalatha A, Rebecca Baker, Robin Ann Yurk, Janice Tufte, Harold Lehmann
2022-03-01 vote 3-3 by Joanne Dehnbostel, Robin Ann Yurk, Paul Whaley, Nisha Mathew, Paola Rosati, Sunu Alice Cherian
2022-03-15 vote 5-1 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte
2022-03-22 vote 5-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Robin Ann Yurk, nelle.stocquart</td><td>2022-02-22 comments: Definition: Data recorded and collected during the study.
For parallelism with "secondary data collection," perhaps write, "for the purpose of the current study."
2022-03-01 comments:
A data collection technique in which the data are collected and recorded during the study for the purpose of the same study.
For the term definition---I would edit so it reads...A study design in which the data are collected and recorded to answer a new research question.
Data collection is not study design; it can be called a technique: A data collection technique in which data is recorded and collected during the study for the purpose of the same study.
2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process
"in which" sounds strange for a "feature." ("Color is a feature in which..." does not sound right.) Perhaps a...feature regarding how data are recorded..."?
2022-03-22 comment: Suggest modify definition or create a comment for application so it reads: A Study design method in which the data are collected for original research to answer new research questions.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01026<a name="181513-SEVCO.5801026"> </a></td><td>Real world data collection</td><td>A study design process in which the study data are obtained from a source of data collected during a routine process in the natural environment rather than using a process designed or controlled by the researcher.</td><td>Real world data collection occurs when the study uses data obtained from a source that was not created for research as a primary purpose. A study can involve both primary data collection (with some data collected by a process created for the purpose of the study investigation) and real world data collection (with some data collected from a process created for a routine business or operational purpose). If a study involves both primary data collection and real world data collection, both terms can be applied.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Paul Whaley, Mario Tristan</td><td>2022-05-06 vote 7-0 by Mario Tristan, Robin Ann Yurk, Eric M Harvey, nisha mathew, Paola Rosati, Harold Lehmann, Janice Tufte</td><td>2022-02-22 vote 7-1 by Paola Rosati, Sunu Alice Cherian, Joanne Dehnbostel, Sumalatha A, Rebecca Baker, Robin Ann Yurk, Janice Tufte, Harold Lehmann
2022-03-15 vote 5-1 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte
2022-03-22 vote 4-2 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Robin Ann Yurk, nelle.stocquart
2022-03-29 vote 4-1 by Paul Whaley, Robin Ann Yurk, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco
2022-04-05 vote 6-0 by Cauê Monaco, Harold Lehmann, Mario Tristan, Robin Ann Yurk, Jesus Lopez-Alcalde, Nisha Mathew
THEN THE TERM CHANGED to Real World Data Collection
2022-04-19 vote 3-1 by Cauê Monaco, Robin Ann Yurk, Jesus Lopez-Alcalde, Harold Lehmann</td><td>2022-02-22 comment: Definition: Data gathered from studies, surveys, experiments that have been done by other people for other studies
2022-03-15 comment: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process
2022-03-22 comments: The proposed definition only details the source of the data but not the data collection itself. I propose: "A study design process in which the data are collected from data collected for a purpose other than the current study".
Suggest modify definition or create a comment for application so it reads: A study design method in which the previously collected data is used to answer new and additional research questions. Some examples of the types of studies are retrospective studies, etc.
2022-03-29 comment: In the comment for application, a suggestion: delete the phrase "When data are collected." I would combine it into the sentence "When data are used in the form of analysis and interpretation from original research to answer additional research questions separate from the original research."
2022-04-12 comments: For Term definition: Suggest revising definition to A study design process in which the study data are obtained from data collected for recording data for business purposes.
Comment for application: Add this statement, There are different categories of research such as business research, marketing research, insurance research etc.
"data are obtained from data collected" may be changed to "data are obtained from a source for data collection"
2022-04-19 comment: Suggest edit the term definition. The alternative term and comment for application are fine. There are different kinds of research business research that can be classified as real world data. The term definition should read....A study design in which the study data processes are obtained from a natural environment rather than controlled research.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01039<a name="181513-SEVCO.5801039"> </a></td><td>Real world data collection from healthcare records</td><td>Real world data collection from data obtained routinely for a purpose of recording healthcare delivery in a record controlled by a healthcare professional.</td><td>This term is used when the original data collection (primary data collection) is done for the purpose of delivering professional healthcare services. The secondary use of this data (sometimes called 'real world data') for research is then called secondary data collection.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Ilkka Kunnamo</td><td>2022-04-19 vote 5-0 by Cauê Monaco, Jesus Lopez-Alcalde, Harold Lehmann, Robin Ann Yurk, Muhammad Afzal</td><td>2022-03-15 vote 4-2 by Mario Tristan, Paul Whaley, Nisha Mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte
2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart
2022-03-29 vote 3-1 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco
2022-04-05 vote 6-1 by Cauê Monaco, Paola Rosati, Harold Lehmann, Mario Tristan, Robin Ann Yurk, Jesus Lopez-Alcalde, Nisha Mathew</td><td>2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process
would propose, "...for the purpose..." (as in primary data collection)
Seems like we should add that the original data is then used for a secondary research purpose in the definition, not only explain in alternative terms
2022-03-22 comment: ídem: Proposal: "A study design process in which the data are collected from data collected for a purpose of recording healthcare delivery in a record controlled by a healthcare professional."2022-03-29 comment: "medical records" and "health records" seem to be much more widely used expressions than "healthcare delivery records"2022-04-05 comments: Suggest make a comment or distinction in the term definition that the primary data collected is categorized as real world data for the purpose of delivering professional healthcare services. The data set can be used for secondary data collection.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01050<a name="181513-SEVCO.5801050"> </a></td><td>Real world data collection from personal health records</td><td>Real world data collection from data obtained routinely for a purpose of recording data related to personal health in a record controlled by the person, guardian, or caretaker.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Ilkka Kunnamo</td><td>2022-04-19 vote 5-0 by Cauê Monaco, Jesus Lopez-Alcalde, Harold Lehmann, Brian S. Alper, Muhammad Afzal</td><td>2022-03-15 vote 4-2 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte
2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart
2022-03-29 vote 5-0 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco, Brian S. Alper
THEN TERM CHANGED 2022-04-05</td><td>2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process
Might suggest "the purpose," again
add in the definition that the original data is then used for a secondary purpose
2022-03-22 comment: dem: Proposal: "A study design process in which the data are collected from data collected for a purpose of recording data related to personal health in a record controlled by the person, guardian, or caretaker."</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01040<a name="181513-SEVCO.5801040"> </a></td><td>Real world data collection from healthcare financing records</td><td>Real world data collection from data obtained routinely for a purpose of recording healthcare financing.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte</td><td>2022-04-19 vote 5-0 by Cauê Monaco, Jesus Lopez-Alcalde, Harold Lehmann, Brian S. Alper, Muhammad Afzal</td><td>2022-03-15 vote 4-2 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte
2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart
2022-03-29 vote 5-0 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco, Brian S. Alper
THEN TERM CHANGED 2022-04-05</td><td>2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process
"the purpose"add original financial data is then used for secondary analysis etc
2022-03-22 comment: Ídem. "A study design process in which the data are collected from data collected for a purpose of recording healthcare financing"</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01048<a name="181513-SEVCO.5801048"> </a></td><td>Real world data collection from testing procedures</td><td>Real world data collection from data obtained routinely for a purpose of testing, such as diagnostic testing or screening examination.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte</td><td>2022-04-19 vote 5-0 by Cauê Monaco, Jesus Lopez-Alcalde, Harold Lehmann, Brian S. Alper, Muhammad Afzal</td><td>2022-03-15 vote 4-2 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte
2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart
2022-03-29 vote 5-0 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco, Brian S. Alper
THEN TERM CHANGED 2022-04-05</td><td>2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process
"the purpose"and then used for secondary research purposes
2022-03-22 comment: Ídem. "A study design process in which the data are collected from data collected for a purpose of testing, such as diagnostic testing or screening examination"</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01046<a name="181513-SEVCO.5801046"> </a></td><td>Real world data collection from monitoring procedures</td><td>Real world data collection from data obtained routinely for a purpose of repeated testing.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte</td><td>2022-04-19 vote 5-0 by Cauê Monaco, Jesus Lopez-Alcalde, Harold Lehmann, Brian S. Alper, Muhammad Afzal</td><td>2022-03-15 vote 5-1 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte
2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart
2022-03-29 vote 5-0 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco, Brian S. Alper
THEN TERM CHANGED 2022-04-05</td><td>2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process
"the purpose"2022-03-22 comment: Ídem. "A study design process in which the data are collected from data collected for a purpose of repeated testing."</td><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01049<a name="181513-SEVCO.5801049"> </a></td><td>Secondary data collection from prior research</td><td>A study design process in which the data are collected from data obtained during a different study than the current study.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan</td><td>2022-03-29 vote 5-0 by Mario Tristan, Paul Whaley, Cauê Monaco, Joanne Dehnbostel, Harold Lehmann</td><td>2022-03-15 vote 5-1 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte
2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart</td><td>2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process
2022-03-22 comments: Ídem. "A study design process in which the data are collected from data collected during a different study than the current study"When does this recording happen?</td><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01042<a name="181513-SEVCO.5801042"> </a></td><td>Secondary data collection from a registry</td><td>A study design process in which the data are collected from a system organized to obtain and maintain uniform data for discovery and analysis, and this system is organized prior to the current study.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Ilkka Kunnamo</td><td>2022-03-29 vote 5-0 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco, Brian S. Alper</td><td>2022-03-15 vote 3-2 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann
2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart</td><td>2022-03-15 comments: the term discovery is not suitable. Can we have some other term?
ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process
2022-03-22 comments: Ídem. "A study design process in which the data are collected from data collected in a system organized to obtain and maintain uniform data for discovery and analysis"The definition needs to be more, When did this happen? Before the study starts?</td><td>"For the purposes of this guide, a patient registry is an organized system that uses observational study methods to collect uniform data (clinical and other) to evaluate specified outcomes for a population defined by a particular disease, condition, or exposure, and that serves one or more predetermined scientific, clinical, or policy purposes" -- in https://effectivehealthcare.ahrq.gov/sites/default/files/pdf/registries-guide-3rd-edition_research.pdf</td><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01051<a name="181513-SEVCO.5801051"> </a></td><td>Multisite data collection</td><td>A study design process in which data are collected from two or more geographic locations.</td><td>For studies conducted across multiple contexts (administrative or logistical) that are distinct from geographic locations, potentially introducing greater variability beyond multisite data collection, use the term Multicentric.</td><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Mario Tristan</td><td>2022-05-10 vote 7-0 by Mario Tristan, Robin Ann Yurk, Eric M Harvey, nisha mathew, Paola Rosati, Harold Lehmann, Janice Tufte</td><td>2022-05-06 vote 6-1 by Mario Tristan, Robin Ann Yurk, Eric M Harvey, nisha mathew, Paola Rosati, Harold Lehmann, Janice Tufte</td><td>2022-04-26 comment: As stated, this term has too much overlap with "Multicentric" Why do we need this term?</td><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01086<a name="181513-SEVCO.5801086"> </a></td><td>Quantitative analysis</td><td>A study design process in which data are analyzed with mathematical or statistical methods and formulas.</td><td>The distinction of quantitative vs. qualitative analysis refers to whether mathematical processing is involved, whether or not the analysis includes numerical variables. Processing a categorical variable (e.g. values of happy, sad, or jealous as a response to "How are you feeling?") to produce numerical results (e.g. 30% happy, 50% sad, 20% surprised) would be classified as a Quantitative analysis.</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Khalid Shahin</td><td>2022-06-07 vote 5-0 by Brian S. Alper, Paola Rosati, Eric M Harvey, Mario Tristan, Harold Lehmann</td><td>2022-05-17 vote 8-1 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde
2022-05-24 vote 5-1 by Robin Ann Yurk, nelle.stocquart@kce.fgov.be, Eric M Harvey, Mario Tristan, Harold Lehmann, Jesus Lopez-Alcalde</td><td>2022-05-17 comments: Suggest including examples of quantitative analysis so as to improve your definition as there are many categories of quantitative methods: ie survey methods, logistic regression,...etc
Quantitative and qualitative have categorical results I believe
2022-05-24 comment: An analytic approach using statistical methods and formulas to report the data for interpretation
2022-05-26 comment: I would leave the description of a qualitative analysis out of the comment for application</td><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01087<a name="181513-SEVCO.5801087"> </a></td><td>Qualitative analysis</td><td>A study design process in which data are analyzed, without primary reliance on mathematical or statistical techniques, by coding and organizing data to provide interpretation or understanding of experiences or hypotheses.</td><td>The distinction of quantitative vs. qualitative analysis refers to whether mathematical processing is involved, whether or not the analysis includes numerical variables. Processing a categorical variable (e.g. values of happy, sad, or jealous as a response to "How are you feeling?") to produce numerical results (e.g. 30% happy, 50% sad, 20% surprised) would be classified as a Quantitative analysis. Processing the transcripts of interviews to categorize phrases and report themes identified across interviews would be classified as a Qualitative analysis. Qualitative analysis techniques may include phenomenology development from categorical codes, and may result in discovery or creation of theories that are unattainable through quantitative analysis.</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Khalid Shahin</td><td>2022-06-07 vote 5-0 by Brian S. Alper, Paola Rosati, Eric M Harvey, Mario Tristan, Harold Lehmann</td><td>2022-05-17 vote 5-4 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde
2022-05-24 vote 4-1 by Robin Ann Yurk, Eric M Harvey, Mario Tristan, Harold Lehmann, Jesus Lopez-Alcalde</td><td>2022-05-17 comments: What about ordinal data such as low/medium/high? I think I would view that as qualitative.
Suggest revising the definition to include examples of the analysis methods as alternative terms or comment for application: i.e. focus groups. There are many new software tools which apply quantitative methods to qualitative studies.
Quantitative and qualitative have categorical results I believe
Disagree - (Sorry, maybe you already know my comment here ;>). From my experience, qualitative analysis produces more than descriptive or categorical results, and uses a range of essential complex methodologies for producing results unattainable from trials. Some methods are inductive, others are deductive, or a mix of both. This modifies the results achievable. For example, phenomenology from categorical codes produces new understanding of people's lived experiences (deemed robust, even from a small but convenient sample of people), whereas grounded theory, from descriptive and categorical data results, discovers or creates novel theories, crucial for subsequent research scrutiny, even for a trial.
I would suggest defining qualitative analysis differently = A study design process in which data, analysed and coded to produce descriptive and categorical results, lead to new understanding of people's lived experiences or new theories, unattainable from quantitative studies, essential for future trials.
In my opinion, descriptive numerical results come from quantitative analysis also (for example, incidence of SARS-CoV-2 per 100,000 inhabitants). I am not an expert in qualitative research but I guess it tackles phenomena which can be observed but not measured.
2022-05-24 comment: Qualitative analysis provides a description or summary to understand exploratory experiences and patterns or themes in the data, which can provide the framework for additional data interpretation through other analysis such as quantitative analysis. An example of a qualitative method is focus groups. Technology such as natural language processing or other software exists to report the analysis.
2022-05-26 comment: I would leave the description for a quantitative analysis out of the definition. I would also delete the example of feelings as this can be quantified through satisfaction research which is a quantitative analysis. I would give an example of focus groups or natural language processing. The method involves identifying themes in narrative text.
The terms 'double-blinding' and 'triple-blinding' are not clearly and consistently defined terms but typically suggest blinding of intervention providers.</td><td>Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte, Kenneth Wilkins</td><td>2022-08-23 vote 6-0 by Mario Tristan, Cauê Monaco, Janice Tufte, Philippe Rocca-Serra, Eric Harvey, Robin Ann Yurk</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01062<a name="181513-SEVCO.5801062"> </a></td><td>Blinding of outcome assessors</td><td>A study design process in which the people determining the outcome are not informed of the intervention assignment.</td><td>Masking of outcome assessors involves actions to conceal information that could lead to their awareness of the intervention assigned to individual study participants to minimize the influence of such awareness on the determination of outcome measurement values.
The terms 'triple-blinding' and 'quadruple-blinding' are not clearly and consistently defined terms but may suggest blinding of outcome assessors.</td><td>Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte, Kenneth Wilkins</td><td>2022-08-23 vote 5-0 by Mario Tristan, Cauê Monaco, Janice Tufte, Philippe Rocca-Serra, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01063<a name="181513-SEVCO.5801063"> </a></td><td>Blinding of data analysts</td><td>A study design process in which the people managing or processing the data and statistical analysis are not informed of the intervention assignment.</td><td>The term 'data analysts' is meant to include any person who works with the data at any point between data collection and the reporting of analyzed results.
Masking of data analysts involves actions to conceal information that could lead to their awareness of the intervention assigned to individual study participants, such as noninformative labeling used to represent the study groups.
The terms 'triple-blinding' and 'quadruple-blinding' are not clearly and consistently defined terms but may suggest blinding of data analysts.</td><td>Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte, Kenneth Wilkins</td><td>2022-08-23 vote 5-0 by Mario Tristan, Cauê Monaco, Janice Tufte, Philippe Rocca-Serra, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01064<a name="181513-SEVCO.5801064"> </a></td><td>Allocation concealment</td><td>A study design process in which all parties influencing study enrollment and allocation to study groups are unaware of the group assignment for the study participant at the time of enrollment and allocation.</td><td>Allocation concealment occurs before and during the enrollment process and refers to limiting awareness of assignment during the process of recruitment and assignment to groups. Other blinding and masking terms refer to limiting awareness of the assignment during and after enrollment.</td><td>Brian S. Alper, Joanne Dehnbostel, Harold Lehmann</td><td>2022-08-30 vote 8-0 by Janice Tufte, nisha mathew,: Philippe Rocca-Serra, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Cauê Monaco, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>1</td><td style="white-space:nowrap">SEVCO:00999<a name="181513-SEVCO.5800999"> </a></td><td>Study Design Feature</td><td>An aspect or characteristic of a study design.</td><td>Study design is defined as a plan specification for how and what kinds of data will be gathered as part of an investigation which may produce testable explanations, conclusions and predictions or test a hypothesis.</td><td>Brian S. Alper, Joanne Dehnbostel, Paul Whaley, Janice Tufte</td><td>2022-03-29 vote 8-0 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco, Joanne Dehnbostel, Philippe Rocca-Serra, Robin Ann Yurk, nelle.stocquart</td><td>2022-03-15 vote 7-0 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte, Robin Ann Yurk (but then the definition changed with the creation of Study Design Process)
2022-03-22 vote 5-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart, Robin Ann Yurk</td><td>2022-03-15 comments: I agree with the definition but feel this is meta-vocabulary that helps us talk about elements of study design that are not part of the code system itself. So I don't know if it should be included in the code system as a code, or if we should be considering some other means for defining these terms (e.g. in documentation or guidance about SEVCO).
not a fan of the synonym "study design factor" as it could cause confusion with "Study Factor", Independent Variable. How is Study Design different from Study Protocol? "Study design planned process" could cover the following subtypes
For the comment for application include ...as a technical plan specification....
2022-03-22 comment: The definition of "Study design" seems to exclude the "statistical analysis". Am I right?</td><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01043<a name="181513-SEVCO.5801043"> </a></td><td>Multicentric</td><td>A study design feature in which two or more institutions are responsible for the conduct of the study.</td><td>This term may be used for studies conducted across multiple contexts (administrative or logistical) that are distinct from geographic locations, potentially introducing greater variability beyond multisite data collection.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte</td><td>2022-05-06 vote 6-0 by Mario Tristan, Robin Ann Yurk, Eric M Harvey, nisha mathew, Paola Rosati, Harold Lehmann</td><td>2022-03-15 vote 5-1 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Robin Ann Yurk
2022-04-26 vote 3-0 by Eric M Harvey, Robin Ann Yurk, Mario Tristan</td><td>2022-03-15 comments: Suggest add to multiple contexts (research)
a multicenter study is_a study. 'multicentric' would be a subtype of study_design_feature.
a concern here is that the current definition conflates 2 entities: a study and a characteristic of that study.
At the end of the day, it depends on how the modeling will be done, e.g. (study) (has_some_study_design_feature) (type of study_design_feature)
Or should it be "Multicenter data collection" ?</td><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01052<a name="181513-SEVCO.5801052"> </a></td><td>Includes patient-reported outcome</td><td>A study design feature in which one or more outcomes are reported directly from the patient without interpretation by a clinician or researcher.</td><td>Examples of patient-reported outcomes include symptoms, pain, quality of life, satisfaction with care, adherence to treatment, and perceived value of treatment. Data collection methods including surveys and interviews may obtain patient-reported outcomes. Reports derived from wearable devices would not typically include patient-reported outcomes. Such data may be coded with 'Real world data collection from monitoring procedures' (SEVCO:01046).</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel</td><td>2022-05-06 vote 7-0 by Mario Tristan, Robin Ann Yurk, Eric M Harvey, nisha mathew, Paola Rosati, Harold Lehmann, Janice Tufte</td><td>2022-04-26 vote 3-0 by Eric M Harvey, Robin Ann Yurk, Mario Tristan</td><td>2022-04-26 comment: Suggest adding to the comment for application: data methods to collect Patient Reported Outcomes such as survey data.
2022-05-06 comments: Perhaps direct the reader to "Patient generated health data" or whatever else is the SEVCO term for "wearables" or other data sources (e.g., bluetooth scale).
Suggest adding to the comment for application: data methods to collect Patient Reported Outcomes such as survey data.
2022-06-07 preferred term changed from "Patient-reported outcome" to "Includes patient-reported outcome" to maintain consistency with sibling concepts</td><td>The U.S. Food and Drug Administration (FDA) defines a patient-reported outcome (PRO) as “any report of the status of a patient’s health condition that comes directly from the patient, without interpretation of the patient’s response by a clinician or anyone else [1].” -- from https://dcricollab.dcri.duke.edu/sites/NIHKR/KR/PRO%20Resource%20Chapter.pdf</td><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01053<a name="181513-SEVCO.5801053"> </a></td><td>Includes patient-centered outcome</td><td>A study design feature in which one or more measures are outcomes that patients directly care about, i.e. outcomes that are directly related to patients' experience of their life.</td><td>In healthcare research, outcomes are effects on patients or populations, including changes to health status, behavior, or knowledge as well as patient satisfaction and quality of life. A patient-centered outcome qualifies the type of outcome as that which patients directly care about, i.e. outcomes that are directly related to patients' experience of their life. Examples of patient-centered outcomes include mortality, morbidity, symptoms, and quality of life. Some use 'clinical outcome' as synonymous with 'patient-centered outcome' while some use 'clinical outcome' to represent outcomes that would assessed as part of healthcare practice.</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Mario Tristan, Khalid Shahin</td><td>2022-06-07 vote 5-0 by Brian S. Alper, Paola Rosati, Eric M Harvey, Mario Tristan, Harold Lehmann</td><td>2022-05-06 vote 5-2 by Mario Tristan, Robin Ann Yurk, Eric M Harvey, nisha mathew, Paola Rosati, Harold Lehmann, Janice Tufte
2022-05-17 vote 7-2 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde, Janice Tufte</td><td>2022-04-26 comment: Suggest Adding to comment for application: Population Statistics such as mortality, morbidity. Development of clinical outcomes is based on using a framework such as the Donabedian model: Structure, Process, Outcomes where outcomes have some relationship to structural or process measures in clinical care.....
2022-05-06 comment: I understand the goal of "quantity or quality of life," but I think it's too abstract--and limiting ("Quantity of life" is limited to life expectancy). I haven't reviewed other definitions, but the flavor is, "outcomes that patients care about." ("Function" is left off the list of "examples", albeit there is a large overlap with "morbidity," "symptoms," and "quality of life.") (See the Comments for Application for Surrogate Outcome!)
2022-05-17 comments: The definition seems to be in the comment: "A clinical outcome qualifies the type of outcome as that which patients directly care about." The definition as proposed doesn't really make sense to me.
The definition, alternative terms and comment for application are correct. However, it is more specific to patient-reported outcomes. Clinical outcomes are broader and also include physiologic measures, condition-specific measures, etc. Clinical outcomes can be structural, process or outcome measures in the Donabedian framework and/or combined as composite outcomes.
While patient centered outcomes are typically considered clinical outcomes, they also indicate the observed outcomes by the clinician but not so much by the patient.
2022-05-26 comment: Suggest revising the term definition so it is more inclusive of all healthcare or clinical outcomes, such as mortality, morbidity, physiologic measures, symptoms, experiences. The term is not a study design but a measure.
For example:
A healthcare measure which captures results from healthcare populations, settings, structures, processes, and patients directly related to their care within healthcare settings, people, providers, and interventions.
Insert other alternative terms: Morbidity, Mortality, Symptoms, Experience of Care, Health Status, Quality of Life. Suggest deleting Patient Oriented Outcome, Patient Important Outcome, Patient Relevant Outcome, Patient Centered Outcome, Includes clinical outcomes.
2022-05-17 vote 6-2 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde</td><td>2022-05-06 comments: I would refer to whatever "clinical outcome" is defined as: "...indirect measures of clinical outcomes". The Comment for Application is redefining Clinical Outcome. I would spend that space pointing out that some surrogates are predictive (e.g., cholesterol levels, for MIs) and others are after the fact (e.g., sales of orange juice for treating the flu).
Suggestion--look at the wikipedia definition, then explore other mapping definitions. The current term definition and comment for application need improvement.
"In clinical trials, a surrogate endpoint is a measure of effect of a specific treatment that may correlate with a real clinical endpoint but does not necessarily have a guaranteed relationship. The National Institutes of Health defines surrogate endpoint as "a biomarker intended to substitute for a clinical endpoint". wikipedia...
2022-05-17 comments: Maybe edit to "An indirect measure of quantity or quality of life, presumed or believed to have an effect on clinical outcomes." I would focus on revising and defining surrogate first and then include a broad definition, not just specific to clinical outcomes.
2022-05-26 comment: Surrogate Outcome is a proxy measure for capturing the outcome of interest.
Alternative Terms: delete disease oriented and surrogate outcome measure. Suggest add: Proxy Outcome Measure.
Comment for application: Delete first 3 sentences.
Edit the last sentence so it reads: A surrogate outcome is a measure which captures an approximate measure. Examples of surrogate outcomes include survey measures and rating scales for a child by the parent or teacher. Geriatric rating scales from paid or professional caregivers for a seriously ill or geriatric patient are other examples.
2022-05-17 vote 8-2 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde, nelle.stocquart@kce.fgov.be, Mario Tristan
2022-05-31 vote 8-2 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde, nelle.stocquart@kce.fgov.be, Mario Tristan</td><td>2022-05-17 comment: Process Measure is included in the donabedian framework of structure, process, outcomes. Do you want to define just for healthcare process measure versus keep the definition broad to include such as a series of steps or tasks providing a measurement pathway for any industry and the examples in healthcare processes are....
2022-05-24 comments: repeat 2022-05-17 comment plus:
you need to provide more info, it is not clear as such
2022-05-31 comments:
you need to provide more info, it is not clear as such
Add comment for application with examples: A process measure captures the steps to care such as
Lab test orders, Referrals.... The literature defines a process measure in the Donabedian framework of structure, process, outcomes.
Scope Note
Works consisting of studies determining the effectiveness or utility of processes, personnel, and equipment. https://meshb.nlm.nih.gov/record/ui?ui=D023362</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01097<a name="181513-SEVCO.5801097"> </a></td><td>Derivation Goal</td><td>A study goal with the intent to generate a predictive algorithm.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins</td><td>2022-07-19 vote 8-0 by Paola Rosati, Cauê Monaco, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01098<a name="181513-SEVCO.5801098"> </a></td><td>Validation Goal</td><td>A study goal with the intent to determine the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose.</td><td>Procedures that may be assessed in validation studies include predictive algorithms, measurement instruments, and educational materials. Internal validation is tested in populations from the source used for derivation of the procedure. External validation is tested in populations that differ from the source used for derivation of the procedure.</td><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Harold Lehmann</td><td>2022-07-26 vote 6-0 by Jesus Lopez-Alcalde, Harold Lehmann, Paola Rosati, Eric Harvey, Janice Tufte, Mario Tristan</td><td>2022-07-19 vote 8-1 by Paola Rosati, Cauê Monaco, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Eric Harvey, Janice Tufte</td><td>2022-07-19 comment: "Relevance" is a value judgment that is not the hallmark of a validation study. (It requires elicitation of this judgment from experts or potential users.)
Accuracy, while difficult to measure, is certainly a validation aspiration (goal). Thus, validation of instruments assesses their sensitivity and specificity (measures of "accuracy").
Perhaps a broader goal is "performance", which would include accuracy but also applicability across sites or other external contexts.
Also, typo: "*from* the source used..."</td><td>https://meshb.nlm.nih.gov/record/ui?ui=D023361
MeSH Heading: Validation Study
Annotation:
This heading is used as a Publication Type for original report of the conduct or results of a specific validation study. A different heading VALIDATION STUDIES AS TOPIC is used for general design, methodology, economics, etc. of validation studies. CATALOGER: Do not use
Scope Note:
Works consisting of research using processes by which the reliability and relevance of a procedure for a specific purpose are established.
Entry Term(s):
Validation Studies</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:01088<a name="181513-SEVCO.5801088"> </a></td><td>Comparison Goal</td><td>A study design feature in which the study intent is to compare two or more interventions or exposures.</td><td/><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel</td><td>2022-06-21 vote 5-0 by Janice Tufte, Louis Leff, Mario Tristan, Eric M Harvey, Muhammad Afzal</td><td/><td/><td>MeSH term "Equivalence Trial" https://www.ncbi.nlm.nih.gov/mesh/2023172 Trial that aims to show a new treatment is no better and no worse than the standard treatment.
Year introduced: 2018
Do not include MeSH terms found below this term in the MeSH hierarchy.
Tree Number(s): V03.175.250.500.500.125
MeSH Unique ID: D000073843
Entry Terms:
Non-Inferiority Trial
Noninferiority Trial
Superiority Trial
Equivalence Clinical Trial</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01091<a name="181513-SEVCO.5801091"> </a></td><td>Comparative Effectiveness Goal</td><td>A study design feature in which the study intent is to compare two or more interventions with respect to benefits and/or harms.</td><td>In 2009, the Institute of Medicine committee defined comparative effectiveness research (CER) as: "Comparative effectiveness research is the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition or to improve the delivery of care. The purpose of CER is to assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and population levels."</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Khalid Shahin</td><td>2022-06-21 vote 5-0 by Janice Tufte, Louis Leff, Mario Tristan, Eric M Harvey, Muhammad Afzal</td><td/><td/><td>Defining comparative effectiveness research (CER) was the first order of business for the Institute of Medicine Committee on Initial Priorities for CER. The Institute of Medicine committee approached the task of defining CER by identifying the common theme in the 6 extant definitions. The definition follows: "Comparative effectiveness research is the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition or to improve the delivery of care. The purpose of CER is to assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and population levels." https://pubmed.ncbi.nlm.nih.gov/20473202/</td><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:01090<a name="181513-SEVCO.5801090"> </a></td><td>Comparative Efficacy Goal</td><td>A study design feature in which the study intent is to compare two or more interventions with respect to effectiveness in ideal conditions.</td><td>Efficacy is defined as effectiveness in ideal conditions. In this context, an efficacy goal is a type of effectiveness goal. Efficacy is used to distinguish the context from effectiveness in 'real-world' settings.</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Khalid Shahin</td><td>2022-06-28 vote 8-0 by Jesus Lopez-Alcalde, Philippe Rocca-Serra, Harold Lehmann, Muhammad Afzal, Janice Tufte, Louis Leff, Mario Tristan, Eric M Harvey</td><td/><td>2022-06-28 comment: what does "in ideal conditions" really mean? is it necessary ?</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:01092<a name="181513-SEVCO.5801092"> </a></td><td>Comparative Safety Goal</td><td>A study design feature in which the study intent is to compare two or more interventions with respect to harms.</td><td/><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Khalid Shahin</td><td>2022-06-28 vote 8-0 by Jesus Lopez-Alcalde, Philippe Rocca-Serra, Harold Lehmann, Muhammad Afzal, Janice Tufte, Louis Leff, Mario Tristan, Eric M Harvey</td><td/><td>2022-06-28 comment: no need to be "in ideal conditions" ? see related comment on. 
"comparative efficacy goal" class textual definition</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01093<a name="181513-SEVCO.5801093"> </a></td><td>Equivalence Goal</td><td>A study goal with the intent to compare two or more interventions or exposures and determine that any difference in effects is within a prespecified range representing absence of a meaningful difference.</td><td>An Equivalence Goal is only applicable with a Comparative study design.
The prespecified range representing absence of a meaningful difference may be defined with an equivalence margin.</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel</td><td>2022-07-19 vote 8-0 by Paola Rosati, Cauê Monaco, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Eric Harvey</td><td>2022-07-12 vote 4-2 by Philippe Rocca-Serra, Jesus Lopez-Alcalde, Paola Rosati, Janice Tufte, Harold Lehmann, Eric Harvey</td><td>2022-07-12 comments: harmonize the definition to match the pattern used for the other terms. e.g. "evaluation goal" is a study goal in which the objective is to assess the efficiency, effectiveness and impact of a given program, process, person or piece of equipment
so Equivalence Goal is a study goal in which the study intent is to compare two or more interventions or exposures and determine that any difference in effects is within a prespecified range representing absence of a meaningful difference
I think this definition is unclear. Is the equivalence goal an aim of a study? Which kind of study? My understanding is: Given a prespecified range (of results?) showing an absence of a meaningful (for which kind of subjects/previous research?) difference between two interventions/exposures, the equivalence goal assesses that there is no difference in effects. Is this the meaning of this definition? Which kind of study could give a valid result in terms of equivalence? An RCT?</td><td>MeSH term "Equivalence Trial" https://www.ncbi.nlm.nih.gov/mesh/2023172 Trial that aims to show a new treatment is no better and no worse than the standard treatment.
Year introduced: 2018
Do not include MeSH terms found below this term in the MeSH hierarchy.
Tree Number(s): V03.175.250.500.500.125
MeSH Unique ID: D000073843
Entry Terms:
Non-Inferiority Trial
Noninferiority Trial
Superiority Trial
Equivalence Clinical Trial</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01094<a name="181513-SEVCO.5801094"> </a></td><td>Non-inferiority Goal</td><td>A study goal with the intent to compare two or more interventions or exposures and determine that any difference in effects is below a prespecified value representing a threshold between a meaningful difference and absence of a meaningful difference.</td><td>A Non-inferiority Goal is only applicable with a Comparative study design.
The threshold between a meaningful difference and absence of a meaningful difference may be called a non-inferiority margin.</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel</td><td>2022-07-19 vote 8-0 by Paola Rosati, Cauê Monaco, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Eric Harvey</td><td>2022-07-12 vote 3-2 by Philippe Rocca-Serra, Jesus Lopez-Alcalde, Paola Rosati, Harold Lehmann, Eric Harvey</td><td>2022-07-12 comments: so Non-Inferiorty Goal is a study goal in which....
I have the same doubts already given for the equivalence goal</td><td>MeSH term "Equivalence Trial" https://www.ncbi.nlm.nih.gov/mesh/2023172 Trial that aims to show a new treatment is no better and no worse than the standard treatment.
Year introduced: 2018
Do not include MeSH terms found below this term in the MeSH hierarchy.
Tree Number(s): V03.175.250.500.500.125
MeSH Unique ID: D000073843
Entry Terms:
Non-Inferiority Trial
Noninferiority Trial
Superiority Trial
Equivalence Clinical Trial</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:01095<a name="181513-SEVCO.5801095"> </a></td><td>Superiority Goal</td><td>A study goal with the intent to compare two or more interventions or exposures and detect a difference in effects.</td><td>A Superiority Goal is only applicable with a Comparative study design.
A superiority study goal may be exploratory (to detect a difference) or confirmatory (to establish that a difference exists with a degree of certainty).
A superiority goal is not the opposite of a non-inferiority goal. A superiority goal uses a threshold of zero difference while a non-inferiority goal uses a threshold of a meaningful difference.
Some superiority comparisons are conducted following determination of non-inferiority.
Placebo-controlled trials are typically superiority studies.
Superiority, as commonly used, is 'statistical superiority,' with null used as the threshold of effect. An approach representing 'clinical superiority' would use the non-inferiority margin as the threshold of effect.</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Harold Lehmann</td><td>2022-07-19 vote 9-0 by Paola Rosati, Cauê Monaco, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Eric Harvey, Janice Tufte</td><td>2022-07-12 vote 4-1 by Philippe Rocca-Serra, Jesus Lopez-Alcalde, Paola Rosati, Harold Lehmann, Eric Harvey</td><td>2022-07-12 comment: so Superiority Goal is a study goal in which...
2022-07-19 comment: alter definition to "...and detect *meaningful* difference in effects" (in order to be consistent with Equivalence and Non-inferiority Study Goals')</td><td>MeSH term "Equivalence Trial" https://www.ncbi.nlm.nih.gov/mesh/2023172 Trial that aims to show a new treatment is no better and no worse than the standard treatment.
Year introduced: 2018
Do not include MeSH terms found below this term in the MeSH hierarchy.
Tree Number(s): V03.175.250.500.500.125
MeSH Unique ID: D000073843
Entry Terms:
Non-Inferiority Trial
Noninferiority Trial
Superiority Trial
Equivalence Clinical Trial</td><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:01100<a name="181513-SEVCO.5801100"> </a></td><td>Allocation ratio</td><td>A study design feature describing the intended relative proportion of assignment across groups.</td><td>The allocation ratio may be expressed as Treatment:Control, e.g., 2:1, or, in the case of two treatment arms and one control, e.g. 2:2:1.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins</td><td>2023-04-10 vote 5-0 by Harold Lehmann, Joanne Dehnbostel, Eric Harvey, Janice Tufte, Jesus Lopez-Alcalde</td><td/><td>2023-04-10 comment Perhaps add to Comment for Application something like, "The allocation ratio is usually expressed as Treatment:Control, e.g., 2:1 or 2:2:1, in the case of two treatment arms."</td><td/><td/><td/><td/><td/><td/></tr><tr><td>1</td><td style="white-space:nowrap">SEVCO:00001<a name="181513-SEVCO.5800001"> </a></td><td>Bias</td><td>A systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]).</td><td/><td>Brian S. Alper, Philippe Rocca-Serra, Joanne Dehnbostel, Mario Tristan, Harold Lehmann; Kenneth Wilkins, Janice Tufte, Muhammad Afzal</td><td>8/8 as of 2021-02-26: , Harold Lehmann, Khalid Shahin, Eric Harvey, Jesús López-Alcalde, Joanne Dehnbostel, Muhammad Afzal, Paola Rosati, Eric Au, 5/5 for second sentence as of 8/30/2021: Eric Harvey, Harold Lehmann, Mario Tristan, Bhagvan Kommadi, Janice Tufte</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00002<a name="181513-SEVCO.5800002"> </a></td><td>Selection Bias</td><td>A bias resulting from methods used to select subjects or data, factors that influence initial study participation, or differences between the study sample and the population of interest</td><td>Selection bias can occur before the study starts (inherent in the study protocol) or after the study starts (during study execution).</td><td>Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Asiyah Yu Lin, Joanne Dehnbostel</td><td>8/8 as of 3/5/2021 Eric Au, Alejandro Piscoya, Mario Tristan, Brian Alper, Zbys Fedorowicz, Bhagvan Kommadi, Eric Harvey, Muhammad Afzal</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00003<a name="181513-SEVCO.5800003"> </a></td><td>Participant Selection Bias</td><td>A selection bias resulting from methods used to select participating subjects, factors that influence initial study participation, or differences between the study participants and the population of interest</td><td/><td>Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Asiyah Yu Lin, Joanne Dehnbostel</td><td>10/10 as of 3/22/2021 Harold Lehmann, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Ahmad Sofi-Mahmudi, Tatyana Shamliyan, Muhammad Afzal, Paola Rosati, Joanne Dehnbostel, Marc Duteau</td><td>2021-03-08 vote 7-2 on "A selection bias where key characteristics of the participants differ systematically from the population of interest." 
by Harold Lehmann, Philippe Rocca-Serra, Joanne Dehnbostel, 2021-03-19 vote 10-1 on "A bias resulting from methods used to select participating subjects, factors that influence initial study participation, or differences between the study participants and the population of interest" by Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Asiyah Yu Lin</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00004<a name="181513-SEVCO.5800004"> </a></td><td>Inappropriate selection criteria</td><td>A selection bias resulting from inclusion and exclusion criteria used to select participating subjects that could result in differences between the study participants and the population of interest.</td><td/><td>Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Asiyah Yu Lin, Joanne Dehnbostel</td><td>10/10 as of 3/22/2021 Harold Lehmann, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Ahmad Sofi-Mahmudi, Tatyana Shamliyan, Muhammad Afzal, Paola Rosati, Joanne Dehnbostel, Marc Duteau</td><td>2021-03-19 vote 9-2 on "A bias resulting from inclusion and exclusion criteria used to select participating subjects that could make the included participants unrepresentative of the population of interest." by Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Asiyah Yu Lin</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00005<a name="181513-SEVCO.5800005"> </a></td><td>Inappropriate sampling strategy</td><td>A selection bias resulting from the sampling frame, sampling procedure, or methods used to recruit participating subjects that could result in differences between the study participants and the population of interest.</td><td/><td>Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Joanne Dehnbostel</td><td>10/10 as of 3/22/2021 Harold Lehmann, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Ahmad Sofi-Mahmudi, Tatyana Shamliyan, Muhammad Afzal, Paola Rosati, Joanne Dehnbostel, Marc Duteau</td><td>2021-03-19 vote 9-2 on "A bias resulting from the sample frame, sampling procedure, or methods used to recruit participating subjects that could make the included participants unrepresentative of the population of interest." by Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00014<a name="181513-SEVCO.5800014"> </a></td><td>Inappropriate data source for participant selection</td><td>Participant selection bias due to inappropriate data source for sampling frame.</td><td/><td>Brian S. 
Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin, Muhammad Afzal, Bhagvan Kommadi</td><td>6/6 as of 4/12/2021: KM Saif-Ur-Rahman, Bhagvan Kommadi, Joanne Dehnbostel, Paola Rosati, Jesús López-Alcalde, Tatyana Shamliyan</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00006<a name="181513-SEVCO.5800006"> </a></td><td>Non-representative sample</td><td>A selection bias due to differences between the included participants and the population of interest that distorts the research results (estimation of effect, association, or inference), limiting external validity or applicability.</td><td/><td>Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Joanne Dehnbostel</td><td>10/10 as of 3/22/2021 Harold Lehmann, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Ahmad Sofi-Mahmudi, Tatyana Shamliyan, Muhammad Afzal, Paola Rosati, Joanne Dehnbostel, Marc Duteau</td><td>2021-03-19 vote 10-1 on "Differences between the included participants and the population of interest that distorts the research results (estimation of effect, association, or inference), limiting external validity or applicability." by Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00008<a name="181513-SEVCO.5800008"> </a></td><td>Inadequate enrollment of eligible subjects</td><td>A selection bias in which insufficient enrollment of eligible subjects results in differences (recognized or unrecognized) between the included participants and the population of interest that distorts the research results.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Philippe Rocca-Serra, Marc Duteau, Khalid Shahin, Asiyah Yu Lin, Muhammad Afzal, Tatyana Shamliyan</td><td>11/11 as of 3/29/2021: Alejandro Piscoya, Eric Harvey, Bhagvan Kommadi, Ahmad Sofi-Mahmudi, Eric Au, Joanne Dehnbostel, Marc Duteau, Brian S. Alper, Jesús López-Alcalde, Tatyana Shamliyan, Paola Rosati</td><td>2021-03-26 vote 8-2 on "Inadequate enrollment = A selection bias due to a rate of study entry among eligible subjects that is not sufficient for the included sample to be considered representative of the population of interest." by Harold Lehmann, Tatyana Shamliyan, Muhammad Afzal, Eric Au, Paola Rosati, Mario Tristan, Alejandro Piscoya, Bhagvan Kommadi, Jesús López-Alcalde, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00012<a name="181513-SEVCO.5800012"> </a></td><td>Non-representative sample due to timing or duration of exposure</td><td>A selection bias in which the timing or duration of exposure influences the outcome, and the timing or duration of exposure in the sample does not represent that of the population of interest. This selection bias may occur when the selection for study participation is not coincident with the initiation of the exposure or intervention under investigation.</td><td/><td>Brian S. 
Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin</td><td>9/9 as of 4/9/2021: Alejandro Piscoya, KM Saif-Ur-Rahman, Bhagvan Kommadi, Eric Harvey, Joanne Dehnbostel, Paola Rosati, Harold Lehmann, Jesús López-Alcalde, Tatyana Shamliyan</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00013<a name="181513-SEVCO.5800013"> </a></td><td>Depletion of susceptibles</td><td>A non-representative sample due to exclusion of susceptible participants who have already had an outcome due to prior exposure. For example, the inclusion of prevalent users of a medication misrepresents the initial adverse effects rate by excluding persons who do not tolerate the medication.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin</td><td>9/9 as of 4/9/2021: Alejandro Piscoya, KM Saif-Ur-Rahman, Bhagvan Kommadi, Eric Harvey, Joanne Dehnbostel, Paola Rosati, Harold Lehmann, Jesús López-Alcalde, Tatyana Shamliyan</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00009<a name="181513-SEVCO.5800009"> </a></td><td>Post-baseline factors influence enrollment selection</td><td>A selection bias in which factors observed after study entry, baseline, or start of follow-up influence enrollment</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Philippe Rocca-Serra, Marc Duteau, Khalid Shahin, Asiyah Yu Lin, Harold Lehmann, Mario Tristan</td><td>9/9 as of 4/5/2021: Alejandro Piscoya, KM Saif-Ur-Rahman, Bhagvan Kommadi, Eric Harvey, Joanne Dehnbostel, Mario Tristan, Harold Lehmann, Jesús López-Alcalde, Tatyana Shamliyan</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00212<a name="181513-SEVCO.5800212"> </a></td><td>Participant selection bias due to early study termination</td><td>A selection bias due to premature closure of study enrollment.</td><td>'Early termination bias affecting enrollment' is a type of 'Post-baseline factors influence enrollment selection' which is defined as 'A selection bias in which factors observed after study entry, baseline, or start of follow-up influence enrollment.'
To express bias related to making the decision to terminate a study, use 'Early Study Termination Bias'.</td><td>Brian S. Alper, Harold Lehmann, Paul Whaley, Kenneth Wilkins, Muhammad Afzal</td><td>2022-04-08 vote 12-0 by Muhammad Afzal, Paul Whaley, Mario Tristan, Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, nelle.stocquart, nisha mathew, Harold Lehmann, Cauê Monaco</td><td>2022-03-25 vote 7-1 by Muhammad Afzal, Paul Whaley, Mario Tristan, Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk</td><td>2022-03-25 comment: Recommend simplifying the term and then add your test to the term definition.
For example, edit term to Early Study Termination Bias. Term definition should read: Selection Bias due to premature closing of a study enrollment for the participants....
Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin, Muhammad Afzal, Philippe Rocca-Serra</td><td>6/6 as of 4/26/2021: Eric Harvey, Bhagvan Kommadi, Harold Lehmann, Mario Tristan, Jesús López-Alcalde, Tatyana Shamliyan</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00262<a name="181513-SEVCO.5800262"> </a></td><td>Bias in Study Eligibility Criteria</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00273<a name="181513-SEVCO.5800273"> </a></td><td>Study eligibility criteria not prespecified</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00274<a name="181513-SEVCO.5800274"> </a></td><td>Study eligibility criteria not appropriate for review question</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00275<a name="181513-SEVCO.5800275"> </a></td><td>Study eligibility criteria ambiguous</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00276<a name="181513-SEVCO.5800276"> </a></td><td>Study eligibility criteria limits for study characteristics not appropriate</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00277<a name="181513-SEVCO.5800277"> </a></td><td>Study eligibility criteria limits for information sources not appropriate</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00263<a name="181513-SEVCO.5800263"> </a></td><td>Database search sources not appropriate</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00264<a name="181513-SEVCO.5800264"> </a></td><td>Non-database search sources inadequate</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00265<a name="181513-SEVCO.5800265"> </a></td><td>Search strategy not sensitive</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00266<a name="181513-SEVCO.5800266"> </a></td><td>Search strategy limits for information sources not appropriate</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00267<a name="181513-SEVCO.5800267"> </a></td><td>Study eligibility criteria not adhered to</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00268<a name="181513-SEVCO.5800268"> </a></td><td>Error in study selection not minimized</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00269<a name="181513-SEVCO.5800269"> </a></td><td>Language bias</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00270<a name="181513-SEVCO.5800270"> </a></td><td>Geography bias</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00271<a name="181513-SEVCO.5800271"> </a></td><td>One-sided reference bias</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00272<a 
name="181513-SEVCO.5800272"> </a></td><td>Publication Bias affecting study selection</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00016<a name="181513-SEVCO.5800016"> </a></td><td>Confounding Covariate Bias</td><td>A situation in which the effect or association between an exposure and outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared.</td><td>Association of any two variables includes direct associations and indirect associations through each of the variables having direct associations with a third variable.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Philippe Rocca-Serra, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi</td><td>2023-07-14 vote 5-0 by Paul Whaley, Harold Lehmann, Cauê Monaco, Jesus Lopez-Alcalde, Paola Rosati</td><td>2021-05-07 vote 4-2 on "Comparator Bias = A bias resulting from differences (other than in variables directly involved in the analysis) between the groups being compared." by KM Saif-Ur-Rahman, Harold Lehmann, Alejandro Piscoya, Paola Rosati, Tatyana Shamliyan, Bhagvan Kommadi
2021-05-10 vote 11-1 on "Confounding Covariate Bias = A bias resulting from differences in covariates (variables other than the exposure and outcome) between the groups being compared." by Eric Harvey, KM Saif-Ur-Rahman, Janice Tufte, Bhagvan Kommadi, Paola Rosati, Alejandro Piscoya, Harold Lehmann, Ahmad Sofi-Mahmudi, Eric Au, Jesus Lopez-Alcalde, Tatyana Shamliyan, Joanne Dehnbostel
AGREEMENT VOTE 8/8 as of 5/17/2021: Tatyana Shamliyan, Janice Tufte, Mario Tristan, Bhagvan Kommadi, Jesús López-Alcalde, Isaac Fwemba, Eric Harvey, Paola Rosati
On 2023-06-16 the Steering Group corrected a technical error in the definition (between A or B ... corrected to ... between A and B), and added a Comment for Application, so re-opened the term for vote.</td><td>A bias resulting from differences (other than in variables directly involved in the analysis) between the groups being compared. ---led to --- Which differences do you mean between the groups? This definition seems unclear. Defining a Comparator bias means to addresss some possible specific explanation. Or it is preferable to delete this bias. The definition is for selection bias resulting from nonrandom allocation of participants to interventions. Random allocation of trial participants to interentions would reduce this bias. Comprator seletion would not. A bias resulting from differences in covariates (variables other than the exposure and outcome) between the groups being compared -- led to I agree with the definition but I suggest detailing that the covariate is associated to the outcome</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00032<a name="181513-SEVCO.5800032"> </a></td><td>Allocation Bias</td><td>A confounding covariate bias resulting from methods for assignment of the independent variable by the investigator to evaluate a response or outcome.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi</td><td>8/8 as of 5/17/2021: Tatyana Shamliyan, Janice Tufte, Mario Tristan, Bhagvan Kommadi, Jesús López-Alcalde, Isaac Fwemba, Eric Harvey, Paola Rosati</td><td>2021-05-07 vote 5-1 on "Comparator Selection Bias = A comparator bias resulting from methods for selection of or allocation to groups for comparative analysis that have the potential to introduce differences (other than in variables directly involved in the analysis) between the groups being compared." by KM Saif-Ur-Rahman, Harold Lehmann, Alejandro Piscoya, Paola Rosati, Tatyana Shamliyan, Bhagvan Kommadi, 2021-05-10 vote 11-1 on "Allocation Bias = A confounding covariate bias resulting from methods for assignment of exposures in an interventional study." by Eric Harvey, KM Saif-Ur-Rahman, Janice Tufte, Bhagvan Kommadi, Paola Rosati, Alejandro Piscoya, Harold Lehmann, Ahmad Sofi-Mahmudi, Eric Au, Jesus Lopez-Alcalde, Tatyana Shamliyan, Joanne Dehnbostel</td><td>A comparator bias resulting from methods for selection of or allocation to groups for comparative analysis that have the potential to introduce differences (other than in variables directly involved in the analysis) between the groups being compared. -- led to--- Selection of comparators would not reduce differences between compared groups. A confounding covariate bias resulting from methods for assignment of exposures in an interventional study. --led to-- In my opinion, in an interventional study the investigator assigns the intervention, not the exposures. The differences in the covariates results from the methods for the assignment of the intervention. For example not concealed allocation.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00031<a name="181513-SEVCO.5800031"> </a></td><td>Inadequate allocation concealment</td><td>An allocation bias resulting from awareness of the assigned intervention before study enrolment and intervention assignment</td><td/><td>Brian S. 
Alper, Joanne Dehnbostel, Janice Tufte, Philippe Rocca-Serra</td><td>10/10 as of 6/11/2021: Names not captured</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00278<a name="181513-SEVCO.5800278"> </a></td><td>Bias due to non-randomized allocation</td><td>An allocation bias resulting from a process of assigning participants or subjects to different groups or conditions which is not random.</td><td>A confounding covariate bias is defined as a situation in which the effect or association between an exposure and outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared.
Allocation bias is defined as a confounding covariate bias resulting from *methods for assignment* of the independent variable by the investigator to evaluate a response or outcome.
Methods for assignment that are not random may introduce confounding with measured or unmeasured variables.
Non-random methods of generation of an allocation sequence may introduce a confounding covariate bias through associations with one or more non-random variables related to sequence generation. A non-random allocation sequence may be described as a predictable sequence in mathematical terms.
The SEVCO term [Quasi-Randomized assignment](https://fevir.net/resources/CodeSystem/27270#SEVCO:01004) is defined as an interventional study design with a method of allocation that is not limited to random chance but is intended to produce similar baseline groups for experimentation. Although Quasi-Randomized assignment is "intended to produce similar baseline groups" the term is classified as a type of [Non-randomized assignment](https://fevir.net/resources/CodeSystem/27270#SEVCO:01005). Examples of non-random methods (which may be called 'partially randomized' or 'quasi-random') include every other participant, day of the week, even/odd identification number, birth date, etc.</td><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal</td><td>2023-07-28 vote 5-0 by Brian S. Alper, Paul Whaley, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann</td><td>2023-05-12 vote 4-1 by Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Jesus Lopez-Alcalde, Harold Lehmann
2023-05-26 vote 5-1 by Harold Lehmann, Jesus Lopez-Alcalde, Sunu Alice Cherian, Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel
2023-06-09 vote 4-1 by Eric Harvey, Cauê Monaco, Paul Whaley, Sunu Alice Cherian, Harold Lehmann
2023-06-16 vote 3-2 by Eric Harvey, Paul Whaley, Sunu Alice Cherian, Harold Lehmann, Paola Rosati
2023-07-14 vote 4-1 by Jesus Lopez-Alcalde, Paul Whaley, Harold Lehmann, Cauê Monaco, Paola Rosati</td><td>2023-05-12 comment: Does this term definition actually define "inadequate" = potentially predictable sequence? Seems like an inappropriate allocation bias perhaps
2023-05-26 comment: Definition: Methods of allocating study participants to treatment comparison groups that are not random, but are intended to produce similar groups.
Alternative terms: Quasi random allocation
2023-06-09 comment: The comment for application describes "unrecognised associations", but the definition talks about "potentially predictable", which implies exploiting a recognised association to break blinding. I am not sure it can be both of these.
2023-06-16 comments:
Type of bias that arises in research studies when the process of assigning participants or subjects to different groups or conditions is not random.
I think I remember my original concern now - in the definition, the problem is not that the sequence is predictable, it is that the sequence is associated with another variable, thus introducing this other variable as a confounder. Unless it is about the investigator being able to break blinding, in which case the concept of the sequence being predictable is important.
2023-07-14 comment: I think non-random methods are those clearly non-random, such as allocation by provider's preferences. However, quasi-random methods are those that apply a method that attempts to be random but isn't. Example: day of the week.
2023-07-28 comment: For consistency, should we call it, "Confounding Bias due to non-randomized allocation"?</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00033<a name="181513-SEVCO.5800033"> </a></td><td>Comparator Selection Bias</td><td>A confounding covariate bias resulting from methods used to select participating subjects, or factors that influence study participation, for the comparator group.</td><td>This situation is more commonly related to observational research.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi</td><td>8/8 as of 5/17/2021: Tatyana Shamliyan, Janice Tufte, Mario Tristan, Bhagvan Kommadi, Jesús López-Alcalde, Isaac Fwemba, Eric Harvey, Paola Rosati</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00034<a name="181513-SEVCO.5800034"> </a></td><td>Confounding difference</td><td>A confounding covariate bias in which the unequal distribution of a potentially distorting variable is recognized.</td><td>The potentially distorting variable is a covariate, and not the exposure or the outcome. Even if adjusted for in the analysis, a risk of bias can be present.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Philippe Rocca-Serra</td><td>8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati</td><td>2021-05-07 vote 5-1 on "Recognized Difference with Potential for Confounding = A comparator bias resulting from known differences (other than in variables directly involved in the analysis) between the groups being compared." by KM Saif-Ur-Rahman, Harold Lehmann, Alejandro Piscoya, Paola Rosati, Tatyana Shamliyan, Bhagvan Kommadi, , 2021-05-24 vote 6-1 on "A confounding covariate bias in which the unequal distribution of a potentially distorting variable is recognized." by Harold Lehmann, Eric Harvey, KM Saif-Ur-Rahman, Bhagvan Kommadi, janice tufte, Paola Rosati, Jesus Lopez-Alcalde</td><td>A comparator bias resulting from known differences (other than in variables directly involved in the analysis) between the groups being compared. -- led to-- This defintion seems tricky. If you find any diference between groups that can go astray with analysis you simply address the potention for confounding explicitly in the discussion session of yoru protocol/paper The potnetial for confounding needs to be consideriend in the protocol, and specifically addresssed int eh post-analysis to avoid any further bias. The term comparator bias is misleading since differnece between groups would not be reduced by selecting different comparators. If this is recognized and adjusted for, is it still a bias? Seems that we need to address this circumstance.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00280<a name="181513-SEVCO.5800280"> </a></td><td>Confounding by time of observation</td><td>A confounding covariate bias in which the distorting variable is the time at which the outcome is measured or observed.</td><td>A confounding covariate bias is defined as a situation in which the effect or association between an exposure and outcome is distorted by another variable. 
For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared.
The time at which the outcome is measured or observed may be absolute (e.g. a specific date) or relative (e.g. 3 months after study enrollment).
To understand "confounding by time of observation" consider the following example:
An observational study is comparing patients with asthma taking Superdrug and patients with asthma not taking Superdrug. The outcome of interest is mortality. The patients taking Superdrug are observed for their full duration of exposure to Superdrug. For comparison, the control group not receiving Superdrug is measured during a 1-year calendar period.
For the mortality outcome comparing Superdrug vs. no Superdrug, the time of observation for the control group is consistently 1 year but for the Superdrug group the time of observation varies for each patient. This comparison is confounded by the time of observation.</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Paul Whaley</td><td>2023-10-06 vote 5-0 by Jesus Lopez-Alcalde, Eric Harvey, Paul Whaley, Harold Lehmann, Mario Tristan</td><td>2023-06-09 vote 3-1 by Eric Harvey, Cauê Monaco, Paul Whaley, Sunu Alice Cherian
2023-06-16 vote 4-1 by Paola Rosati, Eric Harvey, Paul Whaley, Sunu Alice Cherian, Harold Lehmann
2023-07-14 vote 7-0 by Muhammad Afzal, Joanne Dehnbostel, Khalid Shahin, Jesus Lopez-Alcalde, Paul Whaley, Harold Lehmann, Cauê Monaco
THEN REOPENED
2023-08-04 due to comment that suggests removing parenthetical from definition</td><td>2023-06-09 comment: The comment for application is not sufficiently informative. I am also not sure I understand what the definition means - what is the importance of recognition of unequal distribution of follow-up time?
2023-06-16 comments: A confounding that occurs when the relationship between an exposure or intervention and an outcome is confounded by the time at which the outcome is measured or observed.
Alternate terms: time-varying confounding
Comment for application: This occurs when both the exposure and the outcome change over time, and there are other time-dependent factors that influence the outcome
The Comment for Application seems to be repeating the definition of the parent term. I thought we usually add details specific to the current term.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00281<a name="181513-SEVCO.5800281"> </a></td><td>Lead time bias</td><td>A confounding covariate bias in which the distorting variable is the length of time that the participant has had the condition of interest at study enrollment.</td><td>A Confounding Covariate Bias is defined as a situation in which the effect or association between an exposure and outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared.
A lead time bias is often manifest as a distortion overestimating the apparent time surviving with a disease caused by bringing forward the time of its diagnosis (https://catalogofbias.org/biases/lead-time-bias/).
Lead time bias is a type of bias that occurs in medical screening or diagnostic tests when the early detection of a disease or condition artificially appears to improve survival or prognosis, even if it does not actually provide a true benefit in terms of overall health outcomes.
Lead time refers to the amount of time between the detection of a disease through early screening or diagnostic testing and the time when the disease would have been clinically detected without screening.</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Paul Whaley</td><td>2023-08-18 vote 5-0 by Paul Whaley, Eric Harvey, Mario Tristan, Cauê Monaco, Harold Lehmann</td><td>2023-06-09 vote 3-1 by Eric Harvey, Cauê Monaco, Paul Whaley, Sunu Alice Cherian
2023-06-16 vote 3-2 by Eric Harvey, Paul Whaley, Sunu Alice Cherian, Harold Lehmann, Paola Rosati
2023-07-14 vote 6-0 by Muhammad Afzal, Joanne Dehnbostel, Jesus Lopez-Alcalde, Paul Whaley, Harold Lehmann, Cauê Monaco
THEN REOPENED
2023-08-04 due to comment that suggests removing parenthetical from definition</td><td>2023-06-09 comment: I am not sure I can successfully parse the syntax of the definition. While I think I understand what is meant, I feel it could be phrased more clearly.
2023-06-16 comments: Lead time bias is a type of bias that occurs in medical screening or diagnostic tests when the early detection of a disease or condition artificially appears to improve survival or prognosis, even if it does not actually provide a true benefit in terms of overall health outcomes
Comment for application: Lead time refers to the amount of time between the detection of a disease through early screening or diagnostic testing and the time when the disease would have been clinically detected without screening.
This definition seems difficult to understand: does it convey that lead time bias is related to the potentially distorting variable of the length of time chosen in the study in which some participants could have confounding differences between their diagnosis of the condition of interest and the time of enrolment?
I have some problem in understanding, sorry.</td><td>Lead time bias
A distortion overestimating the apparent time surviving with a disease caused by bringing forward the time of its diagnosis
https://catalogofbias.org/biases/lead-time-bias/</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00282<a name="181513-SEVCO.5800282"> </a></td><td>Confounding influencing adherence to intervention</td><td>A confounding covariate bias in which the distorting variable is associated with deviations from the intended intervention.</td><td>A confounding covariate bias is defined as a situation in which the effect or association between an exposure and outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared.
For 'Confounding influencing adherence to intervention', the association of the distorting variable and the exposure is specific to deviations from the intended exposure (intended intervention). Deviations from the intended intervention may include deviations from the intervention protocol or lack of adherence. Lack of adherence includes imperfect compliance, cessation of intervention, crossovers to the comparator intervention and switches to another active intervention.
The term 'Confounding influencing adherence to intervention' is distinct from 'Performance Bias' (including 'Nonadherence of participants' or 'Imbalance in deviations from intended interventions') in that an additional variable (the distorting variable or confounding covariate) is acting as a confounder, while the 'Performance Bias' may occur with or without any differences in a third variable.</td><td>Brian Alper, Joanne Dehnbostel, Harold Lehmann, Paul Whaley, Kenneth Wilkins</td><td>2023-09-29 vote 5-0 by Joanne Dehnbostel, Harold Lehmann, Paul Whaley, Eric Harvey, Mario Tristan</td><td>2023-07-28 vote 3-1 by Eric Harvey, Harold Lehmann, Jesus Lopez-Alcalde, Paul Whaley
2023-08-04 vote 5-0 by Joanne Dehnbostel, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann, Mario Tristan BUT comment suggests removing parenthetical from definition
2023-08-11 vote 3-1 by Mario Tristan, Cauê Monaco, Eric Harvey, Joanne Dehnbostel</td><td>2023-06-02 comment from steering group: need to see the background to ROBINS-I to understand context for this term
2023-07-28 comment: I think the definition is good but the comment for application should specifically address this term and not just duplicate the definition of confounding covariate bias.
2023-08-11 comment: Is this the same as compliance bias, or compliance bias ("https://catalogofbias.org/biases/compliance-bias/") is a subtype of this? If "compliance bias" is a synonym, should be added as such. If not, should be added as a separate term</td><td>trigger question from ROBINS-I: 1.3. Were intervention discontinuations or switches likely to be related to factors that are prognostic for the outcome?</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00284<a name="181513-SEVCO.5800284"> </a></td><td>Confounding by indication</td><td>A confounding covariate bias in which the distorting variable is the reason for receiving an exposure.</td><td>A Confounding Covariate Bias is defined as a situation in which the effect or association between an exposure and outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared.
The term 'indication' is derived from the medical community, in which the reason that an intervention is provided is called the indication. A reason for not providing an intervention may be called a 'contraindication' rather than 'indication to not provide'.
For example, people exposed to chemotherapy have higher mortality. This observation can easily be confounded by people exposed to chemotherapy having a higher rate of cancer (as the reason for receiving the chemotherapy).</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal</td><td>2023-08-18 vote 5-0 by Paul Whaley, Eric Harvey, Mario Tristan, Cauê Monaco, Joanne Dehnbostel</td><td>2023-05-12 vote 5-0 by Muhammad Afzal, Brian S. Alper, Joanne Dehnbostel, Jesus Lopez-Alcalde, Harold Lehmann
BUT THEN TERM CHANGED WITH HIERARCHY CHANGE on 2023-06-30
2023-07-14 vote 2-1 by Jesus Lopez-Alcalde, Paul Whaley, Harold Lehmann
2023-07-28 vote 3-1 by Paul Whaley, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann
2023-08-04 vote 5-0 by Joanne Dehnbostel, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann, Mario Tristan BUT comment suggests removing parenthetical from definition</td><td>2023-05-12 comment: For Comment for Application, I thought we usually put the definition of the parent term first, and the comments about this child. So I would arrange the current 1, 2, 3 paragraphs as 2, 1, 3.
And I think what is now the first paragraph should start with, "A confounding different bias..."
2023-07-01 comment: I would add to the definition "or lack of". Thus: "A confounding covariate bias in which the confounder (distorting variable) is the reason for (or for lack of) an intended exposure.
2023-07-14 comment: I think the definition is good but the comment for application should specifically address this term in more detail than providing a definition for "indication". It is a complex concept and I am not sure I understand what is happening with this bias.
2023-07-28 comment: I still feel that an example of how the reason for receiving an exposure can end up being a confounder would be helpful.</td><td>Confounding by indication
A distortion that modifies an association between an exposure and an outcome, caused by the presence of an indication for the exposure that is the true cause of the outcome.
from https://catalogofbias.org/biases/confounding-by-indication/</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00388<a name="181513-SEVCO.5800388"> </a></td><td>Confounding by contraindication</td><td>A confounding covariate bias in which the distorting variable is the reason for not receiving an exposure.</td><td>A Confounding Covariate Bias is defined as a situation in which the effect or association between an exposure and outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared.
The term 'indication' is derived from the medical community, in which the reason that an intervention is provided is called the indication. A reason for not providing an intervention may be called a 'contraindication' rather than 'indication to not provide'.
For example, people with cancer exposed to surgery for curative resection have lower mortality than other people with cancer. This observation can easily be confounded by people exposed to surgery for curative resection having a lower rate of metastatic cancer (which is a contraindication to such a surgery).</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Caue Monaco</td><td>2023-08-18 vote 5-0 by Paul Whaley, Eric Harvey, Mario Tristan, Cauê Monaco, Joanne Dehnbostel</td><td>2023-07-28 vote 3-1 by Paul Whaley, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann
2023-08-04 vote 5-0 by Joanne Dehnbostel, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann, Mario Tristan BUT comment suggests removing parenthetical from definition</td><td>2023-07-28 comment: I still feel that an example of how the reason for receiving an exposure can end up being a confounder would be helpful.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00390<a name="181513-SEVCO.5800390"> </a></td><td>time-varying confounding affected by past exposure</td><td>A confounding covariate bias in which the distorting variable is itself influenced by the exposure.</td><td>Confounding Covariate Bias is defined as a situation in which the effect or association between an exposure and outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared.
To distinguish "confounding by time of observation" from "time-varying confounding affected by past exposure" consider the following example:
An observational study is comparing patients with asthma taking Superdrug and patients with asthma not taking Superdrug. The outcome of interest is mortality, both for association with the dose of Superdrug and compared to not receiving Superdrug. For comparison, the control group not receiving Superdrug is measured during a 1-year calendar period.
For the mortality outcome comparing Superdrug vs. no Superdrug, the time of observation for the control group is consistently 1 year but for the Superdrug group the time of observation varies for each patient. This comparison is confounded by the time of observation.
For the mortality outcome comparing high-dose vs. low-dose Superdrug, the confounding variable of asthma exacerbation rate is complicated in several ways. First, the asthma exacerbation rate is associated with the outcome (mortality) independent from the effects of Superdrug. Second, the asthma exacerbation rate may influence the exposure (the dose of Superdrug which is increased if frequent asthma exacerbations) and the exposure (higher dose of Superdrug) may influence the confounder (reducing the asthma exacerbation rate). This comparison of high-dose vs. low-dose Superdrug for effects on mortality is distorted by time-varying confounding affected by past exposure.</td><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Harold Lehmann, Caue Monaco</td><td>2023-10-06 vote 5-0 by Jesus Lopez-Alcalde, Eric Harvey, Paul Whaley, Harold Lehmann, Mario Tristan</td><td/><td>2023-09-01 comment (with No vote): This term seems unnecessary. Describes a bias rarely seen.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00017<a name="181513-SEVCO.5800017"> </a></td><td>Performance Bias</td><td>A bias resulting from differences between the received exposure and the intended exposure.</td><td>Such differences could be the administration of additional interventions that are inconsistent with the study protocol, or non-adherence by the interventionalists or study participants to their assigned intervention. Such differences may occur based on assignment to intervention or may occur due to adherence to intervention.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Philippe Rocca-Serra</td><td>8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati</td><td>2021-05-24 vote 5-2 on "A bias resulting from differences between the received exposure and the intended exposure. Such differences could be the administration of additional interventions that are inconsistent with the study protocol, or non-adherence by the interventionalists or study participants to their assigned intervention. " by Harold Lehmann, Eric Harvey, KM Saif-Ur-Rahman, Bhagvan Kommadi, janice tufte, Paola Rosati, Jesus Lopez-Alcalde</td><td>Definition of performance bias should be modified, Performance bias should involve the blinding at participant level and implementer level in definition.I would add that the differences must be present between the study arms In a RCT with an active control (for example drug A vs drug B) both study arms may have had low adherence but if these deviations from the protocol occurred homogeneously accross arms the effect estimate may not be distorted (biased). As a reviewer, I would not penalise this estimate due to high risk of performance bias. So, concerning the definition, I would propose "A bias resulting from differences accross the study arms between the [...]"</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00035<a name="181513-SEVCO.5800035"> </a></td><td>Inadequate blinding of participants</td><td>A performance bias due to awareness of the allocated intervention by participants</td><td>Inadequate blinding of participants is applied when there is awareness of assigned intervention AFTER intervention assignment. If there is awareness BEFORE study enrolment and intervention assignment, this would be Inadequate allocation concealment.
The term "Inadequate blinding of participants" is used to denote the TYPE of bias. Separate terms for the RATING of risk of bias are used to report the likelihood of the presence and influence of the type of bias.</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi, Philippe Rocca-Serra</td><td>8/8 as of 6/14/2021: Eric Harvey, Eric Au, Bhagvan Kommadi, Ahmad Sofi-Mahmudi, Erfan Shamsoddin, Janice Tufte, Joanne Dehnbostel, Leo Orozco,</td><td>2021-06-07 vote 7-1 on "Inadequate blinding of participants = A performance bias due to awareness of the allocated intervention by participants" by KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati, 2021-06-11 vote 9-1 on same</td><td>Need to distinguish blinding of intervention from blinding of allocation Inadequate blinding of participants does not always imply bias. Besides, it can also imply detection bias in patient reported outcomes</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00036<a name="181513-SEVCO.5800036"> </a></td><td>Inadequate blinding of intervention deliverers</td><td>A performance bias due to awareness of the allocated intervention by individuals providing or delivering the intervention</td><td>Inadequate blinding of intervention deliverers is applied when there is awareness of assigned intervention AFTER intervention assignment. If there is awareness BEFORE study enrolment and intervention assignment, this would be Inadequate allocation concealment.
The term noted here is used to denote the TYPE of bias. Separate terms for the RATING of risk of bias are used to report the likelihood of the presence and influence of the type of bias.</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi, Philippe Rocca-Serra</td><td>8/8 as of 6/14/2021: Eric Harvey, Eric Au, Bhagvan Kommadi, Ahmad Sofi-Mahmudi, Erfan Shamsoddin, Janice Tufte, Joanne Dehnbostel, Leo Orozco,</td><td>2021-06-07 vote 7-1 on "Inadequate blinding of participants = A performance bias due to awareness of the allocated intervention by individuals providing or delivering the intervention" by KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati, 2021-06-11 vote 9-1 on same</td><td>Need to distinguish blinding of intervention from blinding of allocation; Should we use the term interventionalist or interventionist? Inadequate blinding of intervention deliverers does not always imply Performance bias</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00037<a name="181513-SEVCO.5800037"> </a></td><td>Deviation from study intervention protocol</td><td>A performance bias in which the intervention received differs from the intervention specified in the study protocol</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi</td><td>8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00038<a name="181513-SEVCO.5800038"> </a></td><td>Deviation from standard of care</td><td>A performance bias in which the intervention or exposure received differs from the usual practice or expected care</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi</td><td>8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00039<a name="181513-SEVCO.5800039"> </a></td><td>Nonadherence of implementation</td><td>A performance bias in which the intervention deliverers do not completely adhere to the expected intervention</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi</td><td>8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati</td><td/><td>interventionist vs. intervention deliverer</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00040<a name="181513-SEVCO.5800040"> </a></td><td>Nonadherence of participants</td><td>A performance bias in which the participants do not completely adhere to the expected intervention or exposure</td><td/><td>Brian S. 
Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi</td><td>8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati</td><td/><td>is known or unknown</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00041<a name="181513-SEVCO.5800041"> </a></td><td>Imbalance in deviations from intended intervention</td><td>A performance bias in which the degree of performance bias is unequally distributed between groups being compared</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi</td><td>8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00019<a name="181513-SEVCO.5800019"> </a></td><td>Attrition Bias</td><td>A bias due to absence of expected participation or data collection after selection for study inclusion.</td><td/><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Philippe Rocca-Serra, Mario Tristan, Janice Tufte, Erfan Shamsoddin</td><td>13/13 as of 6/18/2021: Eric Au, Harold Lehmann, Erfan Shamsoddin, Ahmad Sofi-Mahmudi, Mario Tristan, Eric Harvey, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati</td><td>6/14/2021-06-14 vote 7-1 on "Attrition Bias = A bias due to absence of expected participation or data collection after study enrollment." by, Eric Harvey, Eric Au, Bhagvan Kommadi, Ahmad Sofi-Mahmudi, Erfan Shamsoddin, Janice Tufte, Joanne Dehnbostel, Leo Orozco,</td><td>The phrase "after study enrolment" might be confusing. Does enrolment apply to retrospective observational studies?</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00286<a name="181513-SEVCO.5800286"> </a></td><td>Attrition bias due to participant attrition</td><td>A bias due to absence of expected participation due to participant dropout, withdrawal or non-participation after selection for study inclusion.</td><td/><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel</td><td>2023-05-12 vote 6-0 by Muhammad Afzal, Brian S. Alper, Joanne Dehnbostel , Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey</td><td/><td>2023-04-28 comment: I would not detail that 20%: it is misleading and not evidence-based</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00287<a name="181513-SEVCO.5800287"> </a></td><td>Attrition bias due to missing data</td><td>A bias due to data loss or absence of data collection from participants after selection for study inclusion.</td><td/><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel</td><td>2023-05-12 vote 6-0 by Muhammad Afzal, Brian S. 
Alper, Joanne Dehnbostel , Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey</td><td/><td>2023-04-28 comment: I would not detail that 20%: it is misleading and not evidence-based</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00386<a name="181513-SEVCO.5800386"> </a></td><td>Attrition bias due to missing outcome data</td><td>An attrition bias due to missing data specific to the dependent variable.</td><td>In a situation of repeated measures outcomes, attrition bias due to missing outcome data can occur if one or more measurements are missing.</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin</td><td>2023-06-16 vote 5-0 by Joanne Dehnbostel, Eric Harvey, Paul Whaley, Sunu Alice Cherian, Harold Lehmann</td><td>2023-05-19 5-1 Muhammad Afzal, Janice Tufte, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann, Joanne Dehnbostel
2023-06-09 vote 3-1 by Eric Harvey, Cauê Monaco, Paul Whaley, Harold Lehmann</td><td>The information in the parentheses, "(or data on an independent variable)," is unclear in its intended meaning. To improve clarity, we could revise the definition.
2023-06-09 comment: The definition is too difficult to parse, and probably too similar to the preferred term. The comment for application is also very difficult to read.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00288<a name="181513-SEVCO.5800288"> </a></td><td>Attrition bias due to missing exposure data</td><td>An attrition bias due to missing data specific to the independent variable(s) of primary interest, such as exposure or intervention.</td><td>If coding a bias related to the classification of exposure, misclassification of exposure may be coded as Exposure Detection Bias, but if the data is excluded from analysis it may then be coded as Attrition bias due to missing exposure data.</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin, Paul Whaley</td><td>2023-06-09 vote 6-0 by Cauê Monaco, Eric Harvey, Paul Whaley, Harold Lehmann, Jesus Lopez-Alcalde, Sunu Alice Cherian</td><td>2023-05-19 5-1 Muhammad Afzal, Janice Tufte, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann, Joanne Dehnbostel</td><td>The information in the parentheses, "(or data on an independent variable)," is unclear in its intended meaning. To improve clarity, we could revise the definition.
2023-06-09 comment: This needs a comment for application, but the definition is clearer than for "attrition bias due to missing outcome data".</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00289<a name="181513-SEVCO.5800289"> </a></td><td>Attrition bias due to missing modifier data</td><td>An attrition bias due to missing data specific to a confounder or effect modifier</td><td>The term modifier is intended to be broad, including variables used for modeling interactions, stratification factors to account for effect modification, or other variables such as mediators that need to be accounted for when modeling the relationship between the outcome and exposure.</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin</td><td>2023-05-12 vote 5-0 by Joanne Dehnbostel, Janice Tufte, Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00387<a name="181513-SEVCO.5800387"> </a></td><td>Attrition bias due to missing data about attrition</td><td>An attrition bias due to missing data specific to the extent of or reasons for missing data.</td><td>Attrition bias due to missing data is defined as a bias due to data loss or absence of data collection from participants after selection for study inclusion. Data about the amount of missing data and data about the reasons for missing data are types of data that can also be missing. For example, in a time-to-event study, the reason a participant is censored might be missing and missing such data may interfere with distinguishing informative from non-informative censoring.</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel</td><td>2023-06-16 vote 5-0 by Joanne Dehnbostel, Eric Harvey, Paul Whaley, Sunu Alice Cherian, Harold Lehmann</td><td/><td>2023-06-16 comments: Time-to-event should be hyphenated
Funnily enough, this came up straight after our call in relation to another bias project I am working on, so I would consider this addition useful!</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00290<a name="181513-SEVCO.5800290"> </a></td><td>Imbalance in missing data</td><td>An attrition bias in which the degree of missing data is unequally distributed between groups being compared.</td><td/><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel</td><td>2023-05-12 vote 5-0 by Joanne Dehnbostel, Janice Tufte, Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00291<a name="181513-SEVCO.5800291"> </a></td><td>Inadequate response rate</td><td>An attrition bias in which the reason for absence of data collection is a low response rate to data collection surveys.</td><td/><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel</td><td>2023-05-12 vote 5-0 by Joanne Dehnbostel, Janice Tufte, Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00020<a name="181513-SEVCO.5800020"> </a></td><td>Detection Bias</td><td>A bias due to distortions in any process involved in the determination of the recorded values for a variable.</td><td>Detection of the value of the variable comprises three processes involved in the determination of the recorded values for the variable: ascertainment (providing the opportunity for assessment), assessment (measurement and/or classification), and documentation (recording of data values for analysis).</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Philippe Rocca-Serra, Mario Tristan, Harold Lehmann, Janice Tufte, Muhammad Afzal; Paul Whaley</td><td>2022-01-28 vote 9-0 by Mario Tristan, Janice Tufte, Robin Ann Yurk, Brian S. Alper, C P Ooi, Harold Lehmann, Paola Rosati, Jesus Lopez-Alcalde, Paul Whaley</td><td>6/14/2021-06-14 vote 7-1 on "Detection Bias = A bias due to distortions in how variable values (data) are determined (measured, classified or ascertained)." by, Eric Harvey, Eric Au, Bhagvan Kommadi, Ahmad Sofi-Mahmudi, Erfan Shamsoddin, Janice Tufte, Joanne Dehnbostel, Leo Orozco
AGREEMENT REACHED 10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper --- for DEFINITION OF: A bias due to distortions in how variable values (data) are determined. COMMENT FOR APPLICATION: Determination may include ascertainment or assessment (classification or measurement).
2022-10-14 vote 3-1 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paul Harris
2022-01-21 vote 6-1 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paul Harris, Paul Whaley, Alejandro Piscoya, Philippe Rocca-Serra</td><td>We need to state that this bias relates to the "outcome" -- The ROB-1 says the term "outcome assessment" as an alternative for detection bias. The ROBINS-1 says that "Non-differential misclassification is unrelated to the outcome and will usually bias the estimated effect of intervention towards the null". Still though, this leads to inadvertent deviations in the outcome assessment. I would suggest to at least state that this bias relates to outcome assessment. I remember Joanne saying that we will add a few "child concepts" later on and if that is the case here, then it is fine. Nevertheless, the RoB2 suggests not to use these terms to prevent "confusion" and does not actually agree with these sub-classifications (the fist page of the introduction section). Alternative terms according to (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5130591/): "Observer Bias", "Ascertainment Bias", or "Assessment Bias" 8/27/21 comment: Detection bias is no included in the list the more problematic Cochrane ROB1 however (Jørgensen et al. Systematic Reviews (2016) describe alll the domains of ROB1 "frequently implemented in a non-recommended way" The description in general is clear.
2022-10-14 comments: Do we need "Outcome Detection Bias" in addition to "Detection Bias"?
Blinding or masking may be used to reduce the risk of distorted outcome measurement(s).
2022-01-21 comment: I am not sure whether to vote yes or no: I understand the definition because I have been following our discussions and it is consistent with the bias model we have developed, but I worry that this definition may not be consistently understood or applied by a user of SEVCO - I feel there is too much unspoken metaphysical baggage that is coherent and correct but not useful.
2022-01-28 comment: Not perfect but good enough to live with. Could maybe improve on ascertainment component of the comment for application.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00042<a name="181513-SEVCO.5800042"> </a></td><td>Outcome Detection Bias</td><td>A detection bias due to distortions in how an outcome is determined.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Philippe Rocca-Serra, Mario Tristan, Janice Tufte, Harold Lehmann, Erfan Shamsoddin, Muhammad Afzal, Kenneth Wilkins</td><td>10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00047<a name="181513-SEVCO.5800047"> </a></td><td>Cognitive Interpretive Bias for outcome determination</td><td>An outcome detection bias due to the subjective nature of human interpretation.</td><td/><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins</td><td>10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper, 7/7 for alternative terms on 9/24/21: Janice/Brian/Eric/Paola/Jesus/Bhagvan/Mario</td><td/><td>8/27/21 comment: This bias is difficult to manage and avoid it.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00048<a name="181513-SEVCO.5800048"> </a></td><td>Bias due to lack of masking for outcome determination</td><td>A cognitive interpretive bias for outcome determination due to awareness of the participant's status with respect to the exposure of interest.</td><td>Lack of blinding or masking is not automatically a bias, but if awareness of exposure status systematically distorts the outcome determination then a 'Bias due to lack of masking for outcome determination' exists.</td><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Paul Whaley, Kenneth Wilkins</td><td>2022-03-18 vote 5-0 by Mario Tristan, Robin Ann Yurk, Paola Rosati, Nisha Mathew, Brian S. Alper</td><td>8/27/2021 vote 9-1 on "Lack of blinding during outcome assessment = A cognitive interpretive bias for outcome determination due to the outcome assessor’s awareness of the participant's status with respect to the exposure of interest." by, Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper
earlier term approved 5/5 as of 8/30/2021: Eric Harvey, Harold Lehmann, Mario Tristan, Bhagvan Kommadi, Janice Tufte</td><td>2021-08-27 comment: In my opinion "lack of blinding during outcome assessment" does not always imply bias for outcome determination (for example, for hard outcomes, such as analytic parameters, or all-cause mortality)
2022-03-18 comment: I would consider editing the term definition to ...lack of blinding.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00049<a name="181513-SEVCO.5800049"> </a></td><td>Observer bias for outcome determination</td><td>A cognitive interpretive bias for outcome determination due to subjective interpretations in the process of observing and recording information.</td><td>Multiple types of bias can overlap. Observer bias is different than lack of blinding with respect to the exposure. Observer bias is about the influence of the observer's interpretation of what they are observing, whether or not the observer is aware of the participant's exposure.</td><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel</td><td>5/5 as of 8/30/2021: Eric Harvey, Harold Lehmann, Mario Tristan, Bhagvan Kommadi, Janice Tufte,</td><td>8/27/2021 vote 9-1 on "Observer bias for outcome determination = A cognitive interpretive bias for outcome determination due to subjective interpretations in the process of observing and recording information." by, Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper</td><td>2021-08-27 comment: This situation seems to be covered by "Lack of blinding for outcome determination" and "Outcome ascertainment bias". I would suggest deleting this term to remove the overlap.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00052<a name="181513-SEVCO.5800052"> </a></td><td>Confirmation bias for outcome determination</td><td>An observer bias for outcome determination due to previous opinions or knowledge of a subject’s prior exposures or assessments.</td><td/><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins, Mario Tristan</td><td>5/5 as of 9/17/2021: Eric Harvey, Paola Rosati, Alejandro Piscoya, Bhagvan Kommadi, Janice Tufte,</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00050<a name="181513-SEVCO.5800050"> </a></td><td>Recall bias for outcome determination</td><td>A cognitive interpretive bias for outcome determination due to differences in accuracy or completeness of recall of past events or experiences.</td><td/><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel</td><td>10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00051<a name="181513-SEVCO.5800051"> </a></td><td>Apprehension bias for outcome determination</td><td>A cognitive interpretive bias for outcome determination due to a study participant's responding or behaving differently when aware of being observed.</td><td/><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Mario Tristan</td><td>5/5 as of 8/30/2021: Eric Harvey, Harold Lehmann, Mario Tristan, Bhagvan Kommadi, Janice Tufte,</td><td>8/27/2021 vote 8-2 on "Apprehension bias for outcome determination = A cognitive interpretive bias for outcome determination due to study participants’ awareness of being observed resulting in different responses or behaviors." 
by, Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper</td><td>2021-08-27 comments: A cognitive interpretive bias for outcome determination due to study participants’ awareness of being observed and resulting in different responses or behaviors. (just a slight rewording - the existing wording doesn't read well to me) This definition seems to refer to performance bias. The key is that [...] results in different responses or behaviours concerning the outcome determination.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00053<a name="181513-SEVCO.5800053"> </a></td><td>Hypothetical assessment bias for outcome determination</td><td>A cognitive interpretive bias for outcome determination due to a difference between an individual’s report of an imagined or hypothetical response from their actual response. The response may be a behavior or valuation.</td><td/><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins, Mario Tristan</td><td>7/7 as of 9/24/21: , Janice Tufte, Brian S. Alper, Eric Harvey, Paola Rosati, Jesus Lopez-Alcalde, Bhagvan Kommadi, Mario Tristan</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00054<a name="181513-SEVCO.5800054"> </a></td><td>Mimicry bias for outcome determination</td><td>A cognitive interpretive bias for outcome determination due to a misinterpretation of observations that resemble the outcome.</td><td/><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel</td><td>7/7 as of 9/24/21: , Janice Tufte, Brian S. Alper, Eric Harvey, Paola Rosati, Jesus Lopez-Alcalde, Bhagvan Kommadi, Mario Tristan</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00057<a name="181513-SEVCO.5800057"> </a></td><td>Unacceptability bias for outcome determination</td><td>A cognitive interpretive bias for outcome determination due to distortions in response, response values, or recording of responses resulting from perception of the social unacceptability of an outcome.</td><td/><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Ken Wilkins, Lisa Schilling</td><td>5/5 as of 10/1/21: , Joanne Dehnbostel, Brian S. Alper, Eric Harvey, Alejandro Piscoya, Bhagvan Kommadi,</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00058<a name="181513-SEVCO.5800058"> </a></td><td>Outcome Ascertainment Bias</td><td>An outcome detection bias due to distortions in how the data are collected.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal</td><td>10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00097<a name="181513-SEVCO.5800097"> </a></td><td>Nonrepresentative observation period for outcome of interest</td><td>An outcome ascertainment bias due to differences in the period used for observation of the outcome and the period for the outcome of interest.</td><td/><td>Brian S. 
Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal</td><td>10/29/2021 vote 6-0 by Cheow Peng Ooi, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde, Eric Harvey, Janice Tufte</td><td>10/15/2021 vote 5-2 on "Inappropriate follow up period for outcome of interest = An outcome ascertainment bias due to differences in the time period used for observation of the outcome and the true time period for outcome occurrence." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
10/25/21 vote 3-1 on "Misaligned follow up period for outcome of interest = An outcome ascertainment bias due to differences in the time period used for observation of the outcome and the true time period for outcome occurrence." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey</td><td>2021-10-15 comments: I wonder if we need to address interim analysis validity. What about adding to inappropriate 'unreliable'?; Change word Inappropriate to Different
2021-10-25 comments: It is unclear what do you mean with 'and the true time period for outcome occurrence', On the other hand, I propose using 'period' instead of 'time period'</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00098<a name="181513-SEVCO.5800098"> </a></td><td>Nonrepresentative context for outcome ascertainment</td><td>An outcome ascertainment bias due to differences in the context in which the outcome is observed and the intended context for the outcome of interest.</td><td>This term is used when the context used for outcome ascertainment is incorrect, insensitive, or nonspecific. If the context (whether representative or not) is applied inconsistently, then use the term "Inconsistency in outcome ascertainment"</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley</td><td>2022-03-18 vote 5-0 by Mario Tristan, Robin Ann Yurk, Paola Rosati, Nisha Mathew, Brian S. Alper</td><td>10/15/2021 vote 6-1 on "Unreliable method for outcome ascertainment = An outcome ascertainment bias due to methods of data collection that result in inconsistent data values." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
10/25/21 vote 3-1 on "Undependable method for outcome of interest = An outcome ascertainment bias due methods of data collection that result in inconsistent or incorrect data values." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey
10/29/2021 vote 5-1 by Cheow Peng Ooi, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde, Eric Harvey, Janice Tufte
11/22/2021 vote 6-1
2021-12-03 vote for prior term 7-0 by Philippe Rocca-Serra, Janice Tufte, Mario Tristan, Harold Lehmann, Paul Whaley, Joanne Dehnbostel, C Ooi</td><td>2021-10-15 comments: the word Unreliable is misleading as more applicable to measurement error than bias
2021-10-25 comments: I do not fully understand the difference between the second and the third definitions
2021-10-29 comments: Suggest Incorrect or inconsistent method.
2021-11-22 comments: The term 'inconsistent' may be more appropriate -- steering group discussion to move the "Comment for application" property higher on the page and see if this comment will resolve the concern</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00099<a name="181513-SEVCO.5800099"> </a></td><td>Inconsistency in outcome ascertainment</td><td>An outcome ascertainment bias due to differences within or between groups in how the data are collected.</td><td>This term is used when the context (whether representative or not) is applied inconsistently. If the context used for outcome ascertainment is incorrect, insensitive, or nonspecific, then use the term "Nonrepresentative context for outcome ascertainment"</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley, Harold Lehmann</td><td>2022-03-18 vote 5-0 by Mario Tristan, Robin Ann Yurk, Paola Rosati, Nisha Mathew, Brian S. Alper</td><td>10/15/2021 vote 6-1 on "Imbalance in application of outcome ascertainment = An outcome ascertainment bias due to differences within or between groups in how the data are collected." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
10/25/21 vote 3-1 on "Imbalance in application of outcome ascertainment = An outcome ascertainment bias due to differences within or between groups in how the data are collected." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey
10/29/2021 vote on prior term 6-0 by Cheow Peng Ooi, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde, Eric Harvey, Janice Tufte</td><td>2021-10-15 comments: Imbalance is misleading as more applicable to measurement error?
2021-10-25 comment: Suggestion, replace imbalance with Variation or Heterogeneity
2021-10-29 comment: Alternative Terms: Variation or Heterogeneity --> converted 2021-10-29 to suggested addition of alternative term "Variation in application of outcome ascertainment" by Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Harold Lehmann, Mario Tristan, Bhagvan Kommadi</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00059<a name="181513-SEVCO.5800059"> </a></td><td>Outcome Measurement Bias</td><td>An outcome detection bias due to distortions in how the observed outcomes are measured.</td><td>If one is addressing a bias in the instruments or processes used to measure the observed outcome, use Outcome Measurement Bias. If one is addressing how the measured outcome is categorized, use Outcome Classification Bias.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal; Paul Whaley</td><td>2022-01-21 vote 7-0 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paul Harris, Paul Whaley, Alejandro Piscoya, Philippe Rocca-Serra</td><td>PRIOR AGREEMENT 10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper WITH DEFINITION: An outcome detection bias due to distortions in how the data are measured.</td><td>2022-01-11 comment: Outcome Measurement Bias has a similar term definition as Outcome Classification Bias. May need to add an additional comment for application from T&O discussion.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00100<a name="181513-SEVCO.5800100"> </a></td><td>Inappropriate method for outcome measurement</td><td>An outcome measurement bias due to use of an incorrect method or protocol.</td><td/><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal</td><td>7 of 7 on 2021-11-05: Bhagvan Kommadi, Paola Rosati, Paul Whaley, Janice Tufte, Alejandro Piscoya, Robin Ann Yurk, Eric Harvey</td><td>10/15/2021 vote 6-1 on "Outcome measurement method inappropriate = An outcome measurement bias due to use of an incorrect method or protocol." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
10/25/21 vote 2-2 on "Outcome measurement method inappropriate = An outcome measurement bias due to use of an incorrect method or protocol." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey</td><td>2021-10-15 comments: I would change word inappropriate to different as the bias is from difference in comparison not flaws or errors in scientific methods.
2021-10-25 comments: suggest replace with incorrect method; Should not be 'Inappropriate outcome measurement method' (instead of placing the adjective at the end?)
2022-03-11 Preferred term revised (and alternative term added) to match corresponding changes in Exposure Detection Bias</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00101<a name="181513-SEVCO.5800101"> </a></td><td>Insensitive measure bias for outcome determination</td><td>An outcome measurement bias due to use of a method that does not reliably detect the outcome when the outcome is present.</td><td>Use of an inadequately sensitive outcome measure is likely to result in false negative findings.</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal</td><td>7 of 7 on 2021-11-05: Bhagvan Kommadi, Paola Rosati, Paul Whaley, Janice Tufte, Alejandro Piscoya, Robin Ann Yurk, Eric Harvey</td><td>10/15/2021 vote 6-1 on "Insensitive measure bias for outcome determination = An outcome measurement bias due to use of a method that does not reliably detect the outcome when the outcome is present." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
10/25/21 vote 3-1 on "Insensitive measure bias for outcome determination =An outcome measurement bias due to use of a method that does not reliably detect the outcome when the outcome is present." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey</td><td>2021-10-15 comments: Change word Insensitive to Sensitivity measure bias as double negative in phrase
2021-10-25 comment: False Negative measure Bias or Unreliable measure bias</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00211<a name="181513-SEVCO.5800211"> </a></td><td>Nonspecific measure bias for outcome determination</td><td>An outcome measurement bias due to use of a method that falsely detects the outcome when the outcome is absent.</td><td>Use of an inadequately specific outcome measure is likely to result in false positive findings.</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal</td><td>7 of 7 on 2021-11-05: Bhagvan Kommadi, Paola Rosati, Paul Whaley, Janice Tufte, Alejandro Piscoya, Robin Ann Yurk, Eric Harvey</td><td>10/15/2021 vote 6-1 on "Nonspecific measure bias for outcome determination = An outcome measurement bias due to use of a method that falsely detects the outcome when the outcome is absent." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
10/25/21 vote 3-1 on "Nonspecific measure bias for outcome determination = An outcome measurement bias due to use of a method that falsely detects the outcome when the outcome is absent." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey</td><td>2021-10-15 comments: I would change to Specificity measurement bias. Remove word falsely from the definition as it implies problems with scientific methods
2021-10-25 comment: Suggest use False Positive Measure Bias</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00103<a name="181513-SEVCO.5800103"> </a></td><td>Inappropriate application of method for outcome measurement</td><td>An outcome measurement bias due to inappropriate application of the method or protocol.</td><td>An inappropriate application of the method or protocol suggests error is introduced by the process of measurement, as distinct from the method or protocol used for measurement (which would be an Inappropriate method for outcome measurement).</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley</td><td>2022-04-08 vote 11-1 (no rationale provided for the negative vote) by Muhammad Afzal, Paul Whaley, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, nelle.stocquart, nisha mathew, Harold Lehmann, Cauê Monaco</td><td>10/15/2021 vote 6-1 on "Outcome measurement conduct inappropriate = An outcome measurement bias due to incorrect application of the method or protocol." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
10/25/21 vote 2-2 on "Outcome measurement conduct inappropriate = An outcome measurement bias due to incorrect application of the method or protocol." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey
2021-11-05 vote 6-1 on "Inappropriate outcome measurement conduct = An outcome measurement bias due to incorrect application of the method or protocol." by Bhagvan Kommadi, Paola Rosati, Paul Whaley, Janice Tufte, Alejandro Piscoya, Robin Ann Yurk, Eric Harvey
2022-03-18 vote 4-1 by Mario Tristan, Robin Ann Yurk, Paola Rosati, Nisha Mathew, Brian S. Alper
2022-03-25 vote 7-1 by Mario Tristan, Robin Ann Yurk, Paola Rosati, Philippe Rocca-Serra, Brian S. Alper, Jesus Lopez-Alcalde, Paul Whaley, Muhammad Afzal</td><td>2021-10-15 comments: I would eliminate this definition - as it suggests a flawed study design.
2021-10-25 comments: replace inappropriate with incorrect; Should not be 'Inappropriate outcome measurement conduct' (instead of placing the adjective at the end?)
2021-11-05 comment: There is enormous overlap with this term and "Inappropriate outcome measurement method", so this one should be eliminated
As of 2021-11-05 this term is not being prepared for vote. The current ROB tools do not distinguish the inappropriate conduct (used in QUADAS-2) from inadequate method (used in most other ROB tools) in the same tool, so the demand for this term is uncertain and thus not applied for version 1 of the Code System.
On 2022-03-11 we revised this term to match corresponding changes that passed for Exposure Detection Bias.
2022-03-18 comment: Suggest edit alternative term from conduct to process
2022-03-25 comment: Recommend edit term definition so it reads: Outcome Measurement method Bias.
Suggest reviewing your complete taxonomy of terms and identify similarities or duplicate terms and potentially integrating terms by keeping as primary term versus adding to alternate term for prior vote with similar term definition or statements.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00104<a name="181513-SEVCO.5800104"> </a></td><td>Inconsistency in outcome measurement</td><td>An outcome measurement bias due to differences within groups in how the observed outcomes are measured.</td><td>"How the observed outcomes are measured" may refer to the methods applied for measurement or the application of those methods.</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley, Robin Ann Yurk, Harold Lehmann</td><td>2022-01-21 vote 6-0 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paul Whaley, Alejandro Piscoya, Philippe Rocca-Serra</td><td>10/15/2021 vote 6-1 on "Imbalance in application of outcome measurement = An outcome measurement bias due to differences within or between groups in how the data are measured." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
10/25/21 vote 3-1 on "Imbalance in application of outcome measurement = An outcome measurement bias due to differences within or between groups in how the data are measured." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey
2021-11-05 vote 6-1 on "Inconsistency in application of outcome measurement = An outcome measurement bias due to differences within or between groups in how the data are measured." by Bhagvan Kommadi, Paola Rosati, Paul Whaley, Janice Tufte, Alejandro Piscoya, Robin Ann Yurk, Eric Harvey
2021-11-22 vote 3-2 on "Inconsistency in application of outcome measurement" = "An outcome measurement bias due to differences within or between groups in how the data are measured."2021-12-10 vote 5-1 by Joanne Dehnbostel, Janice Tufte, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde, Paul Whaley
PRIOR AGREEMENT 2021-12-17 vote 6-0 by Robin Ann Yurk, Janice Tufte, Paul Whaley, Mario Tristan, C P Ooi, Jesus Lopez-Alcalde FOR DEFINITION: An outcome measurement bias due to differences within groups in how the data are measured. AND COMMENT FOR APPLICATION: "How the data are measured" may refer to the methods applied for data measurement or the application of those methods.</td><td>2021-10-15 comments: I would eliminate this definition
2021-10-25 comment: Replace Imbalance with Heterogeneity
2021-11-05 comment: This is a specific type of "Inappropriate outcome measurement method" so this term should be moved into that position or eliminated (are we really going to describe all of the inappropriate methods?) [discussed in COKA WG and noted that ROB2 has separate questions 4.1 and 4.2 for these terms so we need to support that]
2021-11-22 comments: "The wording 'inconsistent method of outcome measurement' may better reflect the definition" and "May be pedantic, but is it data that are measured, or the outcome as a variable (that results in data)? I also wonder if we mean differences within groups - some variation would be expected, but what matters is if the variation results in systematic error in measuring the variable between groups. If we feel that e.g. a study design where two different ways of measuring outcome were implemented within groups, but this did not lead to bias across the exposure and control arms, then I would vote yes (pending clarification of "data")."2021-12-10 comment: It seems to not quite be correctly written. The two choices for definition are differently phrased ("application of methods" / "methods applied")even though I think they are supposed to refer to across groups or within groups, but both refer to within groups, so I am not sure how to interpret this.
2022-01-21 comment: As a comment: Is this term redundant, if the two child terms are the complete set of options for inconsistency in outcome measurement?</td><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00243<a name="181513-SEVCO.5800243"> </a></td><td>Inconsistency in instruments used for outcome measurement</td><td>An outcome measurement bias due to differences within groups in the instruments used for measurement.</td><td>Instruments used for measurement may include devices, surveys, and technologies. The concepts of "instruments used for measurement" is distinct from "process used for measurement" which may include protocols, techniques, and variations in context.</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley</td><td>2022-01-21 vote 7-0 by Andrew Beck, Harold Lehmann, Robin Ann Yurk, Paul Whaley, Janice Tufte, Alejandro Piscoya, Philippe Rocca-Serra</td><td>2021-12-17 vote 5-1 by Robin Ann Yurk, Janice Tufte, Paul Whaley, C P Ooi, Paola Rosati, Jesus Lopez-Alcalde
2022-01-07 vote 5-1 by Robin Ann Yurk, Janice Tufte, Paul Whaley, Paola Rosati, Harold Lehmann, Mario Tristan</td><td>2021-12-17 comment: To me there is no semantic difference between this definition and the other subordinate term for inconsistency in outcome measurement (application of methods ≅ methods applied)
2022-01-07 comment: I would approve this, except I am still not sure that one can measure data ("facts and statistics collected together for reference or analysis"). One can collect data, or measure a variable, but I don't think one can collect data.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00244<a name="181513-SEVCO.5800244"> </a></td><td>Inconsistency in processes used for outcome measurement</td><td>An outcome measurement bias due to differences within groups in the processes by which the instruments are used for measurement.</td><td>The processes used for measurement may include protocols, techniques, and variations in context. The concept of "processes used for measurement" is distinct from "instruments used for measurement" which may include devices, surveys, and technologies.</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley</td><td>2022-01-21 vote 7-0 by Andrew Beck, Harold Lehmann, Robin Ann Yurk, Paul Whaley, Janice Tufte, Alejandro Piscoya, Philippe Rocca-Serra</td><td>2021-12-17 vote 5-1 by Robin Ann Yurk, Janice Tufte, Paul Whaley, C P Ooi, Paola Rosati, Jesus Lopez-Alcalde
2022-01-07 vote 5-1 by Robin Ann Yurk, Janice Tufte, Paul Whaley, Paola Rosati, Harold Lehmann, Mario Tristan</td><td>2021-12-17 comment: To me there is no semantic difference between this definition and the other subordinate term for inconsistency in outcome measurement (application of methods ≅ methods applied)
2022-01-07 comment: I would approve this, except I am still not sure that one can measure data ("facts and statistics collected together for reference or analysis"). One can collect data, or measure a variable, but I don't think one can collect data.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00240<a name="181513-SEVCO.5800240"> </a></td><td>Imbalance in outcome measurement</td><td>An outcome measurement bias due to differences between groups in how the observed outcomes are measured.</td><td>"How the observed outcomes are measured" may refer to the methods applied for measurement or the application of those methods.</td><td>Brian S. Alper, Joanne Dehnbostel, Paul Whaley, Robin Ann Yurk, Janice Tufte, Harold Lehmann, Mario Tristan, Kenneth Wilkins, Muhammad Afzal</td><td>2022-01-21 vote 6-0 by Harold Lehmann, Robin Ann Yurk, Paul Whaley, Janice Tufte, Alejandro Piscoya, Philippe Rocca-Serra</td><td>2021-12-10 vote 5-0 by Janice Tufte, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde, Paul Whaley but steering group decided to make changes consistent with changes to Inconsistency in outcome measurement.
PRIOR AGREEMENT 2021-12-17 vote 5-0 by Robin Ann Yurk, Janice Tufte, Paul Whaley, C P Ooi, Paola Rosati FOR DEFINITION: An outcome measurement bias due to differences between groups in how the data are measured. WITH COMMENT FOR APPLICATION: "How the data are measured" may refer to the methods applied for data measurement or the application of those methods.</td><td>2021-12-10 comment: Referring back to my comment on the inconsistency in method, I realise I hadn't read it quite right. In both cases, they maybe aren't quite as easy to parse as would be ideal but I can't think of a better definition. Maybe a use note to refer to how the terms are similar and clarify when one vs. the other should be used?
2022-01-21 comments: The term definition and comment is the same for Inconsistency in outcome measurement bias. Suggest combining the two terms by listing one as an alternative term. (yellow highlighting in messaging applied to show the differences in the terms)
As a comment: Is this term redundant, if the two child terms are the complete set of options for inconsistency in outcome measurement?</td><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00245<a name="181513-SEVCO.5800245"> </a></td><td>Imbalance in instruments used for outcome measurement</td><td>An outcome measurement bias due to differences between groups in the instruments used for measurement.</td><td>Instruments used for measurement may include devices, surveys, and technologies. The concepts of "instruments used for measurement" is distinct from "process used for measurement" which may include protocols, techniques, and variations in context.</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley</td><td>2022-01-21 vote 7-0 by Harold Lehmann, Robin Ann Yurk, Paul Whaley, Janice Tufte, Alejandro Piscoya, Philippe Rocca-Serra, Andrew Beck</td><td>2021-12-17 vote 4-1 by Janice Tufte, Paul Whaley, C P Ooi, Paola Rosati, Jesus Lopez-Alcalde
2022-01-07 vote 5-1 by Robin Ann Yurk, Janice Tufte, Paul Whaley, Paola Rosati, Harold Lehmann, Mario Tristan</td><td>2021-12-17 comment: To me there is no semantic difference between this definition and the other subordinate term for inconsistency in outcome measurement (application of methods ≅ methods applied)
2022-01-07 comment: I would approve this, except I am still not sure that one can measure data ("facts and statistics collected together for reference or analysis"). One can collect data, or measure a variable, but I don't think one can collect data.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00246<a name="181513-SEVCO.5800246"> </a></td><td>Imbalance in processes used for outcome measurement</td><td>An outcome measurement bias due to differences between groups in the processes by which the instruments are used for measurement.</td><td>The processes used for measurement may include protocols, techniques, and variations in context. The concept of "processes used for measurement" is distinct from "instruments used for measurement" which may include devices, surveys, and technologies.</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley</td><td>2022-01-21 vote 7-0 by Harold Lehmann, Robin Ann Yurk, Paul Whaley, Janice Tufte, Alejandro Piscoya, Philippe Rocca-Serra, Andrew Beck</td><td>2021-12-17 vote 4-1 by Janice Tufte, Paul Whaley, C P Ooi, Paola Rosati, Robin Ann Yurk
2022-01-07 vote 5-1 by Robin Ann Yurk, Janice Tufte, Paul Whaley, Paola Rosati, Harold Lehmann, Mario Tristan</td><td>2021-12-17 comment: To me there is no semantic difference between this definition and the other subordinate term for inconsistency in outcome measurement (application of methods ≅ methods applied)
2022-01-07 comment: I would approve this, except I am still not sure that one can measure data ("facts and statistics collected together for reference or analysis"). One can collect data, or measure a variable, but I don't think one can collect data.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00060<a name="181513-SEVCO.5800060"> </a></td><td>Outcome Classification Bias</td><td>An outcome detection bias due to distortions in how the observed outcomes are classified.</td><td>If one is addressing a bias in the instruments or processes used to measure the observed outcome, use Outcome Measurement Bias. If one is addressing how the measured outcome is categorized, use Outcome Classification Bias.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley</td><td>2022-01-21 vote 7-0 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paul Harris, Paul Whaley, Alejandro Piscoya, Philippe Rocca-Serra</td><td>PRIOR AGREEMENT 10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper, 7/7 for renaming on 9/24/21: Janice/Brian/Eric/Paola/Jesus/Bhagvan/Mario FOR DEFINITION: An outcome detection bias due to distortions in how the data are classified.</td><td>2022-01-11 comment: Outcome Classification Bias has a similar term definition as Outcome Measurement Bias. May need to add an additional comment for application from T&O discussion.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00061<a name="181513-SEVCO.5800061"> </a></td><td>Outcome Classification System Bias</td><td>An outcome classification bias resulting from the definition or threshold used for outcome classification.</td><td>An outcome classification system bias suggests an internal validity problem in which the definition or threshold used for outcome classification does not represent the outcome of interest. If considering an external validity problem, the "Wrong question bias" (term not yet defined) may be used. An outcome classification system bias is present when there are differences between the outcome of interest and the definition or threshold used for outcome classification.</td><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan</td><td>5/5 as of 9/17/2021: Eric Harvey, Paola Rosati, Alejandro Piscoya, Bhagvan Kommadi, Janice Tufte,</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00105<a name="181513-SEVCO.5800105"> </a></td><td>Nonrepresentative definition for outcome classification</td><td>An outcome classification system bias due to a mismatch between the outcome of interest and the definition or threshold used for outcome measurement.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley</td><td>2021-12-10 vote 5-0 by Paul Whaley, Janice Tufte, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde</td><td>10/15/2021 vote 6-1 on "Nonrepresentative definition for outcome classification = An outcome classification system bias due to a definition or threshold that does not represent the outcome of interest." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
2021-11-29 vote 6-1 on "Nonrepresentative definition for outcome classification" = "An outcome classification system bias due to a definition or threshold that does not represent the outcome of interest." by Harold Lehmann, Paul Whaley, Janice Tufte, C P Ooi, Joanne Dehnbostel, Philippe Rocca-Serra, Robin Ann Yurk</td><td>2021-10-15 comments: I would eliminate this definition
2021-11-29 comments: ("represent in its entirety" instead? A definition could *partially* represent the outcome of interest, so perhaps we want to make clear that this bias is invoked only for something that is more than "partial"?) "Represent" feels ambiguous, would it be useful to clarify what is meant here? Is it that it includes outcomes in addition to that of interest, and/or excludes outcomes that are of interest? Maybe that doesn't make things clearer.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>7</td><td style="white-space:nowrap"> SEVCO:00108<a name="181513-SEVCO.5800108"> </a></td><td>Surrogate marker bias for outcome classification</td><td>An outcome classification system bias due to use of a definition that is proxy for the outcome rather than direct observation of the outcome.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley</td><td>2021-12-10 vote 5-0 by Paul Whaley, Janice Tufte, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde</td><td>10/15/2021 vote 6-1 on "Surrogate marker bias for outcome classification = A nonrepresentative definition for outcome classification due to use of a factor associated with the outcome rather than a direct observation of the outcome." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
2021-11-29 vote 3-3 on "Surrogate marker bias for outcome classification" = "A nonrepresentative definition for outcome classification due to use of a proxy for the outcome rather than a direct observation of the outcome."</td><td>2021-10-15 comments: I would edit the definition: An outcome classification system bias due to use of a definition that is proxy rather than direct observation of the outcome. {{Definition changed as result of this comment}}
2021-11-29 comments: The 10/15 comments stated that the definition should start with "An outcome classification system bias...."; but this definition does not. A little pickier, I might say, "result from use of a definition" rather than "due to". The latter sounds like the bias will always occur; the former, that there is a bias as a result, in this instance.
I'm not sure I fully understand this definition. A surrogate would generally be used in place of an outcome that cannot readily be observed in a research setting. I am not sure how this can be a classification error (the surrogate is what the surrogate is). I can, however, see how it could be an error in inference (assuming that because the exposure affects the surrogate, then the exposure also affects the outcome of actual interest). Is this a helpful way of thinking about this, or would it just be over-complicating matters?
This suggested definition is more appropriate: An outcome classification system bias due to use of a definition that is proxy rather than direct observation of the outcome</td><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00106<a name="181513-SEVCO.5800106"> </a></td><td>Post-hoc definition of outcome</td><td>An outcome classification system bias due to defining the outcome after interacting with the study data.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley</td><td>2022-01-07 vote 9-0 by Robin Ann Yurk, Janice Tufte, Paul Whaley, C P Ooi, Paola Rosati, Jesus Lopez-Alcalde, Harold Lehmann, Joanne Dehnbostel, Mario Tristan</td><td>10/15/2021 vote 6-1 on "Definition not prespecified for outcome classification = An outcome classification system bias due to absence of a predetermined definition." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
10/25/21 vote 3-1 on "Definition not prespecified for outcome classification = An outcome classification system bias due to absence of a predetermined definition." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey
2021-12-03 vote 5-2 by Harold Lehmann, Paul Whaley, Janice Tufte, C P Ooi, Joanne Dehnbostel, Philippe Rocca-Serra, Robin Ann Yurk
2021-12-10 vote 2-2 by Paul Whaley, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde</td><td>2021-10-15 comments: I would eliminate this definition as it suggests a flawed study design
2021-10-25 comment: I would phrase 'Not prespecified definition for outcome classification'
2021-12-03 comments: It feels uninformative to define "not prespecified" as "not predetermined". I wonder if "predetermined" can be clarified - presumably, the issue here is that the outcome is defined post-hoc, after data collection, so that outcome ends up being defined around the data rather than specified in advance of conduct of the research. // Rephrasing to this 'No prespecified definition for outcome classification' may be clearer and easier to understand.
2021-12-10 comments: Consider removing term. As methods are permitted to be revised for a variety of reasons with new definitions but would be described in methods or a revised protocol. If truly post-hoc after a data set is closed then there are different issues for discussion. /// Suggest changing "due to determination of the outcome definition" to "due to outcome being defined"</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00062<a name="181513-SEVCO.5800062"> </a></td><td>Outcome Classification Process Bias</td><td>An outcome classification bias resulting from the application of the method used for outcome classification.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan</td><td>7/7 as of 9/24/21: , Janice Tufte, Brian S. Alper, Eric Harvey, Paola Rosati, Jesus Lopez-Alcalde, Bhagvan Kommadi, Mario Tristan</td><td>4-1 vote as of 9/17/2021 regarding Outcome Classification Process Bias (SEVCO:00062) (Classification process bias for outcome determination) [Draft Term] = An outcome misclassification bias resulting from the application of the method used for outcome classification.: Eric Harvey, Paola Rosati, Alejandro Piscoya, Bhagvan Kommadi, Janice Tufte,</td><td>comment: "This might be related to outcome classification bias (child relationship)"</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00063<a name="181513-SEVCO.5800063"> </a></td><td>Incorporation Bias for outcome determination</td><td>An outcome classification bias due to the inclusion of the exposure under investigation in the method or process used for outcome classification.</td><td>In predictive model research, incorporation bias for outcome determination occurs if the predictor (explanatory variable) is included in the outcome definition.</td><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan</td><td>5/5 as of 9/17/2021: Eric Harvey, Paola Rosati, Alejandro Piscoya, Bhagvan Kommadi, Janice Tufte,</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00043<a name="181513-SEVCO.5800043"> </a></td><td>Exposure Detection Bias</td><td>A detection bias due to distortions in how an exposure of interest is determined.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.</td><td>Brian S. Alper, Joanne Dehnbostel, Philippe Rocca-Serra, Mario Tristan, Janice Tufte, Harold Lehmann, Erfan Shamsoddin, Muhammad Afzal, Kenneth Wilkin</td><td>10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00055<a name="181513-SEVCO.5800055"> </a></td><td>Cognitive Interpretive Bias for exposure determination</td><td>An exposure detection bias due to the subjective nature of human interpretation.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context. The human interpretation can be that of the observer or participant.</td><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins</td><td>2022-02-04 vote 6-0 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. 
Alper</td><td/><td>2022-02-04 comment: Consistency of phrasing with other definitions ("bias due to distortions in..."), need comment for application.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00056<a name="181513-SEVCO.5800056"> </a></td><td>Bias due to lack of masking for exposure determination</td><td>A cognitive interpretive bias for exposure determination due to awareness of the participant's status with respect to the outcome of interest or other relevant exposures.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context. Lack of blinding is not automatically a bias, but if awareness of some data systematically distorts the exposure determination then a 'Bias due to lack of masking for exposure determination' exists.</td><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins, Paul Whaley</td><td>2022-02-25 vote 8-0 by Robin Ann Yurk, Sunu Alice Cherian, Paola Rosati, Harold Lehmann, Joanne Dehnbostel, Janice Tufte, nisha mathew, Paul Whaley</td><td>2022-02-04 vote 5-1 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper
2022-02-11 vote 8-1 by Mario Tristan, Paul Whaley, Sunu Alice Cherian, Janice Tufte, Robin Ann Yurk, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer, Jesus Lopez-Alcalde
2022-02-18 vote 10-3 by Rebecca Baker, Brian S. Alper, Mario Tristan, Paul Whaley, Sunu Alice Cherian, Janice Tufte, Robin Ann Yurk, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer, Jesus Lopez-Alcalde, Joanne Dehnbostel, Sumalatha A</td><td>2022-02-04 comment: Is it just awareness of the participant's status with respect solely to the outcome of interest? I could imagine being aware of e.g. socioeconomic status rather than outcome, and this potentially having an influence on exposure assessment. Blinding I think is supposed to be to as many characteristics of the participant as possible.
2022-02-11 comment: This definition assumes that the "Lack of blinding for exposure determination" always associates bias, which may not be the case. For example, if we want to assess the role of sex as a prognostic factor for ICU admission, the participant may not be blinded but this does not cause bias in his/her prognostic factor determination (sex)
2022-02-18 comments: As "lack of blinding" is contributing to but not the bias itself, perhaps rename to Awareness bias for exposure determination
This definition assumes that the "Lack of blinding for exposure determination" always associates bias, which may not be the case. For example, if we want to assess the role of sex as a prognostic factor for ICU admission, the participant may not be blinded but this does not cause bias in his/her prognostic factor determination (sex)
Not much difference between existing and new terminology
2022-02-25 comment: Suggest removing Lack of blinding during exposure assessment from alternative term and just list the other 3 alternative terms. The comment is based on your comment for application description.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00238<a name="181513-SEVCO.5800238"> </a></td><td>Observer bias for exposure determination</td><td>A cognitive interpretive bias for exposure determination due to subjective interpretations in the process of observing and recording information.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
Multiple types of bias can overlap. Observer bias is different than lack of blinding with respect to the outcome. Observer bias is about the influence of the observer's interpretation of what they are observing, whether or not the observer is aware of the participant's outcome.</td><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel</td><td>2022-02-04 vote 6-0 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00239<a name="181513-SEVCO.5800239"> </a></td><td>Confirmation bias for exposure determination</td><td>An observer bias for exposure determination due to previous opinions or knowledge of a subject’s prior exposures or assessments.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.</td><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins, Mario Tristan</td><td>2022-02-04 vote 5-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00214<a name="181513-SEVCO.5800214"> </a></td><td>Recall bias for exposure determination</td><td>A cognitive interpretive bias for exposure determination due to differences in accuracy or completeness of recall of past events or experiences.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.</td><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel</td><td>2022-02-04 vote 5-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Paul Whaley</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00215<a name="181513-SEVCO.5800215"> </a></td><td>Apprehension bias for exposure determination</td><td>A cognitive interpretive bias for exposure determination due to a study participant's responding or behaving differently when aware of being observed.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.</td><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Mario Tristan</td><td>2022-02-04 vote 5-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Paul Whaley</td><td/><td>2022-02-04 comment: What about using Hawthorne Effect for term definition and Apprehension Bias for alternative term</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00216<a name="181513-SEVCO.5800216"> </a></td><td>Hypothetical assessment bias for exposure determination</td><td>A cognitive interpretive bias for exposure determination due to a difference between an individual’s report of an imagined or hypothetical response from their actual response.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context. The response may be a behavior or valuation. An individual's response to "What would you do?" or "What would you have done?" (an imagined or hypothetical response) may be different than the individual's response to "What did you do?" or observation of the individual's behavior (a reporting of an actual response). This bias is relevant for preference studies.</td><td>Brian S. 
Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins, Mario Tristan</td><td>2022-02-11 vote 9-0 by Mario Tristan, Paul Whaley, Sunu Alice Cherian, Robin Ann Yurk, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer, Jesus Lopez-Alcalde, Janice Tufte</td><td>2022-02-04 vote 4-1 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Paul Whaley</td><td>2022-02-04 comments: Is there a spelling error in Subjunctivity?
A minor issue - would the sentence "The response may be a behavior or valuation." be better placed in the comment for application (otherwise, would vote yes)
2022-02-11 comment: I would add a comment for application for the word hypothetical</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00217<a name="181513-SEVCO.5800217"> </a></td><td>Mimicry bias for exposure determination</td><td>A cognitive interpretive bias for exposure determination due to a misinterpretation of observations that resemble the exposure.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context. Other terms (Exposure Ascertainment Bias, Exposure Measurement Bias, Exposure Classification Bias) may be used to describe the process in Exposure Detection in which the bias occurs. The term 'Mimicry bias for exposure determination' is used to represent the type of cognitive interpretive bias occurring in this process.</td><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins</td><td>2022-02-18 vote 11-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Paul Whaley, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Joanne Dehnbostel, Sumalatha A</td><td/><td>2022-02-18 comments: Suggest insert alternative term: Duplicate
I'm not quite sure this is clear enough, though I don't have any concrete suggestions for improvement. It might be that I am not familiar enough with the issue in question to interpret the definition. Reading around this a bit, it resembles a misclassification type bias (for a given set of observations, the observer takes X to be cause when the true cause is Y). Given our model for bias (see our flow diagram), might it be better defined in those terms? -- RESOLVED IN GROUP DISCUSSION</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00218<a name="181513-SEVCO.5800218"> </a></td><td>Unacceptability bias for exposure determination</td><td>A cognitive interpretive bias for exposure determination due to distortions in response, response values, or recording of responses resulting from perception of the social unacceptability of an exposure.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.</td><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Ken Wilkins, Lisa Schilling</td><td>2022-02-04 vote 5-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Paul Whaley</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00219<a name="181513-SEVCO.5800219"> </a></td><td>Exposure Ascertainment Bias</td><td>An exposure detection bias due to distortions in how the data are collected.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal</td><td>2022-02-04 vote 5-0 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte</td><td/><td>2022-02-04 comment: Suggest modify alternative term to Data Collection Bias</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00220<a name="181513-SEVCO.5800220"> </a></td><td>Nonrepresentative observation period for exposure of interest</td><td>An exposure ascertainment bias due to differences in the time period used for observation of the exposure and the intended time period for the exposure of interest.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal</td><td>2022-02-11 vote 9-0 by Mario Tristan, Paul Whaley, Sunu Alice Cherian, Robin Ann Yurk, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer, Jesus Lopez-Alcalde, Janice Tufte</td><td>2022-02-04 vote 4-1 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte</td><td>2022-02-04 comment: I think this is about right but it could perhaps be tidied up a bit, e.g. using "time period" in both instances of "period"</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00221<a name="181513-SEVCO.5800221"> </a></td><td>Nonrepresentative context for exposure ascertainment</td><td>An exposure ascertainment bias due to differences in the context in which the exposure is observed and the intended context for the exposure of interest.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
This term is used when the context used for exposure ascertainment is incorrect, insensitive, or nonspecific. If the context (whether representative or not) is applied inconsistently, then use the term "Inconsistency in exposure ascertainment"</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley</td><td>2022-02-25 vote 8-0 by Robin Ann Yurk, Sunu Alice Cherian, Paola Rosati, Harold Lehmann, Joanne Dehnbostel, Janice Tufte, nisha mathew, Paul Whaley</td><td>2022-02-18 vote 10-1 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Joanne Dehnbostel, Sumalatha A</td><td>2022-02-18 comments: Comment for application. I would delete sentence: If the method (whether dependable or undependable) is applied inconsistently then use the term inconsistency in application of exposure of ascertainment.
I'm not sure if "undependable" is the word we really want to use. Also, (1) no method for exposure ascertainment will give a strictly "correct" result, (2) inconsistency can result in random error and imprecision, not necessarily bias, (3) we are presumably worried about consistency over- or under-reading of a measurement method compared to some (possibly hypothetical) gold standard? Overall, it feels like there is more to discuss here.
2022-02-25 comment: I would delete or edit the current alternative term and replace with insensitive, or nonspecific context for exposure ascertainment.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00222<a name="181513-SEVCO.5800222"> </a></td><td>Inconsistency in exposure ascertainment</td><td>An exposure ascertainment bias due to differences within or between groups in how the data are collected.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
This term is used when the context (whether representative or not) is applied inconsistently. If the context used for exposure ascertainment is incorrect, insensitive, or nonspecific, then use the term "Nonrepresentative context for exposure ascertainment"</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley</td><td>2022-02-25 vote 8-0 by Robin Ann Yurk, Sunu Alice Cherian, Paola Rosati, Harold Lehmann, Joanne Dehnbostel, Janice Tufte, nisha mathew, Paul Whaley</td><td>2022-02-18 vote 8-1 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Joanne Dehnbostel, Sumalatha A</td><td>2022-02-18 comments: I would add comment for application from previous term.
If the method (whether dependable or undependable) is applied inconsistently then use the term inconsistency in application of exposure ascertainment.
I don't really understand the term "Inconsistency in application of exposure ascertainment" - I am not clear what the nouns and verbs actually are here, nor what they refer to. I have been involved in the discussion of the underlying bias model and I still don't grasp the meaning here.
2022-02-25 comment: I would remove alternative term.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00223<a name="181513-SEVCO.5800223"> </a></td><td>Exposure Measurement Bias</td><td>An exposure detection bias due to distortions in how the observed exposures are measured.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
If one is addressing a bias in the instruments or processes used to measure the observed exposure, use Exposure Measurement Bias. If one is addressing how the measured exposure is categorized, use Exposure Classification Bias.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal</td><td>2022-02-04 vote 5-0 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00224<a name="181513-SEVCO.5800224"> </a></td><td>Inappropriate method for exposure measurement</td><td>An exposure measurement bias due to use of an incorrect method or protocol.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal</td><td>2022-03-11 vote 5-0 by Janice Tufte, Harold Lehmann, Philippe Rocca-Serra, nisha mathew, Paul Whaley</td><td>2022-02-25 vote 11-2 by Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Jesus Lopez-Alcalde, Sumalatha A, Joanne Dehnbostel, Paola Rosati, nisha mathew</td><td>2022-02-25 comments: I would list measurement methods as examples under comment for application, such as pharma, survey...
I am not sure of the difference between this bias and "Undependable method for exposure ascertainment" bias. It also seems to me that "inappropriate" is a subjective term so I am not sure how it should be applied. [Side note: in the ballot, it might be useful to have terms arranged as they are in the SEVCO hierarchy, as this might be causing some of the confusion I am experiencing.]
The previous term convey almost similar meaning</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00225<a name="181513-SEVCO.5800225"> </a></td><td>Insensitive measure bias for exposure determination</td><td>An exposure measurement bias due to use of a method that does not reliably detect the exposure when the exposure is present.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context. Use of an inadequately sensitive exposure measure is likely to result in false negative findings.</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal</td><td>2022-02-25 vote 13-0 by Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Jesus Lopez-Alcalde, Sumalatha A, Joanne Dehnbostel, Paola Rosati, nisha mathew</td><td/><td>2022-02-18 comments: Suggest use term as Sensitivity Measure bias for exposure determination and insensitive measure bias for exposure determination for alternate term.
"Sensitivity" is not, in my experience, viewed exclusively in terms of measurement. Some experimental models cannot show the exposure (or outcome) because they are incapable of it, however it is measured in situ. For example, if the exposure was measured via presence of a metabolite, but the participant was not able to produce the metabolite, then the experiment would be insensitive regardless of measurement method. I am not sure this affects us here, but does it suggest a need for us to handle sensitivity in a comprehensive fashion? (Perhaps also specificity?)
As a side note, defining sensitivity well could be important for progress on risk of bias assessment methods used by EPA, who currently have assessment of "sensitivity" as a separate issue entirely outside of risk of bias assessment.
NEGATIVE VOTE CHANGED TO POSITIVE DURING DISCUSSION 2022-02-25</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00226<a name="181513-SEVCO.5800226"> </a></td><td>Nonspecific measure bias for exposure determination</td><td>An exposure measurement bias due to use of a method that falsely detects the exposure when the exposure is absent.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context. Use of an inadequately specific exposure measure is likely to result in false positive findings.</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal</td><td>2022-02-04 vote 5-0 by Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Jesus Lopez-Alcalde</td><td/><td>2022-02-04 comment: Suggest use Specificity measure bias for exposure determination and non-specific measure bias for exposure determination for alternative term.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00228<a name="181513-SEVCO.5800228"> </a></td><td>Inappropriate application of method for exposure measurement</td><td>An exposure measurement bias due to inappropriate application of the method or protocol.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context. An inappropriate application of the method or protocol suggests error is introduced by the process of measurement, as distinct from the method or protocol used for measurement (which would be an Inappropriate method for exposure measurement).</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley</td><td>2022-03-11 vote 5-0 by Janice Tufte, Harold Lehmann, Philippe Rocca-Serra, nisha mathew, Paul Whaley</td><td>2022-02-25 vote 12-1 by Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Jesus Lopez-Alcalde, Sumalatha A, Joanne Dehnbostel, Paola Rosati, nisha mathew</td><td>2022-02-11 comments: Add alternate term: Incorrect application of exposure measurement bias.
I think this is OK, but the term should be rewritten so it is easier to read and understand what it means (the syntax is awkward, as it could be read as one adjective and three nouns)
2022-03-11 comment: In documenting this, and the "inappropriate method for exposure measurement", I think it would be helpful to document what we mean by e.g. "method" vs. "application of method". I feel these are meta-terms like "study design feature" that are part of the scaffolding of SEVCO, but not part of SEVCO itself.</td><td/><td/><td>Noted for Outcome Detection Bias: As of 2021-11-05 this term is not being prepared for vote. The current ROB tools do not distinguish the inappropriate conduct (used in QUADAS-2) from inadequate method (used in most other ROB tools) in the same tool, so the demand for this term is uncertain and thus not applied for version 1 of the Code System.</td><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00229<a name="181513-SEVCO.5800229"> </a></td><td>Inconsistency in exposure measurement</td><td>An exposure measurement bias due to differences within groups in how the observed exposures are measured.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context. "How the observed exposures are measured" may refer to the methods applied for measurement or the application of those methods.</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal</td><td>2022-02-11 vote 9-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Paul Whaley, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00247<a name="181513-SEVCO.5800247"> </a></td><td>Inconsistency in instruments used for exposure measurement</td><td>An exposure measurement bias due to differences within groups in the instruments for measurement.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
Instruments used for measurement may include devices, surveys, and technologies. The concept of "instruments used for measurement" is distinct from "processes used for measurement" which may include protocols, techniques, and variations in context.</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley</td><td>2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00248<a name="181513-SEVCO.5800248"> </a></td><td>Inconsistency in processes used for exposure measurement</td><td>An exposure measurement bias due to differences within groups in the processes by which the instruments are used for measurement.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
The processes used for measurement may include protocols, techniques, and variations in context. The concept of "processes used for measurement" is distinct from "instruments used for measurement" which may include devices, surveys, and technologies.</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley</td><td>2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00241<a name="181513-SEVCO.5800241"> </a></td><td>Imbalance in exposure measurement</td><td>An exposure measurement bias due to differences between groups in how the observed exposures are measured.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context. "How the observed exposures are measured" may refer to the methods applied for data measurement or the application of those methods.</td><td>Brian S. Alper, Joanne Dehnbostel, Paul Whaley, Robin Ann Yurk, Janice Tufte, Harold Lehmann</td><td>2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00249<a name="181513-SEVCO.5800249"> </a></td><td>Imbalance in instruments used for exposure measurement</td><td>An exposure measurement bias due to differences between groups in the instruments used for measurement.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
Instruments used for measurement may include devices, surveys, and technologies. The concept of "instruments used for measurement" is distinct from "processes used for measurement" which may include protocols, techniques, and variations in context.</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley</td><td>2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00250<a name="181513-SEVCO.5800250"> </a></td><td>Imbalance in processes used for exposure measurement</td><td>An exposure measurement bias due to differences between groups in the processes by which the instruments are used for measurement.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
The processes used for measurement may include protocols, techniques, and variations in context. The concept of "processes used for measurement" is distinct from "instruments used for measurement" which may include devices, surveys, and technologies.</td><td>Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley</td><td>2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00230<a name="181513-SEVCO.5800230"> </a></td><td>Exposure Classification Bias</td><td>An exposure detection bias due to distortions in how the observed exposures are classified.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
If one is addressing a bias in the instruments or processes used to measure the observed exposure, use Exposure Measurement Bias. If one is addressing how the measured exposure is categorized, use Exposure Classification Bias.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal</td><td>2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00231<a name="181513-SEVCO.5800231"> </a></td><td>Exposure Definition Bias</td><td>An exposure classification bias resulting from the definition or threshold used for exposure classification.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
An exposure definition bias suggests an internal validity problem in which the definition or threshold used for exposure classification does not represent the exposure of interest. If considering an external validity problem, the "Wrong question bias" (term not yet defined) may be used. An exposure definition bias is present when there are differences between the exposure of interest and the definition or threshold used for exposure classification.</td><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan, Harold Lehmann, Paul Whaley</td><td>2022-02-18 vote 5-0 by Joanne Dehnbostel, Sumalatha A, Janice Tufte, Harold Lehmann, Paul Whaley</td><td>2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer [but comment discussion led to new term]</td><td>2022-02-11 comments: Suggest alternative term: threshold bias for exposure determination.
Suggest remove sentence on external validity problem....
In the comments, "term not yet identified", should be flagged for later replacement.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00232<a name="181513-SEVCO.5800232"> </a></td><td>Nonrepresentative definition for exposure classification</td><td>An exposure definition bias due to a mismatch between the exposure of interest and the definition or threshold used for exposure measurement.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley</td><td>2022-02-18 vote 6-0 by Joanne Dehnbostel, Alejandro Piscoya, Sumalatha A, Janice Tufte, Harold Lehmann, Paul Whaley</td><td>2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer [but definition changed to match change to parent term]</td><td>2022-02-11 comment: Should there be a hyphen between "classification" and "system"? (Is it a system(s) bias or a classification-system bias?) (I think this question applies to several definitions)</td><td/><td/><td/><td/><td/><td/></tr><tr><td>7</td><td style="white-space:nowrap"> SEVCO:00233<a name="181513-SEVCO.5800233"> </a></td><td>Surrogate marker bias for exposure classification</td><td>An exposure definition bias due to use of a definition that is proxy for the exposure rather than direct observation of the exposure.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley</td><td>2022-02-18 vote 5-0 by Joanne Dehnbostel, Sumalatha A, Janice Tufte, Harold Lehmann, Paul Whaley</td><td>2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer [but definition changed to match change to parent term]</td><td>2022-02-11 comment: Suggest add alternative term: proxy bias for exposure classification system.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00234<a name="181513-SEVCO.5800234"> </a></td><td>Post-hoc definition of exposure</td><td>An exposure definition bias due to definition of the exposure after interacting with the study data.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley</td><td>2022-02-18 vote 5-0 by Joanne Dehnbostel, Sumalatha A, Janice Tufte, Harold Lehmann, Paul Whaley</td><td>2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer [but definition changed to match change to parent term]</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00236<a name="181513-SEVCO.5800236"> </a></td><td>Classification process bias for exposure determination</td><td>An exposure classification bias resulting from the application of the method used for exposure classification.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
A classification process bias for exposure determination suggests error is introduced by the process of classification, as distinct from the definition or threshold used (which would be an Exposure Definition Bias).</td><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan, Harold Lehmann, Paul Whaley</td><td>2022-02-18 vote 6-0 by Joanne Dehnbostel, Sumalatha A, Robin Ann Yurk, Janice Tufte, Harold Lehmann, Paul Whaley</td><td>2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer [but term changed to parallel changes to Exposure Definition Bias]</td><td>2022-02-11 comments: I would provide an example such as survey severity classification example of a method.
(Inconsistent capitalization)</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00237<a name="181513-SEVCO.5800237"> </a></td><td>Incorporation Bias for exposure determination</td><td>An exposure classification bias due to the inclusion of the outcome or other relevant exposures under investigation in the method or process used for exposure classification.</td><td>The exposure of interest can be an intervention or a prognostic factor, depending on the research context. If the statistical analysis assumes independence of two variables, but one variable incorporates the other variable in its definition, the assumption will be false and the result will be distorted.</td><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan, Paul Whaley</td><td>2022-03-11 vote 5-0 by Janice Tufte, Harold Lehmann, Philippe Rocca-Serra, nisha mathew, Paul Whaley</td><td>2022-02-25 vote 11-2 by Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Jesus Lopez-Alcalde, Sumalatha A, Joanne Dehnbostel, Paola Rosati, nisha mathew</td><td>2022-02-18 comments: Needs an alternative term or new term definition. I.e. Inclusion Bias for exposure definition for the term. Alternative term; eligibility bias for exposure determination
Definitely needs a comment for application, I can't picture what this means!</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00044<a name="181513-SEVCO.5800044"> </a></td><td>Confounder Detection Bias</td><td>A detection bias due to distortions in how the data for a potential confounder are determined.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal</td><td>10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00045<a name="181513-SEVCO.5800045"> </a></td><td>Detection Bias related to the reference standard</td><td>A detection bias due to distortions in how the reference standard result is determined.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal</td><td>10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00046<a name="181513-SEVCO.5800046"> </a></td><td>Detection Bias related to the index test</td><td>A detection bias due to distortions in how the index test result is determined.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal</td><td>5/5 as of 8/30/2021: Eric Harvey, Harold Lehmann, Mario Tristan, Bhagvan Kommadi, Janice Tufte,</td><td>8/27/2021 vote 8-1 on "Detection Bias related to the index test (Bias for index text result determination) = A detection bias due to distortions in how the index text result is determined." by, Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper</td><td>2021-08-27 comment: I think the word "text" should be "test" in the Alternative term and definition. Please consider broadening this term and definition to include distortions in how the index event is determined</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00383<a name="181513-SEVCO.5800383"> </a></td><td>Data Entry Bias</td><td>A detection bias due to differences between measured values and recorded values.</td><td>Data Entry Bias may include distorted results due to errors in transcription, translation, or transposition between the measured value and the recorded value, or between a recorded value and a subsequent recording of the value.</td><td>Brian S. 
Alper, Harold Lehmann, Janice Tufte, Muhammad Afzal, Kenneth Wilkins</td><td>2022-08-26 vote 7-0 by nisha mathew, Philippe Rocca-Serra, Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, Cauê Monaco, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00389<a name="181513-SEVCO.5800389"> </a></td><td>Inappropriate time interval between predictor assessment and outcome determination</td><td>A detection bias involving the time interval between the observation of the predictor and outcome, where the interval used by the study differs from the interval assumed by the predictive model.</td><td>Nonrepresentative observation period for outcome of interest is defined as an outcome ascertainment bias due to differences in the period used for observation of the outcome and the period for the outcome of interest.
Nonrepresentative observation period for exposure of interest is defined as an exposure ascertainment bias due to differences in the time period used for observation of the exposure and the intended time period for the exposure of interest.
In the context of predictive modeling, the time interval between the exposure (predictor) and the outcome should be representative of the time interval of interest.</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann</td><td>2023-10-20 vote 5-0 by Muhammad Afzal, Eric Harvey, Harold Lehmann, Louis Leff, Joanne Dehnbostel</td><td>2023-10-06 vote 3-1 by Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Paul Whaley</td><td>2023-10-06 comment: Two problems: (1) I am not sure how the definition equates to the term - in the term, it is about inappropriate time interval, but in the definition it is about the time interval not being that which is intended and representative of application of model. (2) I don't understand what is meant by the phrase "the intended time interval between the predictor and outcome that is representative of the application of the predictive model" - there are too many concepts all at once here, I think?</td><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00021<a name="181513-SEVCO.5800021"> </a></td><td>Analysis Bias</td><td>A bias related to the analytic process applied to the data.</td><td/><td>Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Philippe Rocca-Serra, Mhuammad Afzal, Kenneth Wilkins</td><td>6/6 as of 8/15/2021: Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Mhuammad Afzal, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00022<a name="181513-SEVCO.5800022"> </a></td><td>Bias related to selection of the analysis</td><td>An analysis bias due to inappropriate choice of analysis methods before the analysis is applied.</td><td>An analysis selection after the analysis is applied would be considered a Selective Analysis Reporting Bias.</td><td>Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Philippe Rocca-Serra, Mhuammad Afzal, Kenneth Wilkins</td><td>6/6 as of 8/15/2021: Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Mhuammad Afzal, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00376<a name="181513-SEVCO.5800376"> </a></td><td>Bias related to selection of the data for analysis</td><td>An analysis bias due to inappropriate choice of data included in the analysis before the analysis is applied.</td><td>An analysis selection after the analysis is applied would be considered a Selective Analysis Reporting Bias.</td><td>Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel</td><td>2022-05-13 vote 6-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Janice Tufte, Mario Tristan, Paola Rosati</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00213<a name="181513-SEVCO.5800213"> </a></td><td>Bias due to post-baseline factors influencing selection of the data for analysis</td><td>A bias related to selection of the data analysis based on participant characteristics observed after study enrollment.</td><td/><td>Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel</td><td>2022-05-13 vote 6-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Janice Tufte, Mario Tristan, Paola Rosati</td><td/><td/><td>ROBINS-I 2.1. 
Was selection of participants into the study (or into the analysis) based on participant characteristics observed after the start of intervention?</td><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00312<a name="181513-SEVCO.5800312"> </a></td><td>Missing or inadequate intention-to-treat analysis</td><td>A bias related to selection of the data analysis in which data are not completely analyzed according to the original assignment to comparison groups in an interventional study.</td><td>An intention-to-treat analysis may be defined as analysis of all randomized subjects according to their assigned intervention rather than according to the intervention actually received. There is considerable variation in reported studies with respect to the use of the term 'intention-to-treat analysis' and 'modified intention-to-treat analysis' but if the risk of bias assessment suggests an insufficient accounting for all participants as intended then one may report 'Inadequate intention-to-treat analysis'.
In non-randomized studies, this term may be used to denote missing or inadequate analysis according to the intended treatment, e.g. prescribed medication vs. medication actually taken.</td><td>Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel</td><td>2022-06-03 vote 6-0 by Joanne Dehnbostel, Mario Tristan, Eric M Harvey, Harold Lehmann, Brian S. Alper, Jesus Lopez-Alcalde</td><td>2022-05-13 vote 4-1 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati
2022-05-20 vote 9-1 by Joanne Dehnbostel, nelle.stocquart@kce.fgov.be, Eric M Harvey, Jesus Lopez-Alcalde, Paul Whaley, Robin Ann Yurk, Harold Lehmann, raradhikaag@gmail.com, Mario Tristan, Paola Rosati
2022-05-27 vote 4-1 by Robin Ann Yurk, Mario Tristan, Jesus Lopez-Alcalde, Eric M Harvey, Harold Lehmann</td><td>2022-05-13 comment: Instead of defining "Inadequate intention-to-treat analysis" why not define what "intention-to-treat analysis" is?
2022-05-20 comment: Suggest change term name to Intention to Treat Analysis and remove word inadequate from the term as this term includes the limitation of the analysis in the definition.
2022-05-27 comment: Missing Data Analysis: examples are imputation of data according to rules.
Purpose: To provide additional validity that the data are not biased from the missing data. {{2022-05-27 discussion suggests this can be handled by the SEVCO:00307 term [Inappropriate handling of missing data] which is classified as a 'Bias in processing of data'}}</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00313<a name="181513-SEVCO.5800313"> </a></td><td>Missing or inadequate per-protocol analysis</td><td>A bias related to selection of the data analysis in which data are not completely analyzed according to the study protocol.</td><td>A per-protocol analysis may be defined as analysis of participants according to adherence to the assigned intervention (the 'treatment protocol') and/or according to adherence to the data collection protocol. Adherence may refer to adherence by the study participants or study personnel.</td><td>Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel, Paul Whaley, Harold Lehmann, Muhammad Afzal</td><td>2022-06-03 vote 6-0 by Joanne Dehnbostel, Mario Tristan, Eric M Harvey, Harold Lehmann, Brian S. Alper, Jesus Lopez-Alcalde</td><td>2022-05-13 vote 4-1 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati
2022-05-20 vote 7-3 by Joanne Dehnbostel, nelle.stocquart@kce.fgov.be, Eric M Harvey, Jesus Lopez-Alcalde, Paul Whaley, Robin Ann Yurk, Harold Lehmann, raradhikaag@gmail.com, Mario Tristan, Paola Rosati
2022-05-27 vote 4-1 by Robin Ann Yurk, Mario Tristan, Jesus Lopez-Alcalde, Eric M Harvey, Harold Lehmann</td><td>2022-05-20 comments: I do not fully agree with this definition. I propose following the Cochrane Handbook:
Naïve ‘per-protocol’ analysis: analysis restricted to individuals who adhered to their assigned interventions.
Moreover, there is another analysis that is often biased:
‘As-treated’ analysis: analysis in which participants are analysed according to the intervention they actually received, even if their randomized allocation was to a different treatment group
I would present these as different analyses (not as synonyms)
https://training.cochrane.org/handbook/current/chapter-08
------
I think I see what the definition is saying but it is rather hard to parse.
re: "Inadequate per-protocol analysis" = "A bias related to selection of the data analysis in which data are not completely analyzed according to the assignment to comparison groups according to the interventions received."
Suggest edit term so it reads per protocol analysis and remove the word inadequate. This type of analysis includes the bias in the term already
2022-05-13 comment: Instead of defining "Inadequate per-protocol analysis" why not define what "per-protocol analysis" is?
2022-05-27 comment: Missing Data Analysis: examples are imputation of data according to rules.
Purpose: To provide additional validity that the data are not biased from the missing data. {{2022-05-27 discussion suggests this can be handled by the SEVCO:00307 term [Inappropriate handling of missing data] which is classified as a 'Bias in processing of data'}}</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00381<a name="181513-SEVCO.5800381"> </a></td><td>Missing or inadequate as-treated analysis</td><td>A bias related to selection of the data analysis in which data are not completely analyzed according to the interventions actually received.</td><td>An as-treated analysis may be defined as analysis of subjects according to the intervention actually received rather than their assigned intervention.</td><td>Brian S. Alper, Paul Whaley, Harold Lehmann, Joanne Dehnbostel</td><td>2022-06-03 vote 6-0 by Joanne Dehnbostel, Mario Tristan, Eric M Harvey, Harold Lehmann, Brian S. Alper, Jesus Lopez-Alcalde</td><td/><td>2022-05-20 comments (from precursor term of Inadequate per-protocol analysis): I do not fully agree with this definition. I propose following the Cochrane Handbook:
Naïve ‘per-protocol’ analysis: analysis restricted to individuals who adhered to their assigned interventions.
Moreover, there is another analysis that is often biased:
‘As-treated’ analysis: analysis in which participants are analysed according to the intervention they actually received, even if their randomized allocation was to a different treatment group
I would present these as different analyses (not as synonyms)
https://training.cochrane.org/handbook/current/chapter-08
------
I think I see what the definition is saying but it is rather hard to parse.
re: "Inadequate per-protocol analysis" = "A bias related to selection of the data analysis in which data are not completely analyzed according to the assignment to comparison groups according to the interventions received."
Suggest edit term so it reads per protocol analysis and remove the word inadequate. This type of analysis includes the bias in the term already
2022-05-27 comment: Missing Data Analysis: examples are imputation of data according to rules.
Purpose: To provide additional validity that the data are not biased from the missing data. {{2022-05-27 discussion suggests this can be handled by the SEVCO:00307 term [Inappropriate handling of missing data] which is classified as a 'Bias in processing of data'}}</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00377<a name="181513-SEVCO.5800377"> </a></td><td>Bias related to selection of the variables for analysis</td><td>An analysis bias due to inappropriate choice of variables included in the analysis before the analysis is applied.</td><td>An analysis selection after the analysis is applied would be considered a Selective Analysis Reporting Bias.</td><td>Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel, Muhammad Afzal</td><td>2022-05-13 vote 5-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00292<a name="181513-SEVCO.5800292"> </a></td><td>Bias related to selection of the variables for adjustment for confounding</td><td>An analysis bias due to inappropriate choice of the variables for adjustment for confounding before the analysis is applied.</td><td>An analysis selection after the analysis is applied would be considered a Selective Analysis Reporting Bias.</td><td>Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel, Muhammad Afzal</td><td>2022-05-13 vote 5-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati</td><td/><td>This term was determined to also match 'Post-intervention confounding different (draft) Code: SEVCO:00283' which was originally derived from the trigger question from ROBINS-I: 1.6. Did the authors control for any post-intervention variables that could have been affected by the intervention? Detailed analysis found this to be more about improper control of 'confounding variables' that were not truly confounding variables.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00299<a name="181513-SEVCO.5800299"> </a></td><td>Bias controlling for time-varying confounding</td><td>A bias related to selection of the variables for adjustment for confounding in which the confounding is time-dependent.</td><td>An analysis selection after the analysis is applied would be considered a Selective Analysis Reporting Bias.</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Mario Tristan, Muhammad Afzal</td><td>2022-05-13 vote 6-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati, Janice Tufte</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00301<a name="181513-SEVCO.5800301"> </a></td><td>Inadequate adherence effect analysis</td><td>A bias related to selection of the variables for adjustment for confounding by adherence.</td><td>An analysis selection after the analysis is applied would be considered a Selective Analysis Reporting Bias.</td><td>Brian S. 
Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel, Muhammad Afzal</td><td>2022-05-13 vote 6-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati, Janice Tufte</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00302<a name="181513-SEVCO.5800302"> </a></td><td>Predictors included in outcome definition</td><td>An analysis bias due to inappropriate choice of the variables for estimation of association in which one variable is incorporated in the definition of the other variable.</td><td>Predictors are also called covariates, risk indicators, prognostic factors, determinants, index test results, or independent variables (https://www.acpjournals.org/doi/10.7326/M18-1377).
If a predictor in the model forms part of the definition or assessment of the outcome that the model predicts, the association between predictor and outcome will likely be overestimated, and estimates of model performance will be optimistic; in diagnostic research, this problem is generally called incorporation bias. (https://www.acpjournals.org/doi/10.7326/M18-1377)
When this type of analysis bias is applied to predictive model analyses (in which the predictor is the exposure of interest), this type of bias is equivalent to "Incorporation bias for outcome determination" [SEVCO:00063]</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins</td><td>2022-07-29 vote 5-0 by Janice Tufte, Philippe Rocca-Serra, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey</td><td/><td>2022-07-29 comment: should "incorporation bias" be added as 'alternative term' ?</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00319<a name="181513-SEVCO.5800319"> </a></td><td>Bias related to selection of predictors based on univariable analysis</td><td>An analysis bias due to inappropriate choice of the predictor variables for estimation of association in which predictors are selected based on statistically significant univariable associations (without adjustment for other predictors).</td><td>Predictors are also called covariates, risk indicators, prognostic factors, determinants, index test results, or independent variables (https://www.acpjournals.org/doi/10.7326/M18-1377).</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins</td><td>2022-07-29 vote 5-0 by Janice Tufte, Philippe Rocca-Serra, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey</td><td/><td/><td>PROBAST (https://www.acpjournals.org/doi/10.7326/M18-1377).:
4.5 Was selection of predictors based on univariable analysis avoided? (Model development studies only)
A data set will often have many features that could be used as candidate predictors, and in many studies researchers want to reduce the number of predictors during model development to produce a simpler model.
In a univariable analysis, individual predictors are tested for their association with the outcome. Researchers often select the predictors with a statistically significant univariable association (for example, P < 0.05) for inclusion in the development of a final prediction model. This method can lead to incorrect predictor selection because predictors are chosen on the basis of their statistical significance as a single predictor rather than in context with other predictors (49, 50, 191). Bias occurs when univariable modeling results in omission of variables from the model, because some predictors that are important only after adjustment for other predictors, or that are known from previous research to be important, did not reach statistical significance in the particular development set (for example, due to small sample size). Also, predictors may be selected on the basis of a spurious (accidental) association with the outcome in the development set.
A better approach to decide on omitting, combining, or including candidate predictors in multivariable modeling is to use nonstatistical methods—that is, methods without any statistical univariable pretesting of the associations between candidate predictors and outcome. Better methods include those based on existing knowledge of previously established predictors in combination with the reliability, consistency, applicability, availability, and costs of predictor measurement relevant to the targeted setting. Well-established predictors and those with clinical credibility should be included and retained in a prediction model regardless of any statistical significance (49, 50, 192). Alternatively, some statistical methods that are not based on prior statistical tests between predictor and outcome can be used to reduce the number of modeled predictors (for example, principal components analysis).</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00378<a name="181513-SEVCO.5800378"> </a></td><td>Bias related to selection of the analytic framework</td><td>An analysis bias due to inappropriate choice of the analytic framework before the analysis is applied.</td><td>An analytic framework is the model, scaffolding, or organizational representation of concepts used in analyzing the data. The concepts included in an analytic framework may involve data, variables, formulas, assumptions, and adjustments.</td><td>Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel, Muhammad Afzal</td><td>2022-05-27 vote 5-0 by Mario Tristan, Jesus Lopez-Alcalde, Eric M Harvey, Harold Lehmann, Joanne Dehnbostel</td><td>2022-05-13 vote 6-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati, Janice Tufte [[but then the term changed in webmeeting 2022-05-13]]
2022-05-20 vote 4-2 by Joanne Dehnbostel, Eric M Harvey, Mario Tristan, Jesus Lopez-Alcalde, Paul Whaley, Harold Lehmann</td><td>2022-05-20 comments:
seems to be entirely too much overlap with the "inappropriate analytic framework" term
I like this term and definition but I am not sure it is adequately differentiated from "inappropriate analytical framework". I think the term needs changing in some way.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00297<a name="181513-SEVCO.5800297"> </a></td><td>Inappropriate statistical model</td><td>A bias related to selection of the analytic framework in which the analytic model does not match the dataset characteristics or does not match the intention of the analysis.</td><td>A bias related to selection of the analytic framework is defined as an analysis bias due to inappropriate choice of the analytic framework before the analysis is applied.
An inappropriate statistical model may include one in which there is a mismatch between the realities of the data and the assumptions required for the analytic model. Complexities in the data may include univariate concerns (e.g. skewness or outliers) and multivariate concerns (e.g. curvilinearity, co-linearity, or latent associations between variables).</td><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel</td><td>2022-05-27 vote 5-0 by Mario Tristan, Jesus Lopez-Alcalde, Eric M Harvey, Harold Lehmann, Joanne Dehnbostel</td><td>2022-05-20 vote 5-1 by Joanne Dehnbostel, Eric M Harvey, Mario Tristan, Jesus Lopez-Alcalde, Paul Whaley, Harold Lehmann</td><td>2022-05-20 comment:
I like this term and definition but I am not sure it is adequately differentiated from "Bias related to selection of the analytic framework". I think the term needs changing in some way.
2022-09-30 Steering Group change to Comment for application: comment added to this term instead of creating a new term for 'Inappropriate handling of complexities in the data'</td><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00375<a name="181513-SEVCO.5800375"> </a></td><td>Inappropriate modeling of censoring</td><td>An inappropriate statistical model due to inappropriate accounting for ranges of potential observation in which data observation is not possible.</td><td>An inappropriate statistical model is a bias related to selection of the analytic framework in which the analytic model does not match the dataset characteristics or does not match the intention of the analysis.
The "ranges of potential observation" may include periods of time (temporal ranges within which observation may occur), or ranges of detection with a measurement instrument (ranges of values that could be observed).
The concept of ranges of potential observation in which data observation is "not possible" may include impossibility due to physical realities (such as timing after competing risks or measurement instruments with limited ranges of detection) or impossibility due to administrative decisions (such as the observation period defined by the study protocol).</td><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal</td><td>2022-10-20 vote 7-0 by Philippe Rocca-Serra, Harold Lehmann, Joanne Dehnbostel, Mario Tristan, Paul Whaley, Janice Tufte, Eric Harvey</td><td/><td/><td>PROBAST 4.6 Were complexities in the data (e.g. censoring, competing risks, sampling of controls) accounted for appropriately?</td><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00316<a name="181513-SEVCO.5800316"> </a></td><td>Bias due to selection of the statistical significance threshold</td><td>An analysis bias resulting from selection of an inappropriate threshold for statistical significance.</td><td>The statistical significance threshold is part of the analytic framework. A bias related to selection of the analytic framework is defined as an analysis bias due to inappropriate choice of the analytic framework before the analysis is applied.
In frequentist analysis, statistical significance is the rejection of the null hypothesis based on the p value. In Bayesian analysis, statistical significance is the acceptance of the hypothesis based on the posterior probability.</td><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel, Paul Whaley</td><td>2022-06-24 vote 5-0 by Mario Tristan, Harold Lehmann, Eric Harvey, Janice Tufte, Louis Leff</td><td>2022-06-10 vote 5-1 by Brian S. Alper, Robin Ann Yurk, Paola Rosati, Mario Tristan, Harold Lehmann, Eric M Harvey
2022-06-17 vote 4-1 by Paul Whaley, Muhammad Afzal, Eric M Harvey, Jesus Lopez-Alcalde, Paola Rosati</td><td>2022-06-10 comment: Consider editing the term definition to just Statistical significance threshold. For the alternative term remove word bias. For the comment for application remove the first sentence about bias.
2022-06-17 comments: I think I get it, but it is a bit tortured and I wonder if a normal user would interpret it correctly or understand it?
I am not sure we can rephrase the concept name making it more compact like "Statistical significance threshold selection bias"</td><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00317<a name="181513-SEVCO.5800317"> </a></td><td>Bias related to multiple comparison adjustment</td><td>An analysis bias resulting from selection of a threshold for statistical significance which does not appropriately account for the effect of multiple comparisons on the statistical probability related to the result.</td><td>This bias may cause inappropriate rejection of the null hypothesis due to an unmodified threshold for significance in the face of multiple comparisons. This bias may also occur when adjustment for multiple comparisons is inappropriately applied and leads to failure to reject the null hypothesis.
A bias due to selection of the statistical significance threshold is defined as an analysis bias resulting from selection of an inappropriate threshold for statistical significance.
In frequentist analysis, statistical significance is the rejection of the null hypothesis based on the p value. In Bayesian analysis, statistical significance is the acceptance of the hypothesis based on the posterior probability.</td><td>Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Paul Whaley</td><td>2022-06-24 vote 5-0 by Muhammad Afzal, Mario Tristan, Harold Lehmann, Eric Harvey, Louis Leff</td><td>2022-06-10 vote 3-2 by Brian S. Alper, Robin Ann Yurk, Paola Rosati, Mario Tristan, Harold Lehmann
2022-06-17 vote 4-1 by Paul Whaley, Muhammad Afzal, Eric M Harvey, Jesus Lopez-Alcalde, Paola Rosati</td><td>2022-06-10 comments: The measure does not have a statistical probability; the finding or result has a statistical probability. Change definition to "A statistical significance threshold selection bias in which the threshold for statistical significance does not account for the effect of multiple comparisons on the statistical probability related to the result." Is this a bias or just an incomplete analysis due to data requirements needed to compute the multiple comparison adjustment?
2022-06-17 comment: Looking at the significance threshold bias terms, the other two refer to selection of the analytic framework, but this one does not. Is there a reason for that?</td><td/><td/><td/><td/><td/><td/></tr><tr><td>6</td><td style="white-space:nowrap"> SEVCO:00382<a name="181513-SEVCO.5800382"> </a></td><td>Mismatch of significance threshold and purpose</td><td>An analysis bias resulting from selection of a threshold for statistical significance which is inappropriate due to a mismatch between (1) how the statistical probability related to the result is determined and (2) the purpose for categorizing the result as statistically significant.</td><td>A threshold used for variable selection in regression analysis is often more liberal than a threshold used in hypothesis testing. Similarly a situation regarding safety may tolerate a higher chance of false positive findings so significance threshold may be higher. Some factors to consider include sample size, power of the test, and expected losses from Type I and Type II errors.
In frequentist analysis, statistical significance is the rejection of the null hypothesis based on the p value. In Bayesian analysis, statistical significance is the acceptance of the hypothesis based on the posterior probability.</td><td>Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Mario Tristan, Paul Whaley</td><td>2022-06-24 vote 5-0 by Muhammad Afzal, Mario Tristan, Harold Lehmann, Eric Harvey, Janice Tufte</td><td>2022-06-10 vote 2-2 by Brian S. Alper, Robin Ann Yurk, Mario Tristan, Harold Lehmann
2022-06-17 vote 4-1 by Paul Whaley, Muhammad Afzal, Eric M Harvey, Jesus Lopez-Alcalde, Paola Rosati</td><td>2022-06-10 comments: A mismatch can occur even if the purpose was taken into account. As the term name "Mismatch of significance threshold and purpose" is a match for the definition of the parent term (Statistical significance threshold selection bias) there is a question of whether this term is needed.
Receiver operator curves are traditionally a statistic used to represent the continuum of cut point for the threshold value. The Sensitivity and Specificity can be calculated to evaluate the validity of the threshold cut point.
2022-06-17 comment: Add "Bias related to..." at beginning for consistency with others. What work is "selection of the analytic framework" doing in this definition?</td><td>How to Choose the Level of Significance: A Pedagogical Note -- The level of significance should be chosen with careful consideration of the key factors such as the sample size, power of the test, and expected losses from Type I and II errors. While the conventional levels may still serve as practical benchmarks, they should not be adopted mindlessly and mechanically for every application.
(https://mpra.ub.uni-muenchen.de/66373/1/MPRA_paper_66373.pdf)</td><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00304<a name="181513-SEVCO.5800304"> </a></td><td>Immortal time bias</td><td>A bias related to selection of the analytic framework in which an outcome variable includes an observation period during which the outcome could not have occurred.</td><td>Consider a study in which a sample is followed from 2000 to 2010. Mortality during this time period is the outcome, and receipt of Superdrug is the exposure.
--If 20 people received Superdrug in 2009 and 5 of them died in the subsequent year, the mortality with Superdrug is 25%.
--If 20 people never received Superdrug and 1 died each year, then by 2010 the mortality without Superdrug is 50%.
Interpreting this result as Superdrug having a 50% relative risk reduction for mortality would be biased (distorted) by not accounting for the 9 years of time (immortal time) that the Superdrug recipients must have survived to be able to receive Superdrug in 2009.
If the outcome variable were defined as mortality 2009-2010, there would be no bias and the result would be a 150% relative risk increase (5/20 = 25% mortality with Superdrug vs. 2/20 = 10% mortality without, a relative risk of 2.5).
If the outcome variable were defined as mortality 2000-2010, there is an immortal time bias (the Superdrug recipients could not have died before receiving Superdrug).</td><td>Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Joanne Dehnbostel, Paul Whaley, Janice Tufte</td><td>2022-07-22 vote 7-0 by Mario Tristan, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Jesus Lopez-Alcalde, Janice Tuft, Eric Harvey</td><td>2022-07-15 vote 5-1 by Mario Tristan, Philippe Rocca-Serra, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann, Janice Tufte</td><td>2022-07-15 comment: Why the need to specify "in a larger sample" in the second sentence, since there is no assumption about size of the sample in the first assertion?</td><td>Catalog of bias: A distortion that modifies an association between an exposure and an outcome, caused when a cohort study is designed so that follow-up includes a period of time where participants in the exposed group cannot experience the outcome and are essentially 'immortal'.
in https://academic.oup.com/aje/article/167/4/492/233064 :
Immortal time refers to a span of time in the observation or follow-up period of a cohort during which the outcome under study could not have occurred (13, 14). It usually occurs with the passing of time before a subject initiates a given exposure. While a subject is not truly immortal during this time span, the subject necessarily had to remain event free until start of exposure to be classified as exposed. An incorrect consideration of this unexposed time period in the design or analysis will lead to immortal time bias.
in JAMA https://jamanetwork.com/journals/jama/article-abstract/2776315
Such studies may be subject to immortal time bias, meaning that, during the period of observation, there is some interval during which the outcome event cannot occur
in https://watermark.silverchair.com/dyab157.pdf
In particular, incorrect handling of follow-up times in terms of exposure status in the analysis of such studies may introduce immortal time bias (ITB) in favour of the exposed group.2,3 Immortal time refers to a period of time in which, by design, participants in the exposed group cannot experience the outcome. This often happens in pharmacoepidemiologic studies in which treatment is prescribed at variable times (with delay) after disease diagnosis. The bias occurs when the exposed group is considered to be exposed during their entire follow-up time (even during periods in which they are theoretically unexposed) or their unexposed follow-up times are discarded.2,3</td><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00293<a name="181513-SEVCO.5800293"> </a></td><td>Inadequate sample size</td><td>A bias related to selection of the analytic framework in which the sample size invalidates the assumptions of the analytic framework.</td><td>An example of 'Inadequate sample size' is a finding of no effect with inadequate power to detect an effect. Another example of 'Inadequate sample size' is use of a parametric analysis with low numbers, which invalidates the assumptions for use of a parametric analysis.</td><td>Brian S. Alper, Harold Lehmann, Janice Tufte, Joanne Dehnbostel, Mario Tristan, Khalid Shahin</td><td>2022-07-22 vote 7-0 by Mario Tristan, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Jesus Lopez-Alcalde, Janice Tuft, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00294<a name="181513-SEVCO.5800294"> </a></td><td>Bias related to execution of the analysis</td><td>An analysis bias due to inappropriate decisions pertaining to preparation of data for analysis and/or conduct of the analysis.</td><td>"Bias related to selection of the analysis" is used when the wrong analysis is done (the analysis is planned wrongly). "Bias in processing of data" is used when the analysis is done wrong (the analysis is executed wrongly).</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte, Harold Lehmann, Paul Whaley, Yuan Gao</td><td>2022-11-04 vote 5-0 by Philippe Rocca-Serra, Jesus Lopez-Alcalde, Janice Tufte, Harold Lehmann, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00305<a name="181513-SEVCO.5800305"> </a></td><td>Incomplete analysis</td><td>An analysis bias due to absence of a component of the analytic process.</td><td>Missing components may include addressing missing data, addressing potential confounders, checking model assumptions, or robustness checks for model misspecification.</td><td>Harold Lehmann, Joanne Dehnbostel, Brian S. Alper, Kenneth Wilkins, Muhammad Afzal, Khalid Shahin</td><td>2022-09-09 vote 6-0 by Philippe Rocca-Serra, Harold Lehmann, Jesus Lopez-Alcalde, Khalid Shahin, Janice Tufte, Eric Harvey</td><td>2022-08-12 vote 4-1 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey
2022-08-19 vote 5-1 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey, Philippe Rocca-Serra
2022-08-25 vote 8-1 by nisha mathew, Jesus Lopez-Alcalde, Cauê Monaco, Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey, Philippe Rocca-Serra</td><td>2022-08-12 comment: Ambiguous as to whether the data is incomplete or the analytic process incomplete. Also seems to be ambiguous as to whether the analysis is of a selected subset of the existing data (thus relating to selection bias?), or of data that is not representative of the totality of theoretically available data (thus relating to external validity?).
2022-08-19 comment: tension between bias and process. Shouldn't it be "incomplete analysis related bias"? Omission seems to indicate a wilful act. "Absence" may be more neutral when considering a 'canonical / state of the art / standardised' protocol.
"An analysis bias due to absence of a component deemed necessary in a state-of- art (possibly regulator-approved ) analytic process."</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00306<a name="181513-SEVCO.5800306"> </a></td><td>Inappropriate handling of uninterpretable data</td><td>An analysis bias due to omission of uninterpretable values, or their replacement with inappropriate values.</td><td>Inappropriate values may include use of non-representative imputation treating uninterpretable data like missing data.
In evaluation of diagnostic tests, omission of or inappropriate classification of test results would be Inappropriate handling of uninterpretable data.</td><td>Harold Lehmann, Joanne Dehnbostel, Brian S. Alper, Kenneth Wilkins, Muhammad Afzal</td><td>2022-09-16 vote 5-0 by Mario Tristan, Janice Tufte, Eric Harvey, Yaowaluk Ngoenwiwatkul, nisha mathew</td><td>2022-08-12 vote 4-1 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey</td><td>2022-08-12 comment: I'm not sure I would understand the definition if I had not read the term, suggest rephrasing - "omission of accommodation for" is perhaps the problem part.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00307<a name="181513-SEVCO.5800307"> </a></td><td>Inappropriate handling of missing data</td><td>An analysis bias due to use of non-representative values in place of missing data.</td><td>Handling of missing data may address data missing at levels of single observations or groupings by encounter, participant, site, or subpopulation.</td><td>Harold Lehmann, Joanne Dehnbostel, Brian S. Alper, Kenneth Wilkins</td><td>2022-08-12 vote 5-0 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00308<a name="181513-SEVCO.5800308"> </a></td><td>Inappropriate handling of variables</td><td>An analysis bias due to processing a variable in an incorrect role or with an incorrect datatype.</td><td>Typical variable roles are population, exposure, confounder, and outcome.
A variable datatype may be numerical (continuous or discrete) or categorical (ordinal or nominal).</td><td>Harold Lehmann, Joanne Dehnbostel, Brian S. Alper, Kenneth Wilkins</td><td>2022-09-16 vote 5-0 by Mario Tristan, Janice Tufte, Eric Harvey, Yaowaluk Ngoenwiwatkul, nisha mathew</td><td/><td/><td/><td/><td>Consider types to include Inappropriate handling of confounders, and Inappropriate handling of measurement error</td><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00300<a name="181513-SEVCO.5800300"> </a></td><td>Bias in adjustment for selection bias</td><td>An analysis bias due to inappropriate application of adjustment techniques for correction of bias in the selection of participants for analysis.</td><td>Bias in the selection of participants for analysis could occur due to Participant Selection Bias (SEVCO:00003) or participant-level Bias related to selection of the data for analysis (SEVCO:00376).
"It is in principle possible to correct for selection biases, for example by using inverse probability weights to create a pseudo-population in which the selection bias has been removed, or by modelling the distributions of the missing participants or follow up times and outcome events and including them using missing data methodology." (Sterne JA, Hernán MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016 Oct 12;355:i4919. doi: 10.1136/bmj.i4919. PMID: 27733354; PMCID: PMC5062054. Supplementary Table A.)</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Janice Tufte, Muhammad Afzal</td><td>2022-09-30 vote 5-0 by Jesus Lopez-Alcalde, Harold Lehmann, Janice Tufte, Eric Harve, Morufu Olalekan Raimi</td><td/><td/><td>"It is in principle possible to correct for selection biases, for example by using
inverse probability weights to create a pseudo-population in which the
selection bias has been removed, or by modelling the distributions of the
missing participants or follow up times and outcome events and including
them using missing data methodology." (Sterne JA, Hernán MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016 Oct 12;355:i4919. doi: 10.1136/bmj.i4919. PMID: 27733354; PMCID: PMC5062054. Supplementary Table A.)</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00309<a name="181513-SEVCO.5800309"> </a></td><td>Data Transition Bias</td><td>An analysis bias due to differences between recorded data and data used for analysis.</td><td>Data Transition Bias may include distorted results due to errors in transcription, translation, erroneous mapping, or transposition between the recorded data (values, labels, and other metadata) and the data used for analysis. Data Transition Bias may occur due to any problem encountered during the Extraction, Transformation, and Loading (ETL) process in data exchange.</td><td>Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins</td><td>2022-11-04 vote 5-0 by Philippe Rocca-Serra, Jesus Lopez-Alcalde, Janice Tufte, Harold Lehmann, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00311<a name="181513-SEVCO.5800311"> </a></td><td>Inappropriate handling of missing confounder data</td><td>An analysis bias due to use of non-representative values in place of missing data for variables in the role of confounder.</td><td>Handling of missing confounder data may address data missing at levels of single observations or groupings by encounter, participant, site, or subpopulation. Inappropriate handling of missing confounder data can result in misleading adjusted analyses.</td><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Joanne Dehnbostel, Kenneth Wilkins</td><td>2022-09-16 vote 5-0 by Mario Tristan, Janice Tufte, Eric Harvey, Yaowaluk Ngoenwiwatkul, nisha mathew</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00298<a name="181513-SEVCO.5800298"> </a></td><td>Computational Implementation Bias</td><td>An analysis bias due to miscalculations in the processing of the data.</td><td>This bias is intended to cover a broad range of errors in curating the data and performing the calculations specified or implied by the analytic plan, including but not limited to: memory allocation and other environmental specifications, data ingestion pipeline, statistical package choice and vetting, and syntax, semantics and logic of coding. this bias can be applied to both manual or computer based computation.</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte, Harold Lehmann, Khalid Shahin, Muhammad Afzal, Neeraj Ojha</td><td>2022-09-09 vote 6-0 by Philippe Rocca-Serra, Harold Lehmann, Jesus Lopez-Alcalde, Khalid Shahin, Janice Tufte, Eric Harvey</td><td>2022-08-12 vote 4-1 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey
2022-08-19 vote 4-2 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey, Philippe Rocca-Serra</td><td>2022-08-12 comment: Not sure about including data entry errors among errors in software code - the latter is a computational error, the former is not. Also, the definition does not specify computational processing.
2022-08-19 comment: the class label is ambiguous: is it "computation error caused bias" or is it 'contradictions caused bias'? The latter term does not add clarity.
Also, only data entry errors resulting from computational errors would fall under this type of bias, but not direct entry of values.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00324<a name="181513-SEVCO.5800324"> </a></td><td>Reported analysis not following pre-specified analysis plan</td><td>An analysis bias in which the reported analysis does not match the pre-specified analysis plan.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte</td><td>2023-03-10 vote 5-0 by Joanne Dehnbostel, Mario Tristan, Harold Lehmann, Eric Harvey, Jesus Lopez-Alcalde</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00303<a name="181513-SEVCO.5800303"> </a></td><td>Collider bias</td><td>An analysis bias in which an estimation of association between two variables is distorted by controlling for a third variable affected by both variables of interest (or factors causing the variables of interest).</td><td>Collider bias occurs when an exposure and outcome (or factors causing these) each influence a common third variable and that variable or collider is controlled for by design or analysis. In contrast, confounding occurs when an exposure and outcome have a shared common cause that is not controlled for. (JAMA 2022 Mar 14 https://jamanetwork.com/journals/jama/fullarticle/2790247)
The "third variable" affected by both variables of interest can also be a "third variable" affected by an "intermediary variable" which is affected by both variables of interest.
An analysis bias is defined as a bias related to the analytic process applied to the data.
A bias is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]).</td><td>Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Joanne Dehnbostel, Paul Whaley</td><td>2022-07-08 vote 5-0 by Jesus Lopez-Alcalde, Eric Harvey, Paul Whaley, Janice Tufte, Harold Lehmann</td><td>2022-07-01 vote 3-2 by Harold Lehmann, Paul Whaley, Jesus Lopez-Alcalde, Eric Harvey, Philippe Rocca-Serra</td><td>2022-07-01 comments: Is this the same as a confounding variable? If not, please differentiate.
the first comment seems a bit confusing: does collider bias occur when the study design controls for a variable which is influenced by both the exposure and the outcome?
I'm not sure this is correct. My understanding is that collision comes into play when effect modifiers are treated as confounders (and possibly when confounders are treated as modifiers? I don't know if it is symmetric). This reads as though it is an analysis unadjusted for confounders, with the factor causing both the cause and effect variables.
Confounding: A < B > C and A > C
Modification: A > B > C and A > C
Collision: Conditioning on B under modification rather than confounding.</td><td>A structural classification of bias distinguishes between biases resulting from conditioning on common effects (“selection bias”) --- A Structural Approach to Selection Bias, https://journals.lww.com/epidem/Fulltext/2004/09000/A_Structural_Approach_to_Selection_Bias.20.aspx
Collider bias occurs when an exposure and outcome (or factors causing these) each influence a common third variable and that variable or collider is controlled for by design or analysis. In contrast, confounding occurs when an exposure and outcome have a shared common cause that is not controlled for. -- JAMA 2022 Mar 14 https://jamanetwork.com/journals/jama/fullarticle/2790247
https://catalogofbias.org/biases/collider-bias/ Collider bias = A distortion that modifies an association between an exposure and outcome, caused by attempts to control for a common effect of the exposure and outcome</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00314<a name="181513-SEVCO.5800314"> </a></td><td>Preliminary analysis bias</td><td>An analysis bias related to analysis of data before the complete dataset is available.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins</td><td>2022-12-23 vote 6-0 by Joanne Dehnbostel, Harold Lehmann, Yuan Gao, Jesus Lopez-Alcalde, Mario Tristan, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00295<a name="181513-SEVCO.5800295"> </a></td><td>Data-dredging bias</td><td>An analysis bias involving use of data analyses that are not pre-specified and fully disclosed, to select analyses with desirable results.</td><td>Types of data analysis that lead to data-dredging bias include but are not limited to repeated subgroup analyses, repeated adjusted analyses, repeated analyses with different analytic models, and repeated analyses across many outcomes for many variations of defining outcomes, any of which can be done to select ("cherry-pick") the analyses that provide a desired result. The desired result may be statistically significant findings or other specific results.
The terms "p-hacking" and "Fishing expedition" are commonly used terms to describe data-dredging practices that lead to bias and are often used to imply bias.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Janice Tufte, Paul Whaley, Kenneth Wilkins</td><td>2022-12-23 vote 6-0 by Joanne Dehnbostel, Harold Lehmann, Yuan Gao, Jesus Lopez-Alcalde, Mario Tristan, Eric Harvey</td><td>2022-12-09 votes 4-0 by Yuan Gao, Mario Tristan, Eric Harvey, Harold Lehmann
2022-12-16 votes 6-1 by Philippe Rocca-Serra, Janice Tufte, Yuan Gao, Jesus Lopez-Alcalde, Mario Tristan, Eric Harvey, Harold Lehmann</td><td>2022-12-09 comment: Ioannidis, J. P. A. (2019) P values linked to null hypothesis significance testing (NHST) is the most widely (mis)used method of statistical inference. Empirical data suggest that across the biomedical literature (1990–2015), when abstracts use P values 96% of them have P values of 0.05 or less. The same percentage (96%) applies for full-text articles.
2022-12-16 comments: Delete comma in definition (before "that").
p-hacking and fishing expedition aren't synonyms but data processes leading to bias. "p-hacking induced bias" maybe</td><td>from Catalog of Bias (https://catalogofbias.org/biases/data-dredging-bias/): Data-dredging bias = A distortion that arises from presenting the results of unplanned statistical tests as if they were a fully prespecified course of analyses.
from BMJ Evidence-Based Medicine (https://ebm.bmj.com/content/27/4/209): Background: what is data dredging bias?
Data-dredging bias encompasses a number of more specific questionable practices (eg, fishing, p-hacking) all of which involve probing data using unplanned analyses and then reporting salient results without accurately describing the processes by which the results were generated.
from Wikipedia (https://en.wikipedia.org/wiki/Data_dredging): Data dredging (also known as data snooping or p-hacking) is the misuse of data analysis to find patterns in data that can be presented as statistically significant, thus dramatically increasing and understating the risk of false positives.</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00322<a name="181513-SEVCO.5800322"> </a></td><td>final model not corresponding to multivariable analysis</td><td>An analysis bias in which the predictors and coefficients in the final model do not match the predictors and coefficients reported in the multivariable analysis.</td><td>This type of bias is applicable to model development studies and model selection within other study designs.</td><td>Kenneth Wilkins, Brian S. Alper</td><td/><td/><td/><td>from PROBAST:
4.9 Do predictors and their assigned weights in the final model correspond to the results from the reported multivariable analysis? (Model development studies only)
Predictors and coefficients of the final developed model, including intercept or baseline components, should be fully reported to allow others to correctly apply the model to other individuals. Mismatch between the presented final model and the reported results from the multivariable analysis (such as the intercept and predictor coefficients) is frequent. A review of prediction models in cancer in 2010 found that only 13 of 38 final prediction model equations (34%) used the same predictors and coefficients as the final presented multivariable analyses, 8 used the same predictors but different coefficients, 11 used neither the same coefficients nor the same predictors, and 6 used an unclear method to derive the final prediction model from the presented results of the multivariable analysis (121).
Bias can arise when the presented final model and the results reported from the multivariable analysis do not match. One way this can occur is when nonsignificant predictors are dropped from a larger model to arrive at a final presented model but the predictor coefficients from the larger model are used to define the final model, which are no longer correct. When predictors are dropped from a larger model, it is important to reestimate all predictor coefficients of the smaller model because the latter has become the final model. These newly estimated predictor coefficients are likely different even if nonsignificant or irrelevant predictors from the larger model are dropped.
When a study reports a final model in which both predictors and regression coefficients correspond to the reported results of the multivariable regression analysis or model, this question should be answered as Y. If the final model is based only on a selection of predictors from the reported multivariable regression analysis without refitting the smaller model, it should be answered as N or PN. When no information is given on the multivariable modeling from which predictors and regression coefficients are derived, it should be answered as NI.
This signaling question is not about detecting improper methods of selecting predictors for the final model; such methods are addressed in signaling question 4.5.</td><td>2023-12-01</td><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00310<a name="181513-SEVCO.5800310"> </a></td><td>Cognitive Interpretive Bias affecting analysis</td><td>A bias related to the analytic process due to the subjective nature of human interpretation.</td><td>The Cognitive Interpretive Bias affecting analysis can be mitigated by masking the analyst as to the assignments for the groups, and by specification of the analysis prior to data availability.</td><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte</td><td>2022-11-18 vote 6-0 by Mahnoor Ahmed, Yuan Gao, Harold Lehmann, Jesus Lopez-Alcalde, Paul Whaley, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00379<a name="181513-SEVCO.5800379"> </a></td><td>Cognitive Interpretive Bias affecting analysis selection</td><td>A bias related to selection of the analysis due to the subjective nature of human interpretation.</td><td>Bias related to selection of the analysis is defined as an analysis bias due to inappropriate choice of analysis methods before the analysis is applied.
The Cognitive Interpretive Bias affecting analysis selection can be mitigated by masking the analyst as to the assignments for the groups, and by specification of the analysis prior to data availability.</td><td>Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Joanne Dehnbostel</td><td>2022-07-29 vote 5-0 by Janice Tufte, Philippe Rocca-Serra, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00315<a name="181513-SEVCO.5800315"> </a></td><td>Availability bias affecting analysis selection</td><td>A Cognitive Interpretive Bias due to the use of information which is most readily available, rather than information which is most representative, affecting analysis selection.</td><td>Selection of inappropriate data or variables for analysis is an availability bias when the appropriate data or variables are not readily available to the analyst and therefore the appropriate analysis is not selected.
Selection of an inappropriate analysis due to familiarity with the analytic techniques is an availability bias when the appropriate technique is unfamiliar and therefore not selected.
The term "Availability bias affecting analysis selection" is about selection of the analysis and not about missing data.</td><td>Brian S. Alper, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins</td><td>2022-08-26 vote 7-0 by nisha mathew, Philippe Rocca-Serra, Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, Cauê Monaco, Eric Harvey</td><td>2022-08-12 vote 4-1 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey
2022-08-19 vote 5-1 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey, Philippe Rocca-Serra</td><td>2022-08-12 comment: Clarify as to whether this is exclusively about cognitive availability? Seems ambiguous in current phrasing. Would suggest comment for application to make clear specific circumstances in which this applies.
2022-08-19 comment: The definition is ambiguous about whether limits on access to the information are cognitive (e.g. familiarity) or otherwise. Also, the definition specifies "information" when the thing being selected is a technique for analysing information.</td><td>Catalogue of Bias:
Availability bias
A distortion that arises from the use of information which is most readily available, rather than that which is necessarily most representative.</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00380<a name="181513-SEVCO.5800380"> </a></td><td>Cognitive interpretive bias affecting execution of the analysis</td><td>A bias in processing of data due to the subjective nature of human interpretation.</td><td>Bias in processing of data is defined as an analysis bias due to inappropriate decisions pertaining to preparation of data for analysis and/or conduct of the analysis.
This bias may be mitigated by the partial masking or blinding of the individuals conducting the analysis.</td><td>Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte, Paul Whaley, Yuan Gao, Harold Lehmann, Brian S. Alper</td><td>2022-12-02 vote 6-0 by Mario Tristan, Yuan Gao, Mahnoor Ahmed, Muhammad Afzal, Janice Tufte, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00296<a name="181513-SEVCO.5800296"> </a></td><td>Lack of blinding of data analysts</td><td>A cognitive interpretive bias affecting execution of the analysis due to the analyst's awareness of the participants' status with respect to the variables defining the comparison groups.</td><td/><td>Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Janice Tufte, Joanne Dehnbostel, Paul Whaley</td><td>2022-12-02 vote 5-0 by Mario Tristan, Mahnoor Ahmed, Muhammad Afzal, Janice Tufte, Eric Harvey</td><td/><td>2022-12-02 comment: Should it be participants' statuses --- EWG discussion notes that "status" can be used for the plural</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00392<a name="181513-SEVCO.5800392"> </a></td><td>Inappropriate weighting bias</td><td>An analysis bias in which the weights used in model construction do not align with the target of estimation or estimand.</td><td>This bias often occurs with the omission of sampling weights in a model or in the process of trying to mitigate misrepresentation of a population due to sampling.
One example is use of an unweighted model with National Health and Nutrition Examination Survey (NHANES) data.
This bias occurs when attempting to reweight imbalanced classes in a model to make them representative of the source population, when weights drive estimation away from the target.</td><td>Brian S. Alper, Kenneth Wilkins</td><td>2023-10-13 vote 6-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Paul Whaley, Janice Tufte</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00320<a name="181513-SEVCO.5800320"> </a></td><td>Inappropriate evaluation of predictive model performance measures</td><td>An analysis bias in which the method for analysis of a performance measure (such as calibration or discrimination) is not adequate or suitable for the predictive model.</td><td>According to PROBAST explanation, to fully gauge the predictive performance of a model, reviewers must assess both model calibration and discrimination (such as the c-index) addressing the entire range of the model-predicted probabilities. (https://www.acpjournals.org/doi/10.7326/M18-1377)</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins</td><td>2023-10-06 vote 5-0 by Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Paul Whaley, Janice Tufte</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00393<a name="181513-SEVCO.5800393"> </a></td><td>Inappropriate evaluation of calibration of predictive model</td><td>An analysis bias in which the method for analysis of calibration is not adequate or suitable for the predictive model.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins</td><td>2023-10-06 vote 5-0 by Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Paul Whaley, Janice Tufte</td><td/><td>2023-10-06 comment: Is the bias because an analyst prefers one model over another when there might be a more appropriate one ( perhaps the analyst is not familiar with?)</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00394<a name="181513-SEVCO.5800394"> </a></td><td>Inappropriate evaluation of discrimination of predictive model</td><td>An analysis bias in which the method for analysis of discrimination is not adequate or suitable for the predictive model.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins</td><td>2023-10-13 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Paul Whaley</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00321<a name="181513-SEVCO.5800321"> </a></td><td>model overfitting</td><td>An analysis bias, specific to predictive model development studies, in which strategies to mitigate overfitting are not adequately applied.</td><td>Predictive model performance measures (calibration and discrimination) may be misinterpreted if there are no strategies to mitigate overfitting. This applies to development studies without external validation studies. Strategies to mitigate overfitting may include penalization/regularization, k-fold cross validation, train-test/validation split, etc.
From the PROBAST explanation (https://www.acpjournals.org/doi/10.7326/M18-1377): "quantifying the predictive performance of a model on the same data from which the model was developed (apparent performance) tends to give optimistic estimates of performance due to overfitting—that is, the model is too much adapted to the development data set. This optimism is higher when any of the following are present: too few outcome events in total, too few outcome events relative to the number of candidate predictors (small EPV), dichotomization of continuous predictors, use of predictor selection strategies based on univariable analyses, or use of traditional stepwise predictor selection strategies (for example, forward or backward selection) in multivariable analysis in small data sets (small EPV)"</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann</td><td>2023-10-20 vote 5-0 by Muhammad Afzal, Eric Harvey, Harold Lehmann, Louis Leff, Joanne Dehnbostel</td><td>2023-10-06 vote 3-1 by Jesus Lopez-Alcalde, Eric Harvey, Paul Whaley, Janice Tufte</td><td>2023-10-06 comments: I am not sure about having a preferred term that actually consists of two terms - overfit and optimism. Is one a synonym of the other?
Optimism- being too over optimistic and fitting things into the model that really were not defined early on? (adding inappropriate data that can skew the outcomes?)</td><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00023<a name="181513-SEVCO.5800023"> </a></td><td>Reporting Bias</td><td>A bias due to distortions in the selection of or representation of information in study results or research findings.</td><td>Bias is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation]).</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Yuan Gao</td><td>2022-10-21 vote 7-0 by Philippe Rocca-Serra, Harold Lehmann, Joanne Dehnbostel, Mario Tristan, Brian Alper, Janice Tufte, Eric Harvey</td><td/><td/><td>CoB: Reporting biases = A systematic distortion that arises from the selective disclosure or withholding of information by parties involved in the design, conduct, analysis, or dissemination of a study or research findings (https://catalogofbias.org/biases/reporting-biases/) also notes: The Dictionary of Epidemiology defines reporting bias as the “selective revelation or suppression of information (e.g., about past medical history, smoking, sexual experiences) or of study results.”
The Cochrane Handbook states it arises “when the dissemination of research findings is influenced by the nature and direction of results.”
The James Lind Library states “biased reporting of research occurs when the direction or statistical significance of results influence whether and how research is reported.”
QUIPS: The Statistical Analysis and Reporting domain addresses the appropriateness of the study’s statistical analysis and completeness of reporting. It helps the assessor judge whether results are likely to be spurious or biased because of analysis or reporting. To make this judgment, the assessor considers the data presented to determine the adequacy of the analytic strategy and model-building process and investigates concerns about selective reporting. Selective reporting is an important issue in prognostic factor reviews because studies commonly report only factors positively associated with outcomes. A study would be considered to have low risk of bias if the statistical analysis is appropriate for the data, statistical assumptions are satisfied, and all primary outcomes are reported.
ROB2 = This domain addresses bias that arises because the reported result is selected (based on its direction, magnitude or statistical significance) from among multiple intervention effect estimates that were calculated by the trial investigators. We call this bias in selection of the reported result. Consideration of risk of bias requires distinction between:
• An outcome domain. This is a state or endpoint of interest, irrespective of how it is measured (e.g. severity of depression);
• An outcome measurement. This is a specific way in which an outcome domain is measured (e.g. measurement of depression using the Hamilton rating scale 6 weeks after starting intervention); and
• An outcome analysis. This is a specific result obtained by analysing one or more outcome measurements (e.g. the difference in mean change in Hamilton rating scale scores from baseline to 6 weeks between experimental and comparator groups). This domain does not address bias due to selective non-reporting (or incomplete reporting) of outcome domains that were measured and analysed by the trial investigators (115). For example, deaths of trial participants may be recorded by the trialists, but the reports of the trial might contain no mortality data, or state only that the intervention effect estimate for mortality was not statistically significant. Such bias puts the result of a synthesis at risk because results are omitted based on their direction, magnitude or statistical significance. It should therefore be addressed at the review level, as part of an integrated assessment of the risk of reporting bias (116).
ROBINS-I = Bias in selection of the reported result</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00024<a name="181513-SEVCO.5800024"> </a></td><td>Selective Reporting Bias</td><td>A reporting bias due to inappropriate selection of the results or research findings that are reported.</td><td>A reporting bias is a bias due to distortions in the selection of or representation of information in study results or research findings.</td><td>Brian S. Alper, Paul Whaley, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Yuan Gao, Janice Tufte</td><td>2023-01-06 vote 5-0 by Harold Lehmann, Yuan Gao, Janice Tufte, Eric Harvey, Mario Tristan</td><td/><td/><td>MASTER-31. There was no discernible data dredging or selective reporting of the outcomes</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00330<a name="181513-SEVCO.5800330"> </a></td><td>Selective Outcome Reporting</td><td>A selective reporting bias due to inappropriate selection of which outcomes are reported within results or research findings.</td><td>A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported.
A reporting bias is a bias due to distortions in the selection of or representation of information in study results or research findings.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Paul Whaley, Muhammad Afzal</td><td>2023-01-13 vote 5-0 by Harold Lehmann, Joanne Dehnbostel, Paul Whaley, Janice Tufte, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> SEVCO:00336<a name="181513-SEVCO.5800336"> </a></td><td>Selective outcome measure reporting</td><td>A selective reporting bias due to inappropriate selection of which outcome measures are reported for an outcome.</td><td>Selective outcome measure reporting may be considered a type of selective outcome reporting in which the measurement method for determination of the outcome is interpreted as a distinct outcome.
A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported.
A reporting bias is a bias due to distortions in the selection of or representation of information in study results or research findings.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Paul Whaley</td><td>2023-01-13 vote 5-0 by Harold Lehmann, Joanne Dehnbostel, Paul Whaley, Janice Tufte, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00331<a name="181513-SEVCO.5800331"> </a></td><td>Selective subgroup reporting</td><td>A selective reporting bias due to inappropriate selection of subsets of groups of participants for which results or research findings are reported.</td><td>A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported.
A reporting bias is a bias due to distortions in the selection of or representation of information in study results or research findings.
Selective subgroup reporting relates to choice of attributes of participants within cohorts, for example reporting limited to male patients.</td><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Paul Whaley, Joanne Dehnbostel</td><td>2023-01-27 vote 7-0 by Janice Tufte, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Yuan Gao, Paul Whaley, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00331a<a name="181513-SEVCO.5800331a"> </a></td><td>Selective comparison reporting</td><td>A selective reporting bias due to inappropriate selection of comparison groups for which results or research findings are reported.</td><td>A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported.
A reporting bias is a bias due to distortions in the selection of or representation of information in study results or research findings.
Selective comparison reporting relates to choice of cohort definitions, for example an intention-to-treat analysis (as-randomized analysis) vs. an as-treated analysis.</td><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Paul Whaley, Joanne Dehnbostel</td><td>2023-01-27 vote 6-0 by Janice Tufte, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Yuan Gao, Eric Harvey</td><td>2023-01-20 vote 2-1 by Yuan Gao, Paul Whaley, Eric Harvey</td><td>2023-01-20 comment: I don't see enough of a connection between the term (selective comparison) and the definition, which does not seem to talk about comparisons.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00333<a name="181513-SEVCO.5800333"> </a></td><td>Selective analysis reporting from repeated analyses at multiple times</td><td>A selective reporting bias due to inappropriate selection of which analyses are reported for an outcome that was analyzed at multiple points in time in a longitudinal study.</td><td>A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported.
A reporting bias is a bias due to distortions in the selection of or representation of information in study results or research findings.</td><td>Brian S. Alper, Paul Whaley, Janice Tufte, Joanne Dehnbostel</td><td>2023-01-27 vote 6-0 by Janice Tufte, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Yuan Gao, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00334<a name="181513-SEVCO.5800334"> </a></td><td>Selective analysis reporting from multiple analytic models</td><td>A selective reporting bias due to inappropriate selection of which analyses are reported for an outcome that was analyzed in multiple ways.</td><td>A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported.
A reporting bias is a bias due to distortions in the selection of or representation of information in study results or research findings.
Adjustment reporting bias, or selective reporting of adjusted estimates, is a type of selective analysis reporting from multiple analytic models.</td><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins</td><td>2023-02-10 vote 6-0 by Cauê Monaco, Paul Whaley, Janice Tufte, Brian S. Alper, Jesus Lopez-Alcalde, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00335<a name="181513-SEVCO.5800335"> </a></td><td>Selective threshold reporting bias</td><td>A selective reporting bias due to inappropriate selection of which thresholds (used for definitions of the variables) are reported.</td><td>A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported.
A reporting bias is a bias due to distortions in the selection of or representation of information in study results or research findings.</td><td>Brian S. Alper, Paul Whaley, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins</td><td>2023-02-24 vote 7-0 by Harold Lehmann, Yasser Sami Amer, Mario Tristan, Paul Whaley, Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey</td><td>2023-02-10 vote 5-1 by Cauê Monaco, Paul Whaley, Janice Tufte, Brian S. Alper, Jesus Lopez-Alcalde, Eric Harvey</td><td>2023-02-10 comment: I'm not clear how the definition relates specifically to reporting bias.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00025<a name="181513-SEVCO.5800025"> </a></td><td>Cognitive interpretive bias in reporting</td><td>A distortion in the representation of study results or research findings due to the subjective nature of human interpretation.</td><td>Reporting bias is defined as a bias due to distortions in the selection of or representation of information in study results or research findings.
Cognitive interpretive bias in reporting is about interpretation of the results rather than the choice of which results are presented (which would be Selective Reporting Bias).
Cognitive interpretive biases in reporting include selective theory reporting, confirmation bias, bias of rhetoric, novelty bias, popularity bias, and positive results bias.</td><td>Brian S. Alper, Paul Whaley, Harold Lehmann, Janice Tufte, Joanne Dehnbostel</td><td>2023-02-24 vote 7-0 by Harold Lehmann, Yasser Sami Amer, Mario Tristan, Paul Whaley, Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey</td><td>2023-02-10 vote 4-1 by Cauê Monaco, Paul Whaley, Jesus Lopez-Alcalde, Janice Tuft, Eric Harvey</td><td>2023-02-10 comment: I think the definition is sound but the comment for application should be extended to make it clearer that this is about interpretation of the results rather than the choice of which results are presented.</td><td>CoB: Spin bias = The intentional or unintentional distorted interpretation of research results, unjustifiably suggesting favourable or unfavourable findings that can result in misleading conclusions (https://catalogofbias.org/biases/spin-bias/)</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00338<a name="181513-SEVCO.5800338"> </a></td><td>Interpretation of results not addressing potential for bias</td><td>A cognitive interpretive bias in reporting whereby the reported interpretation of results does not adequately address potential for bias.</td><td>Reporting bias is defined as a bias due to distortions in the selection of or representation of information in study results or research findings.
Cognitive interpretive bias in reporting is defined as a distortion in the representation of study results or research findings due to the subjective nature of human interpretation.
Interpretation of results not addressing potential for bias occurs when there is an absence of risk of bias assessment or incomplete inclusion of a risk of bias assessment in the interpretation of findings.</td><td>Brian S. Alper, Paul Whaley, Kenneth Wilkins, Harold Lehmann, Joanne Dehnbostel</td><td>2023-03-03 vote 6-0 by A.G. Radhika, Cauê Monac, Janice Tufte, Harold Lehmann, Yasser Sami Amer, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00328<a name="181513-SEVCO.5800328"> </a></td><td>Results emphasized based on statistical significance</td><td>A cognitive interpretive bias in reporting whereby results with statistical significance are given exaggerated attention.</td><td>This bias may occur in several ways. Results may be interpreted as "positive" or "conclusive" if below the significance threshold and "negative" or "inconclusive" if above the significance threshold without proper interpretation of the meaning of the significance threshold. Results may be selectively emphasized in overall summarization of the results based on whether or not they are under the significance threshold. Results may be interpreted based on statistical significance instead of clinical significance, or results may misrepresent statistical significance and clinical significance as synonymous.</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel</td><td>2023-04-07 vote 5-0 by Paul Whaley, Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey, Harold Lehmann</td><td/><td>2023-04-07 comments:
I support this term as written, although I would suggest that we consider adding that assessment of statistical significance without assessing clinical significance often leads to this bias.
I might suggest adding to Comment for application: "Another mis-interpretation is when statistical significance confused with clinical significance."</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00340<a name="181513-SEVCO.5800340"> </a></td><td>Confirmation bias in reporting</td><td>A cognitive interpretive bias in reporting due to the influence of an individual’s ideas, beliefs or hypotheses.</td><td>Reporting bias is defined as a bias due to distortions in the selection of or representation of information in study results or research findings.
Cognitive interpretive bias in reporting is defined as a distortion in the representation of study results or research findings due to the subjective nature of human interpretation.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins</td><td>2023-03-10 vote 8-0 by Joanne Dehnbostel, Mario Tristan, Harold Lehmann, Jesus Lopez-Alcalde, A.G. Rradhika, Janice Tufte, Eric Harvey, Cauê Monaco</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00329<a name="181513-SEVCO.5800329"> </a></td><td>External validity bias</td><td>A cognitive interpretive bias in reporting due to a mismatch between what the observed data represent and the results that were reported.</td><td>Reporting bias is defined as a bias due to distortions in the selection of or representation of information in study results or research findings.
Cognitive interpretive bias in reporting is defined as a distortion in the representation of study results or research findings due to the subjective nature of human interpretation.
In the assessment of systematic reviews, this type of bias can be phrased as "Relevance of studies to research question not appropriately considered".</td><td>Brian S. Alper, Paul Whaley, Joanne Dehnbostel, Muhammad Afzal</td><td>2023-4-14 by Janice Tufte, Eric Harvey, Harold Lehmann, Joanne Dehnbostel, Jesus Lopez-Alcalde</td><td/><td/><td>derived from ROBIS https://www.bristol.ac.uk/media-library/sites/social-community-medicine/robis/ROBIS%201.2%20Clean.pdf</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00327<a name="181513-SEVCO.5800327"> </a></td><td>Early dissemination bias</td><td>A reporting bias due to publication or reporting of results or research findings that change in subsequent reports.</td><td>One form of Early dissemination bias is the reporting of results in preprints or early versions during the peer review and publication process not matching the subsequent reports.
Another form of Early dissemination bias is the reporting of interim results (even if fully peer reviewed) when a study is ongoing and more data will be analyzed for the final results.
This bias may result from failure to disclose that the results are preliminary or subject to change.
This definition is not meant to indicate that preprints are inherently biased.</td><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal, Paul Whaley</td><td>2023-04-14 by Janice Tufte, Eric Harvey, Harold Lehmann, Jesus Lopez-Alcalde, Joanne Dehnbostel</td><td>2023-04-07 vote 3-1 by Eric Harvey, Harold Lehmann, Paul Whaley, Jesus Lopez-Alcalde</td><td>2023-04-14 comments:
Should we make clear in Comment for Application that preprints represent *potential* bias, because preprinting does not prima facie mean bias?
It seems to me that the bias falls where the results do not carefully convey that they are preliminary or early AND not to be read as final results - maybe could be word smithed. Do you mean someone is reporting without full disclosure?
2023-04-07 comments:
I would suggest "One form of potential Premature...", since prima facie, premature reporting does not *have* to be biased.
I feel that "reporting bias" has the same issue of being semantically loaded as "publication bias" - the problem is premature dissemination of results, via reporting them, publishing them, putting them in a press release, etc. So maybe "premature dissemination bias" could be considered as the preferred term? And then we could even consider "early dissemination bias" as that feels more objective than "premature", now that it is phrased this way.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00384<a name="181513-SEVCO.5800384"> </a></td><td>Fabrication Bias</td><td>A reporting bias resulting from intentional misrepresentation of any part of the study.</td><td>Examples include plagiarism, unjustified authorship, data manipulation, and intentional misrepresentation of figures and charts. Applying this code is a serious allegation of wrongdoing.</td><td>Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Khalid Shahin</td><td>2023-04-21 vote 5-0 by Brian S. Alper, Janice Tufte, Harold Lehmann, Cauê Monaco, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00325<a name="181513-SEVCO.5800325"> </a></td><td>Inadequate reporting of methods</td><td>A reporting bias due to insufficient reporting of methods to determine the validity of the results.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal</td><td>2023-03-31 vote 5-0 by Harold Lehmann, Eric Harvey, Janice Tufte, Paola Rosati, Jesus Lopez-Alcalde</td><td/><td>2023-03-17 vote on "Inadequate Reporting Bias" 2-1 by Eric Harvey, Jesus Lopez-Alcalde, Janice Tufte
2023-03-17 comment on "Inadequate Reporting Bias": Inadequate reporting of methods is covered by another term. Recommend changing this term to "inadequate reporting of results" or deleting this term if terms covering "reporting results biases" have already been established.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00326<a name="181513-SEVCO.5800326"> </a></td><td>Inadequate explanation of participant withdrawals</td><td>A reporting bias due to insufficient reporting of reasons for withdrawals of participants after study enrollment.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte</td><td>2023-03-31 vote 5-0 by Harold Lehmann, Eric Harvey, Janice Tufte, Paola Rosati, Jesus Lopez-Alcalde</td><td/><td>2023-03-31 comment: Somewhere in this entry should be a link to the "withdrawal" SEVCO term. Or terms.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00026<a name="181513-SEVCO.5800026"> </a></td><td>Synthesis Bias</td><td>A bias in the conduct of a systematic review resulting from methods used to select, manipulate or interpret data for evidence synthesis.</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00345<a name="181513-SEVCO.5800345"> </a></td><td>Synthesis missing eligible studies</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00346<a name="181513-SEVCO.5800346"> </a></td><td>Study parameters not appropriate for synthesis</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00347<a name="181513-SEVCO.5800347"> </a></td><td>Heterogeneity not addressed</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00348<a name="181513-SEVCO.5800348"> </a></td><td>Inconsistency within sensitivity analyses</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00349<a name="181513-SEVCO.5800349"> </a></td><td>Biases in studies influence synthesis</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00350<a name="181513-SEVCO.5800350"> </a></td><td>Synthesis Information Bias</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00351<a name="181513-SEVCO.5800351"> </a></td><td>Insufficient study characteristics available for proper results interpretation</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00352<a name="181513-SEVCO.5800352"> </a></td><td>Incomplete collection of relevant study results for synthesis</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00353<a name="181513-SEVCO.5800353"> </a></td><td>Methodologic quality assessment inadequate</td><td/><td>bias in risk of bias assessment?</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00354<a name="181513-SEVCO.5800354"> </a></td><td>Error in risk of bias assessment not minimized</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> SEVCO:00369<a name="181513-SEVCO.5800369"> 
</a></td><td>Error in data collection not minimized</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00028<a name="181513-SEVCO.5800028"> </a></td><td>Qualitative Research Bias</td><td>A bias specific to the design, conduct, analysis or reporting of qualitative research.</td><td>Qualitative research is a research approach that studies subjective aspects of social phenomenon and human behavior. Qualitative research may encompass any non-quantitative method of analysis. Qualitative research often explores the meaning individuals or groups assign to concepts.</td><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Caue Monaco, Li Wang</td><td/><td/><td/><td>MMAT = “Qualitative research is an approach for exploring and understanding the meaning individuals or groups ascribe to a social or human problem” (Creswell, 2013b, p. 3).</td><td>2023-12-01</td><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00356<a name="181513-SEVCO.5800356"> </a></td><td>Inappropriate qualitative approach</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00357<a name="181513-SEVCO.5800357"> </a></td><td>Inadequate qualitative data collection methods</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00358<a name="181513-SEVCO.5800358"> </a></td><td>Inappropriate qualitative analysis</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00359<a name="181513-SEVCO.5800359"> </a></td><td>Unsubstantiated interpretation of results</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00360<a name="181513-SEVCO.5800360"> </a></td><td>Incoherence between data, analysis, and interpretation</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00029<a name="181513-SEVCO.5800029"> </a></td><td>Mixed Methods Research Bias</td><td>A bias specific to the coordination of design, conduct, analysis or reporting of qualitative research and quantitative research.</td><td/><td/><td/><td/><td/><td>MMAT: Mixed methods (MM) research involves combining qualitative (QUAL) and quantitative (QUAN) methods. 
In this tool, to be considered MM, studies have to meet the following criteria (Creswell and Plano Clark, 2017): (a) at least one QUAL method and one QUAN method are combined; (b) each method is used rigorously in accordance to the generally accepted criteria in the area (or tradition) of research invoked; and (c) the combination of the methods is carried out at the minimum through a MM design (defined a priori, or emerging) and the integration of the QUAL and QUAN phases, results, and data</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00361<a name="181513-SEVCO.5800361"> </a></td><td>Inadequate rationale for mixed methods design</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00362<a name="181513-SEVCO.5800362"> </a></td><td>Ineffective integration of study components</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00363<a name="181513-SEVCO.5800363"> </a></td><td>Inappropriate interpretation of integration of qualitative and quantitative findings</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00364<a name="181513-SEVCO.5800364"> </a></td><td>Inadequate handling of inconsistency</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00030<a name="181513-SEVCO.5800030"> </a></td><td>bias in validation assessment</td><td>A bias in the design, conduct or reporting of studies or analyses intended to evaluate the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose.</td><td>Bias in validation assessment is often used for predictive model research and diagnostic research where optimal research design includes derivation studies and external validation studies.
A 'validation study' has a validation goal where validation goal {SEVCO:01098} is defined as a study goal with the intent to determine the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose.
Procedures that may be assessed in validation studies include predictive algorithms, measurement instruments, and educational materials. Internal validation is tested in populations from the source used for derivation of the procedure. External validation is tested in populations that differ from the source used for derivation of the procedure.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins</td><td>2023-10-27 vote 5-0 by Brian S. Alper, Eric Harvey, Yasser Sami Amer, Janice Tufte, Harold Lehmann</td><td/><td/><td>PROBAST = ROB, which was defined to occur when shortcomings in study design, conduct, or analysis lead to systematically distorted estimates of model predictive performance. PROBAST enables a focused and transparent approach to assessing the ROB and applicability of studies that develop, validate, or update prediction models for individualized predictions. Prediction models are sometimes described as risk prediction models, predictive models, prediction indices or rules, or risk scores.</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00368<a name="181513-SEVCO.5800368"> </a></td><td>bias in external validation assessment</td><td>A bias in validation assessment using a sample source that differs from those used in the derivation of the procedure.</td><td>Validation assessment is often used for predictive model research and diagnostic research where optimal research design includes derivation studies and external validation studies.
A 'validation study' has a validation goal where validation goal {SEVCO:01098} is defined as a study goal with the intent to determine the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose.
Procedures that may be assessed in validation studies include predictive algorithms, measurement instruments, and educational materials. Internal validation is tested in populations from the source used for derivation of the procedure. External validation is tested in populations that differ from the source used for derivation of the procedure.
Bias in validation assessment is defined as a bias in the design, conduct or reporting of studies or analyses intended to evaluate the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose.
Bias in external validation assessment may be used for absence of any external validation assessment or inadequacy in external validation assessment.</td><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal</td><td>2023-11-26 vote 5-0 by Harold Lehmann, Muhammad Afzal, Janice Tufte, Jesus Lopez-Alcalde, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00367<a name="181513-SEVCO.5800367"> </a></td><td>bias in internal validation assessment</td><td>A bias in validation assessment specific to a validation assessment that uses the same sample source that was used in the derivation of the procedure.</td><td>Validation assessment is often used for predictive model research and diagnostic research where optimal research design includes derivation studies and external validation studies.
A 'validation study' has a validation goal where validation goal {SEVCO:01098} is defined as a study goal with the intent to determine the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose.
Procedures that may be assessed in validation studies include predictive algorithms, measurement instruments, and educational materials. Internal validation is tested in populations from the source used for derivation of the procedure.
Model derivation is often based on a portion of the data available from a sample source, and internal validation is performed using a different portion of data from the same sample source.
Whereas external validation is tested in populations that differ from the source used for derivation of the procedure, internal validation is tested in the same population.
Bias in validation assessment is defined as a bias in the design, conduct or reporting of studies or analyses intended to evaluate the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose.
Bias in internal validation assessment may be used for absence of any internal validation assessment or inadequacy in internal validation assessment. A common cause of bias in internal validation assessment is validation using the same data that was used for derivation.</td><td>Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Muhammad Afzal</td><td>2023-12-01 vote 5-0 by Xing Song, Javier Bracchiglione, Harold Lehmann, Eric Harvey, Caue Monaco</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00370<a name="181513-SEVCO.5800370"> </a></td><td>Early Study Termination Bias</td><td>A bias due to the decision to end the study earlier than planned.</td><td>Child terms (types of Early Study Termination Bias) may be used to report the reasons for bias in the decision to end the study earlier than planned. Bias resulting from the early study termination may be described with other terms in the code system.</td><td>Brian S. Alper, Kenneth Wilkins, Muhammad Afzal, Mario Tristan, Khalid Shahin, Harold Lehmann, Joanne Dehnbostel</td><td>2022-04-08 vote 6-0 by nelle.stocquart, nisha mathew, Mario Tristan, Robin Ann Yurk, Harold Lehmann, Joanne Dehnbostel</td><td>2022-04-01 vote 4-1 by Jesus Lopez-Alcalde, Harold Lehmann, Cauê Monaco, Robin Ann Yurk, Mario Tristan</td><td>2022-04-01 comment: Term Definition: Simplify so it reads.
A bias in the reported results due to early termination of a study resulting in incomplete data collection.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00371<a name="181513-SEVCO.5800371"> </a></td><td>Early study termination bias due to competing interests</td><td>An early study termination bias due to the decision to end the study being influenced by financial, commercial, legal, political, social, professional, or intellectual interests.</td><td/><td>Brian S. Alper, Kenneth Wilkins, Muhammad Afzal, Mario Tristan, Khalid Shahin</td><td>2022-04-01 vote 6-0 by Brian S. Alper, Joanne Dehnbostel, Jesus Lopez-Alcalde, Harold Lehmann, Cauê Monaco, Mario Tristan</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00372<a name="181513-SEVCO.5800372"> </a></td><td>Early study termination bias due to unplanned use of interim analysis</td><td>An early study termination bias due to awareness of study results without following a preplanned protocol for how interim results will influence the decision to terminate the study.</td><td/><td>Brian S. Alper, Kenneth Wilkins, Muhammad Afzal, Mario Tristan, Khalid Shahin, Joanne Dehnbostel</td><td>2022-04-01 vote 5-0 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Harold Lehmann, Cauê Monaco, Mario Tristan</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00373<a name="181513-SEVCO.5800373"> </a></td><td>Early study termination bias due to inappropriate statistical stopping rule</td><td>An early study termination bias due to use of an inappropriate model or threshold in the analysis used for determination to end the study.</td><td>An example of an inappropriate statistical stopping rule is one that does not account for multiple analyses (i.e. does not use a lower p value threshold) for a conclusion of benefit warranting early termination of the study.</td><td>Brian S. Alper, Muhammad Afzal, Mario Tristan, Khalid Shahin, Joanne Dehnbostel</td><td>2022-04-01 vote 6-0 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Harold Lehmann, Cauê Monaco, Mario Tristan, Robin Ann Yurk</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> SEVCO:00374<a name="181513-SEVCO.5800374"> </a></td><td>Early study termination bias due to external factors</td><td>An early study termination bias due to a decision to end the study based on factors other than the results of interim analysis.</td><td>Examples of external factors may include cessation of funding, and safety or efficacy results reported by other studies.</td><td>Brian S. Alper, Muhammad Afzal, Mario Tristan, Khalid Shahin, Joanne Dehnbostel</td><td>2022-04-01 vote 6-0 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Harold Lehmann, Cauê Monaco, Mario Tristan, Robin Ann Yurk</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>1</td><td style="white-space:nowrap">SEVCO:00027<a name="181513-SEVCO.5800027"> </a></td><td>conflict of interest</td><td>A risk factor for bias in which persons influencing research design, conduct, analysis or reporting have motivations that could compromise their impartiality.</td><td>Motivations may be explicit or implicit. Motivations may be unconscious or unrecognized. Conflict of interest is sometimes phrased "potential conflict of interest" or "perceived conflict of interest".</td><td>Brian S. 
Alper, Harold Lehmann, Kenneth Wilkins, Javier Bracchiglione, Janice Tufte, Muhammad Afzal, Caue Monaco</td><td>2023-12-01 vote 6-0 by Cauê Monaco, Xing Song, Javier Bracchiglione, Harold Lehmann, Janice Tufte, Eric Harvey</td><td>2023-11-10 vote 4-1 by Brian S. Alper, Harold Lehmann, Janice Tufte, Eric Harvey, Javier Bracchiglione</td><td>2023-11-10 comment: I do not think the term should be limited to goals and motivations this seems judgmental and manipulative. COI can be based on intellectual property and or current research work along the same subject where a researcher or partner is too involved with a project or paper on the same subject</td><td>MASTER-28. Conflict of interests were declared and absent</td><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00355<a name="181513-SEVCO.5800355"> </a></td><td>financial conflict of interest</td><td>A risk factor for bias in which persons influencing research design, conduct, analysis or reporting have financial motivations that could compromise their impartiality.</td><td>Motivations may be explicit or implicit. Motivations may be unconscious or unrecognized. The financial motivations may be direct (e.g. salary or consulting fees) or indirect (e.g. stock interests or spousal financial interests). Conflict of interest is sometimes phrased "potential conflict of interest" or "perceived conflict of interest".</td><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Javier Bracchiglione, Janice Tufte, Muhammad Afzal, Caue Monaco</td><td>2023-12-01 vote 6-0 by Cauê Monaco, Xing Song, Javier Bracchiglione, Harold Lehmann, Janice Tufte, Eric Harvey</td><td/><td>2023-12-01 comment: I agree with the definition of the term, but I think it will be better to further explicit what "financial" means in the comments for application (e.g. salary, stocks, paid assistance to congress)</td><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00252<a name="181513-SEVCO.5800252"> </a></td><td>nonfinancial conflict of interest</td><td>A risk factor for bias in which persons influencing research design, conduct, analysis or reporting have non-financial motivations that could compromise their impartiality.</td><td>Motivations may be explicit or implicit. Motivations may be unconscious or unrecognized. The non-financial motivations may be related to social, political, professional, ideological, or other factors. Conflict of interest is sometimes phrased "potential conflict of interest" or "perceived conflict of interest".</td><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Javier Bracchiglione, Janice Tufte, Muhammad Afzal, Caue Monaco</td><td>2023-12-01 vote 6-0 by Cauê Monaco, Xing Song, Javier Bracchiglione, Harold Lehmann, Janice Tufte, Eric Harvey</td><td/><td>2023-12-01 comment: I agree with the definition of the term, but I think it will be better to further explicit what "non-financial" means in the comments for application (e.g. 
intellectual)</td><td/><td/><td/><td/><td/><td/></tr><tr><td>1</td><td style="white-space:nowrap">SEVCO:00007<a name="181513-SEVCO.5800007"> </a></td><td>Rating of Bias Risk</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00186<a name="181513-SEVCO.5800186"> </a></td><td>Low risk of bias</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00187<a name="181513-SEVCO.5800187"> </a></td><td>Moderate risk of bias</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00188<a name="181513-SEVCO.5800188"> </a></td><td>High risk of bias</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00189<a name="181513-SEVCO.5800189"> </a></td><td>Serious risk of bias</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00190<a name="181513-SEVCO.5800190"> </a></td><td>Critical risk of bias</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00191<a name="181513-SEVCO.5800191"> </a></td><td>Some risk of bias</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00192<a name="181513-SEVCO.5800192"> </a></td><td>Unclear risk of bias</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>1</td><td style="white-space:nowrap">SEVCO:00193<a name="181513-SEVCO.5800193"> </a></td><td>Rating of Factor Presence</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00194<a name="181513-SEVCO.5800194"> </a></td><td>Factor present</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00195<a name="181513-SEVCO.5800195"> </a></td><td>Factor likely present</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00196<a name="181513-SEVCO.5800196"> </a></td><td>Factor likely absent</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00197<a name="181513-SEVCO.5800197"> </a></td><td>Factor absent</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00198<a name="181513-SEVCO.5800198"> </a></td><td>No information on factor presence</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00199<a name="181513-SEVCO.5800199"> </a></td><td>Factor presence or absence unclear</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>1</td><td style="white-space:nowrap">SEVCO:00200<a name="181513-SEVCO.5800200"> </a></td><td>Rating of Bias Direction</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00201<a name="181513-SEVCO.5800201"> </a></td><td>Risk of bias favoring experimental</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00202<a name="181513-SEVCO.5800202"> </a></td><td>Risk of bias favoring 
comparator</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00203<a name="181513-SEVCO.5800203"> </a></td><td>Risk of bias towards null</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00204<a name="181513-SEVCO.5800204"> </a></td><td>Risk of bias away from null</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00205<a name="181513-SEVCO.5800205"> </a></td><td>Risk of bias direction unpredictable</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>1</td><td style="white-space:nowrap">SEVCO:00206<a name="181513-SEVCO.5800206"> </a></td><td>Rating of Potential Influence</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00207<a name="181513-SEVCO.5800207"> </a></td><td>Factor has potential to impact results</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00208<a name="181513-SEVCO.5800208"> </a></td><td>Factor likely has potential to impact results</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00209<a name="181513-SEVCO.5800209"> </a></td><td>Factor likely does not have potential to impact results</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> SEVCO:00210<a name="181513-SEVCO.5800210"> </a></td><td>Factor does not have potential to impact results</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>1</td><td style="white-space:nowrap">STATO:0000039<a name="181513-STATO.580000039"> </a></td><td>Statistic</td><td>An information content entity that is a formalization of relationships between variables and value specification.</td><td>The 'statistic' does not include the numerical value for which the statistic is used--that would be the statistic value, and the 'statistic' does not include the model characteristics.</td><td>Brian S. Alper, Philippe Rocca-Serra, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Khalid Shahin</td><td>revision 6/6 as of 10/4/2021: Alejandro Piscoya, Bhagvan Kommadi, Brian S. Alper, Janice Tufte, Louis Leff, Sorana D. Bolboacă; original approval 6/6 as of 9/27/2021: Harold Lehmann, Bhagvan Kommadi, Louis Leff, Janice Tufte, Joanne Dehnbostel, Mario Tristan</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> STATO:0000047<a name="181513-STATO.580000047"> </a></td><td>Count</td><td>A statistic that represents the number of instances or occurrences of something.</td><td>A count can only be denoted by non-negative integer values.</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Kenneth Wilkins</td><td>6/6 as of 9/27/2021: Harold Lehmann, Bhagvan Kommadi, Louis Leff, Janice Tufte, Joanne Dehnbostel, Mario Tristan</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000001<a name="181513-TBD.580000001"> </a></td><td>Sum</td><td>A statistic that represents the result of adding all the values in a collection of values.</td><td/><td>Brian S. 
Alper, Joanne Dehnbostel,Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Khalid Shahin</td><td>6/6 as of 10/4/2021: Alejandro Piscoya, Bhagvan Kommadi, Brian S. Alper, Janice Tufte, Louis Leff, Sorana D. Bolboacă</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000002<a name="181513-TBD.580000002"> </a></td><td>Maximum Observed Value</td><td>A statistic that represents the largest non-null value in a collection of values that can be ordered by magnitude.</td><td/><td>Brian S. Alper, Joanne Dehnbostel,Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Khalid Shahin</td><td>6/6 as of 10/4/2021: Alejandro Piscoya, Bhagvan Kommadi, Brian S. Alper, Janice Tufte, Louis Leff, Sorana D. Bolboacă</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000003<a name="181513-TBD.580000003"> </a></td><td>Minimum Observed Value</td><td>A statistic that represents the smallest non-null value in a collection of values that can be ordered by magnitude.</td><td/><td>Brian S. Alper, Joanne Dehnbostel,Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Khalid Shahin</td><td>6/6 as of 10/4/2021: Alejandro Piscoya, Bhagvan Kommadi, Brian S. Alper, Janice Tufte, Louis Leff, Sorana D. Bolboacă</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000004<a name="181513-TBD.580000004"> </a></td><td>Maximum Possible Value</td><td>A statistic that represents the largest value that could occur.</td><td>This term may be used to denote the upper limit of a scale or score.</td><td>Brian S. Alper, Joanne Dehnbostel,Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Khalid Shahin</td><td>6/6 as of 10/4/2021: Alejandro Piscoya, Bhagvan Kommadi, Brian S. Alper, Janice Tufte, Louis Leff, Sorana D. Bolboacă</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000005<a name="181513-TBD.580000005"> </a></td><td>Minimum Possible Value</td><td>A statistic that represents the smallest value that could occur.</td><td>This term may be used to denote the lower limit of a scale or score.</td><td>Brian S. Alper, Joanne Dehnbostel,Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Khalid Shahin</td><td>6/6 as of 10/4/2021: Alejandro Piscoya, Bhagvan Kommadi, Brian S. Alper, Janice Tufte, Louis Leff, Sorana D. Bolboacă</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000006<a name="181513-TBD.580000006"> </a></td><td>Threshold</td><td>A statistic that represents the boundary at which something changes.</td><td>The thing that changes at the threshold value may be relevant for function, application, classification, or detection.</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Kenneth Wilkins, Khalid Shahin</td><td>5/5 as of 10/11/2021: Janice Tufte, Joanne Dehnbostel, Louis Leff, Vignesh Subbian, Robin Ann Yurk</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> STATO:0000069<a name="181513-STATO.580000069"> </a></td><td>Degrees of Freedom</td><td>A statistic that represents the number of independent values used to calculate a statistical estimate. The number of degrees of freedom ν is equal to the number of independent units of information given the model.</td><td>The formula to calculate degrees of freedom will depend on the model. 
For example, the degrees of freedom for a sample standard deviation, given the sample mean, is N-1, because the Nth observation is no longer independent, given the N-1 other observations and the sample mean.</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Kenneth Wilkins, Khalid Shahin, Harold Lehmann</td><td>6/6 as of 10/27/2021: Janice Tufte, Louis Leff, Vignesh Subbian, Robin Ann Yurk, Harold Lehmann, Muhammad Afzal, Pentti Nieminen</td><td/><td>Include * in P = x1 * x2...to clarify this is a product.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> STATO:0000029<a name="181513-STATO.580000029"> </a></td><td>Measure of Central Tendency</td><td>A statistic that represents a central value for a set of data.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Kenneth Wilkins, Philippe Rocca-Serra</td><td>6/6 as of 10/27/2021: Janice Tufte, Louis Leff, Vignesh Subbian, Robin Ann Yurk, Harold Lehmann, Muhammad Afzal, Pentti Nieminen</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000573<a name="181513-STATO.580000573"> </a></td><td>Mean</td><td>A measure of central tendency calculated as the sum of a set of values divided by the number of values in the set.</td><td>A=sum[Ai] / n where i ranges from 1 to n and Ai represents the value of individual observations.</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Kenneth Wilkins, Philippe Rocca-Serra</td><td>6/6 as of 10/27/2021: Janice Tufte, Louis Leff, Vignesh Subbian, Robin Ann Yurk, Harold Lehmann, Muhammad Afzal, Pentti Nieminen</td><td/><td/><td/><td/><td/><td/><td>Measure of Central Tendency</td><td>Keskiarvo</td></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000177<a name="181513-TBD.580000177"> </a></td><td>Mean of differences</td><td>A mean of values in which each value is the subtraction of one quantity from another.</td><td>The primary use of this term is in analyzing within-individual differences.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin, Janice Tufte, Muhammad Afzal, Kenneth Wilkins</td><td>2021-12-15 vote 5-0 by Robin Ann Yurk, Harold Lehmann, Janice Tufte, Paola Rosati, Brian S. Alper</td><td>2021-12-01 vote 6-1 by Louis Leff, Paola Rosati, Robin Ann Yurk, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte, C P Ooi</td><td>2021-12-01 comment: 'Difference in means' may be more appropriate. 'Mean value from one population subtract the mean value of another population' may be clearer reflecting the definition</td><td/><td/><td/><td/><td>Measure of Central Tendency</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000183<a name="181513-TBD.580000183"> </a></td><td>Mean time-to-event</td><td>A mean of values in which each value is the duration of time between the start of observation and the occurrence of an event.</td><td/><td>Brian S. 
Alper, Joanne Dehnbostel, Muhammad Afzal, Yuan Gao</td><td>2022-10-19 vote 5-0 by Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Mario Tristan, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000396<a name="181513-STATO.580000396"> </a></td><td>Geometric Mean</td><td>A measure of central tendency calculated as the nth root of the product of all of the observations in a data set (n being the number of all observations).</td><td>For n observations with values x1, x2, … xn, the product of all the values P = x1 * x2 … xn [also expressed as P = (x1)(x2)...(xn)]. The nth root of the product = (P)^(1/n).</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Kenneth Wilkins, Philippe Rocca-Serra</td><td>6/6 as of 10/27/2021: Janice Tufte, Louis Leff, Vignesh Subbian, Robin Ann Yurk, Harold Lehmann, Muhammad Afzal, Pentti Nieminen</td><td/><td/><td/><td/><td/><td/><td>Measure of Central Tendency</td><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000574<a name="181513-STATO.580000574"> </a></td><td>Median</td><td>A measure of central tendency equal to the middle value (or mean of the two middle values) of a set of ordered data.</td><td>The median value is equal to the middle value of a set of ordered data with an odd number of values. The median value is calculated as the mean of the two middle values of a set of ordered data with an even number of values. The median is sometimes called the second quartile or fiftieth percentile.</td><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte</td><td>2021-12-15 vote 6-0 by Robin Ann Yurk, Muhammad Afzal, Harold Lehmann, Janice Tufte, Paola Rosati, Khalid Shahin</td><td>6-1 on 2021-11-01 by Louis Leff, Vignesh Subbian, Pentti Nieminen, Bhagvan Kommadi, Janice Tufte, Sorana D. Bolboacă, Robin Ann Yurk
2021-12-01 vote 5-1 by Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte</td><td>2021-11-01 comment: the definition is appropriate. Suggest use alternative terms: center value, statistical median or middle value. I don't recommend using fiftieth percentile or second quartile
2021-12-01 comment: I would change definition to: A measure of central tendency equal to the middle value of a set of ordered data with an odd number of values. It could be calculated also as the mean of the two middle values of a set of ordered data with an even number of values. ((Perhaps simpler as: A measure of central tendency equal to the middle value of a set of ordered data. In a set of ordered data with an even number of values, the middle value is calculated as the mean of the two middle values.))</td><td/><td/><td/><td/><td>Measure of Central Tendency</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000184<a name="181513-TBD.580000184"> </a></td><td>Median time-to-event</td><td>A median of values in which each value is the duration of time between the start of observation and the occurrence of an event.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Yuan Gao</td><td>2022-10-19 vote 5-0 by Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Mario Tristan, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000033<a name="181513-STATO.580000033"> </a></td><td>Mode</td><td>A measure of central tendency that is the most frequently occurring value in a data set. If no value is repeated, there is no mode. If more than one value occurs with the same greatest frequency, each of these values is a mode.</td><td/><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte</td><td>7/7 on 2021-11-01 by Louis Leff, Vignesh Subbian, Pentti Nieminen, Bhagvan Kommadi, Janice Tufte, Sorana D. Bolboacă, Robin Ann Yurk</td><td/><td/><td/><td/><td/><td/><td>Measure of Central Tendency</td><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000397<a name="181513-STATO.580000397"> </a></td><td>Harmonic Mean</td><td>A measure of central tendency calculated by dividing the total number of observations by the sum of the reciprocals of each observed value.</td><td>Harmonic Mean = N/(1/a1+1/a2+1/a3+1/a4+...+1/aN) where a(i)= Individual observed value and N = Sample size (Number of observations)</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal</td><td>2022-10-19 vote 6-0 by Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Mario Tristan, Eric Harvey, Yuan Gao</td><td/><td/><td>STATO:
The harmonic mean is a kind of mean which is calculated by dividing the total number of observations by the reciprocal of each number in a series. Harmonic Mean = N/(1/a1+1/a2+1/a3+1/a4+.......+1/aN) where a(i)= Individual score and N = Sample size (Number of scores)</td><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000007<a name="181513-TBD.580000007"> </a></td><td>Difference</td><td>A statistic that is a subtraction of one quantity from another.</td><td/><td>Harold Lehmann, Brian S. Alper, Muhammad Afzal, Khalid Shahin, Philippe Rocca-Serra, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte</td><td>2021-12-01 vote 5-0 by Philippe Rocca-Serra, Paola Rosati, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000174<a name="181513-TBD.580000174"> </a></td><td>Absolute Difference</td><td>A statistic that is a subtraction of one quantity from another, with no modification of the resulting value.</td><td>As a type of statistic, "Absolute Difference" is the actual difference between two quantities and can be positive or negative depending on the order of subtraction. The term "Absolute Difference" should not be confused with the mathematical term 'absolute value' which is a numerical value without a negative sign.</td><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin, Janice Tufte</td><td>2021-12-01 vote 5-0 by Philippe Rocca-Serra, Paola Rosati, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000008<a name="181513-TBD.580000008"> </a></td><td>Count Difference</td><td>A statistic that is a subtraction of one count from another.</td><td>The term Count Difference is used to specify the Absolute Difference is with respect to a count or number of items (such as number of events, platelet counts, sample size e.g. number of people in the group) to distinguish from differences in other types of statistics (mean difference, median difference, risk difference, etc.)</td><td>Harold Lehmann, Brian S. Alper, Muhammad Afzal, Khalid Shahin, Philippe Rocca-Serra, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte</td><td>2021-12-15 vote 6-0 by Robin Ann Yurk, Janice Tufte, Paola Rosati, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal</td><td>2021-12-01 vote 5-1 by Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte
2021-12-08 vote 6-1 by Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte, Harold Lehmann</td><td>2021-12-01 comment: Suggest include as an alternative term under difference and remove this term as unclear on distinction as a separate term.
2021-12-08 comment: Suggest removing this term and adding as an alternative term to Difference (alternative term and Comment for application added in response)</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000457<a name="181513-STATO.580000457"> </a></td><td>Difference in means</td><td>A statistic that is a subtraction of one mean from another.</td><td>The primary use of this term is in analyzing between-group differences.</td><td>Harold Lehmann, Brian S. Alper, Muhammad Afzal, Khalid Shahin, Philippe Rocca-Serra, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte</td><td>2021-12-01 vote 6-0 by Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte
2021-12-01 Steering group added comment for application and decided not to send out for vote again.</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000009<a name="181513-TBD.580000009"> </a></td><td>Difference in medians</td><td>A statistic that is a subtraction of one median from another.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin, Janice Tufte, Muhammad Afzal, Kenneth Wilkins</td><td>2021-12-01 vote 6-0 by Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000424<a name="181513-STATO.580000424"> </a></td><td>Risk Difference</td><td>A measure of association that is the subtraction of the risk of an event in one group from the risk of the same event in another group.</td><td/><td>Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel, Harold Lehmann, Janice Tufte</td><td>2022-06-08 vote 5-0 by Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati</td><td/><td/><td>STATO: risk difference = The risk difference is the difference between the observed risks (proportions of individuals with the outcome of interest) in the two groups.
The risk difference is straightforward to interpret: it describes the actual difference in the observed risk of events between experimental and control interventions.</td><td/><td/><td/><td>Measure of Association</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000178<a name="181513-TBD.580000178"> </a></td><td>Difference-in-differences</td><td>A statistic that is a subtraction of one difference from another.</td><td>The term 'Difference-in-differences' may be used to assess the incremental benefit or harm of an intervention or exposure, where the effect of the exposure is measured as a difference (for example, pre-post testing comparison of values before and after the exposure) in two groups being compared.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte</td><td>2022-06-08 vote 6-0 by Robin Ann Yurk, Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati</td><td/><td>2022-06-08 comment: do you want to add to comment for application, pre-post testing or as an alternative term?</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000180<a name="181513-TBD.580000180"> </a></td><td>Relative Difference</td><td>A statistic that is a difference between 1 and a ratio of the two quantities being compared.</td><td>Relative Difference = 1 - ( a / b ). Because 1 - ( a / b ) is not equal to 1 - ( b / a ), Relative Difference may be expressed as "Relative Difference with respect to b" when referring to 1 - ( a / b ).
The relative difference can also be defined as a statistic that is a ratio of the absolute difference (of the two quantities being compared) to the reference value (one of the quantities being compared). Relative Difference = ( b - a ) / ( b ) where b is the reference value and this may also be called "Relative Difference with respect to b"</td><td>Brian S. Alper, Muhammad Afzal, Kenneth Wilkins, Joanne Dehnbostel</td><td>2022-06-29 vote 5-0 by Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Harold Lehmann, Eric Harvey</td><td>2022-06-15 vote 2-2 by Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte
2022-06-22 vote 4-2 by Mario Tristan, Eric M Harvey, Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte</td><td>2022-06-15 comments: Suggest combining this set of terms (relative difference, relative mean difference and relative risk difference) and summarizing the comment for applications so it is one term.
To me this definition is unclear...sorry, what does it mean? Is it weird to have a ratio of a difference to a reference value? Sorry, but I am unable to understand this definition.
relative and absolute difference seems confusing to me</td><td>Example of a relative difference (relative to placebo) that is not a relative mean difference or a relative risk difference:
Relative median difference (%) = [(active median - placebo median) / placebo median] x 100.
This can be transformed to:
Relative median difference = (active median / placebo median) - 1.</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000176<a name="181513-TBD.580000176"> </a></td><td>Relative Mean Difference</td><td>A statistic that is a difference between 1 and a ratio of the two mean values being compared.</td><td>Relative Mean Difference = 1 - ( a / b ) where a and b are mean values.
The relative mean difference can also be defined as a statistic that is a ratio of the difference in means to the reference mean value. Relative Mean Difference = ( b - a ) / ( b ) where b is the reference mean value and a is another mean value.</td><td>Brian S. Alper, Muhammad Afzal, Kenneth Wilkins, Joanne Dehnbostel</td><td>2022-06-29 vote 6-0 by Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Janice Tufte, Harold Lehmann, Eric Harvey</td><td>2022-06-15 vote 3-1 by Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte
2022-06-22 vote 5-1 by Mario Tristan, Eric M Harvey, Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte</td><td>2022-06-15 comment: Suggest combining this set of terms (relative difference, relative mean difference and relative risk difference) and summarizing the comment for applications so it is one term.
2022-06-29 comment: Relative Mean Difference is_a kind of Relative difference where the quantities being compared are two means, one of which is or acts as reference mean value (additional comment: define 'reference mean value' if it refers to something more specific)</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000010<a name="181513-TBD.580000010"> </a></td><td>Relative Risk Difference</td><td>A statistic that is a difference between 1 and a ratio of the two risk values being compared.</td><td>Relative Risk Difference = 1 - ( a / b ) where a and b are risk values.
The relative risk difference can also be defined as a statistic that is a ratio of the risk difference to the risk used as a reference. Relative Risk Difference = ( b - a ) / ( b ) where b is the reference risk value and a is another risk value.</td><td>Brian S. Alper, Muhammad Afzal, Kenneth Wilkins, Joanne Dehnbostel</td><td>2022-06-22 vote 6-0 by Mario Tristan, Eric M Harvey, Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte</td><td>2022-06-15 vote 3-1 by Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte
2022-06-22 vote 5-1 by Mario Tristan, Eric M Harvey, Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte</td><td>2022-06-15 comment: Suggest combining this set of terms (relative difference, relative mean difference and relative risk difference) and summarizing the comment for applications so it is one term.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000100<a name="181513-STATO.580000100"> </a></td><td>Standardized Mean Difference</td><td>A statistic that is a difference between two means, divided by a statistical measure of dispersion.</td><td>In English, "standardized" is often used to express relative comparison to any reference value. However, in SEVCO, "standardized" is used to express relative comparison to a statistical measure of dispersion.
In SEVCO, the term Standardized Mean Difference is a description of the concept without an explicit type of statistical measure of dispersion. If the statistical measure of dispersion is specified, then a type (child term) of Standardized Mean Difference is preferred.
For example, in Cohen's d statistic, the statistical measure of dispersion is specified as the square root of an average of the variances of the two groups being compared.</td><td>Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Muhammad Afzal, Joanne Dehnbostel, Khalid Shahin</td><td>2022-07-20 vote 6-0 by Janice Tufte, Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey</td><td>2022-07-06 vote 3-1 by Paola Rosati, Harold Lehmann, Robin Ann Yurk, Eric Harvey</td><td>2022-07-06 comment: Consider listing Cohen's D statistic as an alternative term or selecting one of the two terms to be the term to evaluate as the definition are similar.</td><td>STATO: standardized mean difference (Cohen's d statistic, SMD) = standardized mean difference is data item computed by forming the difference between two means, divided by an estimate of the within-group standard deviation.
It is used to provide an estimatation of the effect size between two treatments when the predictor (independent variable) is categorical and the response(dependent) variable is continuous</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000011<a name="181513-TBD.580000011"> </a></td><td>Cohen’s d statistic</td><td>A standardized mean difference which is calculated as a difference between two means, divided by a square root of an average of the variances of the two groups.</td><td>A standardized mean difference is a statistic that is a difference between two means, divided by a statistical measure of dispersion. In SEVCO, the term Standardized Mean Difference is a description of the concept without an explicit type of statistical measure of dispersion. If the statistical measure of dispersion is specified, then a type (child term) of Standardized Mean Difference is preferred.
In Cohen's d statistic, the statistical measure of dispersion is specified as the square root of an average of the variances of the two groups being compared. The variances of the two groups are based on within-group standard deviations.
For sample sizes < 50, a correction factor is used.</td><td>Harold Lehmann, Brian S. Alper, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel</td><td>2022-07-20 vote 5-0 by Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey</td><td>2022-07-06 vote 3-1 by Paola Rosati, Harold Lehmann, Robin Ann Yurk, Eric Harvey</td><td>2022-07-06 comment: Consider listing SMD statistic as an alternative term or selecting one of the two terms to be the term to evaluate as the definition are similar and the other term to be the alternative term.</td><td>STATO: standardized mean difference (Cohen's d statistic, SMD) = standardized mean difference is data item computed by forming the difference between two means, divided by an estimate of the within-group standard deviation.
It is used to provide an estimatation of the effect size between two treatments when the predictor (independent variable) is categorical and the response(dependent) variable is continuous</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000135<a name="181513-STATO.580000135"> </a></td><td>Strictly standardized mean difference</td><td>A standardized mean difference which is calculated as a difference between two means, divided by the standard error of the difference between the two means.</td><td>A standardized mean difference is a statistic that is a difference between two means, divided by a statistical measure of dispersion. In SEVCO, the term Standardized Mean Difference is a description of the concept without an explicit type of statistical measure of dispersion. If the statistical measure of dispersion is specified, then a type (child term) of Standardized Mean Difference is preferred.
In Strictly standardized mean difference, the statistical measure of dispersion is specified as the standard error of the difference between means [SEVCO TBD:0000063].</td><td>Harold Lehmann, Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel</td><td>2022-07-20 vote 6-0 by Janice Tufte, Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey</td><td/><td/><td>STATO: strictly standardized mean difference (SSMS) is a standardized mean difference which corresponds to the ratio of mean to the standard deviation of the difference between two groups. SSMD directly measures the magnitude of difference between two groups. SSMD is widely used in High Content Screen for hit selection and quality control. When the data is preprocessed using log-transformation as normally done in HTS experiments, SSMD is the mean of log fold change divided by the standard deviation of log fold change with respect to a negative reference. In other words, SSMD is the average fold change (on the log scale) penalized by the variability of fold change (on the log scale). For quality control, one index for the quality of an HTS assay is the magnitude of difference between a positive control and a negative reference in an assay plate. For hit selection, the size of effects of a compound (i.e., a small molecule or an siRNA) is represented by the magnitude of difference between the compound and a negative reference. SSMD directly measures the magnitude of difference between two groups. Therefore, SSMD can be used for both quality control and hit selection in HTS experiments.</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000319<a name="181513-STATO.580000319"> </a></td><td>Hedges’s g</td><td>A standardized mean difference which is calculated as a difference between two means, divided by the pooled standard deviation.</td><td>A standardized mean difference is a statistic that is a difference between two means, divided by a statistical measure of dispersion. In SEVCO, the term Standardized Mean Difference is a description of the concept without an explicit type of statistical measure of dispersion. If the statistical measure of dispersion is specified, then a type (child term) of Standardized Mean Difference is preferred.
In Hedges’s g, the statistical measure of dispersion is specified as the pooled standard deviation. There is a correction factor for small sample sizes.</td><td>Harold Lehmann, Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel</td><td>2022-07-20 vote 5-0 by Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey</td><td/><td/><td>STATO: Hedges's g = Hedges's g is an estimator of effect size, which is similar to Cohen's d and is a measure based on a standardized difference. However, the denominator, corresponding to a pooled standard deviation, is computed differently from Cohen's d coefficient, by applying a correction factor (which involves a Gamma function).</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000320<a name="181513-STATO.580000320"> </a></td><td>Glass’s delta</td><td>A standardized mean difference which is calculated as a difference between two means (of an experimental group and a control group), divided by the standard deviation of the control group.</td><td>A standardized mean difference is a statistic that is a difference between two means, divided by a statistical measure of dispersion. In SEVCO, the term Standardized Mean Difference is a description of the concept without an explicit type of statistical measure of dispersion. If the statistical measure of dispersion is specified, then a type (child term) of Standardized Mean Difference is preferred.
In Glass's delta, the statistical measure of dispersion is specified as the standard deviation of the control group. There is a correction factor for small sample sizes.</td><td>Harold Lehmann, Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel</td><td>2022-07-20 vote 5-0 by Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey</td><td/><td/><td>STATO: Glass's delta is an estimator of effect size which is similar to Cohen's d but where the denominator corresponds only to the standard deviation of the control group (or second group). It is considered less biais than the Cohen's d for estimating effect sizes based on means and distances between means.</td><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000012<a name="181513-TBD.580000012"> </a></td><td>Reciprocal of Difference</td><td>A statistic that is a quotient of one and a difference.</td><td>A difference is a statistic that is a subtraction of one quantity from another.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel</td><td>2022-07-20 vote 5-0 by Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000013<a name="181513-TBD.580000013"> </a></td><td>Number Needed to Treat</td><td>A statistic that represents the number of units that needs to be treated to prevent one additional undesired outcome. The Number Needed to Treat is calculated as the reciprocal of a treatment effect estimate, where the effect estimate is expressed as a risk difference.</td><td>The Number Needed to Treat (NNT) value is often rounded up to the next highest whole integer.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte</td><td>2022-07-20 vote 7-0 by Cauê Monaco, Janice Tufte, Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey</td><td/><td/><td>Centre for Evidence-Based Medicine
Number Needed to Treat (NNT)
The Number Needed to Treat (NNT) is the number of patients you need to treat to prevent one additional bad outcome (death, stroke, etc.).
https://www.cebm.ox.ac.uk/resources/ebm-tools/number-needed-to-treat-nnt</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000014<a name="181513-TBD.580000014"> </a></td><td>Number Needed to Screen to Detect</td><td>A statistic that represents the number of units that needs to be tested to identify one additional case. The Number Needed to Screen to Detect is calculated as the reciprocal of a difference in rate of detected cases with and without screening.</td><td>The Number Needed to Screen (NNS) value is often rounded up to the next highest whole integer. The Number Needed to Screen to Detect is distinct from the Number Needed to Screen to Prevent as the formulas to calculate are different, even though both may be abbreviated as NNS.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte</td><td>2022-08-03 vote 5-0 by Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey, Philippe Rocca-Serra, Harold Lehmann</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000015<a name="181513-TBD.580000015"> </a></td><td>Number Needed to Screen to Prevent</td><td>A statistic that represents the number of units that needs to be tested to prevent one additional adverse outcome, assuming that positive testing will lead to preventive intervention. The Number Needed to Screen to Prevent is calculated as the Number Needed to Treat divided by the prevalence.</td><td>The Number Needed to Screen (NNS) value is often rounded up to the next highest whole integer. The Number Needed to Screen to Detect is distinct from the Number Needed to Screen to Prevent as the formulas to calculate are different, even though both may be abbreviated as NNS.
The formula may be adjusted for test performance characteristics (e.g. dividing by the sensitivity) or assumptions regarding acceptance of or adherence to interventions.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte</td><td>2022-08-03 vote 5-0 by Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey, Philippe Rocca-Serra, Harold Lehmann</td><td/><td/><td>BMJ 1998;317: 307 Number needed to screen: development of a statistic for disease screening
Number needed to screen is defined as the number of people that need to be screened for a given duration to prevent one death or adverse event.
Number needed to screen was then calculated by dividing the number needed to treat for treating risk factors by the prevalence of disease that was unrecognised or untreated.
https://www.bmj.com/content/317/7154/307.long</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000016<a name="181513-TBD.580000016"> </a></td><td>Number Needed to Harm</td><td>A statistic that represents the number of units that needs to be treated or exposed to the intervention to lead to one additional undesired outcome. The Number Needed to Harm is calculated as the reciprocal of a treatment effect estimate, where the effect estimate is expressed as a risk difference.</td><td>The Number Needed to Harm (NNH) value is often rounded down to the next lowest whole integer.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte</td><td>2022-07-20 vote 7-0 by Cauê Monaco, Janice Tufte, Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey</td><td/><td/><td>Centre for Evidence-Based Medicine
Number needed to treat (NNT): The number of patients who need to be treated to prevent one bad outcome. It is the inverse of the ARR: NNT=1/ARR. Numbers needed to harm (NNH)-the number of patients who, if they received the experimental treatment, would lead to one additional person being harmed compared with patients who receive the control treatment; calculated as 1/ARI.
https://www.cebm.ox.ac.uk/resources/ebm-tools/glossary</td><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> STATO:0000184<a name="181513-STATO.580000184"> </a></td><td>Ratio</td><td>A statistic that is a quotient of two quantities.</td><td>Although some definitions for Ratio include "with the same units of measurement" and some definitions for Ratio include "a dimensionless quotient", not all definitions have these concepts, and there are ratios with units of measurement that are different for numerator and denominator such as event rate, body mass index, and cost-effectiveness ratio.</td><td>Harold Lehmann, Kenneth Wilkins, Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Muhammad Afzal</td><td>2022-01-05 vote 6-0 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, C P Ooi, Louis Leff, Jesus Lopez-Alcalde</td><td>2021-12-22 vote 3-1 by Robin Ann Yurk, Harold Lehmann, Janice Tufte, Jesus Lopez-Alcalde
2021-12-29 vote 3-1 by Robin Ann Yurk, Harold Lehmann, Janice Tufte, C P Ooi</td><td>2021-12-22 comment: I suggest adding "with the same measurement units"
2021-12-29 comment: I agree with the term definition. However, the comment could be improved and I would not include BMI as an example as an index may not necessarily be a ratio but a more complex statistic or calculation
2022-01-05 comment: I would remove body mass index from the comment section as an example as an index is a unique statistical definition.</td><td>STATO: A ratio is a data item which is formed with two numbers r and s is written r/s, where r is the numerator and s is the denominator. The ratio of r to s is equivalent to the quotient r/s.
NCIt: The quotient of one quantity divided by another, with the same units of measurement.
UMLS: Quotient of quantities of the same kind for different components within the same system.
OECD: A ratio is a number that expresses the relative size of two other numbers.
OCRe: A ratio is a quotient of quantities of the same kind for different components within the same system.
SCO: A ratio is a relationship between two numbers of the same kind expressed arithmetically as a dimensionless quotient of the two which explicitly indicates how many times the first number contains the second.
</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000017<a name="181513-TBD.580000017"> </a></td><td>Percentage</td><td>A ratio that is multiplied by 100, and has the same units of measurement in the numerator and the denominator.</td><td>When a percentage is a fraction of hundred or proportion per hundred, then the percentage is the proportion multiplied by 100. However, a percentage can be greater than 100% so the definition is a ratio that is multiplied by 100.
Proportion is SEVCO code of TBD:0000018, Ratio is SEVCO code of STATO:0000184</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Khalid Shahin</td><td>2022-01-07 vote 7-0 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, C P Ooi, Louis Leff, Jesus Lopez-Alcalde, Mario Tristan</td><td>2022-01-05 vote 5-1 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, C P Ooi, Louis Leff, Jesus Lopez-Alcalde</td><td>2022-01-05 comment: Instead of "A ratio" I would propose "A proportion that is multiplied by 100, [...]"</td><td>NCIt-A fraction or ratio with 100 understood as the denominator. Alt definition One hundred times the quotient of one quantity divided by another, with the same units of measurement.
OECD-A percentage is a special type of proportion where the ratio is multiplied by a constant, 100, so that the ratio is expressed per 100.
SCO-A fraction or ratio with 100 understood as the denominator.
UMLS-A unit for expressing a number as a fraction of hundred (on the basis of a rate or proportion per hundred)-NCI</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:MeasAccu<a name="181513-TBD.58MeasAccu"> </a></td><td>Measurement Accuracy</td><td>A percentage in which the numerator represents the absolute value of one minus the difference between the true value and the observed value, and the denominator represents the true value.</td><td/><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Joanne Dehnbostel</td><td>2022-08-24 vote 5-0 by Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, Cauê Monaco, Eric Harvey</td><td/><td/><td>from https://www.sciencedirect.com/topics/engineering/measurement-accuracy
Measurement Accuracy
Measurement accuracy is defined as the closeness of agreement between a measured quantity value and a true quantity value of a measurand (i.e., the quantity intended to be measured) (ISO-JCGM 200, 2008), and is often limited by calibration errors.</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:RelativeImportance<a name="181513-TBD.58RelativeImportance"> </a></td><td>Relative Importance</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000018<a name="181513-TBD.580000018"> </a></td><td>Proportion</td><td>A ratio in which the numerator represents a part, fraction or share of the amount represented by the denominator.</td><td>The value of a proportion must be between 0 and 1 (inclusive). Proportions may represent the frequency of some phenomenon of interest within a population, or may represent a subset of a whole.</td><td>Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal, Khalid Shahin</td><td>2022-01-07 vote 7-0 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, C P Ooi, Louis Leff, Jesus Lopez-Alcalde, Mario Tristan</td><td>2021-12-29 vote 3-1 by Robin Ann Yurk, Harold Lehmann, Janice Tufte, C P Ooi</td><td>2021-12-29 comment: I agree with the term definition. However, for the comment, I would edit to include the OCRe defintion: A proportion is a measure of the frequency of some phenomenon of interest within a population.</td><td>STATO: observed risk [as a data item STATO_0000423] = the proportion of individuals in a population with the outcome of interest
NCIt: A part, fraction, share, or number considered in relation to the whole amount or number.
OECD Definition:
A proportion is a special type of ratio in which the denominator includes the numerator.
An example is the proportion of deaths that occurred to males which would be deaths to males divided by deaths to males plus deaths to females (i.e. the total population).
OCRe: A proportion is a measure of the frequency of some phenomenon of interest within an average population</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000413<a name="181513-STATO.580000413"> </a></td><td>Incidence</td><td>A proportion in which the numerator represents new events.</td><td>Outside of the Scientific Evidence Code System (SEVCO), there is substantial inconsistency in the terms and definitions used for incidence and related concepts.
Within SEVCO, Incidence is a proportion in which the numerator represents new events. The denominator may represent the entire population or may represent the population at risk (i.e., those without prior events).
Disease incidence is the number of new cases of a disease divided by the number of persons at risk for the disease. Incidence is a proportion because the persons in the numerator, those who develop disease, are all included in the denominator (the entire population).
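For illustration only (the counts below are hypothetical and are not part of the SEVCO definition), incidence as a proportion can be computed as:
# Hypothetical counts for illustration.
new_cases = 12
population = 400   # entire population, or the population at risk, per the comment above
incidence = new_cases / population   # 0.03, a proportion between 0 and 1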
When a time period or a duration of time is used to define the period of time in which the incidence is measured, the statistic type is Incidence. Examples include 1-year incidence, in-hospital incidence, and cumulative incidence.
When time is considered as a variable in the formalization of the statistic, such as incidence per unit of time, then the statistic type is Incidence Rate (SEVCO code of TBD:0000024)</td><td>Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel, Janice Tufte</td><td>2022-01-12 vote 6-0 by Harold Lehman, Mario Tristan, janice tufte, Andrew Beck, Robin Ann Yurk, Paul Harris</td><td>2022-01-05 vote 2-2 by Robin Ann Yurk, janice tufte, Jesus Lopez-Alcalde, Harold Lehmann</td><td>2022-01-05 comments: I propose "The number of new occurrences of an event (for example, infection) in a population at risk over a particular period of time.
Doesn't denominator need to include a time component? Even if not, the time component should be referenced in the Comment. Also, do note that, in public health, the numerator *attempts* to be a subset of the denominator, but that relationship cannot be assured. (E.g., fertility incidence may be number of births (vital statistics) with denominator of number of women of child bearing age (census).)
2022-06-15 Expert Working Group/Steering Committee removed 'Risk' as alternative term as we created a separate term for 'Risk' (TBD:0000185)</td><td>STATO: Incidence is the ratio of the number of new cases of a disease divided by the number of persons at risk for the disease.
NCIt The relative frequency of occurrence of something.
OBCS A data item that refers to the number of new events that have occurred in a specific time interval divided by the population at risk at the beginning of the time interval. The result gives the likelihood of developing an event in that time interval.
UMLS The number of new cases of a given disease during a given period in a specified population. It also is used for the rate at which new events occur in a defined population. It is differentiated from PREVALENCE, which refers to all cases in the population at a given time. (MSH)
The relative frequency of occurrence of something. (NCI)
The number of new cases of a disease diagnosed each year. (NCI)
CDC: https://www.cdc.gov/csels/dsepd/ss1978/lesson3/section2.html
Incidence refers to the occurrence of new cases of disease or injury in a population over a specified period of time. Although some epidemiologists use incidence to mean the number of new cases in a community, others use incidence to mean the number of new cases per unit of population.
Two types of incidence are commonly used — incidence proportion and incidence rate.
Incidence proportion or risk
Synonyms for incidence proportion
Attack rate
Risk
Probability of developing disease
Cumulative incidence
Definition of incidence proportion
Incidence proportion is the proportion of an initially disease-free population that develops disease, becomes injured, or dies during a specified (usually limited) period of time. Synonyms include attack rate, risk, probability of getting disease, and cumulative incidence. Incidence proportion is a proportion because the persons in the numerator, those who develop disease, are all included in the denominator (the entire population).</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000412<a name="181513-STATO.580000412"> </a></td><td>Prevalence</td><td>A proportion in which the numerator represents all events of interest (for example, both new and preexisting cases of a disease) in the population, which is represented by the denominator.</td><td>Prevalence is the proportion of persons in a population who have a particular disease or attribute at a specified point in time or over a specified period of time.
Prevalence differs from incidence in that prevalence includes all cases, both new and preexisting, in the population at the specified time, whereas incidence is limited to new cases only.
In Bayesian calculations, the prevalence value is often used as the pre-test probability or prior probability value, but these probability values are not always based on or derived from the prevalence value.</td><td>Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel, Janice Tufte, Kenneth Wilkins, Harold Lehmann</td><td>2022-02-02 vote 5-0 by Paola Rosati, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper</td><td>2022-01-12 vote 6-1 by Robin Ann Yurk, janice tufte, Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, Andrew Beck, Paul Harris
2022-01-19 vote 2-1 by Harold Lehmann, Robin Ann Yurk, Alejandro Piscoya
2022-01-26 vote 7-1 by Janice Tufte, Harold Lehmann, Philippe Rocca-Serra, Paola Rosati, Mario Tristan, Robin Ann Yurk, Brian S. Alper, Jesus Lopez-Alcalde</td><td>2022-01-12 comments: I propose "A proportion in which the numerator represents all events (new and preexisting)." I think it is very important to detail "new and preexisting" 2022-01-03 comment: The comment here is better than for incidence (why not copy this comment into "Incidence," and edit?). But it still feels like the denominator should be called out in the definition.
2022-01-19 comment: I would edit the term definition to include ...as part of a denominator of a broader population.
2022-01-26 comments: (1) suggestion: alter the definition to: A proportion in which the numerator represents all events of interest (e.g. both new and preexisting cases of a disease) in the population, which is represented by the denominator.
(2) I would delete this sentence from the comment for application: "Prevalence is a proportion because the persons in the numerator, those who develop or have disease, are all included in the denominator (the entire population)" (3) Probability should be a type of Proportion but distinct from Prevalence. Probability relates to the likelihood of something, but in that sense incidence and prevalence are both probabilities. If Prevalence and Probability were considered synonyms then one would still not call it the same as "Pre-test" or "Prior" probability. The term pre-test probability could be a type of (child of) probability.
2022-02-02 comment: I would remove the statement ..In Bayesian calculations, as the pre-test probability is a formula with new variables.</td><td>STATO: prevalence is a ratio formed by the number of subjects diagnosed with a disease divided by the total population size.
Period prevalence:
The ratio (for a given time period) of the number of occurrences of a disease or event to the number of units at risk in the population.
a prevalence rate that occurs at a specific period of time
Point prevalence:
NCIt The ratio (for a given time period) of the number of occurrences of a disease or event to the number of units at risk in the population.
OBCS a prevalence rate that occurs at a specific point of time
UMLS:
The total number of cases of a given disease in a specified population at a designated time. It is differentiated from INCIDENCE, which refers to the number of new cases in the population at a given time. (MSH)
The ratio (for a given time period) of the number of occurrences of a disease or event to the number of units at risk in the population. (NCI)
Proportion of the people having a certain disease or condition in a given population (CHV)
CDC https://www.cdc.gov/csels/dsepd/ss1978/lesson3/section2.html
Point prevalence = Number of current cases (new and preexisting) at a specified point in time / Population at the same specified point in time
Period prevalence = Number of current cases (new and preexisting) over a specified period of time / Average or mid-interval population
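As an illustrative sketch of the two CDC formulas above, with hypothetical counts:
# Hypothetical counts for illustration only.
cases_at_point = 30
population_at_point = 1000
point_prevalence = cases_at_point / population_at_point   # 0.03

cases_during_period = 45
average_population = 1000   # average or mid-interval population
period_prevalence = cases_during_period / average_population   # 0.045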
Definition of prevalence
Prevalence, sometimes referred to as prevalence rate, is the proportion of persons in a population who have a particular disease or attribute at a specified point in time or over a specified period of time. Prevalence differs from incidence in that prevalence includes all cases, both new and preexisting, in the population at the specified time, whereas incidence is limited to new cases only.
Point prevalence refers to the prevalence measured at a particular point in time. It is the proportion of persons with a particular disease or attribute on a particular date.
Period prevalence refers to prevalence measured over an interval of time. It is the proportion of persons with a particular disease or attribute at any time during the interval.</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000233<a name="181513-STATO.580000233"> </a></td><td>Sensitivity</td><td>A proportion in which the numerator represents the detected items within the denominator that represents all items with the targeted attribute.</td><td>In a population of people with and without a disease, and a test which is positive (suggesting the disease) or negative (suggesting not having the disease), the sensitivity is the proportion of true positives (all people with the disease who test positive) within all people with the disease (true positives plus false negatives). Sn = TP / (TP + FN).
In information retrieval, recall is the proportion of items correctly retrieved within all relevant items.
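A minimal sketch of the formula above (Sn = TP / (TP + FN)), with hypothetical counts:
# Hypothetical counts for illustration only.
tp = 90   # people with the disease who test positive (true positives)
fn = 10   # people with the disease who test negative (false negatives)
sensitivity = tp / (tp + fn)   # 0.9; the same formula is called recall in information retrieval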
True positive rate (TPR) is listed as an alternative term because of common usage, but TPR is not a Rate as defined in SEVCO.</td><td>Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal</td><td>2022-01-26 vote 10-0 by Paul Harris, Harold Lehmann, Robin Ann Yurk, Alejandro Piscoya, Janice Tufte, Philippe Rocca-Serra, Paola Rosati, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde</td><td>2022-01-19 vote 3-1 by Paul Harris, Harold Lehmann, Robin Ann Yurk, Alejandro Piscoya</td><td>2022-01-19 comment: I would remove recall from alternative terms and comment for applications, as it is a specialized informatics measures and list it as a separate term. (EWG discussion: This comment is not persuasive. If the same statistic type (formula) has different names in different contexts we still want one common code for the concept. This consolidation of terms is the purpose of a standardized terminology or controlled vocabulary where we are controlling the code for the concept, not the name for common use.)</td><td>STATO: true positive rate (recall, sensitivity) = sensitivity is a measurement datum qualifying a binary classification test and is computed by substracting the false negative rate to the integral numeral 1
NCIt diagnostic sensitivity The probability that a test will produce a true positive result when used on effected subjects as compared to a reference or "gold standard". The sensitivity of a test can be determined by calculating: number of true positive results divided by the sum of true positive results plus number of false negative results.
OBCS-
a data item that measures the proportion of actual positives which are correctly identified as such (e.g. the percentage of sick people who are correctly identified as having the condition).
OCRe
An index of performance of a discriminant test calculated as the percentage of correct positives in all true positives
STATO
sensitivity is a measurement datum qualifying a binary classification test and is computed by subtracting the false negative rate to the integral numeral 1
NICE glossary-Sensitivity of a test-How well a test detects what it is testing for. It is the proportion of people with the disease or condition that are correctly identified by the study test. For example, a test with a sensitivity of 96% will, on average, correctly identify 96 people in every 100 who truly have the condition, but incorrectly identify as not having the condition 4 people in every 100 who truly have it. It is different from positive predictive value.
MeSH scope note-sensitivity and specificity-Scope Note
Binary classification measures to assess test results. Sensitivity or recall rate is the proportion of true positives. Specificity is the probability of correctly determining the absence of a condition. (From Last, Dictionary of Epidemiology, 2d ed)</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000134<a name="181513-STATO.580000134"> </a></td><td>Specificity</td><td>A proportion in which the numerator represents the non-detected items within the denominator that represents all items without the targeted attribute.</td><td>In a population of people with and without a disease, and a test which is positive (suggesting the disease) or negative (suggesting not having the disease), the specificity is the proportion of true negatives (all people without the disease who test negative) within all people without the disease (true negatives plus false positives). Sp = TN / (TN + FP).
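A minimal sketch of the formula above (Sp = TN / (TN + FP)), with hypothetical counts:
# Hypothetical counts for illustration only.
tn = 85   # people without the disease who test negative (true negatives)
fp = 15   # people without the disease who test positive (false positives)
specificity = tn / (tn + fp)   # 0.85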
True Negative Rate (TNR) is listed as an alternative term because of common usage, but TNR is not a Rate as defined in SEVCO.</td><td>Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal</td><td>2022-01-19 vote 5-0 by Paul Harris, Harold Lehmann, Robin Ann Yurk, Alejandro Piscoya, Janice Tufte</td><td/><td/><td>STATO: true negative rate (specificity) = specificity is a measurement datum qualifying a binary classification test and is computed by substracting the false positive rate to the integral numeral 1
NCIt The probability that a test will produce a true negative result when used on non-effected subjects as compared to a reference or "gold standard". The specificity of a test can be determined by calculating: number of true negative results divided by the sum of true negative results plus number of false positive results.
OBCS a data item that refers to the proportion of negatives in a binary classification test which are correctly identified
OCRe An index of performance of a discriminant test calculated as the percentage of negatives in all true negatives
NICE glossary-Specificity (of a test)
How well a test correctly identifies people who do not have what it is testing for. It is the proportion of people without the disease or condition that are correctly identified by the study test. For example, a test with a specificity of 96% will, on average, correctly identify 96 people in every 100 who truly do not have the condition, but incorrectly identify as having the condition 4 people in every 100 who truly do not have it. It is different from negative predictive value.
MeSH scope note-sensitivity and specificity-Scope Note
Binary classification measures to assess test results. Sensitivity or recall rate is the proportion of true positives. Specificity is the probability of correctly determining the absence of a condition. (From Last, Dictionary of Epidemiology, 2d ed)</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000022<a name="181513-TBD.580000022"> </a></td><td>Positive Predictive Value</td><td>A proportion in which the numerator represents the correctly detected items within the denominator that represents all items detected.</td><td>In a population of people with and without a disease, and a test which is positive (suggesting the disease) or negative (suggesting not having the disease), the positive predictive value is the proportion of true positives (all people with the disease who test positive) within all the people with a positive test (true positives plus false positives). PPV = TP / (TP + FP).
In information retrieval, 'precision' is the proportion of items correctly retrieved within all retrieved items.
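A minimal sketch of the formula above (PPV = TP / (TP + FP)), with hypothetical counts; the analogous Negative Predictive Value (NPV = TN / (TN + FN), defined under its own term) is shown for comparison:
# Hypothetical counts for illustration only.
tp, fp = 90, 15   # positive tests: with the disease vs. without the disease
tn, fn = 85, 10   # negative tests: without the disease vs. with the disease
ppv = tp / (tp + fp)   # about 0.857; called precision in information retrieval
npv = tn / (tn + fn)   # about 0.895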
In Bayesian calculations, the 'Positive Predictive Value' is equivalent to the 'post-test probability' or 'posterior probability' following a positive test.</td><td>Harold Lehmann, Kenneth Wilkins, Phillippe Rocca-Serra, Joanne Dehnbostel</td><td>2022-02-02 vote 5-0 by Paola Rosati, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper</td><td>2022-01-26 vote 7-1 by Robin Ann Yurk, Janice Tufte, Harold Lehmann, Philippe Rocca-Serra, Paola Rosati, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde</td><td>2022-01-26 comments: (1) I would remove precision from alternative terms
(2) minor change=quote terms of interest: In information retrieval, `precision` is the proportion of items correctly retrieved within all retrieved items.
The terms `post-test probability` and `posterior probability` are used in Bayesian calculations.
(3) Post-test probability is not fully synonymous with positive predictive value. A negative predictive value is also the "post-test" probability of a true negative if the test has a negative result. And a test with a continuous rather than binary result could have a post-test probability that is neither positive nor negative predictive value. Post-test probability (and posterior probability) should become a child of probability.
2022-02-02 comment: I would remove the alternate term Precision and the comment for application for precision.</td><td>NCIt The probability that an individual is affected with the condition when a positive test result is observed. Predictive values should only be calculated from cohort studies or studies that legitimately reflect the number of people in the population who have the condition of interest at that time since predictive values are inherently dependent upon the prevalence. PPVDT can be determined by calculating: number of true positive results divided by the sum of true positive results plus number of false positive results.</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000023<a name="181513-TBD.580000023"> </a></td><td>Negative Predictive Value</td><td>A proportion in which the numerator represents the correctly non-detected items within the denominator that represents all items not detected.</td><td>In a population of people with and without a disease, and a test which is positive (suggesting the disease) or negative (suggesting not having the disease), the negative predictive value is the proportion of true negatives (all people without the disease who test negative) within all the people with a negative test (true negatives plus false negatives). NPV = TN / (TN + FN).</td><td>Harold Lehmann, Ken Wilkins, Phillippe Rocca-Serra, Joanne Dehnbostel</td><td>2022-01-26 vote 8-0 by Robin Ann Yurk, Janice Tufte, Harold Lehmann, Philippe Rocca-Serra, Paola Rosati, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde</td><td/><td/><td>NCIt The probability that an individual is not affected with the condition when a negative test result is observed. This measure of accuracy should only be used if the data on the prevalence of condition of interest in given population is available. NPVDT can be determined by calculating: number of true negative results divided by the sum of true negative results plus number of false negative results.</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000149<a name="181513-TBD.580000149"> </a></td><td>Diagnostic Yield</td><td>A proportion in which the numerator represents the correctly detected items within the denominator that represents all items tested.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins</td><td>2022-08-10 vote 5-0 by Philippe Rocca-Serra, Mario Tristan, Harold Lehmann, Eric Harvey, Paola Rosati</td><td/><td>2022-08-10 comment: I would simply suggest to simplify the definition to:
A proportion obtained by dividing the number of correctly detected items (numerator) by the number of all items tested (denominator)</td><td>"Diagnostic yield was defined as the number of participants with positive findings for advanced neoplasia relative to all participants" in https://pubs.rsna.org/doi/10.1148/radiol.12112486
Other 'definitions' found include synonymous use with sensitivity, and 'diagnostic yield' describing the statistic array of TP, FP, TN, and FN data.
https://medical-dictionary.thefreedictionary.com/diagnostic+yield
Diagnostic yield The likelihood that a test or procedure will provide the information needed to establish a diagnosis.</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000185<a name="181513-TBD.580000185"> </a></td><td>Risk</td><td>A proportion in which the numerator represents the cases in which an event or characteristic occurs and the denominator represents all possible cases.</td><td>In the English language, 'risk' may be used synonymously with 'hazard', 'chance', 'likelihood', 'relative likelihood', 'probability' and many other terms. In SEVCO the term 'risk' is explicitly defined for how it is used in other terms such as 'Risk Ratio' and 'Relative Risk Difference' The statistical definition of 'risk' does not have a negative or undesirable connotation.
Risk may be conditioned on many factors. In such cases the statistic type is Risk and the statistic may be reported as a conditional risk (for example, predicted risk).
When a time period or a duration of time is used to define the period of time in which the risk is measured, the statistic type is Risk. Examples include 1-year risk, in-hospital risk, and cumulative risk.
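For illustration only (hypothetical counts, not part of the definition), risk as defined here is computed as:
# Hypothetical counts for illustration only.
events = 8
possible_cases = 200
risk = events / possible_cases   # 0.04, a proportion between 0 and 1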
In frequentist statistics, the risk is a ratio of the number of events to the number of possible cases. In subjective Bayesian statistics, the risk is a proportion as a whole that represents degree of belief, where 0 represents certainty that an event will not occur and 1 represents certainty that the event will occur.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte</td><td>2022-06-29 vote 6-0 by Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Janice Tufte, Harold Lehmann, Eric Harvey</td><td>2022-06-22 vote 4-1 by Eric Harvey, Janice Tufte, Mario Tristan, Muhammad Afza, Eric M Harvey</td><td>2022-06-22 comment: It is confusing to consider "conditional probability" as an alternative term for "risk."
Generally;
"Conditional probability" refers to a probability whose value is dependent upon the occurrence of some process/event. In contrast, "risk" refers to the probability that an event will occur.
Mathematically;
"Conditional probability" is a measure of the probability of an event occurring, given that another event has already occurred. Let us have two events, A and B, and we want to know P(A) given P(B); notationally, P(A|B). Here the word 'given' defines a subset of the population of events because it applies condition on B. For example, if we care about the incidence of COVID-19 in men only, we might want to know P(COVID-19 | male). This means that first, pick out all the males, and second, figure out the probability they will get COVID-19. More formally, what P(A|B) says is: pick out the events to which both P(A) and P(B) apply and consider them as part of the subset of events to which only P(B) applies: hence P(A/B) = P(A and B)/P(B).
In simple words, what we are doing with P(A|B) = P(A and B) / P(B) is selecting out the same subset of the event population in both the numerator and the denominator: in this case, only men.
While "risk" by definition involves no condition. Taking the same example, we can say, "what is the risk of COVID-19?" here, we refer to the whole population; however, we can apply can make a condition over it like "what is the risk of COVID-19 in males?" This risk may be taken as "conditional risk," and it could be taken as an alternative term to conditional probability.
Conclusion: Let us define two terms, "risk" and "conditional risk," as a subset of "risk." Then "conditional probability" shall be taken as an alternative term to "conditional risk."
One more important point about the current definition of "Risk," i.e.,
Risk = A proportion in which the numerator represents the probability that an event or characteristic occurs and the denominator represents the probability that the event or characteristic occurs or does not occur.
If we write symbolically, it will look like this: P(A)/P(A or B), where A indicates "positive," which is the occurrence of something, and B indicates "negative," which is the non-occurrence of the same. We can write it formally as P(A) / P(AUB). In set theory, an "OR" (in other words, a "Union") is true when either of its operands is true: A is true, or B is true, or both are true. Interestingly, occurrence and non-occurrence are mutually exclusive, so two situations arise.
I) when the event occurs: P(A) / P(AUB) --> P(A)/P(A) = 1
II) when the event does not occur: P(A) / P(AUB) --> P(A)/P(B) = Odds
Therefore the definition needs to be revised for the correct meaning of the denominator. I believe the denominator refers to the whole population where some people are at risk and some are not, while the numerator refers to only those at risk.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000019<a name="181513-TBD.580000019"> </a></td><td>Odds</td><td>A ratio in which the numerator represents the probability that an event will occur and the denominator represents the probability that an event will not occur.</td><td>'Odds' and 'Odds ratio' are different terms. 'Odds' is a ratio of probabilities. 'Odds ratio' is a ratio of two different odds.
Odds are calculated as p / (1-p) where p is the probability of event occurrence. When p = 0, the odds = 0. When p = 1, the odds may be expressed as not calculable or as "odds against = 0".
Odds may be expressed as p:(1-p). Odds may be expressed as p:q where q = 1-p. Odds may be expressed as a:b where a and b are multiples of p and (1-p). Examples of different expressions of the same odds include 3:2, 3/2, 0.6:0.4, 0.6/0.4, and 1.5.
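As a small illustrative sketch of the calculation p / (1-p) described above, using a hypothetical probability:
# Hypothetical probability for illustration only.
p = 0.6
odds = p / (1 - p)   # 1.5 (up to floating-point rounding), equivalently expressible as 3:2 or 0.6:0.4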
Odds may be expressed as "odds for" or "odds in favor" (e.g. 1:5 for a "3" on a 6-sided die) or "odds against" (e.g. 5:1 against a "3" on a 6-sided die).
The term "betting odds" used in gambling that involves financial amounts in the formulation is not an "Odds" in the definition of the Scientific Evidence Code System.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin, Kenneth Wilkins, Muhammad Afzal</td><td>2022-03-22 vote 5-0 by Muhammad Afzal, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Robin Ann Yurk</td><td>2022-02-16 vote 4-2 by Janice Tufte, Paola Rosati, Eric Moyer, Harold Lehmann, Robin Ann Yurk, Jesus Lopez-Alcalde
2022-02-23 vote 5-2 by nisha mathew, Harold Lehman, Paola Rosati, Sunu Alice Cherian, Robin Ann Yurk, Joanne Dehnbostel, Sumalatha A
2022-03-09 vote 3-1 by Robin Ann Yurk, Janice Tufte, Eric Moyer, nisha mathew
2022-03-16 vote 8-1 by Robin Ann Yurk, Janice Tufte, Eric Moyer, nisha mathew, Harold Lehmann, Philippe Rocca-Serra, Louis Leff, Paola Rosati, Mario Tristan</td><td>2022-02-16 comments: The term definition and comment for application are clear and well written. It would help to have a discussion on the Parent and Child relationships for this term as right now you only have Statistic, Ratio, Odds. In statistics for the scientific code system is Statistic, Ratio, Odds Ratio a better sequence and put the Odds under comment for application.
This term needs two alternative terms: "Odds For" and "Odds in Favor." The definition needs to deal with the cases p=1 and p=0. (I can think of 3 questions regarding these cases. (1) Are they defined? (2) Is p=1 the same as ∞. (3) Does 3:0=1:0?)
We should mention that this term does not include gambling odds. (As I understand it, gambling odds are the ratio of stake to winnings with several representations and frequently have a "rounding" factor to ensure a profit for the bookmaker).
Another issue is whether to represent "Odds Against" in the vocabulary. It could come up when annotating an immutable pre-existing source that gives odds as odds against; for example, an NLP system that scans published works to output labels for sections of the text.
A term related to "Odds" missing from the parent branch, "Ratio," is "Log Odds."
(Not unique to this term, but I noticed it here) The children of "Statistic" should inherit the application comment from "Statistic" about distinguishing between the statistic and statistic value. That way, a reader will not need to read the whole tree to know that 1.5 is not "Odds"; it is "Odds statistic value." (However, I do not see a place for "Odds statistic value" in the tree.)
Finally, the repetition of "Odds may be expressed as" is awkward.
2022-02-23 comments: "Odds may be expressed as p:(1-p). Odds may be expressed as p:q where q = 1-p. " Sounds redundant.
alternative terms: Probability, likelihood, chance ---{{Group meeting decided that 'probability' and 'likelihood' are terms we may consider adding to the SEVCO but they are not alternative terms for odds, 'chance' is considered a lay term and not a specific statistical term for the code system}}
Odds is a computational function such as addition, subtraction, multiplication. Odds Ratio may be better term for the term definition. This comment is based on your term definition and comment for application.
2022-03-09 comment: Edit the term definition: A ratio of probabilities in which the numerator represents the probability of the number of times an event will occur and the denominator represents the probability of the number of times an event will not occur. (Steering group 2022-03-09 considers the suggested change does not add clarification or improved understanding.)
2022-03-16 comment: I would delete likelihood from the term definition as in statistics it introduces a different formula such as likelihood ratio.
My suggestion is to simplify to a ratio in which the numerator represents the number of times an event will occur and the denominator represents the number of times an event will not occur. (Steering group 2022-03-16 again considers the suggested change to include "number of times" not persuasive, but changed "relative likelihood" to "probability" in the definition to avoid the potential confusion with likelihood ratio.)</td><td>OCRe:
Odds is a quotient in which the relative likelihood that an event will occur is divided by the relative likelihood that it won't. In probability theory and statistics, where the variable "p" is the probability in favor of the event, and the probability against the event is 1-p, "the odds" of the event are the quotient of the two, or p / (1-p)</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000020<a name="181513-TBD.580000020"> </a></td><td>Rate</td><td>A ratio in which the numerator represents any quantity and the denominator represents an interval of time.</td><td>When the numerator represents a count, the rate is an Event Rate.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal</td><td>2022-05-12 vote 9-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew, Janice Tufte</td><td>2022-03-30 vote 4-2 by Cauê Monaco, Muhammad Afzal, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Robin Ann Yurk
2022-04-06 vote 3-2 by Mario Tristan, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde, nisha mathew</td><td>2022-03-30 comments: A rate does not necessarily represent time. "In math, a rate is a special ratio in which the two terms are in different units" Edit term definition: A proportion represented by a rate of an event count for another quantified measure.
My comments are more focused on the term definition and comment: you have incomplete definitions or comments, as you describe only the denominator in the term definition and only the numerator in the comment.
Improvement Suggestion: By definition, a rate would have both a numerator and denominator so it is important for you to include in a definition both numerator and denominator.
The term definition should read: A ratio in which the numerator represents an event count and the denominator represents the total sum of the events considered as a count and non count.
The underlying concept for Rate is that the Denominator is a measure of time. So we need a definition where the numerator is X and the denominator is a measure of time.
Our approach to definitions has been:
Ratio = A statistic that is a quotient of two quantities. [[By definition any statistic that is a ratio has a numerator and a denominator. Any statistic that has a numerator and a denominator is a Ratio, and may be given a more specific term when it is a type of Ratio.]] The Ratio definition inherits the Statistic definition so we do not re-define statistic.
Rate = A ratio in which the denominator represents a duration of time. This means that when we constrain the definition of ratio to limit to statistics where the denominator represents a duration of time, then the type of Ratio is a Rate. There is a logic to this approach to setting a definition, but your comment shows that it feels lacking because it does not mention the numerator. There is no constraint or modification being applied to the numerator.
Perhaps we can try “Rate = A ratio in which the numerator represents any quantity and the denominator represents a duration of time.”
Would that help clarify this item?
2022-04-06 comments: I would insert in the term definition, the numerator represents a quantity defined as a unit which is a smaller part of the denominator divided by the total sum of units in the denominator.
the concepts, "frequency of events" and "over a specified period of time" are not reflected in this definition
2022-04-27 comment: Edit term definition: A proportion represented by a rate of an event count or another quantified measure divided by the total sum of units. {{Discussion by Expert Working Group: The proposed definition describes a Proportion, but a Rate is NOT a Proportion.}}</td><td>NCIt
Rate = A measurement of degree, speed, or frequency relative to time.
OBCS
rate= A quality of a single process inhering in a bearer by virtue of the bearer's occurrence per unit time.
OCRe
Rate = A rate is a quantity per unit of time.</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000024<a name="181513-TBD.580000024"> </a></td><td>Incidence Rate</td><td>A rate in which the number of new events per total at risk is divided by an interval of time.</td><td>Incidence is defined as a proportion in which the numerator represents new events and the denominator represents the total at risk for events. Rate is defined as a ratio in which the numerator represents any quantity and the denominator represents an interval of time. The interval of time used for the denominator may be data-dependent when the duration of observation varies across the observations.
In the method for calculating incidence rate (described at https://www.cdc.gov/csels/dsepd/ss1978/lesson3/section2.html), the numerator is the "Number of new cases of disease or injury during the specified period" and the denominator is the "Time each person was observed, totaled for all persons"</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal</td><td>2022-05-25 vote 6-0 by Jesus Lopez-Alcalde, Brian S. Alper, Joanne Dehnbostel, Eric M Harvey, Mario Tristan, Harold Lehmann</td><td>2022-05-11 vote 7-1 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Janice Tufte, Harold Lehmann, Paola Rosati, Robin Ann Yurk</td><td>2022-05-11 comment: Suggest improving current term definition with the definition in comment for application. The alternative terms I am not sure fit here--you may want to add more detail for the alternate terms to the comment for application.
2022-05-25 comment: The definition defines the ideal ("at risk"); very often, however, incidence rates are calculated more grossly. While they are semantically wrong, they are quantitatively correct. Classic: birth incidence. The proper denominator would be fertile women, but *could * be calculated "per woman" or even "per capita".</td><td>NCIt
Incidence Rate = The frequency of new occurrences of an event during a specified time period.
CDC: https://www.cdc.gov/csels/dsepd/ss1978/lesson3/section2.html
Incidence refers to the occurrence of new cases of disease or injury in a population over a specified period of time. Although some epidemiologists use incidence to mean the number of new cases in a community, others use incidence to mean the number of new cases per unit of population.
Two types of incidence are commonly used — incidence proportion and incidence rate.
Synonyms for incidence rate
Person-time rate
Definition of incidence rate
Incidence rate or person-time rate is a measure of incidence that incorporates time directly into the denominator. A person-time rate is generally calculated from a long-term cohort follow-up study, wherein enrollees are followed over time and the occurrence of new cases of disease is documented. Typically, each person is observed from an established starting time until one of four “end points” is reached: onset of disease, death, migration out of the study (“lost to follow-up”), or the end of the study. Similar to the incidence proportion, the numerator of the incidence rate is the number of new cases identified during the period of observation. However, the denominator differs. The denominator is the sum of the time each person was observed, totaled for all persons. This denominator represents the total time the population was at risk of and being watched for disease. Thus, the incidence rate is the ratio of the number of cases to the total time the population is at risk of disease.
alternative terms for incidence rate (incidence density, average hazard) noted at https://www.sjsu.edu/faculty/gerstman/eks/formula_sheet.pdf</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000025<a name="181513-TBD.580000025"> </a></td><td>Hazard Rate</td><td>A conditional instantaneous rate in which the numerator represents an incidence conditioned on survival to a specified time, and the denominator represents a time interval with a duration approaching zero.</td><td>In the definition of Hazard Rate, the term "survival" is not literally about life and death but is used to represent existence without experiencing the event. "Hazard" as a statistical term is not specific to "bad" or "dangerous" events.
A hazard rate is expressed as a unitless numerator per unit of time, occurring at a specified time, and conditioned on survival to that time.
A hazard rate is mathematically the negative derivative of the log of the survival function. The survival function is the probability of surviving past a specified point in time, expressed as Pr{ T >= t }.
A hazard rate is also mathematically defined as lim(dt -> 0) [ Pr{ ( t <= T < t + dt ) | ( T >= t ) } / dt ].</td><td>Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Brian S. Alper</td><td>2022-05-12 vote 8-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew</td><td>2022-04-06 vote 4-3 by Mario Tristan, Robin Ann Yurk, Cauê Monaco, Harold Lehmann, Paola Rosati, Jesus Lopez-Alcalde, nisha mathew</td><td>2022-04-06 comments: An instantaneous rate in which the numerator represents an incidence and the denominator represents a time interval conditioned on survival to a specified time with a duration approaching zero
A hazard is any danger or peril. It does not necessarily represent a survival/death relationship.
I would add a vote choice: No Comment-Specialized Term or Not Applicable or some other choice as this is specialized formula.</td><td>A Dictionary of Epidemiology (5 ed.) by Miquel Porta
Hazard rate = A theoretical measure of the probability of occurrence of an event per unit time at risk; e.g., death or new disease, at a point in time, t, defined mathematically as the limit, as ∆t approaches zero, of the probability that an individual well at time t will experience the event by t + ∆t, divided by ∆t.
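As an illustrative sketch only, the limit definition quoted above can be approximated numerically for a hypothetical survival function S(t) = exp(-0.1 t), which is not part of the cited source:
import math

def survival(t):
    # Hypothetical survival function for illustration only.
    return math.exp(-0.1 * t)

def hazard_rate(t, dt=1e-6):
    # Approximates Pr{ t <= T < t + dt | T >= t } / dt with a small dt.
    return (survival(t) - survival(t + dt)) / (survival(t) * dt)

print(hazard_rate(5.0))   # approximately 0.1 per unit time for this survival function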
formula expressed at https://data.princeton.edu/wws509/notes/c7s1</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000168<a name="181513-TBD.580000168"> </a></td><td>Event Rate</td><td>The number of occurrences per unit of time.</td><td>An event rate is a ratio in which the numerator represents a count and the denominator represents an interval of time.
When the numerator represents a count:
--If the denominator includes an interval of time, the type of ratio is an Event Rate.
--If the denominator includes a count without an interval of time, the type of ratio is an Event Frequency.
--If the denominator includes a count and an interval of time, the type of ratio is an Event Frequency Rate.
--If the denominator includes an interval of space, the type of ratio is a Number Density</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann</td><td>2022-05-12 vote 8-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew</td><td/><td>2022-04-27 comment: The term definition and comment for application are comprehensive. However, for your comment for application I would only use the following...
An event rate is a ratio in which the numerator represents a count and the denominator represents an interval of time.
When the numerator represents a count:
--If the denominator includes an interval of time, the type of ratio is an Event Rate.
{{Expert Working Group discussion: the comment providing instructions for choosing among 4 related and confusing terms is considered useful for guidance, and purposefully mentions other terms that may be more appropriate.}}</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000171<a name="181513-TBD.580000171"> </a></td><td>Event Frequency Rate</td><td>A ratio in which the numerator represents an event frequency and the denominator represents an interval of time.</td><td>When the numerator represents a count:
--If the denominator includes an interval of time, the type of ratio is an Event Rate.
--If the denominator includes a count without an interval of time, the type of ratio is an Event Frequency.
--If the denominator includes a count and an interval of time, the type of ratio is an Event Frequency Rate.
--If the denominator includes an interval of space, the type of ratio is a Number Density</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann</td><td>2022-05-12 vote 9-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew, Janice Tufte</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000170<a name="181513-TBD.580000170"> </a></td><td>Event Frequency</td><td>A ratio in which the numerator represents a count and the denominator represents a count (without involving an interval of time).</td><td>When the numerator represents a count:
--If the denominator includes an interval of time, the type of ratio is an Event Rate.
--If the denominator includes a count without an interval of time, the type of ratio is an Event Frequency.
--If the denominator includes a count and an interval of time, the type of ratio is an Event Frequency Rate.
--If the denominator includes an interval of space, the type of ratio is a Number Density</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann</td><td>2022-05-12 vote 9-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew, Janice Tufte</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000172<a name="181513-TBD.580000172"> </a></td><td>Density</td><td>A ratio in which the numerator represents any quantity and the denominator represents an interval of space (distance, area, or volume).</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann</td><td>2022-05-08 vote 7-0 by Mario Tristan, Janice Tufte, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew, Eric M Harvey</td><td/><td>2022-05-08 comment: Examples would be nice, since "linear density" is not a traditional measure</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000173<a name="181513-TBD.580000173"> </a></td><td>Number Density</td><td>A ratio in which the numerator represents a count and the denominator represents an interval of space (distance, area, or volume).</td><td>When the numerator represents a count:
--If the denominator includes an interval of time, the type of ratio is an Event Rate.
--If the denominator includes a count without an interval of time, the type of ratio is an Event Frequency.
--If the denominator includes a count and an interval of time, the type of ratio is an Event Frequency Rate.
--If the denominator includes an interval of space, the type of ratio is a Number Density</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann</td><td>2022-05-27 vote 10-0 by Khalid Shahin, Joanne Dehnbostel, Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Muhammad Afzal, nisha mathew, Janice Tufte</td><td>2022-05-12 vote 8-1 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew, Janice Tufte</td><td>2022-05-12 comment: I wonder if we could define better "Number Density" as Density ratio {{2022-05-18 discussion found 2 instances of the term 'Number density' matching our definition, and the term 'density ratio' defines a density divided by a density which does not match this concept.}}</td><td>Wikipedia https://en.wikipedia.org/wiki/Number_density
The number density (symbol: n or ρN) is an intensive quantity used to describe the degree of concentration of countable objects (particles, molecules, phonons, cells, galaxies, etc.) in physical space: three-dimensional volumetric number density, two-dimensional areal number density, or one-dimensional linear number density. Population density is an example of areal number density.
IUPAC Gold Book https://goldbook.iupac.org/terms/view/N04262
number density, n
Number of particles divided by the volume they occupy.</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000175<a name="181513-TBD.580000175"> </a></td><td>Concentration</td><td>A ratio in which the numerator is a measure of the solute and the denominator is a measure of the solvent.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann</td><td>2022-05-08 vote 6-0 by Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew, Eric M Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000040<a name="181513-TBD.580000040"> </a></td><td>Measure of Association</td><td>A statistic that quantitatively represents a relationship between two or more variables.</td><td/><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal, Neeraj Ojha</td><td>2022-03-16 vote 7-0 by Mario Tristan, Paola Rosati, Louis Leff, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte</td><td/><td>2022-02-24 comment: There are measures of association between more than two variables, for example, an estimator of interaction information. So, this should be "two or more variables" (or just "variables").
Also, I don't like the term "represents", I'd prefer to say "A statistic that quantifies a relationship between variables."</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000021<a name="181513-TBD.580000021"> </a></td><td>Ratio-based Measure of Association</td><td>A measure of association expressed as a ratio.</td><td>This categorical (parent) term can be used for a statistic that is a ratio, quantifies a relationship between two variables, and is not found in the child terms.</td><td>Harold Lehmann, Kenneth Wilkins, Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Muhammad Afzal</td><td>2022-12-28 vote 7-0 by Janice Tufte, Mario Tristan, Joanne Dehnbostel, Harold Lehman, Yuan Gao, Jesus Lopez-Alcalde, Eric Harvey</td><td/><td>2022-02-24 comment: I think we should replace "represents" with "quantifies" and remove the restriction to two variables.
"A statistic that is a ratio and quantifies a relationship between variables."
Second, I think you want a more restrictive definition than a statistic that is a ratio. For example, the uncertainty coefficient, I(X;Y)/H(Y), is a ratio and a measure of association, but I don't think you'd consider it a ratio-based measure of association (maybe you would, in which case this is OK).
You should also consider whether monotonic transformations of ratios count as ratio-based measures. It is common for people to take logarithms of ratios.
I'm not sure what the utility is of this category. When does someone need it? Could we just put all its children under Measure of Association?</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000026<a name="181513-TBD.580000026"> </a></td><td>Hazard Ratio</td><td>A measure of association that is the ratio of the hazard rate of an event in one group to the hazard rate of the same event in another group.</td><td>Hazard rate (SEVCO TBD:0000025) is defined as: A conditional instantaneous rate in which the numerator represents an incidence conditioned on survival to a specified time, and the denominator represents a time interval with a duration approaching zero.
The groups being compared are often the exposed group versus the unexposed group, but hazard ratio can also be applied to comparisons of one exposure relative to another exposure.
A hazard ratio of one means there is no difference between two groups in terms of their hazard rates, based on whether or not they were exposed to a certain substance or factor, or how they responded to two interventions being compared. A hazard ratio of greater than one implies an association of greater risk, and a hazard ratio of less than one implies an association of lower risk.
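A minimal sketch of the ratio described above, using hypothetical hazard rates that are not drawn from any cited study:
# Hypothetical hazard rates for illustration only.
hazard_exposed = 0.04     # events per person-year in the exposed group
hazard_unexposed = 0.02   # events per person-year in the unexposed group
hazard_ratio = hazard_exposed / hazard_unexposed   # 2.0, implying an association of greater risk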
The hazard ratio can be calculated from studies in which the proportion of exposed participants who had the event is known, the proportion of unexposed participants who had the event is known, and the timing of events for each participant is known or estimable, such as a cohort study or clinical trial.</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal</td><td>2022-06-08 vote 5-0 by Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati</td><td/><td/><td>NCIt:
Hazard ratio = A measure of how often a particular event happens in one group compared to how often it happens in another group, over time. In cancer research, hazard ratios are often used in clinical trials to measure survival at any point in time in a group of patients who have been given a specific treatment compared to a control group given another treatment or a placebo. A hazard ratio of one means that there is no difference in survival between the two groups. A hazard ratio of greater than one or less than one means that survival was better in one of the groups.
https://www.statisticshowto.com/hazard-ratio/
The hazard ratio is a comparison between the probability of events in a treatment group, compared to the probability of events in a control group.
Hazard Ratio in Clinical Trials (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC478551/)
The hazard ratio is an estimate of the ratio of the hazard rate in the treated versus the control group. The hazard rate is the probability that if the event in question has not already occurred, it will occur in the next time interval, divided by the length of that interval. The time interval is made very short, so that in effect the hazard rate represents an instantaneous rate.
The Hazards of Hazard Ratios (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3653612/)
The hazard ratio (HR) is the main, and often the only, effect measure reported in many epidemiologic studies. For dichotomous, non–time-varying exposures, the HR is defined as the hazard in the exposed groups divided by the hazard in the unexposed groups. For all practical purposes, hazards can be thought of as incidence rates and thus the HR can be roughly interpreted as the incidence rate ratio. The HR is commonly and conveniently estimated via a Cox proportional hazards model, which can include potential confounders as covariates.</td><td/><td/><td/><td>Measure of Association</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000027<a name="181513-TBD.580000027"> </a></td><td>Incidence Rate Ratio</td><td>A measure of association that is the ratio of two incidence rates.</td><td>Incidence Rate (SEVCO TBD:0000024) is defined as: A rate in which the number of new events per total at risk is divided by an interval of time.
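As an illustrative worked example (numbers assumed for illustration only): if one group has 30 new events over 1000 person-years (incidence rate 0.03 per person-year) and another group has 10 new events over 1000 person-years (incidence rate 0.01 per person-year), the incidence rate ratio is $0.03 / 0.01 = 3.0$.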
The incidence rates may refer to the same event comparing two different groups, or the same group comparing two different events.</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal</td><td>2022-06-08 vote 5-0 by Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati</td><td/><td/><td/><td/><td/><td/><td>Measure of Association</td><td/></tr><tr><td>5</td><td style="white-space:nowrap"> TBD:0000032<a name="181513-TBD.580000032"> </a></td><td>Standardized Incidence Ratio</td><td>An incidence rate ratio in which the numerator is the incidence rate in a group and the denominator is the incidence rate for a reference population.</td><td>The incidence rate used for the denominator may be an expected incidence rate for a reference population. The reference population may refer to a general population of the geographic area from which the cohort was selected.</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal</td><td>2022-06-08 vote 5-0 by Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati</td><td/><td>2022-06-08 comment: ... and the denominator is the incidence rate or expected incidence rate for a reference population.
Comment for application: The reference population may refer to a general population of the geographic area from which the cohort was selected.</td><td/><td/><td/><td/><td>Measure of Association</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000182<a name="181513-STATO.580000182"> </a></td><td>Odds Ratio</td><td>A measure of association that is the ratio of two odds.</td><td/><td>Harold Lehmann, Kenneth Wilkins, Brian S. Alper, Joanne Dehnbostel, Janice Tufte</td><td>2022-03-16 vote 8-0 by Mario Tristan, Paola Rosati, Louis Leff, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte, Eric Moyer</td><td/><td/><td>STATO: odds ratio (OR) = Odds ratio is a ratio that measures effect size, that is the strength of association between 2 dichotomous variables, one describing an exposure and one describing an outcome.
It represents the odds that an outcome will occur given a particular exposure, compared to the odds of the outcome occurring in the absence of that exposure (the probability of the event occurring divided by the probability of the event not occurring). The odds ratio is a ratio describing the strength of association or non-independence between two binary data values, formed as the ratio of the odds for the first group and the odds for the second group. Odds ratios are used when one wants to compare the odds of something occurring in two different groups.
UMLS: The ratio of two odds. The exposure-odds ratio for case control data is the ratio of the odds in favor of exposure among cases to the odds in favor of exposure among noncases. The disease-odds ratio for a cohort or cross section is the ratio of the odds in favor of disease among the exposed to the odds in favor of disease among the unexposed. The prevalence-odds ratio refers to an odds ratio derived cross-sectionally from studies of prevalent cases. (MSH)
A measure of the odds of an event happening in one group compared to the odds of the same event happening in another group. In cancer research, odds ratios are most often used in case-control (backward looking) studies to find out if being exposed to a certain substance or other factor increases the risk of cancer. For example, researchers may study a group of individuals with cancer (cases) and another group without cancer (controls) to see how many people in each group were exposed to a certain substance or factor. They calculate the odds of exposure in both groups and then compare the odds. An odds ratio of one means that both groups had the same odds of exposure and, therefore, the exposure probably does not increase the risk of cancer. An odds ratio of greater than one means that the exposure may increase the risk of cancer, and an odds ratio of less than one means that the exposure may reduce the risk of cancer. (NCI)
The ratio of the odds of an event occurring in one group to the odds of it occurring in another group, or to a sample-based estimate of that ratio. (NCI)
NICE: Compares the odds (probability) of something happening in 1 group with the odds of it happening in another. An odds ratio of 1 shows that the odds of the event happening (for example, a person developing a disease or a treatment working) is the same for both groups. An odds ratio of greater than 1 means that the event is more likely in the first group than the second. An odds ratio of less than 1 means that the event is less likely in the first group than in the second group.</td><td/><td/><td/><td>Measure of Association</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000028<a name="181513-TBD.580000028"> </a></td><td>Prevalence Ratio</td><td>A measure of association that is the ratio of two prevalences.</td><td>Prevalence (SEVCO STATO:0000412) is defined as: A proportion in which the numerator represents all events of interest (for example, both new and preexisting cases of a disease) in the population, which is represented by the denominator.
The Prevalence Ratio indicates the magnitude of the prevalence of an event/outcome in one group of subjects/individuals (with characteristics/attribute) relative to another group (with different characteristics/attributes), such as the prevalence of the disease among the exposed persons to the prevalence of the disease among the unexposed persons.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal, Janice Tufte</td><td>2022-06-08 vote 5-0 by Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati</td><td/><td>2022-06-08 comment: Comment for application: The prevalence Ratio indicates how large is the prevalence of an event/outcome in one group of subjects/individuals (with characteristics/attribute) relative to another group (without the characteristics/attributes), such as the prevalence of the disease among the exposed persons to the prevalence of the disease among the unexposed persons.</td><td>https://www.ctspedia.org/do/view/CTSpedia/PrevalenceRatio#:~:text=Reference-,Definition%20of%20Prevalence%20Ratio,the%20proportion%20with%20the%20exposure.
The ratio of the proportion of the persons with disease over the proportion with the exposure.
Calculation is described here:
https://sphweb.bumc.bu.edu/otlt/MPH-Modules/PH717-QuantCore/PH717-Module3-Frequency-Association/PH717-Module3-Frequency-Association12.html</td><td/><td/><td/><td>Measure of Association</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000245<a name="181513-STATO.580000245"> </a></td><td>Risk Ratio</td><td>A measure of association that is the ratio of the risk of an event in one group to the risk of the same event in another group.</td><td>The groups being compared are often the exposed group versus the unexposed group, but risk ratio can also be applied to comparisons of one exposure relative to another exposure.
A risk ratio of one means there is no difference between two groups in terms of their risk, based on whether or not they were exposed to a certain substance or factor, or how they responded to two interventions being compared. A risk ratio of greater than one implies an association of greater risk, and a risk ratio of less than one implies an association of lower risk.
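As an illustrative worked example (numbers assumed for illustration only): if 20 of 100 exposed participants have the event (risk 0.20) and 10 of 100 unexposed participants have the event (risk 0.10), the risk ratio is $0.20 / 0.10 = 2.0$.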
The risk ratio can be calculated from studies in which the proportion of exposed participants who had the event is known and the proportion of unexposed participants who had the event is known, such as a cohort study or clinical trial.</td><td>Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal</td><td>2022-06-08 vote 6-0 by Robin Ann Yurk, Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati</td><td/><td/><td>STATO: relative risk (risk ratio) = Relative risk is a measurement datum which denotes the risk of an 'event' relative to an 'exposure'. Relative risk is calculated by forming the ratio of the probability of the event occurring in the exposed group versus the probability of this event occurring in the non-exposed group.
NCIt Relative Risk A measure of the risk of a certain event happening in one group compared to the risk of the same event happening in another group. In cancer research, risk ratios are used in prospective (forward looking) studies, such as cohort studies and clinical trials. A risk ratio of one means there is no difference between two groups in terms of their risk of cancer, based on whether or not they were exposed to a certain substance or factor, or how they responded to two treatments being compared. A risk ratio of greater than one or of less than one usually means that being exposed to a certain substance or factor either increases (risk ratio greater than one) or decreases (risk ratio less than one) the risk of cancer, or that the treatments being compared do not have the same effects
OBCS relative risk A data item that equals the incidence in exposed individuals divided by the incidence in unexposed individuals. The relative risk can be calculated from studies in which the proportion of patients exposed and unexposed to a risk is known, such as a cohort study.
CDC https://www.cdc.gov/csels/dsepd/ss1978/lesson3/section5.html:
A risk ratio (RR), also called relative risk, compares the risk of a health event (disease, injury, risk factor, or death) among one group with the risk among another group. It does so by dividing the risk (incidence proportion, attack rate) in group 1 by the risk (incidence proportion, attack rate) in group 2. The two groups are typically differentiated by such demographic factors as sex (e.g., males versus females) or by exposure to a suspected risk factor (e.g., did or did not eat potato salad). Often, the group of primary interest is labeled the exposed group, and the comparison group is labeled the unexposed group.</td><td/><td/><td/><td>Measure of Association</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000411<a name="181513-STATO.580000411"> </a></td><td>Likelihood Ratio Positive</td><td>A measure of association that is the ratio of the probability of the test giving a positive result when testing an affected subject and the probability of the test giving a positive result when a subject is not affected.</td><td>The probability of the test giving a positive result when testing an affected subject is also called the sensitivity [SEVCO term STATO:0000233] or true positive rate. The probability of the test giving a positive result when a subject is not affected is called the false positive rate and is calculated as 1 minus the specificity [SEVCO term STATO:0000134]. The Likelihood Ratio Positive (LR+) is calculated as Sensitivity / (1 - Specificity).
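As an illustrative worked example (numbers assumed for illustration only): a test with sensitivity 0.90 and specificity 0.80 has $LR+ = 0.90 / (1 - 0.80) = 4.5$.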
The Likelihood Ratio Positive may also be calculated as the posterior probability (positive predictive value) divided by the prior probability (prevalence).
When the test result is a specific value on a continuous scale, the Likelihood Ratio Positive is the ratio of the likelihood of the test giving the specific value when testing an affected subject and the likelihood of the test giving the specific value when a subject is not affected.
In the context of a probability distribution function, e.g. normal distribution, the x axis is the value and y axis is the likelihood.</td><td>Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal</td><td>2022-08-10 vote 5-0 by Robin Ann Yurk, Mario Tristan, Harold Lehmann, Eric Harvey, Paola Rosati</td><td/><td/><td>STATO: positive likelihood ratio (likelihood ratio for positive results) = the likelihood ratio of positive results is a ratio which is form by dividing the sensitivity value of a test by the difference between 1 and specificity of the test. This can be expressed also as dividing the probability of the test giving a positive result when testing an affected subject versus the probability of the test giving a positive result when a subject is not affected.
AHRQ https://effectivehealthcare.ahrq.gov/products/test-performance-metrics/appendixes:
The positive and negative likelihood ratios (LR+ and LR-, respectively) quantify the change in the certainty of the “diagnosis” conferred by test results. More specifically, the likelihood ratios transform the pretest odds to the posttest odds of a given (positive or negative) diagnosis:
posttest odds = pretest odds x LR
For a positive result with the medical test, the positive likelihood ratio would be used in the above relationship; for a negative result with the medical test portable monitor, the negative likelihood ratio would be used.
If a given medical test has very good ability to predict the “true disease status,” its positive likelihood ratio will be high (i.e., will greatly increase the odds of a positive diagnosis) and its negative likelihood ratio will be low (i.e., will diminish substantially the likelihood of the positive diagnosis). A completely non-informative portable monitor would have likelihood ratios equal to 1 (i.e., does not transform the pre-test odds substantially in the equation above). Typically, a positive likelihood ratio of 10 or more and a negative likelihood ratio of 0.1 or less are considered to represent informative tests.3 We note that other, more lenient boundaries for LR+ and LR- can be used3 and that the choice of the boundaries is a subjective decision. It is interesting to note that studies with high LR+ and low LR- can be readily identified in the square sensitivity/100 percent-specificity plot, as shown in the Appendix Figure above.</td><td/><td/><td/><td>Measure of Association</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000410<a name="181513-STATO.580000410"> </a></td><td>Likelihood Ratio Negative</td><td>A measure of association that is the ratio of the probability of the test giving a negative result when testing an affected subject and the probability of the test giving a negative result when a subject is not affected.</td><td>The probability of the test giving a negative result when testing an affected subject is also called the false negative rate and is calculated as 1 minus the sensitivity [SEVCO term STATO:0000233]. The probability of the test giving a negative result when a subject is not affected is called the specificity [SEVCO term STATO:0000134] or true negative rate. The Likelihood Ratio Negative (LR-) is calculated as (1 - Sensitivity ) / Specificity.
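As an illustrative worked example (numbers assumed for illustration only): a test with sensitivity 0.90 and specificity 0.80 has $LR- = (1 - 0.90) / 0.80 = 0.125$.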
The Likelihood Ratio Negative may also be calculated as the posterior probability (1 - negative predictive value) divided by the prior probability (prevalence).</td><td>Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal</td><td>2022-08-10 vote 5-0 by Robin Ann Yurk, Mario Tristan, Harold Lehmann, Eric Harvey, Paola Rosati</td><td/><td/><td>STATO: negative likelihood ratio (likelihood ratio for negative results) = the likelihood ratio of negative results is a ratio which is formed by dividing the difference between 1 and sensitivity of the test by the specificity value of a test.. This can be expressed also as dividing the probability of a person who has the disease testing negative by the probability of a person who does not have the disease testing negative.
AHRQ https://effectivehealthcare.ahrq.gov/products/test-performance-metrics/appendixes:
The positive and negative likelihood ratios (LR+ and LR-, respectively) quantify the change in the certainty of the “diagnosis” conferred by test results. More specifically, the likelihood ratios transform the pretest odds to the posttest odds of a given (positive or negative) diagnosis:
posttest odds = pretest odds x LR
For a positive result with the medical test, the positive likelihood ratio would be used in the above relationship; for a negative result with the medical test portable monitor, the negative likelihood ratio would be used.
If a given medical test has very good ability to predict the “true disease status,” its positive likelihood ratio will be high (i.e., will greatly increase the odds of a positive diagnosis) and its negative likelihood ratio will be low (i.e., will diminish substantially the likelihood of the positive diagnosis). A completely non-informative portable monitor would have likelihood ratios equal to 1 (i.e., does not transform the pre-test odds substantially in the equation above). Typically, a positive likelihood ratio of 10 or more and a negative likelihood ratio of 0.1 or less are considered to represent informative tests.3 We note that other, more lenient boundaries for LR+ and LR- can be used3 and that the choice of the boundaries is a subjective decision. It is interesting to note that studies with high LR+ and low LR- can be readily identified in the square sensitivity/100 percent-specificity plot, as shown in the Appendix Figure above.</td><td/><td/><td/><td>Measure of Association</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000029<a name="181513-TBD.580000029"> </a></td><td>Positive Clinical Utility Index</td><td>DEFERRED</td><td/><td/><td/><td/><td/><td>Mitchell AJ 2011 https://www.psycho-oncology.info/686.pdf https://link.springer.com/article/10.1007/s10654-011-9561-x
positive clinical utility index = sensitivity x PPV
Asberg 2019 A new index of clinical utility for diagnostic tests at https://www.tandfonline.com/doi/full/10.1080/00365513.2019.1677938
We propose a new clinical utility index (CUI), which is the expected gain in utility (EGU) of the test divided by the EGU of an ideal test, both adjusted for EGU of the optimal clinical action without testing. The index expresses the relative benefit of using the test compared to using an optimal test when making a clinical decision.
Expected gain in utility (EGU) of a clinical option, at a certain probability of disease (p), is the difference between its expected utility and the expected utility of another option, for instance doing nothing [4]. The EGU of the option W at probability p is EGUp(W) = p×BW – (1 − p)×CW ......CUI is then a complicated equation.</td><td/><td>2022-08-10 discussion: Considering 2 source definitions that are incompatible and limited usage overall, decision made to defer this term to future consideration for SEVCO.</td><td/><td>Measure of Association</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000030<a name="181513-TBD.580000030"> </a></td><td>Negative Clinical Utility Index</td><td>DEFERRED</td><td/><td/><td/><td/><td/><td>see Positive Clinical Utility Index</td><td/><td>2022-08-10 discussion: Considering 2 source definitions that are incompatible and limited usage overall, decision made to defer this term to future consideration for SEVCO.</td><td/><td>Measure of Association</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000415<a name="181513-STATO.580000415"> </a></td><td>Diagnostic Accuracy</td><td>A measure of association that is the ratio of the number of correct results to the total number tested.</td><td>Where results are reported as positive or negative, correct results are reported as true, and incorrect results are reported as false, the diagnostic accuracy is calculated as ( True Positives + True Negatives ) / ( True Positives + True Negatives + False Positives + False Negatives ).
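As an illustrative worked example (counts assumed for illustration only): with 40 True Positives, 50 True Negatives, 4 False Positives, and 6 False Negatives, the diagnostic accuracy is $( 40 + 50 ) / ( 40 + 50 + 4 + 6 ) = 0.90$.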
For continuous values, Measurement Accuracy (SEVCO term: TBD:MeasAccu) would be used instead of Diagnostic Accuracy.</td><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Joanne Dehnbostel</td><td>2022-08-24 vote 6-0 by Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, Cauê Monaco, Eric Harvey, Janice Tufte</td><td/><td/><td>STATO: "accuracy (Rand accuracy, Rand index) = in the context of binary classification, accuracy is defined as the proportion of true results (both true positives and true negatives) to the total number of cases examined (the sum of true positive, true negative, false positive and false negative).
It can be understood as a measure of the proximity of measurement results to the true value."</td><td/><td/><td/><td>Measure of Association</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000033<a name="181513-TBD.580000033"> </a></td><td>Diagnostic Odds Ratio</td><td>A measure of association that is the ratio of the odds of a positive test in those with disease relative to the odds of a positive test in those without disease.</td><td>The Diagnostic Odds Ratio may be calculated as the Likelihood Ratio Positive divided by the Likelihood Ratio Negative. The Diagnostic Odds Ratio is an overall measure of the discriminatory power of a test and does not distinguish between the power to detect (rule in) or exclude (rule out).</td><td>Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins</td><td>2022-08-31 vote 6-0 by Janice Tufte, nisha mathew, Muhammad Afza, Harold Lehmann, Philippe Rocca-Serra, Eric Harvey</td><td/><td/><td>AHRQ https://effectivehealthcare.ahrq.gov/products/test-performance-metrics/appendixes:
The diagnostic odds ratio (DOR) describes the odds of a positive test in those with disease relative to the odds of a positive test in those without disease.4 It can be computed in terms of sensitivity and specificity as well as in terms of positive and negative likelihood ratios (DOR = LR+/LR-). Thus this single measure includes information about both sensitivity and specificity and tends to be reasonably constant despite diagnostic threshold. However, it is impossible to use diagnostic odds ratios to weigh sensitivity and specificity separately, and to distinguish between tests with high sensitivity and low specificity and tests with low sensitivity and high specificity.
Another disadvantage is that it is difficult for clinicians to understand and apply, limiting its clinical value. This is partly because they are not often exposed to diagnostic odds ratios. A diagnostic odds ratio is similar to an odds ratio that measures strength of association in an observational study or effect size in a trial. However, contrary to the typical effect size magnitudes of such odds ratios (often between 0.5 and 2), diagnostic odds ratios can attain much larger values (often greater than 100).</td><td/><td/><td/><td>Measure of Association</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000524<a name="181513-STATO.580000524"> </a></td><td>Phi coefficient</td><td>A measure of association, ranging from -1 to 1, that measures the strength and direction of the linear relationship between two binary variables.</td><td>For a 2×2 contingency table where A, B, C, and D represent the observation frequencies (the cell count), the formula for the phi coefficient ($\Phi$) is:
$$
\Phi = \frac{AD - BC}{\sqrt{(A+B)(C+D)(A+C)(B+D)}}
$$</td><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal</td><td>2023-01-25 vote 6-0 by Mario Tristan, Jesus Lopez-Alcalde, Joanne Dehnbostel, Harold Lehmann, Yuan Gao, Eric Harvey</td><td/><td/><td>STATO: Matthews correlation coefficient (MCC) = Matthews Correlation Coefficient (or MCC) is a correlation coefficient which is a measure of the quality of binary (two-class) classifications, introduced by biochemist Brian W. Matthews in 1975.</td><td/><td/><td/><td>Measure of Correlation</td><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000031<a name="181513-TBD.580000031"> </a></td><td>Measure of Agreement</td><td>A measure of association of two variables representing measurements of the same attribute of an entity.</td><td>The term 'Measure of Agreement' is primarily used as a class for types of measure of agreement listed in the hierarchy but may be used as the code for a measure of agreement that is not listed.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel</td><td>2022-12-21 vote 5-0 by Mario Tristan, Philippe Rocca-Serra, Eric Harvey, Janice Tuft, Harold Lehmann</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000034<a name="181513-TBD.580000034"> </a></td><td>Kappa</td><td>A measure of agreement among categorical assessments, corrected for chance agreement.</td><td>In the literature, the same eponymic term (e.g., 'Cohen's kappa') is used with different formulas. In SEVCO, we define each term with a single formula, and recommend annotators to choose the SEVCO term based on the formula.
This is a widely used term to measure inter-rater reliability. Refer to measures of association to see other terms: for example, intra-class correlation coefficient (ICC).</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal</td><td>2022-09-14 (After deleting one "yes" vote
at the request of the voter) vote 6-0 by Nisha Mathew, Philippe Rocca-Serra, Harold Lehmann, Eric Harvey, Jesus Lopez-Alcalde, Khalid Shahin</td><td/><td>2022-09-14 Comment "I recommend adding ....is a measure of interrater reliability or is this an Interrater reliability testing an alternate term."</td><td>OBCS kappa statistic = a generic term for several similar measures of agreement used with categorical data; typically used in assessing the degree to which two or more raters, examining the same data, agree on assigning data to categories</td><td/><td/><td/><td>Measure of Agreement</td><td/></tr><tr><td>5</td><td style="white-space:nowrap"> TBD:0000037<a name="181513-TBD.580000037"> </a></td><td>Simple chance-corrected agreement coefficient</td><td>A Kappa statistic in which the expected agreement by chance is based on an assumption that all possible categories for assignment are equally likely.</td><td>A Kappa statistic is a measure of agreement among categorical assessments, corrected for chance agreement.
In the simple chance-corrected agreement coefficient, the expected chance agreement is modeled as the inverse of the number of categories (1/q) where q is the number of possible categories for assignment.
The simple chance-corrected agreement coefficient is calculated as ( p[a] − 1/q ) / ( 1 − 1/q ) where p[a] is the observed percent agreement and q is the number of possible categories for assignment.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins</td><td>2022-09-14 vote 6-0 by Nisha Mathew, Philippe Rocca-Serra, Harold Lehmann, Eric Harvey, Jesus Lopez-Alcalde, Khalid Shahin</td><td/><td/><td>https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5965565
Gwet KL. Testing the Difference of Correlated Agreement Coefficients for Statistical Significance. Educ Psychol Meas. 2016 Aug;76(4):609-637. doi: 10.1177/0013164415596420. Epub 2015 Jul 28. PMID: 29795880; PMCID: PMC5965565.
Brennan and Prediger (1981) proposed a simple chance-corrected agreement coefficient, which generalizes to multiple raters and multiple categories, the G-index previously proposed by Holley and Guilford (1964) for two raters and two categories. What is known as the Holley–Guilford G-index was previously proposed independently by various authors under different names. Among them are Guttman (1945), Bennett, Alpert, and Goldstein (1954), and Maxwell (1977). For an interrater reliability experiment involving r raters who classify n subjects into one of q possible categories, the Brennan-Prediger coefficient is given by
κ[BP] = ( p[a] − 1/q ) / ( 1 − 1/q ),
where the percent agreement p[a] is defined by Equation (3 -- see https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5965565/#disp-formula3-0013164415596420), and the percent chance agreement is a constant representing the inverse of the number of categories.</td><td/><td/><td/><td>Measure of Agreement</td><td/></tr><tr><td>5</td><td style="white-space:nowrap"> TBD:0000038<a name="181513-TBD.580000038"> </a></td><td>Cohen’s Kappa</td><td>A Kappa statistic in which the expected agreement by chance is based on an assumption that the likelihood of each category for assignment is based on the proportion observed, and the number of raters is 2.</td><td>A Kappa statistic is a measure of agreement among categorical assessments, corrected for chance agreement.
In Cohen's kappa, the expected chance agreement is modeled as the summation of the differences, between the square of the expected probability of the category and the quotient of its variance divided by 2 (the number of raters), for each category.
Cohen's kappa is calculated as ( p[a] − p[e] ) / ( 1 − p[e] ) where p[a] is the observed percent agreement and p[e] is the expected chance agreement.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins</td><td>2022-09-14 vote 6-0 by Nisha Mathew, Philippe Rocca-Serra, Harold Lehmann, Eric Harvey, Jesus Lopez-Alcalde, Khalid Shahin</td><td/><td/><td>OBCS cohen's kappa measurement (inter-rater agreement, inter-annotator agreement; inter-rater agreement, inter-annotator agreement) = a statistical measure of agreement for categorical data; a measure of inter-rater agreement or inter-annotator agreement</td><td/><td/><td/><td>Measure of Agreement</td><td/></tr><tr><td>5</td><td style="white-space:nowrap"> TBD:0000155<a name="181513-TBD.580000155"> </a></td><td>Modified Cohen’s Kappa for more than 2 raters</td><td>A Kappa statistic in which the expected agreement by chance is based on an assumption that the likelihood of each category for assignment is based on the proportion observed, and the number of raters is more than 2.</td><td>A Kappa statistic is a measure of agreement among categorical assessments, corrected for chance agreement.
In the modified Cohen's kappa for more than 2 raters, the expected chance agreement is modeled as the summation of the differences, between the square of the expected probability of the category and the quotient of its variance divided by the number of raters, for each category.
The modified Cohen's kappa for more than 2 raters is calculated as ( p[a] − p[e] ) / ( 1 − p[e] ) where p[a] is the observed percent agreement and p[e] is the expected chance agreement.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins</td><td>2022-09-14 vote 5-0 by Nisha Mathew, Philippe Rocca-Serra, Harold Lehmann, Eric Harvey, Jesus Lopez-Alcalde</td><td/><td/><td>OBCS cohen's kappa measurement (inter-rater agreement, inter-annotator agreement; inter-rater agreement, inter-annotator agreement) = a statistical measure of agreement for categorical data; a measure of inter-rater agreement or inter-annotator agreement</td><td/><td/><td/><td>Measure of Agreement</td><td/></tr><tr><td>5</td><td style="white-space:nowrap"> TBD:0000039<a name="181513-TBD.580000039"> </a></td><td>Scott’s pi</td><td>A Kappa statistic where the expected agreement between two raters is expressed in terms of the square of arithmetic means of marginal proportions of each assessment category.</td><td>Scott's pi is a kappa statistic for two raters that assumes the likelihood of each category for assignment is based on the same distribution of rater responses, leading to the use of squared arithmetic means of the marginal proportion of each assessment category as its estimate of "chance agreement."
Pr(expected) is calculated using squared "joint proportions" which are squared arithmetic means of the marginal proportions of each assessment category, in contrast to Cohen's Kappa which uses squared geometric means.
Scott's pi = ( p[a] − p[e] ) / ( 1 − p[e] ) where p[a] is the observed percent agreement and p[e] is the expected chance agreement expressed as the squared joint proportions of the marginal sums.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins</td><td>2022-10-19 vote 6-0 by Joanne Dehnbostel, Muhammad Afzal, Jesus Lopez-Alcalde, Mario Tristan, Eric Harvey, Harold Lehmann</td><td/><td>2022-09-21 comment: I think there should be a formula included in the comment for application as with all the other Kappa terms
2022-09-28 adjustment: Steering Group changed the first sentence of Comment for application to better represent the assumption.</td><td/><td/><td/><td/><td>Measure of Agreement</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000035<a name="181513-TBD.580000035"> </a></td><td>Misclassification Rate</td><td>A ratio of the number of incorrect results to the total number tested.</td><td>Where results are reported as positive or negative, incorrect results are reported as false, and correct results are reported as true, the misclassification rate is calculated as ( False Positives + False Negatives ) / ( True Positives + True Negatives + False Positives + False Negatives ).</td><td>Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel</td><td>2022-10-19 vote 5-0 by Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Mario Tristan, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000036<a name="181513-TBD.580000036"> </a></td><td>F1-score</td><td>A ratio representing the harmonic mean of recall and precision.</td><td>The F1-score is used as a measure of quality for classification algorithms and information retrieval strategies, where 1 represents the best precision and recall and 0 represents the worst precision and recall.
A harmonic mean of a set of quantities is the reciprocal of the arithmetic mean of the reciprocals of each quantity. The F score is thus calculated as 1 / (the arithmetic mean of the reciprocals), or:
F = 1 / ( ( (1/recall) + (1/precision) ) / 2 )
F = 2*( (precision*recall) / (precision+recall) )
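As an illustrative worked example (numbers assumed for illustration only): with precision 0.80 and recall 0.60, $F = 2 \times (0.80 \times 0.60) / (0.80 + 0.60) = 0.96 / 1.40 \approx 0.69$.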
Recall is sensitivity [SEVCO term STATO:0000233].
Precision is positive predictive value (PPV) [SEVCO term TBD:0000022].
[[F-beta will be defined elsewhere in the code system.]]</td><td>Harold Lehmann, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel, Brian S. Alper</td><td>2022-10-19 vote 6-0 by Joanne Dehnbostel, Muhammad Afzal, Jesus Lopez-Alcalde, Mario Tristan, Eric Harvey, Harold Lehmann</td><td/><td/><td>OCRe F measure [not used due to inaccuracy in the definition]</td><td/><td/><td/><td>Measure of Agreement</td><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000042<a name="181513-TBD.580000042"> </a></td><td>Measure of Correlation</td><td>A measure of association between ordinal or continuous variables.</td><td>A value of 0 means no association. A positive value means a positive association (as one variable increases, the other variable increases). A negative value means a negative association (as one variable increases, the other variable decreases).
For correlation coefficients, the possible values range from +1 (perfect positive association) to -1 (perfect negative association).</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Yuan Gao, Kenneth Wilkins, Harold Lehmann</td><td>2022-11-16 vote 5-0 by Brian S. Alper, Philippe Rocca-Serra, Harold Lehman, Jesus Lopez-Alcalde, Eric Harvey</td><td>2022-10-26 vote 6-1 by Yuan Gao, Philippe Rocca-Serra, Eric Harvey, Paola Rosati, Harold Lehmann, Joanne Dehnbostel, Janice Tufte</td><td>2022-10-26 comments: there are measures of correlation which characterise non-linear relation between 2 variables . so I was wondering if there was a need to specify "measure of linear correlation" , where a subclass would be 'correlation coefficient). The type 'measure of correlation' becoming a parent class for the 'measure of non-linear correlation'
Should we say, "A value of 0 means no linear association, a value of +1 mean perfect positive linear (a positive slope) association, and a value of -1 means perfect negative association (a negative slope)."</td><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000301<a name="181513-STATO.580000301"> </a></td><td>Covariance</td><td>A measure of correlation that is not normalized by the variances of the variables.</td><td>A measure of correlation is a measure of association between ordinal or continuous variables.
Covariance is used in the calculation of other measures of correlation. Covariance can only be calculated for interval or continuous variables.
Because the covariance is not normalized by the variances of the variables, the magnitude of the covariance is not informative without consideration of the magnitude of the respective variances. Covariance is informative regarding whether both variables vary in the same direction (positive covariance) or in the opposite direction (negative covariance).
Covariance for a sample is calculated as the mean of the products of deviations from the sample mean for the variables.
$Cov(X,Y) = \sum_{i=1}^{n} (x_i - \overline{x})(y_i - \overline{y}) / (n-1)$ where $x_i$ is one of the observed values of X, $\overline{x}$ is the sample mean of X, $y_i$ is one of the observed values of Y, $\overline{y}$ is the sample mean of Y, and $n$ is the sample size.
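As an illustrative worked example (values assumed for illustration only): for paired observations X = (1, 2, 3) and Y = (2, 4, 6), the sample means are $\overline{x} = 2$ and $\overline{y} = 4$, the sum of products of deviations is $(-1)(-2) + (0)(0) + (1)(2) = 4$, and the sample covariance is $4 / (3 - 1) = 2$.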
Covariance as the population-level quantity is given by the expected value of the product of deviations from the mean for the variables.
$Cov(X, Y) = E[(X - \mu)(Y - \nu)]$ where $\mu = E(X)$ and $\nu = E(Y)$.
Covariance is a continuous value with a range of negative infinity to positive infinity.</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Yuan Gao, Khalid Shahin, Muhammad Afzal</td><td>2022-11-23 vote 5-0 by Mario Tristan, Yuan Gao, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann</td><td/><td/><td>STATO: "covariance = The covariance is a measurement data item about the strength of correlation between a set (2 or more) of random variables.
The covariance is obtained by forming:
cov(X,Y) = E([X - E(X)][Y - E(Y)]), where E(X) and E(Y) are the expected values (means) of variables X and Y respectively.
covariance is symmetric so cov(X,Y)=cov(Y,X).
The covariance is useful when looking at the variance of the sum of the 2 random variables since:
var(X+Y) = var(X) + var(Y) + 2cov(X,Y)
The covariance cov(x,y) is used to obtain the coefficient of correlation cor(x,y) by normalizing (dividing) cov(x,y) but the product of the standard deviations of x and y."</td><td/><td/><td/><td>Measure of Correlation</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000280<a name="181513-STATO.580000280"> </a></td><td>Pearson Correlation Coefficient</td><td>A measure of correlation, ranging from -1 to 1, that measures the strength and direction of the linear relationship between values of two continuous variables.</td><td>A measure of correlation is a measure of association between ordinal or continuous variables. Pearson correlation coefficient is designed to be used between continuous variables.
Pearson correlation coefficient for a sample ($r$) is calculated as $r = \dfrac{\widehat{cov}(x,y)}{s_x*s_y}$ where $ \widehat{cov}(x,y)$ is the estimated covariance, and $s_x$ and $s_y$ are the sample standard deviations.
Pearson correlation coefficient for a population ($\rho$) is defined as $\rho= \dfrac{cov(X,Y)}{\sigma_X*\sigma_Y}$ where cov(X,Y) is covariance of X and Y and $\sigma_X$ and $\sigma_Y$ are the population standard deviations.
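As an illustrative worked example (values assumed for illustration only): for paired observations x = (1, 2, 3) and y = (2, 4, 6), the estimated covariance is $\widehat{cov}(x,y) = 2$, the sample standard deviations are $s_x = 1$ and $s_y = 2$, and $r = 2 / (1 \times 2) = 1$, consistent with the exact linear relationship $y = 2x$.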
Assumptions for computing Pearson's correlation coefficient include a linear relationship between the 2 continuous variables and an approximately normal distribution for each variable.
Covariance is [defined in SEVCO](https://fevir.net/resources/CodeSystem/27270#STATO:0000301).</td><td>Kenneth Wilkins, Muhammad Afzal, Yuan Gao, Khalid Shahin, Joanne Dehnbostel, Brian S. Alper, Harold Lehmann</td><td>2022-12-07 vote 5-0 by Muhammad Afzal, Mario Tristan, Eric Harvey, Yuan Gao, Mahnoor Ahmed</td><td/><td/><td>STATO: "Pearson's correlation coefficient (Pearson product-moment correlation coefficient; Pearson's r; r statistics) = The Pearson's correlation coefficient is a correlation coefficient which evaluates two continuous variables for association strength in a data sample. It assumes that both variables are normally distributed and linearity exists.
The coefficient is calculated by dividing their covariance with the product of their individual standard deviations. It is a normalized measurement of how the two are linearly related."</td><td/><td/><td/><td>Measure of Correlation</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000201<a name="181513-STATO.580000201"> </a></td><td>Spearman Rank-Order Correlation Coefficient</td><td>A measure of correlation, ranging from -1 to 1, that measures the strength and direction of the relationship between ranks by value of two ordinal or continuous variables, and is calculated as the Pearson correlation coefficient between the rank values.</td><td>A measure of correlation is a measure of association between ordinal or continuous variables. Spearman rank-order correlation coefficient is designed to be used between ordinal and/or continuous variables.
The Spearman rank-order correlation coefficient can identify monotonic (i.e. consistently non-increasing or consistently non-decreasing) relationships, whether the relationships are linear or non-linear.
The Spearman rank-order correlation coefficient between two variables is equal to the [Pearson correlation coefficient](https://fevir.net/resources/CodeSystem/27270#STATO:0000280) between the rank values of those two variables.
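As an illustrative example (values assumed for illustration only): for paired observations x = (1, 4, 9, 100) and y = (2, 3, 5, 7), both variables are strictly increasing, so the rank values are (1, 2, 3, 4) for each variable and the Spearman rank-order correlation coefficient is +1, even though the relationship is non-linear and the Pearson correlation coefficient of the raw values is less than 1.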
The Spearman rank-order correlation coefficient is the nonparametric counterpart to the Pearson correlation coefficient and may be used when the assumptions for computing Pearson's correlation coefficient (include a linear relationship between 2 continuous variables and each of the variables approximates a normal distribution) are not met. The Spearman rank-order correlation coefficient is appropriate when either variable has outliers, is ordinal, or is not normally distributed; when the variances of the two variables are unequal; or when the apparent relationship between the variables is non-linear. The assumptions for computing Spearman rank-order correlation coefficient include a monotonic relationship between 2 continuous or ordinal variables.</td><td>Kenneth Wilkins, Muhammad Afzal, Yuan Gao, Joanne Dehnbostel, Brian S. Alper, Harold Lehmann, Noor Ahmed</td><td>2022-12-14 vote 5-0 by Jesus Lopez-Alcalde, Yuan Gao, Mario Tristan, Eric Harvey, Harold Lehmann</td><td/><td>2022-12-07 comment: The fundamental difference between the two correlation coefficients is that the Pearson coefficient works with a linear relationship between the two variables whereas the Spearman Coefficient works with monotonic relationships as well.</td><td>STATO: "Spearman's rank correlation coefficient (Spearman's rho) = Spearman's rank correlation coefficient is a correlation coefficient which is a nonparametric measure of statistical dependence between two ranked variables. It assesses how well the relationship between two variables can be described using a monotonic function. If there are no repeated data values, a perfect Spearman correlation of +1 or −1 occurs when each of the variables is a perfect monotone function of the other.
Spearman's coefficient may be used when the conditions for computing Pearson's correlation are not met (e.g linearity, normality of the 2 continuous variables) but may require a ranking transformation of the variables"</td><td/><td/><td/><td>Measure of Correlation</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000240<a name="181513-STATO.580000240"> </a></td><td>Kendall Correlation Coefficient</td><td>A measure of correlation, ranging from -1 to 1, that measures the strength and direction of the relationship between ranks by value of two ordinal or continuous variables, and is calculated based on the difference in the number of concordant and discordant pairs of rankings divided by the number of all possible pairs of rankings.</td><td>A measure of correlation is a measure of association between ordinal or continuous variables. Kendall's correlation coefficient is designed to be used between ordinal variables (or continuous variables converted to ordinal variables).
The Kendall's correlation coefficient can identify monotonic (i.e. consistently non-increasing or consistently non-decreasing) relationships, whether the relationships are linear or non-linear.
The Kendall's correlation coefficient between two variables is calculated by determining the concordance or discordance of each pair of ranked values (whether or not two raters are concordant in one value being ranked equal or higher to the other value), and then dividing the difference between the number of concordant values ($n_c$) and the number of discordant values ($n_d$) by the number of pairs of ranked values ($\frac{1}{2}n(n-1)$).
$$
\tau = \dfrac{n_c - n_d}{\frac{1}{2}n(n-1)}
$$
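As an illustrative worked example (rankings assumed for illustration only): for rank values x = (1, 2, 3, 4) and y = (1, 3, 2, 4), there are $\frac{1}{2}(4)(3) = 6$ pairs, of which 5 are concordant and 1 is discordant (the pair ranked 2 and 3 on x is ranked 3 and 2 on y), so $\tau = (5 - 1) / 6 \approx 0.67$.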
The Kendall's correlation coefficient is a nonparametric statistic and may be used when the assumptions for computing Pearson's correlation coefficient (include a linear relationship between 2 continuous variables and each of the variables approximates a normal distribution) are not met. The Kendall's correlation coefficient is appropriate when either variable has outliers, is ordinal, or is not normally distributed; when the variances of the two variables are unequal; or when the apparent relationship between the variables is non-linear. The assumptions for computing Kendall's correlation coefficient include a monotonic relationship between 2 ordinal variables.</td><td>Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Joanne Dehnbostel</td><td>2023-01-25 vote 6-0 by Mario Tristan, Jesus Lopez-Alcalde, Joanne Dehnbostel, Harold Lehmann, Yuan Gao, Eric Harvey</td><td>2022-12-21 vote 5-0 by Joanne Dehnbostel, Mario Trista, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann
2023-01-04 definition change by Steering Committee</td><td/><td>STATO: Kendall's correlation coefficient (Kendall's tau (τ) coefficient; Kendall rank correlation coefficient) = Kendall's correlation coefficient is a correlation coefficient between 2 ordinal variables (natively or following a ranking procedure) and may be used when the conditions for computing Pearson's correlation are not met (e.g linearity, normality of the 2 continuous variables)</td><td/><td/><td/><td>Measure of Correlation</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000043<a name="181513-TBD.580000043"> </a></td><td>Goodman and Kruskal’s Gamma</td><td>A measure of correlation, ranging from -1 to 1, that measures the strength and direction of the relationship between ranks by value of two ordinal or continuous variables, and is calculated based on the difference in the number of concordant and discordant pairs of rankings divided by the total number of pairs of rankings, where ties are not counted among the pairs of rankings.</td><td/><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel</td><td>2023-01-25 vote 5-0 by Mario Tristan, Jesus Lopez-Alcalde, Joanne Dehnbostel, Harold Lehmann, Eric Harvey</td><td/><td/><td><a href="https://stats.stackexchange.com/questions/18112/how-do-the-goodman-kruskal-gamma-and-the-kendall-tau-or-spearman-rho-correlation">https://stats.stackexchange.com/questions/18112/how-do-the-goodman-kruskal-gamma-and-the-kendall-tau-or-spearman-rho-correlation</a></td><td/><td/><td/><td>Measure of Correlation</td><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000565<a name="181513-STATO.580000565"> </a></td><td>Regression Coefficient</td><td>A measure of association that is used as the coefficient of an independent variable in a regression model, of the dependent variable, which is linear in its parameters.</td><td>A value of zero means no association. The sign (positive or negative) reflects the direction of association.</td><td>Harold Lehmann, Brian S. Alper, Kenneth Wilkins, Muhammad Afzal</td><td>2023-02-07 vote 5-0 by Cauê Monaco, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Eric Harvey</td><td/><td/><td>STATO: regression coefficient = a regression coefficient is a data item generated by a type of data transformation called a regression, which aims to model a response variable by expression the predictor variables as part of a function where variable terms are modified by a number. A regression coefficient is one such number.</td><td/><td/><td/><td>Measure of Association</td><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000044<a name="181513-TBD.580000044"> </a></td><td>Measure of Calibration</td><td>A measure of association between a variable representing known or true values and a variable representing measured or predicted values.</td><td>Calibration is often used for measurement devices. The known or true values may be called the reference standard.
Calibration is also used for predictive models.</td><td>Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Kenneth Wilkins</td><td/><td/><td/><td/><td/><td/><td/><td>Measure of Correlation</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000045<a name="181513-TBD.580000045"> </a></td><td>Mean calibration</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Measure of Correlation</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000046<a name="181513-TBD.580000046"> </a></td><td>Calibration-in-the-large</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Measure of Correlation</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000047<a name="181513-TBD.580000047"> </a></td><td>Calibration intercept</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Measure of Correlation</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000048<a name="181513-TBD.580000048"> </a></td><td>Calibration slope</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Measure of Correlation</td><td/></tr><tr><td>2</td><td style="white-space:nowrap"> STATO:0000028<a name="181513-STATO.580000028"> </a></td><td>Measure of Dispersion</td><td>A statistic that represents the variation or spread among data values in a dataset or data distribution.</td><td>This categorical (parent) term can be used for a statistic that is a measure of dispersion and is not found in the child terms.</td><td>Brian S. Alper, Kenneth Wilkins, Yuan Gao, Joanne Dehnbostel</td><td>2023-02-07 vote 5-0 by Cauê Monaco, Harold Lehmann, Janice Tufte, Jesus Lopez-Alcalde, Eric Harvey</td><td/><td/><td>STATO: measure of variation (measure of dispersion) = measure of variation or statistical dispersion is a data item which describes how much a theoritical distribution or dataset is spread.
NCIt: "Statistical dispersion-
The variation between data values in a sample."
UMLS: "Dispersion (C0332624)
Definition: The variation between data values in a sample.
Semantic Types: Spatial Concept"</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000035<a name="181513-STATO.580000035"> </a></td><td>Range</td><td>A measure of dispersion calculated as the difference between the maximum observed value and the minimum observed value.</td><td>A measure of dispersion is a statistic that represents the variation or spread among data values in a dataset or data distribution.
The maximum observed value is a statistic that represents the largest non-null value in a collection of values that can be ordered by magnitude.
The minimum observed value is a statistic that represents the smallest non-null value in a collection of values that can be ordered by magnitude.
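As an illustrative worked example (values assumed for illustration only): for the dataset (3, 7, 12), the maximum observed value is 12, the minimum observed value is 3, and the range is $12 - 3 = 9$.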
A range (as a statistic) is represented as a single value (the difference between maximum and minimum observed values) while, in common language, the term range is often expressed with two values (from the minimum to maximum values, or from the lower limit to the higher limit).</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal</td><td>2023-02-20 vote 5-0 by Philippe Rocca-Serra, Janice Tufte, Harold Lehmann, Mario Tristan, Eric Harvey</td><td/><td/><td>STATO: range = the range is a measure of variation which describes the difference between the lowest score and the highest score in a set of numbers (a data set)</td><td/><td/><td/><td>Measure of Dispersion</td><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:00000164<a name="181513-STATO.5800000164"> </a></td><td>Interquartile range</td><td>A measure of dispersion calculated as the difference between the 75th percentile and the 25th percentile.</td><td>A measure of dispersion is a statistic that represents the variation or spread among data values in a dataset or data distribution.
The 75th percentile is the median of the portion of the dataset or distribution with values greater than the median value.
The 25th percentile is the median of the portion of the dataset or distribution with values lesser than the median value.
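As an illustrative worked example (values assumed for illustration only): if the 75th percentile of a dataset is 10 and the 25th percentile is 4, the interquartile range is $10 - 4 = 6$.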
An interquartile range (as a statistic) is represented as a single value (the difference between 75th and 25th percentiles) while, in common language, the term interquartile range is often expressed with two values (the 25th percentile and the 75th percentile).</td><td>Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal</td><td>2023-02-20 vote 5-0 by Philippe Rocca-Serra, Janice Tufte, Harold Lehmann, Mario Tristan, Eric Harvey</td><td/><td/><td>STATO: "inter quartile range = The interquartile range is a data item which corresponds to the difference between the upper quartile (3rd quartile) and lower quartile (1st quartile).
The interquartile range contains the second quartile or median.
The interquartile range is a data item providing a measure of data dispersion"</td><td/><td/><td/><td>Measure of Dispersion</td><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000237<a name="181513-STATO.580000237"> </a></td><td>Standard deviation</td><td>A measure of dispersion that represents the average of the distances from the mean of the dataset to each data point in the dataset.</td><td>Standard deviation for sample is a standard deviation in which the dataset is a sample. Standard deviation for population, when used as a statistical model parameter, is not a standard deviation as a type of statistic.</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin, Kenneth Wilkins</td><td>2023-05-15 vote 6-0 by Muhammad Afzal, Brian S. Alper, Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey, Harold Lehmann</td><td/><td/><td>STATO: standard deviation (σ) = The standard deviation of a random variable, statistical population, data set, or probability distribution is a measure of variation which correspond to the average distance from the mean of the data set to any given point of that dataset. It also corresponds to the square root of its variance.</td><td/><td/><td/><td>Measure of Dispersion</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000052<a name="181513-TBD.580000052"> </a></td><td>Standard deviation for sample</td><td>A standard deviation that is the square root of the quotient of the summation across data points of the square of the distance from each data point to the sample mean, and the degrees of freedom (where the degrees of freedom is sample size minus one).</td><td>Standard deviation is defined as a measure of dispersion that represents the average of the distances from the mean of the dataset to each data point in the dataset.
Standard deviation for sample is a standard deviation in which the dataset is a sample.
The formula for the standard deviation for sample ($s$) is:
$$
s = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \overline{x})^2}{n - 1}}
$$
where $n$ is the sample size (the number of independent observations, indexed by $i$), $x_i$ is the observed value for observation $i$, and $\overline{x}$ is the sample mean.
The formula to calculate degrees of freedom depends on the model. For the degrees of freedom for a sample standard deviation, given the sample mean, it is n-1, because the nth observation is no longer independent, given the n-1 other observations and the sample mean.</td><td>Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin, Kenneth Wilkins</td><td>2023-05-15 vote 5-0 by Muhammad Afzal, Brian S. Alper, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann</td><td/><td/><td/><td/><td/><td/><td>Measure of Dispersion</td><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000113<a name="181513-STATO.580000113"> </a></td><td>Variance</td><td>A measure of dispersion that represents the square of the standard deviation.</td><td>Standard deviation is defined as a measure of dispersion that represents the average of the distances from the mean of the dataset to each data point in the dataset.
Variance for sample is a variance in which the dataset is a sample. Variance for population, when used as a probability distribution parameter, is not a variance as a type of statistic.</td><td>Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel</td><td>2023-11-20 vote 5-0 by Brian S. Alper, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Eric Harvey</td><td/><td/><td>STATO: variance (σ2) = variance is a data item about a random variable or probability distribution. it is equivalent to the square of the standard deviation. It is one of several descriptors of a probability distribution, describing how far the numbers lie from the mean (expected value).The variance is the second moment of a distribution.</td><td/><td/><td/><td>Measure of Dispersion</td><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000055<a name="181513-TBD.580000055"> </a></td><td>variance for sample</td><td>A variance that is the quotient of the summation across data points of the square of the distance from each data point to the sample mean, and the degrees of freedom (where the degrees of freedom is sample size minus one).</td><td>Variance is defined as a measure of dispersion that represents the square of the standard deviation. Standard deviation is defined as a measure of dispersion that represents the average of the distances from the mean of the dataset to each data point in the dataset.
Variance for sample is a variance in which the dataset is a sample.
The formula for the variance for sample ($s^2$) is:
$$
s^2 = \frac{\sum_{i=1}^{n} (x_i - \overline{x})^2}{n - 1}
$$
where $n$ is the sample size (the number of independent observations, indexed by $i$), $x_i$ is the observed value for observation $i$, and $\overline{x}$ is the sample mean.
The formula to calculate degrees of freedom depends on the model. For the degrees of freedom for a sample variance, given the sample mean, it is n-1, because the nth observation is no longer independent, given the n-1 other observations and the sample mean.</td><td>Kenneth Wilkins, Brian S. Alper, Muhammad Afzal</td><td>2023-11-20 vote 5-0 by Brian S. Alper, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td>Measure of Dispersion</td><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000049<a name="181513-TBD.580000049"> </a></td><td>Gini index</td><td>A measure of dispersion that is half the relative mean absolute difference between all pairs of observed values.</td><td>The Gini index is typically used as a measure of inequality for income, wealth, or resource distribution.</td><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Kenneth Wilkins</td><td>2023-12-04 vote 5-0 by Yasser Sami Amer, Xing Song, Eric Harvey, Harold Lehmann, Brian S. Alper</td><td>2023-11-20 vote 5-0 by Brian S. Alper, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Eric Harvey BUT comment of "between all pairs of observed values?" led to recognition of incorrect definition</td><td/><td/><td/><td/><td/><td>Measure of Dispersion</td><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000057<a name="181513-TBD.580000057"> </a></td><td>Standard error</td><td>A measure of dispersion applied to estimates across hypothetical repeated random samples.</td><td>A standard error is used to quantify the uncertainty around a statistical estimate due to random sampling error.</td><td>Harold Lehmann, Kenneth Wilkins, Brian S. Alper, Xing Song, Joanne Dehnbostel, Muhammad Afzal</td><td/><td/><td/><td>STATO: It is a measure of how precise is an estimate of the statistical parameter is. Standard error is the estimated standard deviation of an estimate. It measures the uncertainty associated with the estimate. Compared with the standard deviations of the underlying distribution, which are usually unknown, standard errors can be calculated from observed data.</td><td>2023-11-27T20:00:00</td><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000037<a name="181513-STATO.580000037"> </a></td><td>standard error of the mean</td><td>A measure of dispersion applied to means across hypothetical repeated random samples.</td><td>A standard error is used to quantify the uncertainty around a statistical estimate due to random sampling error.
The standard error of the mean is calculated by dividing the sample standard deviation (STATO:0000237) by the square root of n, the size (number of observations) of the sample.</td><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal, Xing Song, Kenneth Wilkins, Joanne Dehnbostel</td><td/><td/><td/><td>STATO: The standard error of the mean (SEM) is data item denoting the standard deviation of the sample-mean's estimate of a population mean. It is calculated by dividing the sample standard deviation (i.e., the sample-based estimate of the standard deviation of the population) by the square root of n , the size (number of observations) of the sample.</td><td>2023-11-27</td><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000062<a name="181513-TBD.580000062"> </a></td><td>standard error of the proportion</td><td>A measure of dispersion applied to proportions across hypothetical repeated random samples.</td><td>A standard error is used to quantify the uncertainty around a statistical estimate due to random sampling error.
The formula for the standard error of the sample proportion ($SE(\hat{p})$) is:
$$
SE(\hat{p}) = \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}
$$
where $\hat{p}$ is the sample proportion and $n$ is the size (number of observations) of the sample.</td><td>Brian S. Alper, Kenneth Wilkins, Xing Song, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal</td><td/><td/><td/><td/><td>2023-11-27</td><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000063<a name="181513-TBD.580000063"> </a></td><td>standard error of the difference between independent means</td><td>A measure of dispersion applied to differences between means of independent groups across hypothetical repeated random samples.</td><td>A standard error is used to quantify the uncertainty around a statistical estimate due to random sampling error.
In cases where the samples are assumed to have unequal population variances for X, the formula for the standard error of the sample difference between means ($SE_{unequal}(\overline{x}_{1} - \overline{x}_{2})$) is:
$$
SE_{unequal}(\overline{x}_{1} - \overline{x}_{2}) = \sqrt{\frac{s^2_1}{n_1}+\frac{s^2_2}{n_2}}
$$
where $\overline{x}_{1}$ and $\overline{x}_{2}$ are the sample means, $s_1$ and $s_2$ are the sample standard deviations, and $n_1$ and $n_2$ are the sizes (number of observations) of the samples.
In cases where the samples are assumed to have the same (equal) population variance for X, the formula for the standard error of the sample difference between means ($SE_{equal}(\overline{x}_{1} - \overline{x}_{2})$) is:
$$
SE_{equal}(\overline{x}_{1} - \overline{x}_{2}) = \sqrt{\frac{n_1 s^2_1 + n_2 s^2_2}{n_1 + n_2 - 2}}
$$
where $\overline{x}_{1}$ and $\overline{x}_{2}$ are the sample means, $s_1$ and $s_2$ are the sample standard deviations, and $n_1$ and $n_2$ are the sizes (number of observations) of the samples.</td><td>Harold Lehmann, Kenneth Wilkins, Brian S. Alper</td><td/><td/><td/><td/><td>2023-12-04</td><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000064<a name="181513-TBD.580000064"> </a></td><td>standard error of the difference between independent proportions</td><td>A measure of dispersion applied to differences between proportions arising from independent groups across hypothetical repeated random samples.</td><td>A standard error is used to quantify the uncertainty around a statistical estimate due to random sampling error.
The formula for the standard error of the sample difference between proportions ($SE(\hat{p}_1 - \hat{p}_2)$) is:
$$
SE(\hat{p}_1 - \hat{p}_2) = \sqrt {\frac{\hat{p}_1(1-\hat{p}_1)} {n_1} + \frac{\hat{p}_2(1-\hat{p}_2)} {n_2}}
$$
where $\hat{p}_1$ and $\hat{p}_2$ are the sample proportions and $n_1$ and $n_2$ are the sizes (number of observations) of the samples.</td><td>Harold Lehmann, Kenneth Wilkins, Brian S. Alper</td><td/><td/><td/><td/><td>2023-12-04</td><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000455<a name="181513-STATO.580000455"> </a></td><td>credible interval</td><td>The range in which the value of the parameter of interest is likely to reside, typically within a posterior probability distribution.</td><td>The credible interval is used in Bayesian analysis and plays an analogous role to the confidence interval in frequentist statistics.</td><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Muhammad Afzal</td><td>2023-11-27 vote 5-0 by Xing Song, Yasser Sami Amer, Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey</td><td/><td/><td>STATO: In Bayesian statistics context, a credible interval is an interval of a posterior distribution which is such that the density at any point inside the interval is greater than the density at any point outside and that the area under the curve for that interval is equal to a prespecified probability level. For any probability level there is generally only one such interval, which is also often known as the highest posterior density region. Unlike the usual confidence interval associated with frequentist inference, here the intervals specify the range within which parameters lie with a certain probability. The Bayesian counterparts of the confidence interval used in Frequentists Statistics.
UMLS: "Interval (C1272706)
Definition: The period of time or the distance separating two instances, events, or occurrences.
Semantic Types: Temporal Concept"
OBCS: A quantitative confidence value that is used in Bayesian analysis to describe the range in which a posterior probability estimate is likely to reside.
OECD: calculated interval-The interval containing possible values for a suppressed cell in a table, given the table structure and the values published.
SCO: interval-An interval is a set of real numbers that includes all numbers between any two numbers in the set.</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000196<a name="181513-STATO.580000196"> </a></td><td>confidence interval</td><td>The estimated range of values that encompasses the point estimate and quantifies the uncertainty about that estimate in terms of a prespecified level of coverage, expected to include the true value between upper and lower bounds, across hypothetically repeated random samples, with all assumptions regarding the sampling distribution across random samples having been fully met.</td><td>The prespecified level of coverage is commonly 0.95 or 95%.
Confidence cannot be directly interpreted as a probability, in contrast to the credibility of credible intervals. Confidence conveys uncertainty only indirectly, by reflecting a long-term relative frequency across hypothetically repeated sample estimates.
The width of a confidence interval can convey precision. In most cases, this precision can be increased by increasing the sample size, assuming that variability in the sample is due only to random sample-to-sample variation.</td><td>Brian S. Alper, Harold Lehmann, Ken Wilkins, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte</td><td>2023-11-27 vote 5-0 by Xing Song, Yasser Sami Amer, Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey</td><td/><td/><td>STATO: A confidence interval is a data item which defines a range of values in which a measurement or trial falls corresponding to a given probability. also confidence interval calculation is a data transformation which determines a confidence interval for a given statistical parameter
NCIt: A range of values for a parameter that may contain the parameter and the degree of confidence that it is in fact there. A measure of the precision of an estimated value. The interval represents the range of values, consistent with the data, that is believed to encompass the "true" value with high probability (usually 95%). The confidence interval is expressed in the same units as the estimate. Wider intervals indicate lower precision; narrow intervals, greater precision. [CONSORT Statement]
OBCS: A quantitative confidence value that refers to an interval give values within which there is a high probability (95 percent by convention) that the true population value can be found. The calculation of a confidence interval considers the standard deviation of the data and the number of observations. Thus, a confidence interval narrows as the number of observations increases, or its variance (dispersion) decreases.
CDISC Glossary: A measure of the precision of an estimated value. The interval represents the range of values, consistent with the data, that is believed to encompass the "true" value with high probability (usually 95%). The confidence interval is expressed in the same units as the estimate. Wider intervals indicate lower precision; narrow intervals, greater precision. [CONSORT Statement]
NICE: "Confidence interval
A way of expressing how certain we are about the findings from a study, using statistics. It gives a range of results that is likely to include the 'true' value for the population. A wide confidence interval (CI) indicates a lack of certainty about the true effect of the test or treatment - often because a small group of patients has been studied. A narrow CI indicates a more precise estimate (for example, if a large number of patients have been studied).
The CI is usually stated as '95% CI', which means that the range of values has a 95 in a 100 chance of including the 'true' value. For example, a study may state that 'based on our sample findings, we are 95% certain that the 'true' population blood pressure is not higher than 150 and not lower than 110'. In such a case the 95% CI would be 110 to 150."
OECD: A confidence interval is an interval which has a known and controlled probability (generally 95% or 99%) to contain the true value.
"Rothman textbook: confidence interval, which provides a range of values for the association, under the hypothesis that only random variation has created discrepancies between the true value of the association under study and the value observed in the data (Altman et al., 2000; see Chapters 13 through 16) Altman DG, Machin D, Bryant TN, Gardner MJ, eds. Statistics with confidence, 2nd ed. London: BMJ Books, 2000
"</td><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000065<a name="181513-TBD.580000065"> </a></td><td>Measure of Discrimination</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000066<a name="181513-TBD.580000066"> </a></td><td>Area Under the Curve</td><td/><td/><td/><td/><td/><td/><td>STATO: area under curve is a measurement datum which corresponds to the surface define by the x-axis and bound by the line graph represented in a 2 dimensional plot resulting from an integration or integrative calculus. The interpretation of this measurement datum depends on the variables plotted in the graph</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000067<a name="181513-TBD.580000067"> </a></td><td>C-statistic</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000068<a name="181513-TBD.580000068"> </a></td><td>Measure of Heterogeneity</td><td/><td/><td/><td/><td/><td/><td>STATO: a measure of heterogeneity in meta-analysis is a data item which aims to describe the variation in study outcomes between studies.</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000069<a name="181513-TBD.580000069"> </a></td><td>Chi square for homogeneity</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000070<a name="181513-TBD.580000070"> </a></td><td>Cochran’s Q statistic</td><td/><td/><td/><td/><td/><td/><td>STATO: Cochran's Q test is a statistical test used for unreplicated randomized block design experiments with a binary response variable and paired data. In the analysis of two-way randomized block designs where the response variable can take only two possible outcomes (coded as 0 and 1), Cochran's Q test is a non-parametric statistical test to verify whether k treatments have identical effects.</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000071<a name="181513-TBD.580000071"> </a></td><td>I-squared</td><td/><td/><td/><td/><td/><td/><td>STATO: I-squared = The quantity called I2, describes the percentage of total variation across studies that is due to heterogeneity rather than chance. I2 can be readily calculated from basic results obtained from a typical meta-analysis as I2 = 100%×(Q - df)/Q, where Q is Cochran's heterogeneity statistic and df the degrees of freedom. Negative values of I2 are put equal to zero so that I2 lies between 0% and 100%. A value of 0% indicates no observed heterogeneity, and larger values show increasing heterogeneity. Unlike Cochran's Q, it does not inherently depend upon the number of studies considered. A confidence interval for I² is constructed using either i) the iterative non-central chi-squared distribution method of Hedges and Piggott (2001); or ii) the test-based method of Higgins and Thompson (2002). The non-central chi-square method is currently the method of choice (Higgins, personal communication, 2006) – it is computed if the 'exact' option is selected. (STATO:0000420)</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000072<a name="181513-TBD.580000072"> </a></td><td>Tau squared</td><td/><td/><td/><td/><td/><td/><td>STATO: Tau-squared is an estimate of the between-study variance in a random-effects meta-analysis. The square root of this number (i.e. 
tau) is the estimated standard deviation of underlying effects across studies. (STATO:0000421)</td><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000073<a name="181513-TBD.580000073"> </a></td><td>Hypothesis Testing Measure</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000074<a name="181513-TBD.580000074"> </a></td><td>Chi square for independence</td><td/><td/><td/><td/><td/><td/><td>STATO: Chi-squared statistic is a statistic computed from observations and used to produce a p-value in statistical test when compared to a Chi-Squared distribution. (STATO:0000030)</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000075<a name="181513-TBD.580000075"> </a></td><td>Chi square for trend</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000076<a name="181513-TBD.580000076"> </a></td><td>p-value</td><td/><td/><td/><td/><td/><td/><td>STATO: A quantitative confidence value that represents the probability of obtaining a result at least as extreme as that actually obtained, assuming that the actual value was the result of chance alone. (OBI:0000175)</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:p-value-one-sided<a name="181513-TBD.58p-value-one-sided"> </a></td><td>p value for one-sided test</td><td>p value for one-sided test</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:p-value-two-sided<a name="181513-TBD.58p-value-two-sided"> </a></td><td>p value for two-sided test</td><td>p value for two-sided test</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000077<a name="181513-TBD.580000077"> </a></td><td>Z-score</td><td/><td/><td/><td/><td/><td/><td>STATO: A z-score (also known as z-value, standard score, or normal score) is a measure of the divergence of an individual experimental result from the most probable result, the mean. Z is expressed in terms of the number of standard deviations from the mean value. (STATO:0000104) Z-statistic is a statistic computed from observations and used to produce a p-value when compared to a Standard Normal Distribution in a statistical test called the Z-test. (STATO:0000376)</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000078<a name="181513-TBD.580000078"> </a></td><td>T-score</td><td/><td/><td/><td/><td/><td/><td>STATO: t-statistic is a statistic computed from observations and used to produce a p-value in statistical test when compared to a Student's t distribution. 
(STATO:0000176)</td><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000079<a name="181513-TBD.580000079"> </a></td><td>Predicted Data Item</td><td>Modifier Code: add to any other Statistic Type code to note predicted form (not observed form) of that statistic type</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000179<a name="181513-TBD.580000179"> </a></td><td>Absolute Value</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> STATO:0000291<a name="181513-STATO.580000291"> </a></td><td>Quantile</td><td/><td/><td/><td/><td/><td/><td>STATO-a quantile is a data item which corresponds to specific elements x in the range of a variate X. the k-th n-tile P_k is that value of x, say x_k, which corresponds to a cumulative frequency of Nk/n (Kenney and Keeping 1962). If n=4, the quantity is called a quartile, and if n=100, it is called a percentile.</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000293<a name="181513-STATO.580000293"> </a></td><td>Percentile</td><td/><td/><td/><td/><td/><td/><td>STATO-a percentile is a quantile which splits data into sections accrued of 1% of data, so the first percentile delineates 1% of the data, the second quartile delineates 2% of the data and the 99th percentile, 99 % of the data</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000292<a name="181513-STATO.580000292"> </a></td><td>Decile</td><td/><td/><td/><td/><td/><td/><td>STATO-a decile is a quantile where n=10 and which splits data into sections accrued of 10% of data, so the first decile delineates 10% of the data, the second decile delineates 20% of the data and the nineth decile, 90 % of the data</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000152<a name="181513-STATO.580000152"> </a></td><td>Quartile</td><td/><td/><td/><td/><td/><td/><td>STATO-a quartile is a quantile which splits data into sections accrued of 25% of data, so the first quartile delineates 25% of the data, the second quartile delineates 50% of the data and the third quartile, 75 % of the data</td><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBDTBD<a name="181513-TBDTBD"> </a></td><td>Measurement Value</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBDTBD2<a name="181513-TBDTBD2"> </a></td><td>Duration</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBDTBD3<a name="181513-TBDTBD3"> </a></td><td>Time-to-Event</td><td>Event-free duration</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>1</td><td style="white-space:nowrap">TBD:0000080<a name="181513-TBD.580000080"> </a></td><td>Hypothesis test attribute</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:beta<a name="181513-TBD.58beta"> </a></td><td>Beta</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:power<a name="181513-TBD.58power"> </a></td><td>Power</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000081<a name="181513-TBD.580000081"> </a></td><td>alpha 
setting</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000084<a name="181513-TBD.580000084"> </a></td><td>alpha setting with subtype unspecified</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000085<a name="181513-TBD.580000085"> </a></td><td>individual test alpha without multiple testing adjustment</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000086<a name="181513-TBD.580000086"> </a></td><td>overall alpha with multiple testing</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000087<a name="181513-TBD.580000087"> </a></td><td>individual test alpha with multiple testing adjustment</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> STATO:0000286<a name="181513-STATO.580000286"> </a></td><td>one-tailed test</td><td/><td/><td/><td/><td/><td/><td>STATO: one tailed test (one sided test) = a one-tailed test is a statistical test which, assuming an unskewed probability distribution, allocates all of the significance level to evaluate only one hypothesis to explain a difference.
The one-tailed test provides more power to detect an effect in one direction by not testing the effect in the other direction.
one-tailed test should be preceded by two-tailed test in order to avoid missing out on detecting alternate effect explaining an observed difference.</td><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> STATO:0000287<a name="181513-STATO.580000287"> </a></td><td>two-tailed test</td><td/><td/><td/><td/><td/><td/><td>STATO: two tailed test (two sided test) = a two tailed test is a statistical test which assess the null hypothesis of absence of difference assuming a symmetric (not skewed) underlying probability distribution by allocating half of the significance level selected to each of the direction of change which could explain a difference (for example, a difference can be an excess or a loss).</td><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:checkIfInSTATOtesting-margin<a name="181513-TBD.58checkIfInSTATOtesting-margin"> </a></td><td>Hypothesis testing margin</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:null-hypothesis<a name="181513-TBD.58null-hypothesis"> </a></td><td>null hypothesis</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:alternative-hypothesis<a name="181513-TBD.58alternative-hypothesis"> </a></td><td>alternative hypothesis</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>1</td><td style="white-space:nowrap">STATO:0000107<a name="181513-STATO.580000107"> </a></td><td>Statistical Model</td><td>A set of mathematical relationships that express assumptions related to the generation of the observed data and that sets constraints for the analysis of the data.</td><td>A statistical model describes how one or more random variables are related to one or more other variables. A statistical model often relates to the generation of sample data from a larger population. "Generative model" is a term used by the machine learning community.</td><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal, Harold Lehmann</td><td>2023-06-12 vote 5-0 by Brian S. Alper, Sunu Alice Cherian, Harold Lehmann, Paola Rosati, Eric Harvey</td><td>2023-05-22 vote 3-1 by Jesus Lopez-Alcalde, Sunu Alice Cherian, Janice Tufte, Harold Lehmann
2023-06-05 vote 5-1 by Cauê Monaco, Eric Harvey, Paul Whaley, Jesus Lopez-Alcalde, Sunu Alice Cherian, Harold Lehmann</td><td>2023-05-22 comments:
Definition: A mathematical model that reflects a set of statistical assumptions with regards to the process governing the generation of sample data from a larger population.
Since we now have Statistical Model Characteristics as a separate hierarchy, might we want to refer to that hierarchy in the Comment for Application. ("There are many potential components to a statistical model. Those components are represented by the SEVCO hierarchy beginning with...")
2023-06-05 comment: The comment for application needs to be improved - it is difficult to read and the sentences are not grammatically correct.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000090<a name="181513-TBD.580000090"> </a></td><td>Fixed-effect model</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000091<a name="181513-TBD.580000091"> </a></td><td>Random-effects model</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> STATO:0000464<a name="181513-STATO.580000464"> </a></td><td>Generalized Linear Mixed Model</td><td/><td/><td/><td/><td/><td/><td>STATO: linear mixed model (LMM) = "A lnear mixed model is a mixed model containing both fixed effects and random effects and in which factors and covariates are assumed to have a linear relationship to the dependent variable. These models are useful in a wide variety of disciplines in the physical, biological and social sciences. They are particularly useful in settings where repeated measurements are made on the same statistical units (longitudinal study), or where measurements are made on clusters of related statistical units. Because of their advantage in dealing with missing values, mixed effects models are often preferred over more traditional approaches such as repeated measures ANOVA.
Fixed-effects factors are generally considered to be the variables whose values of interest are all represented in the data file.
Random-effects factors are variables whose values correspond to unwanted variation. They are useful when trying to understand variability in the dependent variable which was not anticipated and exceeds what was expected.
Linear mixed models also allow to specify specific interactions between factors, and allow the evaluation of the various linear effect that a particular combination of factor levels may have on a response variable.
Finally, linear mixed models allow to specify variance components in order to describe the relation between various random effects levels."</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000093<a name="181513-TBD.580000093"> </a></td><td>GLMM with probit link</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000094<a name="181513-TBD.580000094"> </a></td><td>GLMM with logit link</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000095<a name="181513-TBD.580000095"> </a></td><td>GLMM with identity link</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000096<a name="181513-TBD.580000096"> </a></td><td>GLMM with log link</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000097<a name="181513-TBD.580000097"> </a></td><td>GLMM with generalized logit link</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000098<a name="181513-TBD.580000098"> </a></td><td>GLMM with subtype unspecified</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000099<a name="181513-TBD.580000099"> </a></td><td>GLM</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000100<a name="181513-TBD.580000100"> </a></td><td>GLM with probit link</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000101<a name="181513-TBD.580000101"> </a></td><td>GLM with logit link</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>TBD:0000099 and TBD:0000106</td><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000102<a name="181513-TBD.580000102"> </a></td><td>GLM with identity link</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>TBD:0000099 and TBD:0000106</td><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000103<a name="181513-TBD.580000103"> </a></td><td>GLM with log link</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000104<a name="181513-TBD.580000104"> </a></td><td>GLM with generalized logit link</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000105<a name="181513-TBD.580000105"> </a></td><td>GLM with subtype unspecified</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>1</td><td style="white-space:nowrap">TBD:0000121<a name="181513-TBD.580000121"> </a></td><td>Data Transformation</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000122<a name="181513-TBD.580000122"> </a></td><td>Data imputation</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000125<a name="181513-TBD.580000125"> </a></td><td>Zero-cell adjustment with constant</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000126<a name="181513-TBD.580000126"> </a></td><td>Zero-cell adjustment with continuity 
correction</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000123<a name="181513-TBD.580000123"> </a></td><td>Meta-analysis</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000127<a name="181513-TBD.580000127"> </a></td><td>Meta-analysis with fixed-effect model</td><td/><td/><td/><td/><td/><td/><td>STATO: STATO_0000082: fixed effect model = a fixed effect model is a statistical model which represents the observed quantities in terms of explanatory variables that are treated as if the quantities were non-random.</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000129<a name="181513-TBD.580000129"> </a></td><td>Meta-analysis using inverse variance method</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000130<a name="181513-TBD.580000130"> </a></td><td>Meta-analysis using Mantel-Haenszel method</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000131<a name="181513-TBD.580000131"> </a></td><td>Meta-analysis using Peto method</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000128<a name="181513-TBD.580000128"> </a></td><td>Meta-analysis with random-effects model</td><td/><td/><td/><td/><td/><td/><td>STATO: STATO_0000099: random effect model (variance components model) = a random effect(s) model, also called a variance components model, is a kind of hierarchical linear model. It assumes that the dataset being analysed consists of a hierarchy of different populations whose differences relate to that hierarchy.</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000132<a name="181513-TBD.580000132"> </a></td><td>Meta-analysis using Dersimonian-Laird method</td><td/><td/><td/><td/><td/><td/><td>STATO: STATO_0000429: DerSimonian-Laird estimator is a data item computed to estimate heterogeneity parameter (estimate of between-study variance) in a random effect model for meta analysis. The estimator is used in simple noniterative procedure for characterizing the distribution of treatment effects in a series of studies</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000133<a name="181513-TBD.580000133"> </a></td><td>Meta-analysis using Paule-Mandel method</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000134<a name="181513-TBD.580000134"> </a></td><td>Meta-analysis using Restricted Maximum Likelihood method</td><td/><td/><td/><td/><td/><td/><td>STATO: STATO_0000427: restricted maximum likelihood estimation (REML) = restricted maximum likelihood estimation is a kind of maximum likelihood estimation data transformation which estimates the variance components of random-effects in univariate and multivariate meta-analysis. 
in contrast to 'maximum likelihood estimation', reml can produce unbiased estimates of variance and covariance parameters.</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000135<a name="181513-TBD.580000135"> </a></td><td>Meta-analysis using Maximum Likelihood method</td><td/><td/><td/><td/><td/><td/><td>STATO: STATO_0000428: maximum likelihood estimation = "maximum likelihood estimation (MLE) is a method of estimating the parameters of a statistical model, given observations. MLE attempts to find the parameter values that maximize the likelihood function, given the observations.
The method of maximum likelihood is based on the likelihood function, $\mathcal{L}(\theta; x)$. We are given a statistical model, i.e. a family of distributions $\{f(\cdot; \theta) \mid \theta \in \Theta\}$, where $\theta$ denotes the (possibly multi-dimensional) parameter for the model. The method of maximum likelihood finds the values of the model parameter, $\theta$, that maximize the likelihood function, $\mathcal{L}(\theta; x)$."</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000136<a name="181513-TBD.580000136"> </a></td><td>Meta-analysis using Empirical Bayes method</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000137<a name="181513-TBD.580000137"> </a></td><td>Meta-analysis using Hunter-Schmidt method</td><td/><td/><td/><td/><td/><td/><td>STATO: STATO_0000426: Hunter-Schmidt estimator = Hunter-Schmidt estimator is a data item computed to estimate heterogeneity parameter (estimate of between-study variance) in a random effect model for meta analysis.</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000430<a name="181513-STATO.580000430"> </a></td><td>Meta-analysis using Hartung-Knapp-Sidik-Jonkman method</td><td/><td/><td/><td/><td/><td/><td>STATO: a random effect meta analysis procedure defined by Hartung and Knapp and by Sidik and Jonkman which performs better than DerSimonian and Laird approach, especially when there is heterogeneity and the number of studies in the meta-analysis is small.
also STATO_0000425 Sidik-Jonkman estimator = Sidik-Jonkman estimator is a data item computed to estimate heterogeneity parameter (estimate of between-study variance) in a random effect model for meta analysis.</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000139<a name="181513-TBD.580000139"> </a></td><td>Meta-analysis using modified Knapp-Hartung method</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000140<a name="181513-TBD.580000140"> </a></td><td>Meta-analysis using Hedges method</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000124<a name="181513-TBD.580000124"> </a></td><td>Statistical hypothesis test</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000141<a name="181513-TBD.580000141"> </a></td><td>Between group comparison statistical test</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000146<a name="181513-TBD.580000146"> </a></td><td>ANOVA</td><td/><td/><td/><td/><td/><td/><td>STATO: uses OBI_0200201: ANOVA or analysis of variance is a data transformation in which a statistical test of whether the means of several groups are all equal.</td><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> TBD:0000150<a name="181513-TBD.580000150"> </a></td><td>Multivariate ANOVA</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> STATO:0000048<a name="181513-STATO.580000048"> </a></td><td>Multiway ANOVA</td><td>child term ?? 3-way ANOVA</td><td/><td/><td/><td/><td/><td>STATO: Multi-way anova is an analysis of variance where the difference groups being compared are associated to the factor levels of more than 2 independent variables. The null hypothesis is an absence of difference between the means calculated for each of the groups. The test assumes normality and equivariance of the data.</td><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> STATO:0000044<a name="181513-STATO.580000044"> </a></td><td>One-way ANOVA</td><td/><td/><td/><td/><td/><td/><td>STATO: one-way ANOVA (one factor ANOVA) = one-way anova is an analysis of variance where the different groups being compared are associated with the factor levels of only one independent variable. The null hypothesis is an absence of difference between the means calculated for each of the groups. The test assumes normality and equivariance of the data.</td><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> TBD:0000153<a name="181513-TBD.580000153"> </a></td><td>Repeated measure ANOVA</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> STATO:0000045<a name="181513-STATO.580000045"> </a></td><td>Two-way ANOVA</td><td>child terms ?? 2-way ANOVA without replication ?? 2-way ANOVA with replication</td><td/><td/><td/><td/><td/><td>STATO: two-way ANOVA (two factor ANOVA) = two-way anova is an analysis of variance where the different groups being compared are associated the factor levels of exatly 2 independent variables. The null hypothesis is an absence of difference between the means calculated for each of the groups. 
The test assumes normality and equivariance of the data.</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000147<a name="181513-TBD.580000147"> </a></td><td>Non-parametric test</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> STATO:0000094<a name="181513-STATO.580000094"> </a></td><td>Kruskal Wallis test</td><td/><td/><td/><td/><td/><td/><td>STATO: Kruskal Wallis test (rank-sum test for the comparison of multiple (more than 2) samples.; H test) = "The Kruskal–Wallis test is a null hypothesis statistical testing objective which allows multiple (n>=2) groups (or conditions or treatments) to be compared, without making the assumption that values are normally distributed. The Kruskal–Wallis test is the non-parametric equivalent of the independent samples ANOVA.
The Kruskal–Wallis test is most commonly used when there is one nominal variable and one measurement variable, and the measurement variable does not meet the normality assumption of an anova."</td><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> TBD:0000156<a name="181513-TBD.580000156"> </a></td><td>Log rank test</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> STATO:0000076<a name="181513-STATO.580000076"> </a></td><td>Mann-Whitney U-test</td><td/><td/><td/><td/><td/><td/><td>STATO: "The Mann-Whitney U-test is a null hypothesis statistical testing procedure which allows two groups (or conditions or treatments) to be compared without making the assumption that values are normally distributed.
The Mann-Whitney test is the non-parametric equivalent of the t-test for independent samples"</td><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> STATO:0000433<a name="181513-STATO.580000433"> </a></td><td>McNemar test</td><td/><td/><td/><td/><td/><td/><td>STATO: McNemar test (McNemar's Chi-squared Test for Count Data; test of the marginal homogeneity of a contingency table; within-subjects chi-squared test) = "McNemar's test is a statistical test used on paired nominal data. It is applied to 2 × 2 contingency tables with a dichotomous trait, with matched pairs of subjects, to determine whether the row and column marginal frequencies are equal (that is, whether there is ""marginal homogeneity""). It is named after Quinn McNemar, who introduced it in 1947.
An application of the test in genetics is the transmission disequilibrium test for detecting linkage disequilibrium"</td><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> TBD:0000159<a name="181513-TBD.580000159"> </a></td><td>Sign test</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> TBD:0000160<a name="181513-TBD.580000160"> </a></td><td>Friedman test</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000148<a name="181513-TBD.580000148"> </a></td><td>Two sample t-test</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> STATO:0000303<a name="181513-STATO.580000303"> </a></td><td>Two sample t-test with equal variance</td><td/><td/><td/><td/><td/><td/><td>STATO: two sample t-test with equal variance (t-test for independent means assuming equal variance; two sample t-test) = two sample t-test is a null hypothesis statistical test which is used to reject or accept the hypothesis of absence of difference between the means over 2 randomly sampled populations.
It uses a t-distribution for the test and assumes that the variables in the population are normally distributed and with equal variances.</td><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> STATO:0000304<a name="181513-STATO.580000304"> </a></td><td>Two sample t-test with unequal variance</td><td/><td/><td/><td/><td/><td/><td>STATO: two sample t-test with unequal variance (t-test for independent means assuming unequal variance; Welsh t-test) = Welch t-test is a two sample t-test used when the variances of the 2 populations/samples are thought to be unequal (homoskedasticity hypothesis not verified). In this version of the two-sample t-test, the denominator used to form the t-statistics, does not rely on a 'pooled variance' estimate.</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000052<a name="181513-STATO.580000052"> </a></td><td>Z test for between group comparison</td><td/><td/><td/><td/><td/><td/><td>STATO: Z-test is a statistical test which evaluate the null hypothesis that the means of 2 populations are equal and returns a p-value.</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:ANCOVA<a name="181513-TBD.58ANCOVA"> </a></td><td>ANCOVA</td><td>analysis of covariance (ANCOVA)</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000142<a name="181513-TBD.580000142"> </a></td><td>Chi square test</td><td/><td/><td/><td/><td/><td/><td>STATO: from OBI_0200200: The chi-square test is a data transformation with the objective of statistical hypothesis testing, in which the sampling distribution of the test statistic is a chi-square distribution when the null hypothesis is true, or any in which this is asymptotically true, meaning that the sampling distribution (if the null hypothesis is true) can be made to approximate a chi-square distribution as closely as desired by making the sample size large enough.</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000163<a name="181513-TBD.580000163"> </a></td><td>Chi square test for homogeneity</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000074<a name="181513-STATO.580000074"> </a></td><td>Mantel-Haenszel method</td><td/><td/><td/><td/><td/><td/><td>STATO: Cochran-Mantel-Haenzel test for repeated tests of independence (CHM test; Mantel–Haenszel test) = "Cochran-Mantel-Haenzel test for repeated tests of independence is a statitiscal test which allows the comparison of two groups on a dichotomous/categorical response. It is used when the effect of the explanatory variable on the response variable is influenced by covariates that can be controlled. It is often used in observational studies where random assignment of subjects to different treatments cannot be controlled, but influencing covariates can.
The null hypothesis is that the two nominal variables that are tested within each repetition are independent of each other. So there are 3 variables to consider: two categorical variables to be tested for independence of each other, and the third variable identifies the repeats."</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000165<a name="181513-TBD.580000165"> </a></td><td>Pearson’s Chi square test of goodness of fit</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000166<a name="181513-TBD.580000166"> </a></td><td>Pearson’s Chi square test of goodness of independence between categorical variables</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>5</td><td style="white-space:nowrap"> TBD:0000167<a name="181513-TBD.580000167"> </a></td><td>Yate’s corrected Chi-Squared test</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000143<a name="181513-TBD.580000143"> </a></td><td>Single-sample reference comparison statistical test</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000302<a name="181513-STATO.580000302"> </a></td><td>One sample t-test</td><td/><td/><td/><td/><td/><td/><td>STATO: "one sample t-test is a kind of Student's t-test which evaluates if a given sample can be reasonably assumed to be taken from the population.
The test compares the sample statistic (m) to the population parameter (M).
The one sample t-test is the small sample analog of the z test, which is suitable for large samples."</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:0000169<a name="181513-TBD.580000169"> </a></td><td>Z test for single-sample</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000144<a name="181513-TBD.580000144"> </a></td><td>Test of association between categorical variables</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000148<a name="181513-STATO.580000148"> </a></td><td>Cochran-Armitage test for trend</td><td/><td/><td/><td/><td/><td/><td>STATO: "The Cochran-Armitage test (CATT) s a statistical test used in categorical data analysis when the aim is to assess for the presence of an association between a dichotomous variable (variable with two categories) and a polychotomous variable (a variable with k categories).
The two-level variable represents the response, and the other represents an explanatory variable with ordered levels. The null hypothesis is the hypothesis of no trend, which means that the binomial proportion is the same for all levels of the explanatory variable
For example, doses of a treatment can be ordered as 'low', 'medium', and 'high', and we may suspect that the treatment benefit cannot become smaller as the dose increases. The trend test is often used as a genotype-based test for case-control genetic association studies."</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000073<a name="181513-STATO.580000073"> </a></td><td>Fisher’s exact test</td><td/><td/><td/><td/><td/><td/><td>STATO: Fisher's exact test is a statistical test used to determine if there are nonrandom associations between two categorical variables.</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000145<a name="181513-TBD.580000145"> </a></td><td>Within subject comparison statistical test</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000095<a name="181513-STATO.580000095"> </a></td><td>Paired t-test</td><td/><td/><td/><td/><td/><td/><td>STATO: paired t-test (t-test for dependent means) = paired t-test is a statistical test which is specifically designed to analysis differences between paired observations in the case of studies realizing repeated measures design with only 2 repeated measurements per subject (before and after treatment for example)</td><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> STATO:0000092<a name="181513-STATO.580000092"> </a></td><td>Wilcoxon signed rank test</td><td/><td/><td/><td/><td/><td/><td>STATO: "The Wilcoxon signed rank test is a statistical test which tests the null hypothesis that the median difference between pairs of observations is zero. This is the non-parametric analogue to the paired t-test, and should be used if the distribution of differences between pairs may be non-normally distributed.
The procedure involves a ranking, hence the name. The absolute value of the differences between observations are ranked from smallest to largest, with the smallest difference getting a rank of 1, then next larger difference getting a rank of 2, etc. Ties are given average ranks. The ranks of all differences in one direction are summed, and the ranks of all differences in the other direction are summed. The smaller of these two sums is the test statistic, W (sometimes symbolized Ts). Unlike most test statistics, smaller values of W are less likely under the null hypothesis."</td><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000Log<a name="181513-TBD.580000Log"> </a></td><td>Logarithm</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>1</td><td style="white-space:nowrap">TBD:model-characteristics<a name="181513-TBD.58model-characteristics"> </a></td><td>Statistical Model Characteristic</td><td>An aspect, attribute, or feature of a statistical model.</td><td>A statistical model is defined as a set of mathematical relationships that express assumptions related to the generation of the observed data and that sets constraints for the analysis of the data.</td><td>Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal</td><td>2023-06-05 vote 6-0 by Cauê Monaco, Eric Harvey, Paul Whaley, Jesus Lopez-Alcalde, Sunu Alice Cherian, Harold Lehmann</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000118<a name="181513-TBD.580000118"> </a></td><td>Statistical model goal</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000119<a name="181513-TBD.580000119"> </a></td><td>Adjustment for clustering</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000120<a name="181513-TBD.580000120"> </a></td><td>Adjustment for covariates</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:ITTA<a name="181513-TBD.58ITTA"> </a></td><td>Intention-to-treat analysis</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:PPA<a name="181513-TBD.58PPA"> </a></td><td>Per-protocol analysis</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:sample-size<a name="181513-TBD.58sample-size"> </a></td><td>Sample size estimation</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:net-effect-analysis<a name="181513-TBD.58net-effect-analysis"> </a></td><td>Net effect analysis</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:OutcomeSetNetEffect<a name="181513-TBD.58OutcomeSetNetEffect"> </a></td><td>Set of outcomes (for a net effect analysis)</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>4</td><td style="white-space:nowrap"> TBD:net-effect-contribution-analysis<a name="181513-TBD.58net-effect-contribution-analysis"> </a></td><td>Net effect contribution analysis</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:statistical-model-assumption<a name="181513-TBD.58statistical-model-assumption"> </a></td><td>Statistical model 
assumption</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:assumption001<a name="181513-TBD.58assumption001"> </a></td><td>Data distribution assumption of normal distribution</td><td>Assumption that the observed data in each comparison group follows a normal distribution.</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:assumption002<a name="181513-TBD.58assumption002"> </a></td><td>Data distribution assumption of equal standard deviations</td><td>Assumption that the observed data across comparison groups have the same standard deviation.</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:assumption003<a name="181513-TBD.58assumption003"> </a></td><td>Data distribution assumption of asymptotic approximation</td><td>Assumption that there is sufficient data across the distribution to permit using an approximation that is [asymptotic].</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:statistical-model-assumption-assessment<a name="181513-TBD.58statistical-model-assumption-assessment"> </a></td><td>Statistical model assumption assessment</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:assumption-assessment-001<a name="181513-TBD.58assumption-assessment-001"> </a></td><td>all the expected counts in the cells of the contingency table meet or exceed a threshold</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:unclassified-terms<a name="181513-TBD.58unclassified-terms"> </a></td><td>Statistical model characteristic not yet classified</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:participant-inclusion-criteria-for-analysis<a name="181513-TBD.58participant-inclusion-criteria-for-analysis"> </a></td><td>participant inclusion criteria for analysis</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:data-inclusion-criteria-for-analysis<a name="181513-TBD.58data-inclusion-criteria-for-analysis"> </a></td><td>data inclusion criteria for analysis</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:handling-of-missing-endpoint-data<a name="181513-TBD.58handling-of-missing-endpoint-data"> </a></td><td>handling of missing endpoint data</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:single-imputation-by-LOCF<a name="181513-TBD.58single-imputation-by-LOCF"> </a></td><td>single imputation by last-observation-carried-forward (LOCF)</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:participant-inclusion-criteria-for-secondary-analysis<a name="181513-TBD.58participant-inclusion-criteria-for-secondary-analysis"> </a></td><td>participant inclusion criteria for secondary analysis</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:data-inclusion-criteria-for-secondary-analysis<a name="181513-TBD.58data-inclusion-criteria-for-secondary-analysis"> </a></td><td>data inclusion criteria for secondary 
analysis</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:statistical-software-package<a name="181513-TBD.58statistical-software-package"> </a></td><td>statistical software package</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:sample-size-per-group<a name="181513-TBD.58sample-size-per-group"> </a></td><td>sample size per group</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:prospective-sample-permutation-testing<a name="181513-TBD.58prospective-sample-permutation-testing"> </a></td><td>prospective sample permutation testing</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:number-of-permutations-sampled<a name="181513-TBD.58number-of-permutations-sampled"> </a></td><td>number of permutations sampled</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:primary-analytic-method<a name="181513-TBD.58primary-analytic-method"> </a></td><td>primary analytic method</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:identify-source-of-interaction<a name="181513-TBD.58identify-source-of-interaction"> </a></td><td>identify source(s) of significant interaction</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:rank-based-analytic-method<a name="181513-TBD.58rank-based-analytic-method"> </a></td><td>rank-based analytic method</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>1</td><td style="white-space:nowrap">TBD:model-component<a name="181513-TBD.58model-component"> </a></td><td>Statistical Model Component</td><td>A part of a statistical model.</td><td>A statistical model is defined as a set of mathematical relationships that express assumptions related to the generation of the observed data and that sets constraints for the analysis of the data.
Statistical model components include graphical structures (e.g. directed acyclic graph), equations (e.g. regression model form), components of equations (e.g. covariate term), and distributional assumptions (e.g. regression error distribution).</td><td>Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel, Harold Lehmann</td><td>2023-06-05 vote 6-0 by Cauê Monaco, Eric Harvey, Paul Whaley, Jesus Lopez-Alcalde, Sunu Alice Cherian, Harold Lehmann</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000088<a name="181513-TBD.580000088"> </a></td><td>Covariate term</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> STATO:0000469<a name="181513-STATO.580000469"> </a></td><td>Interaction term</td><td/><td/><td/><td/><td/><td/><td>STATO: model interaction effect term = a model interaction effect term is a model term which accounts for variation explained by the combined effects of the factor levels of more than one (usually 2) independent variables.</td><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000106<a name="181513-TBD.580000106"> </a></td><td>Regression Model Form</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000107<a name="181513-TBD.580000107"> </a></td><td>Log Linear Regression</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000108<a name="181513-TBD.580000108"> </a></td><td>Polynomial Regression</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000109<a name="181513-TBD.580000109"> </a></td><td>Cox Proportional Hazards</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>1</td><td style="white-space:nowrap">TBD:PDA<a name="181513-TBD.58PDA"> </a></td><td>probability distribution attribute</td><td>An aspect, characteristic, or feature of a probability distribution.</td><td>A probability distribution is represented by a combination of probability distribution attributes.</td><td>Brian S. Alper, Harold Lehmann, Muhammad Afzal</td><td>2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann,: Eric Harvey, Mario Tristan</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000110<a name="181513-TBD.580000110"> </a></td><td>probability distribution class</td><td>A probability distribution attribute that communicates how the likelihood of a specified outcome is calculated.</td><td>The probability distribution class defines the assumed model. Parametric probability distribution classes are determined by parameters.</td><td>Harold Lehmann, Kenneth Wilkins, Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel</td><td>2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000111<a name="181513-TBD.580000111"> </a></td><td>normal distribution</td><td>A probability distribution class in which instances are unimodal, symmetric, and defined by two parameters, mean and standard deviation.</td><td>Normal distribution is commonly used to approximate the sampling distribution of quantities estimated from samples.
Variance is the square of standard deviation. Variance is sometimes used instead of standard deviation as a parameter for defining a normal distribution.
Standard normal distribution is a special case of normal distribution with a mean = 0, variance = 1, and kurtosis = 3.
All normal distributions have skewness = 0.</td><td>Philippe Rocca-Serra, Ken Wilkins, Joanne Dehnbostel, Khalid Shahin, Brian S. Alper, Harold Lehmann</td><td>2023-08-07 vote 5-0 by Joanne Dehnbostel, Mario Tristan, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann</td><td/><td/><td>Term IRI: http://purl.obolibrary.org/obo/STATO_0000227
Definition: A normal distribution is a continuous probability distribution described by a probability distribution function described here: http://mathworld.wolfram.com/NormalDistribution.html</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000438<a name="181513-STATO.580000438"> </a></td><td>log normal distribution</td><td>A probability distribution class in which the logarithm transformed values of a variable follow a normal distribution. Instances of the log normal distribution class are unimodal and skewed. Variables can only be non-negative real values.</td><td>Log normal distribution is commonly used to approximate the distribution of times and costs.
The mean of a log normal distribution is the geometric mean of the log transformed values.
Log transformed means the natural log of values replace those values.
Normal distribution is defined as a probability distribution class in which instances are unimodal, symmetric, and defined by two parameters, mean and standard deviation.</td><td>Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Brian S. Alper, Khalid Shahin</td><td>2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan</td><td>2023-08-07 vote 4-1 by Joanne Dehnbostel, Mario Tristan, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann</td><td>2023-08-07 comment: (tweak to the definition):
A probability distribution class in which the logarithm transformed values of a variable follow a normal distribution. Instances of the log normal distribution class are unimodal and skewed.
STATO_0000438</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000160<a name="181513-STATO.580000160"> </a></td><td>exponential distribution</td><td>A probability distribution class defined by a single parameter, rate. Instances of the exponential distribution class are unimodal and skewed. Variables can only be non-negative real values.</td><td>Exponential distribution is commonly used to represent the distribution of independent events occurring at the same rate over time.
The mean and standard deviation of an exponential distribution are each the reciprocal of the rate.</td><td>Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Brian S. Alper, Khalid Shahin</td><td>2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan</td><td>2023-08-07 vote 4-1 by Joanne Dehnbostel, Mario Tristan, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann</td><td>2023-08-07 comment: (tweak to the definition):
A probability distribution class defined by a single parameter, rate and commonly used to represent the distribution of independent events occurring at the same rate over time. Instances of the exponential distribution class are unimodal, skewed,
STATO_0000160</td><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000149<a name="181513-STATO.580000149"> </a></td><td>binomial distribution</td><td>A probability distribution class defined by two parameters: the number of independent trials, n, and the probability of success, p. Variables can only be dichotomous values.</td><td>Binomial distribution is commonly used to approximate the probability of a dichotomous state (presence/absence, success/failure, true/false).
The mean of a binomial distribution is the number of independent trials, n, multiplied by the probability of success, p. n * p
The variance of a binomial distribution is the number of independent trials, n, multiplied by the probability of success, p, multiplied by the probability of failure, 1-p. n * p * q where q = 1 - p</td><td>Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal, Brian S. Alper</td><td>2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan</td><td/><td/><td>STATO: binomial logistic regression for analysis of dichotomous dependent variable = binomial logistic regression model is a model which attempts to explain data distribution associated with *dichotomous* response/dependent variable in terms of values assumed by the independent variable uses a function of predictor/independent variable(s): the function used in this instance of regression modeling is logistic function.
also STATO_0000276: binomial distribution = The binomial distribution is a discrete probability distribution which describes the probability of k successes in n draws with replacement from a finite population of size N.
The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N.
The binomial distribution gives the discrete probability distribution of obtaining exactly n successes out of N Bernoulli trials (where the result of each Bernoulli trial is true with probability p and false with probability q=1-p )
notation: B(n,p)
The mean is N*p
The variance is N*p*q</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000109<a name="181513-STATO.580000109"> </a></td><td>multinomial distribution</td><td>A probability distribution class defined by multiple parameters: the number of independent trials, n, the number of categories, k, and k-1 probabilities of success. Variables can only be polychotomous values.</td><td>Multinomial distribution is commonly used to approximate the probability of a categorical outcome across a discrete number of mutually exclusive possible categories. A classic example is rolling a six-sided die.
For *n* independent trials, the expected (mean) number of times category *i* will appear is *n* multiplied by the probability of success, *p_i*. *n* * *p_i*
The variance of that expectation is *n* multiplied by *p_i* multiplied by the probability of failure, 1-*p_i*</td><td>Harold Lehmann, Joanne Dehnbostel, Brian S. Alper</td><td>2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan</td><td/><td/><td>STATO: multinomial logistic regression for analysis of dichotomous dependent variable = multinomial logistic regression model is a model which attempts to explain data distribution associated with *polychotomous* response/dependent variable in terms of values assumed by the independent variable uses a function of predictor/independent variable(s): the function used in this instance of regression modeling is logistic function.
also multinomial distribution (STATO_0000103) = the multinomial distribution is a probability distribution which gives the probability of any particular combination of numbers of successes for various categories defined in the context of n independent trials each of which leads to a success for exactly one of k categories, with each category having a given fixed success probability.</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000051<a name="181513-STATO.580000051"> </a></td><td>Poisson distribution</td><td>A probability distribution class defined by one parameter: a non-negative real number, λ. Random variables following a Poisson distribution can only have non-negative integer values.</td><td>Poisson distribution is commonly used to approximate the number (count) of events occurring within a given time interval or given spatial region.
The expected value of a Poisson-distributed random variable is equal to λ and so is its variance.</td><td>Harold Lehmann, Joanne Dehnbostel, Brian S. Alper, Muhammad Afzal</td><td>2023-11-20 vote 5-0 by Brian S. Alper, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Eric Harvey</td><td>2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan BUT definition changed based on comment</td><td>2023-10-02 comment: The other definitions include something about what is called the "support" (binary, polychotomous). Here, we should say, to be consistent, "Variables can take on only non-negative integral values."</td><td>STATO: STATO_0000051 is Poisson distribution = "Poisson distribution is a probability distribution used to model the number of events occurring within a given time interval. It is defined by a real number (λ) and an integer k representing the number of events and a function.
The expected value of a Poisson-distributed random variable is equal to λ and so is its variance."</td><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> STATO:0000283<a name="181513-STATO.580000283"> </a></td><td>negative binomial distribution</td><td>A probability distribution class for discrete data of the number of successes in a sequence of Bernoulli trials before a specified number (denoted r) of failures occur.</td><td>The negative binomial distribution, also known as the Pascal distribution, gives the probability of r-1 successes and x failures in x+r-1 trials, and success on the (x+r)th trial. Pólya distribution is a variation of negative binomial distribution used for all real numbers, not just non-negative integers.</td><td>Harold Lehmann, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel, Brian S. Alper</td><td>2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan</td><td/><td/><td>STATO: STATO_0000283: negative binomial distribution (Pascal distribution; Pólya distribution) = negative binomial probability distribution is a discrete probability distribution of the number of successes in a sequence of Bernoulli trials before a specified (non-random) number of failures (denoted r) occur. The negative binomial distribution, also known as the Pascal distribution or Pólya distribution, gives the probability of r-1 successes and x failures in x+r-1 trials, and success on the (x+r)th trial.</td><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:mu<a name="181513-TBD.58mu"> </a></td><td>distribution mean</td><td>A probability distribution attribute that represents the expected value of a variable that has that distribution.</td><td>For a normal distribution, the distribution parameter mean (also called μ or mu) coincides with the mean of the distribution.</td><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal</td><td>2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan</td><td>2023-06-12 vote 2-2 by Sunu Alice Cherian, Harold Lehmann, Paola Rosati, Eric Harvey</td><td>2023-06-12 comments: mu represents population mean. It is a measure of central tendency that represents the average value of a variable within an entire population.
To avoid any ambiguity, rather than mu I would use the alternative term μ.
As reported by Wikipedia:
In Ancient Greek, the name of the letter was written μῦ and pronounced [mŷː], but in Modern Greek, the letter is spelled μι and pronounced [mi]. In polytonic orthography, it is written with an acute accent: μί.</td><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000051<a name="181513-TBD.580000051"> </a></td><td>distribution standard deviation</td><td>A probability distribution attribute that is the square root of the distribution variance.</td><td>A distribution variance is defined as a probability distribution attribute that is the expected value of the square of the difference of the value of a variable that has that distribution from its expected value.
For a normal distribution, the distribution parameter standard deviation (also called σ or sigma) coincides with the standard deviation of the distribution. Standard deviation is defined as a measure of dispersion that represents the average of the distances from the mean of the dataset to each data point in the dataset.</td><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins</td><td>2023-10-16 vote 5-0 by Harold Lehmann, Eric Harvey,: Muhammad Afzal, Louis Leff, Jesus Lopez-Alcalde</td><td>2023-06-12 vote 3-1 by Sunu Alice Cherian, Harold Lehmann, Paola Rosati, Eric Harvey</td><td>2023-06-12 comment: sigma represents population standard deviation, It is a measure of the dispersion or spread of data points within an entire population.</td><td/><td/><td/><td/><td>Measure of Dispersion</td><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:model-parameter<a name="181513-TBD.58model-parameter"> </a></td><td>probability distribution parameter</td><td>A member of a set of quantities that unambiguously defines a probability distribution function.</td><td>Parameters serve different roles in defining distributions. Location parameters define the position along the range of possible values. Shape and scale parameters define the dispersion around the expected value.
When the probability distribution parameters have values, the set of values defines a particular probability distribution function.
When a statistic applies to a specific set of data, the specific set of data is called a sample and the statistic is called the sample statistic. Likewise, when a probability distribution parameter applies to the group from which a sample may be derived, the group is called a population and the probability distribution parameter is called a population parameter.</td><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel</td><td>2023-11-20 vote 5-0 by Brian S. Alper, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Eric Harvey</td><td>2023-06-12 vote 5-0 by Brian S. Alper, Sunu Alice Cherian, Harold Lehmann, Paola Rosati, Eric Harvey BUT the term then changed in committee to grapple with sub-terms</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:mean-normal<a name="181513-TBD.58mean-normal"> </a></td><td>Mean as normal-distribution parameter</td><td>A probability distribution parameter for a normal distribution that provides the location of the distribution.</td><td>This parameter is generally denoted as μ or mu.</td><td>Harold Lehmann, Brian S. Alper, Kenneth Wilkins</td><td>2023-10-16 vote 5-0 by Harold Lehmann, Eric Harvey,: Muhammad Afzal, Louis Leff, Jesus Lopez-Alcalde</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:variance-normal<a name="181513-TBD.58variance-normal"> </a></td><td>Variance as normal-distribution parameter</td><td>A probability distribution parameter for a normal distribution that provides the dispersion of the distribution.</td><td>This parameter is generally denoted as σ^2 or sigma-squared.</td><td>Harold Lehmann, Brian S. Alper, Kenneth Wilkins</td><td>2023-10-16 vote 5-0 by Harold Lehmann, Eric Harvey,: Muhammad Afzal, Louis Leff, Jesus Lopez-Alcalde</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td style="white-space:nowrap"> TBD:0000053<a name="181513-TBD.580000053"> </a></td><td>distribution variance</td><td>A probability distribution attribute that is the expected value of the square of the difference of the value of a variable that has that distribution from its expected value.</td><td>For a normal distribution, the distribution parameter variance (also called σ^2 or sigma-squared) coincides with the variance of the distribution.</td><td>Brian S. Alper, Harold Lehmann, Kenneth Wilkins</td><td>2023-10-16 vote 5-0 by Harold Lehmann, Eric Harvey,: Muhammad Afzal, Louis Leff, Jesus Lopez-Alcalde</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>3</td><td style="white-space:nowrap"> TBD:0000056<a name="181513-TBD.580000056"> </a></td><td>variance of the sampling distribution</td><td>A distribution variance in which the distribution is a sampling distribution of a given statistic.</td><td>Distribution variance is defined as a probability distribution attribute that is the expected value of the square of the difference of the value of a variable that has that distribution from its expected value.
A sampling distribution is a distribution of values for the given statistic derived from a set of random independent samples from the same population. The samples may be theoretical or actual.</td><td>Brian S. Alper, Kenneth Wilkins, Harold Lehmann</td><td>2023-11-20 vote 5-0 by Brian S. Alper, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Eric Harvey</td><td/><td/><td/><td/><td/><td/><td>Measure of Dispersion</td><td/></tr></table></div>
</text>
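<!--
  Worked illustration of formulas stated in the narrative above, using hypothetical numbers
  that are not part of this code system: a binomial distribution with n = 100 independent
  trials and probability of success p = 0.2 has mean n * p = 100 * 0.2 = 20 and variance
  n * p * q = 100 * 0.2 * 0.8 = 16 (where q = 1 - p); a Poisson distribution with λ = 4 has
  expected value 4 and variance 4; an exponential distribution with rate 0.5 has mean and
  standard deviation 1 / 0.5 = 2. For the Wilcoxon signed rank test described above,
  hypothetical paired differences of +2, -1, +3, +4 have absolute-value ranks 2, 1, 3, 4;
  the rank sums are 9 (positive direction) and 1 (negative direction), so W = 1.
-->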
<extension
url="http://hl7.org/fhir/StructureDefinition/structuredefinition-wg">
<valueCode value="cds"/>
</extension>
<url value="https://fevir.net/resources/CodeSystem/181513"/>
<identifier>
<type>
<coding>
<system value="http://terminology.hl7.org/CodeSystem/v2-0203"/>
<code value="ACSN"/>
<display value="Accession ID"/>
</coding>
<text value="FEvIR Object Identifier"/>
</type>
<system value="https://fevir.net"/>
<value value="181513"/>
<assigner>
<display value="Computable Publishing LLC"/>
</assigner>
</identifier>
<version value="1.0.0-ballot"/>
<name value="Sevco_example_for_ebmonfhir_ig"/>
<title
value="Scientific Evidence Code System (SEVCO) -- EXAMPLE VERSION for EBMonFHIR Implementation Guide"/>
<status value="active"/>
<experimental value="true"/>
<date value="2022-11-04T13:57:51.647Z"/>
<publisher value="HL7 International / Clinical Decision Support"/>
<contact>
<name value="HL7 International / Clinical Decision Support"/>
<telecom>
<system value="url"/>
<value value="http://www.hl7.org/Special/committees/dss"/>
</telecom>
</contact>
<description
value="This code system was copied as a snapshot from the version being used for active development of the Scientific Evidence Code System (SEVCO). This code system is not yet released for expected use and may not be stable. This resource may be used for supporting the examples in the EBMonFHIR Implementation Guide, and published versions of the code system (when ready) will be published as separate resources with stable identifiers."/>
<jurisdiction>
<coding>
<system value="http://unstats.un.org/unsd/methods/m49/m49.htm"/>
<code value="001"/>
<display value="World"/>
</coding>
</jurisdiction>
<purpose
value="Support of examples in the EBMonFHIR Implementation Guide, prior to final publication of the EBMonFHIR Implementation Guide"/>
<copyright
value="https://creativecommons.org/licenses/by-sa/4.0/ copyright holder is Scientific Knowledge Accelerator Foundation"/>
<author>
<name value="Brian S. Alper"/>
</author>
<author>
<name value="Joanne Dehnbostel"/>
</author>
<author>
<name value="Khalid Shahin"/>
</author>
<relatedArtifact>
<type value="cite-as"/>
<citation
value="Scientific Evidence Code System (SEVCO) -- EXAMPLE VERSION for EBMonFHIR Implementation Guide [CodeSystem]. Contributors: Brian S. Alper, Joanne Dehnbostel, Khalid Shahin [Authors/Creators]. In: Fast Evidence Interoperability Resources (FEvIR) Platform, FOI 181513. Revised 2022-11-04. Available at: https://fevir.net/resources/CodeSystem/181513. Computable resource at: https://fevir.net/resources/CodeSystem/181513."/>
</relatedArtifact>
<caseSensitive value="false"/>
<hierarchyMeaning value="is-a"/>
<compositional value="false"/>
<versionNeeded value="true"/>
<content value="complete"/>
<property>
<code value="comment"/>
<description value="Comment for application"/>
<type value="string"/>
</property>
<property>
<code value="editors"/>
<description value="Term/Definition Editors"/>
<type value="string"/>
</property>
<property>
<code value="approval"/>
<description value="Expert Working Group Agreement"/>
<type value="string"/>
</property>
<property>
<code value="negative-vote"/>
<description value="Expert Working Group Disagreement"/>
<type value="string"/>
</property>
<property>
<code value="expert-comments"/>
<description value="Expert Working Group Comments"/>
<type value="string"/>
</property>
<property>
<code value="external-definitions"/>
<description value="Externally Mapped Definitions"/>
<type value="string"/>
</property>
<property>
<code value="open-for-voting"/>
<description value="Open for Voting"/>
<type value="dateTime"/>
</property>
<property>
<code value="change-for-vote"/>
<description value="Proposed Change for Future Vote"/>
<type value="string"/>
</property>
<property>
<code value="multiple-parents"/>
<description value="Has more than one parent term (IS-A relationship)"/>
<type value="string"/>
</property>
<property>
<code value="statistical-purpose"/>
<description value="Statistical purpose"/>
<type value="string"/>
</property>
<concept>
<code value="SEVCO:01000"/>
<display value="study design"/>
<definition
value="A plan specification for how and what kinds of data will be gathered as part of an investigation which may produce testable explanations, conclusions and predictions or test a hypothesis."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="research design"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="experimental design"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Asiyah Lin, Mario Tristan, Neeraj Ojha"/>
</property>
<property>
<code value="approval"/>
<valueString
value="9/9 as of 4/26/2021: Eric Harvey, Bhagvan Kommadi, KM Saif-Ur-Rahman, Paola Rosati, Jesús López-Alcalde, Tatyana Shamliyan, Sorana D. Bolboacă, Asiyah Lin, Eric Au"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-04-12 Vote 9-2 on "Study design=A plan specification for how and what kinds of data are gathered or used to generate or test a hypothesis", Bhagvan Kommadi, Jesús López-Alcalde, Sorana D. Bolboacă, Tatyana Shamliyan, Asiyah Lin, Philippe Rocca-Serra, Eric Au, Alejandro Piscoya, Harold Lehmann, KM Saif-Ur-Rahman, Eric Harvey
2021-04-06 vote 8-1 on "Study Design = A plan specification for how and what kinds of data will be gathered as part of an investigation to generate or test a hypothesis" by Tatyana Shamliyan, Paola Rosati, Mario Tristan, Bhagvan Kommadi, Jesús López-Alcalde, Eric Harvey, KM Saif-Ur-Rahman, Asiyah Lin, Brian S. Alper"/>
</property>
<concept>
<code value="SEVCO:01001"/>
<display value="interventional research"/>
<definition
value="A study design in which an independent variable (an exposure or intervention) is prospectively assigned or modified by the investigator to evaluate a response in the dependent variable (an effect or outcome)."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="interventional study"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="interventional study design"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="interventional primary research"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="international method of research"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="experimental research"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="experimental study"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Mario Tristan, Joanne Dehnbostel, Harold Lehmann, Khalid Shahin, Brian S. Alper"/>
</property>
<property>
<code value="approval"/>
<valueString
value="12/12 as of 5/31/2021: Eric Harvey, Bhagvan Kommadi, Brian Alper, Sebastien Bailly, Alejandro Piscoya, Harold Lehmann, KM Saif-Ur-Rahman, Paola Rosati, Sorana D. Bolboacă, Asiyah Lin, Leo Orozco, Erfan Shamsoddin"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-05-17 vote 6-2 on "Interventional research = In a prospective study, an independent variable is manipulated or assigned by the investigator to evaluate a response or outcome (the dependent variable)." by Eric Harvey, Bhagvan Kommadi, Paola Rosati, KM Saif-Ur-Rahman, Ahmad Sofi-Mahmudi, Jesus Lopez-Alcalde, Sorana D. Bolboacă, Harold Lehmann, 2021-05-24 vote 10-1 on Interventional research="A study design in which an independent variable (an exposure or intervention) is prospectively assigned or modified by the investigator to evaluate a response in the dependent variable (an effect or outcome)." by Alejandro Piscoya, Philippe Rocca-Serra, KM Saif-Ur-Rahman, Eric Harvey, Harold Lehmann, Bhagvan Kommadi, Sorana D. Bolboacă, Jesús López-Alcalde, Paola Rosati, Tatyana Shamliyan, Brian Alper"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="I would avoid the term prospective study, as this term is ambiguous. Suggested change to "A study in whichi the independent variable is prospectively manipulated or assigned by the invesigator…" Manipulate = to control, manipulate or influence suggestion to delete "the dependent variable" which mixes language of analysis vs. design with "response" 5-24-2021 No major disagreement with the definition but uneasy to have 'intervention study' as (unspecified) synonym as doing so convey that a plan (the study design) is the same as the execution of the plan (the study). The same applies to 'Primary research...) I think that we need to clarify the goals: Experiments examine cause-and-effect relationship by measuring outcomes when a particular factor (exposure, intervention, independent variable) is manipulated and controlled during and after experiment (inference). I think that we should clarify the subjects of experiments: consent people or animals"/>
</property>
<property>
<code value="comment"/>
<valueString
value="We acknowledge that interventional study design and interventional study may not be exact synonyms of interventional research, but interventional research could be used to encompass both design and implementation of the design"/>
</property>
<concept>
<code value="SEVCO:01003"/>
<display value="randomized assignment"/>
<definition
value="An interventional study design in which an independent variable (an exposure or intervention) is prospectively assigned or modified by random chance to separate groups."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="interventional research with randomized assignment"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="randomized trial"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="randomized controlled trial"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="RCT"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Mario Tristan, Kenneth Wilkins, Erfan Shamsoddin, Ellen Jepson"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 7/19/2021: Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana d. Bolboaca, Janice Tufte"/>
</property>
<concept>
<code value="SEVCO:01006"/>
<display value="simple randomization"/>
<definition
value="A randomized assignment in which each participant has the same prespecified likelihood of being assigned to a group as all other participants, independent of the assignment of any other participant."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="unrestricted randomization"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Mario Tristan, Kenneth Wilkins, Erfan Shamsoddin, Ellen Jepson"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 7/19/2021: Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana d. Bolboaca, Janice Tufte"/>
</property>
</concept>
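<!--
  Hypothetical illustration of SEVCO:01006 (simple randomization), not part of the code
  system content: each participant is assigned to group A or group B with the same
  prespecified probability (for example 0.5 each), decided independently of every other
  participant's assignment, so group sizes may differ by chance.
-->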
<concept>
<code value="SEVCO:01007"/>
<display value="stratified randomization"/>
<definition
value="A randomized assignment in which participants are stratified into groups based on prognostic variables and then randomized into balanced treatment groups"/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Khalid Shahin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 7/19/2021: Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana d. Bolboaca, Janice Tufte"/>
</property>
</concept>
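<!--
  Hypothetical illustration of SEVCO:01007 (stratified randomization), not part of the code
  system content: participants are first grouped by a prognostic variable (for example,
  baseline disease severity), and a separate randomization is then carried out within each
  stratum so that the treatment groups stay balanced on that variable.
-->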
<concept>
<code value="SEVCO:01008"/>
<display value="block randomization"/>
<definition
value="A randomized assignment in which a pre-specified number of subjects is assigned to a block containing the same pre-specified ratio of group assignments in random order."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Khalid Shahin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="7/7 as of 7/26/2021: Mario Tristan, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Leo Orozco, Janice Tufte"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-07-19 vote 7-1 on "A randomized assignment in which a pre-specified number of subjects is assigned to a block containing the same pre-specified number of balanced group assignments in random order" by Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana d. Bolboaca, Janice Tufte"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="I think I'm hung up on the word "balanced". Does allocation in block design need to be balanced? Couldn't a block design allocate subjects to treatment arms in a 2:1, or other "unbalanced" ratio?"/>
</property>
</concept>
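<!--
  Hypothetical illustration of SEVCO:01008 (block randomization), not part of the code
  system content: with a block size of 4 and a prespecified 1:1 ratio of assignments to
  groups A and B, each block contains the assignments A, A, B, B in random order (for
  example ABBA or BABA), so the prespecified ratio is restored at the end of every block.
-->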
<concept>
<code value="SEVCO:01009"/>
<display value="adaptive randomization"/>
<definition
value="A randomized assignment in which a participant’s group assignment probability is adjusted based on any factor such that the likelihood of assignment is not the same for all participants."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Mario Tristan, Kenneth Wilkins, Erfan Shamsoddin, Ellen Jepson"/>
</property>
<property>
<code value="approval"/>
<valueString
value="9/9 as of 8/9/2021: Erfan Shamsoddin, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Joanne Dehnbostel, Alejandro Piscoya"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-07-19 vote 7-1 on "A randomized assignment in which a participant’s group assignment probability is adjusted based on any factor such that the likelihood of assignment is not the same for all participants." by Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana d. Bolboaca, Janice Tufte, 2021-07-26 vote 6-1 by Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Janice Tufte, Mario Tristan"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="I deem this kind of adaptation could determine conflict of interests or a new kind of bias. I disagree with adding an adaptive randomization as a new term 7-26-21 comment: Again, why and for what you wish to maintain this term? I think the term adaptive randomization risks a severe selection bias. In ethical terms, I deem there is no justification to proceed with such a methodology in clinical trials."/>
</property>
</concept>
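<!--
  Hypothetical illustration of SEVCO:01009 (adaptive randomization), not part of the code
  system content: an early participant may be assigned to group A with probability 0.5,
  while a later participant's probability of assignment to group A is adjusted (for example
  to 0.6) based on some factor, so the likelihood of assignment is not the same for all
  participants.
-->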
</concept>
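<!--
  A minimal hypothetical example, not part of this CodeSystem, of how a coding could
  reference one of the randomized assignment codes defined above, assuming the canonical
  URL of this code system is used as the coding system:
  <coding>
    <system value="https://fevir.net/resources/CodeSystem/181513"/>
    <code value="SEVCO:01003"/>
    <display value="randomized assignment"/>
  </coding>
-->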
<concept>
<code value="SEVCO:01005"/>
<display value="non-randomized assignment"/>
<definition
value="An interventional study design in which an independent variable (an exposure or intervention) is prospectively assigned or modified by methods other than random chance to separate groups."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="interventional research with non-randomized assignment"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Michael Panzer, Janice Tufte, Erfan Shamsoddin, Ellen Jepson, Khalid Shahin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="9/9 as of 8/9/2021: Erfan Shamsoddin, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Joanne Dehnbostel, Alejandro Piscoya"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-07-19 vote 6-2 on "An interventional study design in which an independent variable (an exposure or intervention) is prospectively assigned or modified by methods other than random chance to separate groups." by Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana D. Bolboacă, Janice Tufte, 2021-07-26 vote 6-1 by Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Janice Tufte, Mario Tristan"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="In this case, if the patients choose which is the arm they want to be in it would beok to insert this term. I presumetherefore that if the choice is made bythe researchers they offer a clearjustification for it in the protocol As written, this category would include all quasi-randomized designs. If this is the intent, fine. If this was not the intent, perhaps we could change "..randomized.." to "..randomized or quasi-randomized.." 7-26-21 comment: We usually have started the definitions by saying "A xxx assignment that..." (see previous ones in this page). That is, we define the assigment. However, for"Non-Randomized Assignment" we start by saying "An interventional study design..." I propose to describe the "assignment" (avoid starting by defining the study design itself)"/>
</property>
<concept>
<code value="SEVCO:01004"/>
<display value="quasi-randomized assignment"/>
<definition
value="An interventional study design with a method of allocation that is not limited to random chance but is intended to produce similar baseline groups for experimentation."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Michael Panzer, Janice Tufte, Erfan Shamsoddin, Ellen Jepson, Khalid Shahin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="7/7 as of 7/26/2021: Mario Tristan, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Leo Orozco, Janice Tufte"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-07-19 vote 6-2 on "An interventional study design with a method of allocation that is not limited to random chance but is intended to produce similar baseline groups for experimentation." by Alejandro Piscoya, Harold Lehmann, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Sorana d. Bolboaca, Janice Tufte"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="Which is the difference between thisquasi-randomized assignment and theadaptive randomization? It is unclearwhy we should insert these two terms inthe glossary I would specify in the definition thatquasi-randomisation is a non-randommethod of allocation"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Quasi-random methods of allocation include allocation by alternate order of entry, date of birth, day of the week, month of the year, or medical record number"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:01029"/>
<display value="clinical trial"/>
<definition
value="Interventional research in which one or more healthcare-related actions (i.e., a diagnostic, prognostic, therapeutic, preventive or screening method or intervention) is evaluated for effects on health-related biomedical or behavioral processes and/or outcomes."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte, Harold Lehmann, Paul Whaley"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-11-30 vote 7-1 by Alejandro Piscoya, Mario Tristan, Robin Ann Yurk, Muhammad Afzal, Paola Rosati, Paul Whaley, Janice Tufte, Jesus Lopez-Alcalde
2021-12-07 vote 4-1 by Mario Tristan, Robin Ann Yurk, Janice Tufte, Joanne Dehnbostel, CP Ooi"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-11-30 comments: (We should include the classical definition for Phase lV Field Trials of Health Interventions: A Toolbox. 3rd edition.
Smith PG, Morrow RH, Ross DA, editors.
Oxford (UK): OUP Oxford; 2015 Jun 1. https://www.ncbi.nlm.nih.gov/books/NBK305508/), Instead of "methods" I would use the term "interventions". I also miss the term "prognostic" as they are not diagnostic or screening. Besides, it would be important to highlight that the clinical trial is done in humans
2021-12-07 comment: A clinical trial is a type of research that studies new tests and treatments and evaluates their effects on human health outcomes. The medical intervention can be drugs, cells and other biological products, surgical procedures, radiological procedures, devices, behavioural treatments and preventive care."/>
</property>
<property>
<code value="comment"/>
<valueString
value="Some definitions for "clinical trial" include human subject research for effects on human health outcomes. The term "human" was not added to this definition because a study design with animal subjects for effects on animal health outcomes to inform veterinary care would be considered a clinical trial. However, a study design with animal subjects to inform human health outcomes would not be considered a clinical trial."/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="NIH Clinical Trial Definition = A research study[1] in which one or more human subjects[2] are prospectively assigned[3] to one or more interventions[4] (which may include placebo or other control) to evaluate the effects of those interventions on health-related biomedical or behavioral outcomes.[5]
[4]An intervention is defined as a manipulation of the subject or subject’s environment for the purpose of modifying one or more health-related biomedical or behavioral processes and/or endpoints. Examples include: drugs/small molecules/compounds; biologics; devices; procedures (e.g., surgical techniques); delivery systems (e.g., telemedicine, face-to-face interviews); strategies to change health-related behavior (e.g., diet, cognitive therapy, exercise, development of new habits); treatment strategies; prevention strategies; and, diagnostic strategies.
from https://grants.nih.gov/grants/guide/notice-files/NOT-OD-15-015.html"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2021-12-14 vote 6-0 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Robin Ann Yurk, Janice Tufte, Paul Whaley, Brian S. Alper"/>
</property>
<concept>
<code value="SEVCO:01041"/>
<display value="pragmatic clinical trial"/>
<definition
value="A clinical trial conducted under conditions of routine clinical practice."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="pragmatic trial"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="NCIt: Pragmatic Trial = A study designed to test the effectiveness of an intervention in a broad routine clinical practice. Term used to describe a clinical study designed to examine the benefits of a product under real world conditions.
UMLS: Works about randomized clinical trials that compare interventions in clinical settings and which look at a range of effectiveness outcomes and impacts.
CDISC Glossary: pragmatic trial = Term used to describe a clinical study designed to examine the benefits of a product under real world conditions.
EDDA: pragmatic clinical trial = Randomized clinical trials that compare interventions in clinical settings and which look at a range of effectiveness outcomes and impacts. [MeSH_2015]
SCO: pragmatic trial = A study designed to test the effectiveness of an intervention in a broad routine clinical practice.
"Pragmatic trials are designed to evaluate the effectiveness of interventions in real-life routine practice conditions, whereas explanatory trials aim to test whether an intervention works under optimal situations. The pragmatic trial, on the other hand, is designed to test interventions in the full spectrum of everyday clinical settings in order to maximize applicability and generalizability. The research question under investigation is whether an intervention actually works in real life." (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3181997/)"/>
</property>
<property>
<code value="comment"/>
<valueString
value=""Pragmatic trials are designed to evaluate the effectiveness of interventions in real-life routine practice conditions, whereas explanatory trials aim to test whether an intervention works under optimal situations. The pragmatic trial, on the other hand, is designed to test interventions in the full spectrum of everyday clinical settings in order to maximize applicability and generalizability. The research question under investigation is whether an intervention actually works in real life." (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3181997/)"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Paul Whaley, Janice Tufte, Kenneth Wilkins, Harold Lehmann"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-11-30 vote 5-1 by Alejandro Piscoya, Robin Ann Yurk, Muhammad Afzal, Paul Whaley, Janice Tufte, Jesus Lopez-Alcalde"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-11-30 comments: (The definition in the current form is fine however the last part may be thought like;
where "everyday" means day-to-day clinical practice wherein the conditions are not modified for the conduct of the research.), Suggested alternative: = A clinical trial designed to test the effects of an intervention under everyday conditions, where "everyday conditions" means clinical conditions are not modified for the conduct of the research"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2021-12-07 vote 5-0 by Mario Tristan, Robin Ann Yurk, Janice Tufte, CP Ooi, Joanne Dehnbostel"/>
</property>
</concept>
<concept>
<code value="SEVCO:01038"/>
<display value="expanded access study"/>
<definition
value="A clinical trial that provides a means for obtaining an experimental drug or device for patients who are not adequately treated by existing therapy, who do not meet the eligibility criteria for enrollment, or who are otherwise unable to participate in another clinical study."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="compassionate use trial"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="compassionate use study"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="expanded access trial"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="from CTO:
Expanded Access Study
Studies that provide a means for obtaining an experimental drug or device for patients who are not adequately treated by existing therapy, who do not meet the eligibility criteria for enrollment, or who are otherwise unable to participate in another clinical study. Expanded Access studies include individual-patient IND, treatment IND, compassionate use, emergency use or continued access. An investigational drug product (including biological product) available through expanded access for patients who do not qualify for enrollment in a clinical trial. Expanded Access includes all expanded access types under section 561 of the Federal Food, Drug, and Cosmetic Act: (1) for individual patients, including emergency use; (2) for intermediate-size patient populations; and (3) under a treatment IND or treatment protocol.
from NCIt:
Expanded Access Study
Studies that provide a means for obtaining an experimental drug or device for patients who are not adequately treated by existing therapy, who do not meet the eligibility criteria for enrollment, or who are otherwise unable to participate in another clinical study. Expanded Access studies include individual-patient IND, treatment IND, compassionate use, emergency use or continued access.
also Compassionate Treatment (compassionate use trial, expanded access trial, pre-approval access)
Providing experimental therapies to very sick individuals even though they don't meet the criteria for inclusion in a trial.
A way to provide an investigational therapy to a patient who is not eligible to receive that therapy in a clinical trial, but who has a serious or life-threatening illness for which other treatments are not available. Compassionate use trials allow patients to receive promising but not yet fully studied or approved cancer therapies when no other treatment option exists.
A potential pathway for a patient with an immediately life-threatening condition or serious disease or condition to gain access to an investigational medical product (drug, biologic, or medical device) for treatment outside of clinical trials when no comparable or satisfactory alternative therapy options are available. NOTE: The intent is treatment, as opposed to research. Individual, Intermediate-size, and Widespread Use Expanded Access, also Emergency IND, are all programs administered under FDA guidelines. Additionally, the US Right-to-Try Act, which is independent of FDA, expands access. [FDA Expanded Access: Information for Physicians]
from EDDA:
compassionate use trial (expanded access trial, compassionate treatment)
Providing experimental therapies to very sick individuals even though they don't meet the criteria for inclusion in a trial. [NCI 2014_12E]
Providing an investigational therapy to a patient who is not eligible to receive that therapy in a clinical trial, but who has a serious or life-threatening illness for which other treatments are not available. Compassionate use trials allow patients to receive promising but not yet fully studied or approved therapies when no other treatment option exists. Also called expanded access trial. [MeSH 2014_2014_02_10]
shared as a comment: Expanded access is the use of an investigational new drug, biologics, and medical devices used to diagnose, monitor, or treat patients with serious diseases or conditions for which there are no comparable or satisfactory therapy options available outside of clinical trials. (USA FDA)"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Expanded Access studies include individual-patient investigational new drug (IND), treatment IND, compassionate use, emergency use or continued access."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Harold Lehmann"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-15 vote 10-0 by Paul Whaley, Andrew Beck, Brian S. Alper, Paola Rosati, Robin Ann Yurk, Janice Tufte, Jesus Lopez-Alcalde, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-15 comment: Define IND acronym under comment for application under individual patient IND, treatment IND"/>
</property>
</concept>
<concept>
<code value="SEVCO:01030"/>
<display value="phase 1 trial"/>
<definition
value="A clinical trial to gather initial evidence in humans to support further investigation of an intervention."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="phase I trial"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="phase 1 study"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="phase I clinical trial"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="https://www.ecfr.gov/current/title-21/chapter-I/subchapter-D/part-312/subpart-B/section-312.21 is the US Code of Federal Regulations Title 21 (Food and Drugs) Chapter I Subchapter D Part 312 Subpart B § 312.21 and includes:
§ 312.21 Phases of an investigation.
An IND may be submitted for one or more phases of an investigation. The clinical investigation of a previously untested drug is generally divided into three phases. Although in general the phases are conducted sequentially, they may overlap. These three phases of an investigation are a[sic] follows:
....
Phase 1.
(1) Phase 1 includes the initial introduction of an investigational new drug into humans. Phase 1 studies are typically closely monitored and may be conducted in patients or normal volunteer subjects. These studies are designed to determine the metabolism and pharmacologic actions of the drug in humans, the side effects associated with increasing doses, and, if possible, to gain early evidence on effectiveness. During Phase 1, sufficient information about the drug's pharmacokinetics and pharmacological effects should be obtained to permit the design of well-controlled, scientifically valid, Phase 2 studies. The total number of subjects and patients included in Phase 1 studies varies with the drug, but is generally in the range of 20 to 80.
(2) Phase 1 studies also include studies of drug metabolism, structure-activity relationships, and mechanism of action in humans, as well as studies in which investigational drugs are used as research tools to explore biological phenomena or disease processes.
from CTO:
Phase I trial (phase I study, early-stage clinical trial, phase I protocol, phase I clinical trial, trial phase 1)
A clinical research protocol designed to test a new biomedical intervention in a small group of people for the first time. A Phase I trial can be to establish the toxicity of a new treatment with escalating intensity of the treatment administered and/or to determine the side effects of a new treatment for a particular indication in subjects. Includes initial studies to determine the metabolism and pharmacologic actions of drugs in humans, the side effects associated with increasing doses, and to gain early evidence of effectiveness; may include healthy participants and/or patients.
The initial introduction of an investigational new drug into humans. Phase 1 studies are typically closely monitored and may be conducted in patients or normal volunteer subjects. NOTE: These studies are designed to determine the metabolism and pharmacologic actions of the drug in humans, the side effects associated with increasing doses, and, if possible, to gain early evidence on effectiveness. During Phase 1, sufficient information about the drug's pharmacokinetics and pharmacological effects should be obtained to permit the design of well-controlled, scientifically valid, Phase 2 studies. The total number of subjects and patients included in Phase I studies varies with the drug, but is generally in the range of 20 to 80. Phase 1 studies also include studies of drug metabolism, structure-activity relationships, and mechanism of action in humans, as well as studies in which investigational drugs are used as research tools to explore biological phenomena or disease processes. [After FDA CDER Handbook, ICH E8] (CDISC glossary)
The first step in testing a new treatment in humans. These studies test the best way to give a new treatment (for example, by mouth, intravenous infusion, or injection) and the best dose. The dose is usually increased a little at a time in order to find the highest dose that does not cause harmful side effects. Because little is known about the possible risks and benefits of the treatments being tested, phase I trials usually include only a small number of patients who have not been helped by other treatments.
The initial introduction of an investigational new drug into humans. Phase 1 studies are typically closely monitored and may be conducted in patients or normal volunteer subjects. NOTE: These studies are designed to determine the metabolism and pharmacologic actions of the drug in humans, the side effects associated with increasing doses, and, if possible, to gain early evidence on effectiveness. During Phase 1, sufficient information about the drug's pharmacokinetics and pharmacological effects should be obtained to permit the design of well-controlled, scientifically valid Phase 2 studies. The total number of subjects and patients included in Phase 1 studies varies with the drug, but is generally in the range of 20 to 80. Phase 1 studies also include studies of drug metabolism, structure-activity relationships, and mechanism of action in humans, as well as studies in which investigational drugs are used as research tools to explore biological phenomena or disease processes. [after FDA CDER handbook, ICH E8]
from SCO:
phase I trial not independently defined
from NCIt:
same as CTO
from OCRe:
A Phase 1 trial includes initial studies to determine the metabolism and pharmacologic actions of drugs in humans, the side effects associated with increasing doses, and to gain early evidence of effectiveness; may include healthy participants and/or patients.
from EDDA:
A clinical research protocol designed to test a new biomedical intervention in a small group of people for the first time. A Phase I trial can be to establish the toxicity of a new treatment with escalating intensity of the treatment administered and/or to determine the side effects of a new treatment for a particular indication in subjects. [NCI 2014_12E]
Studies performed to evaluate the safety of diagnostic, therapeutic, or prophylactic drugs, devices, or techniques in healthy subjects and to determine the safe dosage range (if appropriate). These tests also are used to determine pharmacologic and pharmacokinetic properties (toxicity, metabolism, absorption, elimination, and preferred route of administration). They involve a small number of persons and usually last about 1 year. This concept includes phase I studies conducted both in the U.S. and in other countries. [MeSH 2014_2014_02_10]
from INTERNATIONAL COUNCIL FOR HARMONISATION OF TECHNICAL REQUIREMENTS FOR PHARMACEUTICALS FOR HUMAN USE (ICH HARMONISED GUIDELINE) GENERAL CONSIDERATIONS FOR CLINICAL STUDIES E8(R1) https://database.ich.org/sites/default/files/E8-R1_Guideline_Step4_2021_1006.pdf Adopted on 6 October 2021
4.3.1 Human Pharmacology
The protection of study participants should always be the first priority when designing early
clinical studies, especially for the initial administration of an investigational product to humans
(usually referred to as phase 1). These studies may be conducted in healthy volunteer
participants or in a selected population of patients who have the condition or the disease,
depending on drug properties and the objectives of the development programme.
These studies typically address one or a combination of the following aspects:
4.3.1.1 Estimation of Initial Safety and Tolerability
The initial and subsequent administration of a drug to humans is usually intended to determine
the tolerability of the dose range expected to be evaluated in later clinical studies and to
determine the nature of adverse reactions that can be expected. These studies typically include
both single and multiple dose administration.
4.3.1.2 Pharmacokinetics
Characterisation of a drug's absorption, distribution, metabolism, and excretion continues
throughout the development programme, but the preliminary characterisation is an essential
early goal. Pharmacokinetic studies are particularly important to assess the clearance of the
drug and to anticipate possible accumulation of parent drug or metabolites, interactions with
metabolic enzymes and transporters, and potential drug-drug interactions. Some
pharmacokinetic studies are commonly conducted in later phases to answer more specialised
questions. For orally administered drugs, the study of food effects on bioavailability is
important to inform the dosing instructions in relation to food. Obtaining pharmacokinetic
information in sub-populations with potentially different metabolism or excretion, such as
patients with renal or hepatic impairment, geriatric patients, children, and ethnic subgroups
should be considered (ICH E4 Dose-Response Studies, E7 Clinical Trials in Geriatric
Population, E11, and E5, respectively).
4.3.1.3 Pharmacodynamics & Early Measurement of Drug Activity
Depending on the drug and the endpoint of interest, pharmacodynamic studies and studies
relating drug levels to response (PK/PD studies) may be conducted in healthy volunteer
participants or in patients with the condition or disease. If there is an appropriate measure,
pharmacodynamic data can provide early estimates of activity and efficacy and may guide the
dosage and dose regimen in later studies.
from March 1998 https://www.ema.europa.eu/en/documents/scientific-guideline/ich-e-8-general-considerations-clinical-trials-step-5_en.pdf
3.1.3.1 Phase I (Most typical kind of study: Human Pharmacology)
Phase I starts with the initial administration of an investigational new drug into humans.
Although human pharmacology studies are typically identified with Phase I, they may also be
indicated at other points in the development sequence. Studies in this phase of development
usually have non-therapeutic objectives and may be conducted in healthy volunteer subjects
or certain types of patients, e.g. patients with mild hypertension. Drugs with significant
potential toxicity, e.g. cytotoxic drugs, are usually studied in patients. Studies in this phase
can be open, baseline controlled or may use randomisation and blinding, to improve the
validity of observations.
Studies conducted in Phase I typically involve one or a combination of the following aspects:
a) Estimation of Initial Safety and Tolerability
The initial and subsequent administration of an investigational new drug into humans is
usually intended to determine the tolerability of the dose range expected to be needed for later
clinical studies and to determine the nature of adverse reactions that can be expected. These
studies typically include both single and multiple dose administration.
b) Pharmacokinetics
Characterisation of a drug's absorption, distribution, metabolism, and excretion continues
throughout the development plan. Their preliminary characterisation is an important goal of
Phase I. Pharmacokinetics may be assessed via separate studies or as a part of efficacy, safety
and tolerance studies. Pharmacokinetic studies are particularly important to assess the
clearance of the drug and to anticipate possible accumulation of parent drug or metabolites
and potential drug-drug interactions. Some pharmacokinetic studies are commonly conducted
in later phases to answer more specialised questions. For many orally administered drugs,
especially modified release products, the study of food effects on bioavailability is important.
Obtaining pharmacokinetic information in sub-populations such as patients with impaired
elimination (renal or hepatic failure), the elderly, children, women and ethnic subgroups
should be considered. Drug-drug interaction studies are important for many drugs; these are
generally performed in phases beyond Phase I but studies in animals and in vitro studies of
metabolism and potential interactions may lead to doing such studies earlier.
c) Assessment of Pharmacodynamics
Depending on the drug and the endpoint studied, pharmacodynamic studies and studies
relating drug blood levels to response (PK/PD studies) may be conducted in healthy volunteer
subjects or in patients with the target disease. In patients, if there is an appropriate measure,
pharmacodynamic data can provide early estimates of activity and potential efficacy and may
guide the dosage and dose regimen in later studies.
d) Early Measurement of Drug Activity
Preliminary studies of activity or potential therapeutic benefit may be conducted in Phase I as
a secondary objective. Such studies are generally performed in later phases but may be
appropriate when drug activity is readily measurable with a short duration of drug exposure in
patients at this early stage."/>
</property>
<property>
<code value="comment"/>
<valueString
value="Phase 1 trials are often the first step in testing a new treatment in humans and may include safety assessment, measurement of metabolism and pharmacologic actions of a drug in humans, or the side effects associated with increasing doses. Phase 1 studies often include between 20 and 80 subjects, and often involve healthy subjects."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Paul Whaley, Harold Lehmann, Joanne Dehnbostel"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-01-04 vote 5-2 by Robin Ann Yurk, Harold Lehmann, janice tufte, Paola Rosati, C P Ooi, Paul Whaley, Joanne Dehnbostel"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-01-04 comments: Perhaps adding the following may improve the clarity "It may include testing the best way to give a new treatment (for example, by mouth, infusion into a vein, or injection)".
"providing the initial investigation" sounds a bit vague compared to the other trial phase definitions. Also, can a trial really "provide an investigation"? Maybe suggest changing to "in which xxx is investigated", where "xxx" is a tighter definition of what "the initial" is referring to.
2022-01-11 comment: I would suggest not adding how many subjects are typically involved, maybe state that these usually have very small sample sizes. Unfortunately, sample sizes have decreased over time. https://bmjopen.bmj.com/content/11/12/e053377"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-11 vote 7-0 by Harold Lehmann, Jesus Lopez-Alcalde, Mario Tristan, janice tufte, Paul Whaley, Andrew Beck, Robin Ann Yurk"/>
</property>
<concept>
<code value="SEVCO:01031"/>
<display value="exploratory investigational new drug study"/>
<definition
value="A clinical trial that is conducted early in phase 1, involves very limited human exposure, and has no therapeutic or diagnostic intent (e.g., screening studies, microdose studies)."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="phase 0 study"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="pre-clinical trial"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="exploratory IND study"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="the original source at https://www.fda.gov/regulatory-information/search-fda-guidance-documents/exploratory-ind-studies
FDA GUIDANCE DOCUMENT
Exploratory IND Studies
Guidance for Industry, Investigators, and Reviewers
JANUARY 2006
investigational new drug (IND)
For the purposes of this guidance the phrase exploratory IND study is intended to describe a clinical trial that
is conducted early in phase 1,
involves very limited human exposure, and
has no therapeutic or diagnostic intent (e.g., screening studies, microdose studies).
Such exploratory IND studies are conducted prior to the traditional dose escalation, safety, and tolerance studies that ordinarily initiate a clinical drug development program. The duration of dosing in an exploratory IND study is expected to be limited (e.g., 7 days).
from CTO:
Early Phase I clinical trial (Phase 0 trial, Phase 0 clinical trial, Pre-Clinical Trial)
A clinical trial that is at an Early Phase i or Phase 0, which is designed to use an investigational agent that is available only in very limited quantities and which has never previously given to humans or for which there is extremely limited human experience. Phase 0 clinical trials are intended to enable researchers to understand the path of the drug in the body and its efficacy. Adverse event reporting in Phase 0 trials is expedited. [def-source: NCI] Exploratory trials, involving very limited human exposure, with no therapeutic or diagnostic intent (e.g., screening studies, microdose studies). (Formerly listed as "Phase 0") A clinical trial that is at Early Phase 1 or Phase 0
from SCO:
not included
from NCIt:
Preferred Name: Exploratory Investigational New Drug Study
Definition: A type of clinical trial that involves low dosage and short duration of drug exposure for a limited number of study participants with the intent of gathering preliminary data on the mechanism of action, pharmacodynamics, pharmacokinetics, or bioavailability of promising therapeutic candidate agents in human subjects.
CDISC-GLOSS Definition: A clinical study that is conducted early in Phase 1; involves very limited human exposure and has no therapeutic or diagnostic intent (e.g., screening studies, microdose studies) [FDA Guidance for industry, investigators, and Reviewers: exploratory IND studies, January 2006] See also Phase 0.
First-in-Human Study = A type of phase 1 clinical trial in which the test product is administered to human beings for the first time.
Phase 0 Trial = Pre-Clinical Trial = A clinical trial that uses an investigational agent that is available only in very limited quantities and which has never previously given to humans or for which there is extremely limited human experience. Phase 0 clinical trials are intended to enable researchers to understand the path of the drug in the body and its efficacy. Adverse event reporting in Phase 0 trials is expedited.
First-in-human trials, in a small number of subjects, that are conducted before Phase 1 trials and are intended to assess new candidate therapeutic and imaging agents. The study agent is administered at a low dose for a limited time, and there is no therapeutic or diagnostic intent. NOTE: FDA Guidance for Industry, Investigators, and Reviewers: Exploratory IND Studies, January 2006 classifies such studies as Phase 1. NOTE: A Phase 0 study might not include any drug delivery but may be an exploration of human material from a study (e.g., tissue samples or biomarker determinations). [Improving the Quality of Cancer Clinical Trials: Workshop summary-Proceedings of the National Cancer Policy Forum Workshop, improving the Quality of Cancer Clinical Trials (Washington, DC, Oct 2007)] (CDISC glossary)
from OCRe:
Phase 0 = A Phase 0 trial is an exploratory trial involving very limited human exposure, with no therapeutic or diagnostic intent (e.g., screening study, microdose study).
from EDDA:
pre-clinical trial = phase 0 trial = A clinical trial that uses an investigational agent that is available only in very limited quantities and which has never previously given to humans or for which there is extremely limited human experience. Phase 0 clinical trials are intended to enable researchers to understand the path of the drug in the body and its efficacy. Adverse event reporting in Phase 0 trials is expedited. [NCI 2014_12E]"/>
</property>
<property>
<code value="comment"/>
<valueString
value="According to the original FDA guidance, such exploratory IND studies are conducted prior to the traditional dose escalation, safety, and tolerance studies that ordinarily initiate a clinical drug development program. The duration of dosing in an exploratory IND study is expected to be limited (e.g., 7 days).
A type of clinical trial that involves low dosage and short duration of drug exposure for a limited number of study participants with the intent of gathering preliminary data on the mechanism of action, pharmacodynamics, pharmacokinetics, or bioavailability of promising therapeutic candidate agents in human subjects.
Less official terms (phase 0 trial, pre-clinical trial) have been used to describe a clinical trial that uses an investigational agent that has never previously been given to humans or for which there is extremely limited human experience. A Phase 0 study might not include any drug delivery but may be an exploration of human material from a study (e.g., tissue samples or biomarker determinations)."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Paul Whaley, Janice Tufte, Olga Vovk"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-01-25 vote 8-1 by Harold Lehmann, Alejandro Piscoya, Janice Tufte, Paola Rosati, Robin Ann Yurk, Philippe Rocca-Serra, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-01-25 comments: I had to read a couple of times and check the hierarchy to appreciate this definition, but I agree.
For a later version of SEVCO, we probably should put citations ("original FDA guidance") into the documentation.
concern over the use of the term 'phase 1' in the definition and the presence of an alternative term 'phase 0 study'."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-01 vote 5-0 by Paola Rosati, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:01032"/>
<display value="phase 1/phase 2 trial"/>
<definition
value="A clinical trial with a component meeting the definition of phase 1 trial and a component meeting the definition of phase 2 trial."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="phase I/II trial"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="phase 1-2 trial"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="phase 1/2 trial"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="trial phase 1-2"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="trial phase 1/2"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="from CTO:
phase I/II trial (trial phase 1/2, trial phase 1-2)
Trials that are a combination of phases 1 and 2. A clinical research protocol designed to study the safety, dosage levels and response to new treatment. Phase I/II trials combine a Phase I and a Phase II trial of the same treatment into a single protocol.
A class of clinical study that combines elements characteristic of traditional Phase I and Phase II trials. See also Phase I, Phase II.
A trial to study the safety, dosage levels, and response to a new treatment.
from SCO:
phase I/II trial (trial phase 1/2, trial phase 1-2)
A clinical research protocol designed to study the safety, dosage levels and response to new treatment. Phase I/II trials combine a Phase I and a Phase II trial of the same treatment into a single protocol.
from NCIt:
same as CTO
from OCRe:
not included
from EDDA:
phase I/II trial (trial phase 1/2, trial phase 1-2)
A class of clinical study that combines elements characteristic of traditional Phase I and Phase II trials. See also Phase I, Phase II. [NCIT_14.08d] [Contributing_Source_CDISC]
A clinical research protocol designed to study the safety, dosage levels and response to new treatment. Phase I/II trials combine a Phase I and a Phase II trial of the same treatment into a single protocol. [NCIT_14.08d]
A trial to study the safety, dosage levels, and response to a new treatment. [NCIT_14.08d]"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Paul Whaley, Joanne Dehnbostel, Janice Tufte"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-01-18 vote 3-2 by Harold Lehmann, Paul Harris, Robin Ann Yurk, raradhikaag@gmail.com, Paul Whaley"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A phase 1 trial is a clinical trial to gather initial evidence in humans to support further investigation of an intervention.
A phase 2 trial is a clinical trial to gather evidence of effectiveness and safety for an intervention in patients with the disease or condition under study, but not intended to provide an adequate basis for regulatory approval for clinical use."/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-01-18 comments: Does it matter that the Term has Arabic numerals and the Definition, Roman?
Consider adding a comment for application to improve definition interpretation with individual term definitions for Phase I/Phase 2 trial
Not sure I quite understand what the "separate sets of design parameters with" phrase means here?"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-25 vote 9-0 by Harold Lehmann, Alejandro Piscoya, Janice Tufte, Paola Rosati, Robin Ann Yurk, Philippe Rocca-Serra, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde"/>
</property>
</concept>
<concept>
<code value="SEVCO:01033"/>
<display value="phase 2 trial"/>
<definition
value="A clinical trial to gather evidence of effectiveness and safety for an intervention in patients with the disease or condition under study, but not intended to provide an adequate basis for regulatory approval for clinical use."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="phase II trial"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="phase 2 study"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="exploratory clinical study"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="phase II clinical trial"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="https://www.ecfr.gov/current/title-21/chapter-I/subchapter-D/part-312/subpart-B/section-312.21 is the US Code of Federal Regulations Title 21 (Food and Drugs) Chapter I Subchapter D Part 312 Subpart B § 312.21 and includes:
§ 312.21 Phases of an investigation.
An IND may be submitted for one or more phases of an investigation. The clinical investigation of a previously untested drug is generally divided into three phases. Although in general the phases are conducted sequentially, they may overlap. These three phases of an investigation are a[sic] follows:
....
Phase 2. Phase 2 includes the controlled clinical studies conducted to evaluate the effectiveness of the drug for a particular indication or indications in patients with the disease or condition under study and to determine the common short-term side effects and risks associated with the drug. Phase 2 studies are typically well controlled, closely monitored, and conducted in a relatively small number of patients, usually involving no more than several hundred subjects.
from CTO:
Phase II trial
A clinical research protocol designed to study a biomedical or behavioral intervention in a larger group of people (several hundred), to evaluate the drug's effectiveness for a particular indication in patients with the disease or condition under study, and to determine the common short-term side effects and risks associated with the intervention. Includes controlled clinical studies conducted to evaluate the effectiveness of the drug for a particular indication or indications in participants with the disease or condition under study and to determine the common short-term side effects and risks.
Phase 2. Controlled clinical studies conducted to evaluate the effectiveness of the drug for a particular indication or indications in patients with the disease or condition under study and to determine the common short-term side effects and risks associated with the drug. NOTE: Phase 2 studies are typically well controlled, closely monitored, and conducted in a relatively small number of patients, usually involving no more than several hundred subjects. [After FDA CDER Handbook, ICH E8] (CDISC glossary)
A study to test whether a new treatment has an anticancer effect (for example, whether it shrinks a tumor or improves blood test results) and whether it works against a certain type of cancer.
Controlled clinical studies conducted to evaluate the effectiveness of the drug for a particular indication or indications in patients with the disease or condition under study and to determine the common short-term side effects and risks associated with the drug. NOTE: Phase 2 studies are typically well controlled, closely monitored, and conducted in a relatively small number of patients, usually involving no more than several hundred subjects. [after FDA CDER handbook, ICH E8]
from SCO:
phase II trial not independently defined
from NCIt:
same as CTO
from OCRe:
A Phase 2 trial includes controlled clinical studies conducted to evaluate the effectiveness of the drug for a particular indication or indications in patients with the disease or condition under study and to determine the common short-term side effects and risks.
from EDDA:
A clinical research protocol designed to study a biomedical or behavioral intervention in a larger group of people (several hundred), to evaluate the drug's effectiveness for a particular indication in patients with the disease or condition under study, and to determine the common short-term side effects and risks associated with the intervention. [NCI 2014_12E]
Studies that are usually controlled to assess the effectiveness and dosage (if appropriate) of diagnostic, therapeutic, or prophylactic drugs, devices, or techniques. These studies are performed on several hundred volunteers, including a limited number of patients with the target disease or disorder, and last about two years. This concept includes phase II studies conducted in both the U.S. and in other countries. [MeSH 2014_2014_02_10]
from INTERNATIONAL COUNCIL FOR HARMONISATION OF TECHNICAL REQUIREMENTS FOR PHARMACEUTICALS FOR HUMAN USE (ICH HARMONISED GUIDELINE) GENERAL CONSIDERATIONS FOR CLINICAL STUDIES E8(R1) https://database.ich.org/sites/default/files/E8-R1_Guideline_Step4_2021_1006.pdf Adopted on 6 October 2021
After initial clinical studies provide sufficient information on safety, clinical pharmacology and
dose, exploratory and confirmatory studies (usually referred to as phases 2 and 3, respectively)
are conducted to further evaluate both the safety and efficacy of the drug.
Exploratory studies are designed to investigate safety and efficacy in a selected population of
patients for whom the drug is intended. Additionally, these studies aim to refine the effective
dose(s) and regimen, refine the definition of the targeted population, provide a more robust
safety profile for the drug, and include evaluation of potential study endpoints for subsequent
studies. Exploratory studies may provide information on the identification and determination
of factors that affect the treatment effect and, possibly combined with modelling and
simulation, serve to support the design of later confirmatory studies.
from March 1998 https://www.ema.europa.eu/en/documents/scientific-guideline/ich-e-8-general-considerations-clinical-trials-step-5_en.pdf
3.1.3.2 Phase II (Most typical kind of study: Therapeutic Exploratory)
Phase II is usually considered to start with the initiation of studies in which the primary
objective is to explore therapeutic efficacy in patients.
Initial therapeutic exploratory studies may use a variety of study designs, including
concurrent controls and comparisons with baseline status. Subsequent trials are usually
randomised and concurrently controlled to evaluate the efficacy of the drug and its safety for
a particular therapeutic indication. Studies in Phase II are typically conducted in a group of
patients who are selected by relatively narrow criteria, leading to a relatively homogeneous
population and are closely monitored.
An important goal for this phase is to determine the dose(s) and regimen for Phase III trials.
Early studies in this phase often utilise dose escalation designs (see ICH E4) to give an early
estimate of dose response and later studies may confirm the dose response relationship for the
indication in question by using recognised parallel dose-response designs (could also be
deferred to phase III). Confirmatory dose response studies may be conducted in Phase II or
left for Phase III. Doses used in Phase II are usually but not always less than the highest doses
used in Phase I.
Additional objectives of clinical trials conducted in Phase II may include evaluation of
potential study endpoints, therapeutic regimens (including concomitant medications) and
target populations (e.g. mild versus severe disease) for further study in Phase II or III. These
objectives may be served by exploratory analyses, examining subsets of data and by including
multiple endpoints in trials."/>
</property>
<property>
<code value="comment"/>
<valueString
value="Phase 2 trials are typically controlled clinical studies conducted to evaluate the effectiveness of the intervention for a particular indication and to determine the common short-term side effects and risks associated with the intervention. Phase 2 trials may have a goal of determining the dose(s) or regimen(s) for Phase 3 trials. Phase 2 studies usually include no more than several hundred subjects."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Paul Whaley, Joanne Dehnbostel, Harold Lehmann"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-01-04 vote 5-2 by Robin Ann Yurk, Harold Lehmann, janice tufte, Paola Rosati, C P Ooi, Joanne Dehnbostel, Paul Whaley"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-01-04 comments: The first part of the definition is ok. In the second part, I would suggest to change with "An insufficient evidence for the intervention tested or the desired patients' number failure could occur thus impeding regulatory approval for clinical use"
Comment Suggestion to add to comment for term from extracted from notes-3.1.3.2: An important goal for this phase is to determine the dose(s) and regimen for Phase III trials.
Early studies in this phase often utilize dose escalation designs (see ICH E4) to give an early estimate of dose response and later studies may confirm the dose response relationship for the indication in question by using recognized parallel dose-response designs (could also be deferred to phase III)
Minor change - the phrasing is a little awkward, suggest "gather evidence about the effectiveness and safety of an intervention in patients with the disease or condition under study, but not sufficient...".
I am not sure the comment for application is fully consistent with the definitions (what about safety?).
2022-01-11 comment: I would suggest not adding how many subjects are typically involved, maybe state that these usually have small sample sizes. Unfortunately, sample sizes have decreased over time. https://bmjopen.bmj.com/content/11/12/e053377"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-11 vote 7-0 by Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, janice tufte, Paul Whaley, Andrew Beck, Robin Ann Yurk"/>
</property>
</concept>
<concept>
<code value="SEVCO:01034"/>
<display value="phase 2/phase 3 trial"/>
<definition
value="A clinical trial with a component meeting the definition of phase 2 trial and a component meeting the definition of phase 3 trial."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="phase II/III trial"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="phase 2-3 trial"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="phase 2/3 trial"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="trial phase 2-3"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="trial phase 2/3"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="from CTO:
phase II/III trial (trial phase 2/3, trial phase 2-3)
Trials that are a combination of phases 2 and 3. A type of clinical study that combines elements characteristic of traditional Phase II and Phase III trials.
A trial to study response to a new treatment and the effectiveness of the treatment compared with the standard treatment regimen.
A class of clinical study that combines elements characteristic of traditional Phase II and Phase III trials.
from SCO:
not included
from NCIt:
phase II/III trial (trial phase 2/3, trial phase 2-3)
A type of clinical study that combines elements characteristic of traditional Phase II and Phase III trials.
A trial to study response to a new treatment and the effectiveness of the treatment compared with the standard treatment regimen.
A class of clinical study that combines elements characteristic of traditional Phase II and Phase III trials.
from OCRe:
not included
from EDDA:
phase II/III trial (trial phase 2/3, trial phase 2-3)
A type of clinical study that combines elements characteristic of traditional Phase II and Phase III trials. [NCIT_14.08d]
A class of clinical study that combines elements characteristic of traditional Phase II and Phase III trials. [NCIT_14.08d] [Contributing_Source_CDISC]
A trial to study response to a new treatment and the effectiveness of the treatment compared with the standard treatment regimen. [NCIT_14.08d]
"Designs that combine phase II and III functions (ie, phase II/III designs) have separate sets of design parameters that correspond to their phase II and III components." -- Korn EL et al. Design Issues in Randomized Phase II/III Trials. J Clin Oncol 2012 https://ascopubs.org/doi/full/10.1200/JCO.2011.38.5732. https://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC3295562&blobtype=pdf"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Paul Whaley, Joanne Dehnbostel, Janice Tufte"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-01-18 vote 2-3 by Harold Lehmann, Paul Harris, Robin Ann Yurk, raradhikaag@gmail.com, Paul Whaley
2022-01-25 vote 9-1 by Harold Lehmann, Alejandro Piscoya, Janice Tufte, Paola Rosati, Robin Ann Yurk, Philippe Rocca-Serra, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde, Paul Whaley
2022-02-01 vote 4-1 by Paola Rosati, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-01-18 comments: Phase 2/3 trials determine efficacy of a new biomedical intervention i.e. whether it works as intended in a larger group of study participants, and monitor adverse effects so that the intervention may be used safely.
Consider adding a comment for application to improve definition interpretation with individual term definitions for Phase 2/Phase 3 trial
Not sure I quite understand what the "separate sets of design parameters with" phrase means here?
2022-01-25 comment: As already pointed out, to me these definitions seem incongruent and lack specification of the outcomes used, namely core clinical outcomes relevant for patients. Are phase 2 and phase 3 trials designed to gather evidence of 'effectiveness' and safety or 'efficacy' and monitor adverse effects of a new biomedical intervention? For what outcome? The three sentences proposed in the comment for application of this code seem overlapping the two terms (i.e. is it still efficacy the right term used for trials or is it effectiveness, commonly used for prospective observational studies?). I think it is important to justify why the two terms are used for clinical trial designs.
2022-02-01 comment: To me this definition has no clear meaning. As you are working and struggling so hard to define and clarify the scientific evidence code system, I wish to participate to the meeting to discuss with you this tricky definition. If you agree, please, let me know."/>
</property>
<property>
<code value="comment"/>
<valueString
value="A phase 2 trial is a clinical trial to gather evidence of effectiveness and safety for an intervention in patients with the disease or condition under study, but not intended to provide an adequate basis for regulatory approval for clinical use.
A phase 3 trial is a clinical trial to gather the evidence of effectiveness and safety of an intervention, intended to provide an adequate basis for regulatory approval for clinical use."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-08 vote 7-0 by Paola Rosati, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper, Paul Whaley, Sunu Alice Cherian"/>
</property>
</concept>
<concept>
<code value="SEVCO:01035"/>
<display value="phase 3 trial"/>
<definition
value="A clinical trial to gather the evidence of effectiveness and safety of an intervention, intended to provide an adequate basis for regulatory approval for clinical use."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="phase III trial"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="phase 3 study"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="confirmatory clinical study"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="phase III clinical trial"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="https://www.ecfr.gov/current/title-21/chapter-I/subchapter-D/part-312/subpart-B/section-312.21 is the US Code of Federal Regulations Title 21 (Food and Drugs) Chapter I Subchapter D Part 312 Subpart B § 312.21 and includes:
§ 312.21 Phases of an investigation.
An IND may be submitted for one or more phases of an investigation. The clinical investigation of a previously untested drug is generally divided into three phases. Although in general the phases are conducted sequentially, they may overlap. These three phases of an investigation are a[sic] follows:
....
Phase 3. Phase 3 studies are expanded controlled and uncontrolled trials. They are performed after preliminary evidence suggesting effectiveness of the drug has been obtained, and are intended to gather the additional information about effectiveness and safety that is needed to evaluate the overall benefit-risk relationship of the drug and to provide an adequate basis for physician labeling. Phase 3 studies usually include from several hundred to several thousand subjects.
from CTO:
Phase III trial
Includes trials conducted after preliminary evidence suggesting effectiveness of the drug has been obtained, and are intended to gather additional information to evaluate the overall benefit-risk relationship of the drug. A clinical research protocol designed to investigate the efficacy of the biomedical or behavioral intervention in large groups of human subjects (from several hundred to several thousand), to confirm efficacy, to monitor adverse reactions to the new medication or treatment regimen with respect to long-term use and by comparing the intervention to other standard or experimental interventions as well as to a placebo.
Phase 3. Studies are expanded controlled and uncontrolled trials. They are performed after preliminary evidence suggesting effectiveness of the drug has been obtained, and are intended to gather the additional information about effectiveness and safety that is needed to confirm efficacy and evaluate the overall benefit-risk relationship of the drug and to provide an adequate basis for physician labeling. NOTE: Phase 3 studies usually include from several hundred to several thousand subjects. [After FDA CDER Handbook, ICH E8] (CDISC glossary)
A study to compare the results of people taking a new treatment with the results of people taking the standard treatment (for example, which group has better survival rates or fewer side effects). In most cases, studies move into phase III only after a treatment seems to work in phases I and II. Phase III trials may include hundreds of people.
Studies are expanded controlled and uncontrolled trials. They are performed after preliminary evidence suggesting effectiveness of the drug has been obtained and are intended to gather the additional information about effectiveness and safety that is needed to confirm efficacy and evaluate the overall benefit-risk relationship of the drug and to provide an adequate basis for physician labeling. NOTE: Phase 3 studies usually include from several hundred to several thousand subjects. [after FDA CDER handbook, ICH E8]
from SCO:
A clinical research protocol designed to investigate the efficacy of the biomedical or behavioral intervention in large groups of human subjects (from several hundred to several thousand), to confirm efficacy, to monitor adverse reactions to the new medication or treatment regimen with respect to long-term use and by comparing the intervention to other standard or experimental interventions as well as to a placebo.
from NCIt:
Phase III trial (Phase III Clinical Trial; Phase III Trial; phase 3; Trial Phase 3; PHASE III TRIAL; phase III trial; Phase III Trials; 3; Phase 3 Study; Clinical Trials, Phase III; Phase III Study; Phase III Protocol)
A clinical research protocol designed to investigate the efficacy of the biomedical or behavioral intervention in large groups of human subjects (from several hundred to several thousand), to confirm efficacy, to monitor adverse reactions to the new medication or treatment regimen with respect to long-term use and by comparing the intervention to other standard or experimental interventions as well as to a placebo.
from OCRe:
A Phase 3 trial includes expanded controlled and uncontrolled trials after preliminary evidence suggesting effectiveness of the drug has been obtained, and are intended to gather additional information to evaluate the overall benefit-risk relationship of the drug and provide an adequate basis for physician labeling.
from EDDA:
Comparative studies to verify the effectiveness of diagnostic, therapeutic, or prophylactic drugs, devices, or techniques determined in phase II studies. During these trials, patients are monitored closely by physicians to identify any adverse reactions from long-term use. These studies are performed on groups of patients large enough to identify clinically significant responses and usually last about three years. This concept includes phase III studies conducted in both the U.S. and in other countries. [MeSH 2014_2014_02_10]
A clinical research protocol designed to investigate the efficacy of the biomedical or behavioral intervention in large groups of human subjects (from several hundred to several thousand), to confirm efficacy, to monitor adverse reactions to the new medication or treatment regimen with respect to long-term use and by comparing the intervention to other standard or experimental interventions as well as to a placebo. [NCI 2014_12E]
from INTERNATIONAL COUNCIL FOR HARMONISATION OF TECHNICAL REQUIREMENTS FOR PHARMACEUTICALS FOR HUMAN USE (ICH HARMONISED GUIDELINE) GENERAL CONSIDERATIONS FOR CLINICAL STUDIES E8(R1) https://database.ich.org/sites/default/files/E8-R1_Guideline_Step4_2021_1006.pdf Adopted on 6 October 2021
After initial clinical studies provide sufficient information on safety, clinical pharmacology and
dose, exploratory and confirmatory studies (usually referred to as phases 2 and 3, respectively)
are conducted to further evaluate both the safety and efficacy of the drug.
Confirmatory studies are designed to confirm the preliminary evidence accumulated in earlier clinical studies that a drug is safe and effective for use for the intended indication and recipient population. These studies are often intended to provide an adequate basis for marketing approval, and to support adequate instructions for use of the drug and official product information. They aim to evaluate the drug in participants with or at risk of the condition or disease who represent those who will receive the drug once approved. This may include investigating subgroups of patients with frequently occurring or potentially relevant comorbidities (e.g., cardiovascular disease, diabetes, hepatic and renal impairment) to characterise the safe and effective use of the drug in patients with these conditions.
Confirmatory studies may evaluate the efficacy and safety of more than one dose or the use of the drug in different stages of disease or in combination with one or more other drugs. If the intent is to administer a drug for a long period of time, then studies involving extended exposure to the drug should be conducted (ICH E1 Clinical Safety for Drugs used in Long-Term Treatment). Irrespective of the intended duration of administration, the duration of effect of the drug will also inform the duration of follow-up.
Study endpoints selected for confirmatory studies should be clinically relevant and reflect disease burden or be of adequate surrogacy for predicting disease burden or sequelae.
from March 1998 https://www.ema.europa.eu/en/documents/scientific-guideline/ich-e-8-general-considerations-clinical-trials-step-5_en.pdf
3.1.3.3 Phase III (Most typical kind of study: Therapeutic Confirmatory)
Phase III usually is considered to begin with the initiation of studies in which the primary objective is to demonstrate, or confirm therapeutic benefit.
Studies in Phase III are designed to confirm the preliminary evidence accumulated in Phase II that a drug is safe and effective for use in the intended indication and recipient population. These studies are intended to provide an adequate basis for marketing approval. Studies in Phase III may also further explore the dose-response relationship, or explore the drug's use in wider populations, in different stages of disease, or in combination with another drug. For drugs intended to be administered for long periods, trials involving extended exposure to the drug are ordinarily conducted in Phase III, although they may be started in Phase II (see ICH E1). ICH E1 and ICH E7 describe the overall clinical safety database considerations for chronically administered drugs and drugs used in the elderly. These studies carried out in Phase III complete the information needed to support adequate instructions for use of the drug (official product information)."/>
</property>
<property>
<code value="comment"/>
<valueString
value="Phase 3 trials are typically conducted after preliminary evidence suggests effectiveness and usually have the primary objective to demonstrate or confirm therapeutic benefit compared to placebo or a standard treatment. Phase 3 studies usually include from several hundred to several thousand subjects. Study endpoints for phase 3 trials should be clinically relevant or of adequate surrogacy for predicting clinical effects."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Paul Whaley, Joanne Dehnbostel, Janice Tufte, Kenneth Wilkins, Harold Lehmann"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-12-21 vote 2-2 by Robin Ann Yurk, C P Ooi, Janice Tufte, Paul Whaley
2022-01-04 vote 5-2 by Robin Ann Yurk, Harold Lehmann, Janice Tufte, Paola Rosati, C P Ooi, Joanne Dehnbostel, Paul Whaley
2022-01-11 vote 6-1 by Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, janice tufte, Paul Whaley, Andrew Beck, Robin Ann Yurk"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-12-21 comments: Note: consider adding the following comments from comments from previous reviewers to improve interpretation.
3.1.3.3 “Phase III usually is considered to begin with the initiation of studies in which the primary objective is to demonstrate, or confirm therapeutic benefit.”
EDDA: “Comparative studies to verify the effectiveness of diagnostic, therapeutic, or prophylactic drugs, devices, or techniques determined in phase II studies…. A clinical research protocol designed to investigate the efficacy of the biomedical or behavioral intervention in large groups of human subjects (from several hundred to several thousand), to confirm efficacy, to monitor adverse reactions to the new medication or treatment regimen with respect to long-term use and by comparing the intervention to other standard or experimental interventions as well as to a placebo”
..................
I think the pieces are there but the phrasing is difficult to parse.
2022-01-04 comments: Perhaps adding "compared with a standard treatment" may improve the clarity.
Minor change - the phrasing is a little awkward, suggest "gather evidence about the effectiveness and safety of an intervention that is needed...".
2022-01-11 comments: I would suggest not adding how many subjects are typically involved. Unfortunately, sample sizes have decreased over time. https://bmjopen.bmj.com/content/11/12/e053377
Minor change for consistency with other trial definitions: "A clinical trial to gather evidence of effectiveness and safety of an intervention, that is intended to provide an adequate basis for regulatory approval for clinical use.""/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-18 vote 6-0 by Harold Lehmann, Paul Harris, Robin Ann Yurk, Paola Rosati, raradhikaag@gmail.com, Paul Whaley"/>
</property>
</concept>
<concept>
<code value="SEVCO:01036"/>
<display value="post-marketing study"/>
<definition
value="A clinical trial to gather additional evidence of effectiveness and safety of an intervention for an already approved clinical use."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="phase IV trial"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="phase 4 trial"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="phase 4 study"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="trial phase 4"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="post-approval study"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="from CTO:
Phase IV Trial (Phase IV Study, Phase IV clinical trial, phase 4 study, phase 4 trial, trial phase 4)
Studies of FDA-approved drugs to delineate additional information including the drug's risks, benefits, and optimal use. A randomized, controlled trial that is designed to evaluate the long-term safety and efficacy of a drug for a given indication. Often they are designed to study side effects that may have become apparent after the phase III study was completed.
After a treatment has been approved and is being marketed, it is studied in a phase IV trial to evaluate side effects that were not apparent in the phase III trial. Thousands of people are involved in a phase IV trial.
Post approval studies to delineate additional information about the drug's risks, benefits, and optimal use that may be requested by regulatory authorities in conjunction with marketing approval. NOTE: These studies could include, but would not be limited to, studying different doses or schedules of administration than were used in Phase 2 studies, use of the drug in other patient populations or other stages of the disease, or use of the drug over a longer period of time. [after FDA CDER handbook, ICH E8]
Phase 4. Postmarketing (Phase 4) studies to delineate additional information about the drug's risks, benefits, and optimal use that may be requested by regulatory authorities in conjunction with marketing approval. NOTE: These studies could include, but would not be limited to, studying different doses or schedules of administration than were used in Phase 2 studies, use of the drug in other patient populations or other stages of the disease, or use of the drug over a longer period of time. [After FDA CDER Handbook, ICH E8] (CDISC glossary)
from SCO:
not included
from NCIt:
Phase IV Trial (Phase IV Study, Phase IV clinical trial, phase 4 study, phase 4 trial, trial phase 4)
A randomized, controlled trial that is designed to evaluate the long-term safety and efficacy of a drug for a given indication. Often they are designed to study side effects that may have become apparent after the phase III study was completed.
After a treatment has been approved and is being marketed, it is studied in a phase IV trial to evaluate side effects that were not apparent in the phase III trial. Thousands of people are involved in a phase IV trial.
Post approval studies to delineate additional information about the drug's risks, benefits, and optimal use that may be requested by regulatory authorities in conjunction with marketing approval. NOTE: These studies could include, but would not be limited to, studying different doses or schedules of administration than were used in Phase 2 studies, use of the drug in other patient populations or other stages of the disease, or use of the drug over a longer period of time. [after FDA CDER handbook, ICH E8]
Phase 4. Postmarketing (Phase 4) studies to delineate additional information about the drug's risks, benefits, and optimal use that may be requested by regulatory authorities in conjunction with marketing approval. NOTE: These studies could include, but would not be limited to, studying different doses or schedules of administration than were used in Phase 2 studies, use of the drug in other patient populations or other stages of the disease, or use of the drug over a longer period of time. [After FDA CDER Handbook, ICH E8] (CDISC glossary)
from OCRe:
A Phase 4 study monitors FDA-approved drug to delineate additional information including the drug's risks, benefits, and optimal use.
from EDDA:
A randomized, controlled trial that is designed to evaluate the long-term safety and efficacy of a drug for a given indication. Often they are designed to study side effects that may have become apparent after the phase III study was completed. [NCIT_14.08d]
After a treatment has been approved and is being marketed, it is studied in a phase IV trial to evaluate side effects that were not apparent in the phase III trial. Thousands of people are involved in a phase IV trial. [NCIT_14.08d]
Phase 4. Postmarketing (Phase 4) studies to delineate additional information about the drug's risks, benefits, and optimal use that may be requested by regulatory authorities in conjunction with marketing approval. NOTE: These studies could include, but would not be limited to, studying different doses or schedules of administration than were used in Phase 2 studies, use of the drug in other patient populations or other stages of the disease, or use of the drug over a longer period of time. [After FDA CDER Handbook, ICH E8] [Contributing Source_CDISC] [NCIT_14.08d]"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Post-marketing studies (phase IV trials) are often used to evaluate adverse effects that were not apparent in phase III trials, and may involve thousands of patients. Postmarketing (Phase 4) studies to delineate additional information about the drug's risks, benefits, and optimal use that may be requested by regulatory authorities in conjunction with marketing approval."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Paul Whaley"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-15 vote 10-0 by Paul Whaley, Andrew Beck, Paola Rosati, Robin Ann Yurk, Janice Tufte, Jesus Lopez-Alcalde, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Joanne Dehnbostel"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-15 comment: Maybe add hyphen between "already" and "approved""/>
</property>
</concept>
</concept>
</concept>
<concept>
<code value="SEVCO:01002"/>
<display value="observational research"/>
<definition
value="A study design in which the independent variables (exposures or interventions) are not prospectively assigned or modified by the investigator."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Observational Study"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Observational Study Design"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Observational primary research"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="observational method of research"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Non-interventional research"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="observational study"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="observational study design"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="observational primary research"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="non-interventional research"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Mario Tristan, Joanne Dehnbostel, Harold Lehmann, Khalid Shahin, Brian S. Alper"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 6/7/2021: Asiyah Lin, KM Saif-Ur-Rahman, Harold Lehmann, Sebastien Bailly, Bhagvan Kommadi, Mario Tristan, Leo Orozco, Ahmad Sofi-Mahmudi"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-05-17 vote 5-3 on "Observational research = In a prospective or retrospective study, an independent variable is measured but not manipulated by the investigator to evaluate a response or outcome (the dependent variable)." by Eric Harvey, Bhagvan Kommadi, Paola Rosati, KM Saif-Ur-Rahman, Ahmad Sofi-Mahmudi, Jesus Lopez-Alcalde, Sorana D. Bolboacă, Harold Lehmann, 2021-05-24 vote 8-3 on Observational research="A study design in which the variables (exposures, interventions, and outcomes) are not prospectively assigned or modified by the investigator." by Alejandro Piscoya, Philippe Rocca-Serra, KM Saif-Ur-Rahman, Eric Harvey, Harold Lehmann, Bhagvan Kommadi, Sorana D. Bolboacă, Jesús López-Alcalde, Paola Rosati, Tatyana Shamliyan, Brian Alper, , 2021-05-31 vote 11-1 on Observational research="A study design in which the independent variables (exposures or interventions) are not prospectively assigned or modified by the investigator." by Eric Harvey, Bhagvan Kommadi, Brian Alper, Sebastien Bailly, Alejandro Piscoya, Harold Lehmann, KM Saif-Ur-Rahman, Paola Rosati, Sorana D. Bolboacă, Asiyah Lin, Leo Orozco, Erfan Shamsoddin"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="I dislike the term "manipulated" in the definition -- suggest change to: In a prospective or retrospective study, without any specific intervention assigned to participants, an investigator observes and measures an intervention or procedure (the independent variable) to assess or learn more about an effect or outcome (the dependent variable). "In a prospective or retrospective study, an independent variable (a predictor) is obeserved or measured by the investigator to evaluate a response or an outcome (the dependent variable)." I would delete "in a prospective or retrospective study" as it could be ambispective 5-24-2021 similar comment about the synonyms assigned the class (conflating plan/design) with the object realised by executing a plan I think that the outcomes are never assigned or modified by the investigator (they are measured). Thus, to be consistent with the definition of interve…suggest to remove "outcomes" from ( ) (is there a semantic difference between "are not" and "none is"?) I suggest to clarify the goal as drawing causal inferences from the observed association between exposure and outcomes 5-31-2021 comment The suggested definition is a non-interventional study definition. Not sure if a non-interventional is fully equivalent to observational studies"/>
</property>
<property>
<code value="comment"/>
<valueString
value="We acknowledge that observational study design and observational study may not be exact synonyms of observational research, but observational research could be used to encompass both design and implementation of the design. In the context of coding study design factors, observational research is commonly used to denote non-interventional research."/>
</property>
<concept>
<code value="SEVCO:01037"/>
<display value="post-marketing surveillance study"/>
<definition
value="An observational study to identify adverse events related to the use of an approved clinical intervention."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Postmarketing evaluation study"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="postmarketing evaluation study"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="from CTO:
not included
from SCO:
not included
from NCIt:
Postmarketing Surveillance
Programs to identify adverse events that did not appear during the drug approval process.
Ongoing safety monitoring of marketed drugs. See also Phase 4 studies, Phase 5 studies.
also Phase V Trial (phase 5, trial phase 5)
Postmarketing surveillance is sometimes referred to as Phase V. See outcomes research.
from OCRe:
not included
from EDDA:
postmarketing evaluation study (post-marketing product surveillance)
Surveillance of drugs, devices, appliances, etc., for efficacy or adverse effects, after they have been released for general sale. [MeSH 2014_2014_02_10]"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Paul Whaley, Harold Lehmann"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-15 vote 10-0 by Paul Whaley, Andrew Beck, Brian S. Alper, Paola Rosati, Janice Tufte, Jesus Lopez-Alcalde, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Joanne Dehnbostel"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-15 comments: Alternative terms could be: Post-marketing evaluation study,
(Do we need to connect the "approval" to an indication?)"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:01010"/>
<display value="Comparative study design"/>
<definition
value="A study design in which two or more groups are compared."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Comparative research"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Ellen Jepson, Kenneth Wilkins, Mario Tristan"/>
</property>
<property>
<code value="approval"/>
<valueString
value="9/9 as of 8/9/2021: Erfan Shamsoddin, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Joanne Dehnbostel, Alejandro Piscoya"/>
</property>
<concept>
<code value="SEVCO:01011"/>
<display value="Parallel cohort design"/>
<definition
value="A comparative study design in which the groups are compared concurrently and participants are expected to remain in the groups being compared for the entire duration of participation in the study."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Concurrent cohort study"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Ellen Jepson, Kenneth Wilkins, Mario Tristan, Harold Lehmann"/>
</property>
<property>
<code value="approval"/>
<valueString
value="9/9 as of 8/9/2021: Erfan Shamsoddin, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Joanne Dehnbostel, Alejandro Piscoya"/>
</property>
</concept>
<concept>
<code value="SEVCO:01012"/>
<display value="Crossover cohort design"/>
<definition
value="A comparative study design in which participants receive two or more alternative exposures during separate periods of time."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Crossover study"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Crossover trial"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Harold Lehmann, Janice Tufte, Michael Panzer"/>
</property>
<property>
<code value="approval"/>
<valueString
value="7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="8/9 as of 8/9/2021: voting on "A comparative study design in which participants receive two or more alternative exposures during separate periods of time." by Erfan Shamsoddin, Paola Rosati, Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Joanne Dehnbostel, Alejandro Piscoya"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="8/9/21 comment: It's not clear from this definition that each group of participants receives the same 2 or more exposures, but not in the same time sequence"/>
</property>
<concept>
<code value="SEVCO:01024"/>
<display value="Controlled crossover cohort design"/>
<definition
value="A crossover cohort design in which two or more cohorts have different orders of exposures."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Harold Lehmann, Janice Tufte, Michael Panzer"/>
</property>
<property>
<code value="approval"/>
<valueString
value="7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya"/>
</property>
</concept>
<concept>
<code value="SEVCO:01025"/>
<display value="Single-arm crossover design"/>
<definition
value="A crossover cohort design in which all participants are in a single cohort with the same order of exposures."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Harold Lehmann, Janice Tufte, Michael Panzer"/>
</property>
<property>
<code value="approval"/>
<valueString
value="7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:01013"/>
<display value="Case control design"/>
<definition
value="A comparative study design in which the groups being compared are defined by outcome presence (case) or absence (control)."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte, Michael Panzer"/>
</property>
<property>
<code value="approval"/>
<valueString
value="7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya"/>
</property>
</concept>
<concept>
<code value="SEVCO:01014"/>
<display value="Matching for comparison"/>
<definition
value="A comparative study design in which individual participants in different groups being compared are paired or matched into sets based on selected attributes for within-set analysis."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Matched study design"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte, Michael Panzer"/>
</property>
<property>
<code value="approval"/>
<valueString
value="7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya"/>
</property>
<concept>
<code value="SEVCO:01020"/>
<display value="Family study design"/>
<definition
value="A matched study design in which related or non-related family members are compared."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Familial study"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Family study"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Family based study"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 9/12/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Paola Rosati, Robin Ann Yurk, MD, MPH, Mario Tristan"/>
</property>
<concept>
<code value="SEVCO:01021"/>
<display value="Twin study design"/>
<definition
value="A family study design in which twin siblings are compared."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 9/12/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Paola Rosati, Robin Ann Yurk, MD, MPH, Mario Tristan"/>
</property>
</concept>
</concept>
</concept>
<concept>
<code value="SEVCO:01015"/>
<display value="Cluster as unit of allocation"/>
<definition
value="A comparative study design in which participants are allocated to exposures (interventions) by their membership in groups (called clusters) rather than by individualized assignments."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Clustering for comparison"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte, Michael Panzer"/>
</property>
<property>
<code value="approval"/>
<valueString
value="7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:01023"/>
<display value="Non-comparative study design"/>
<definition
value="A study design with no comparisons between groups with different exposures and no comparisons between groups with different outcomes."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Non-comparative research"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Descriptive study"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Michael Panzer"/>
</property>
<property>
<code value="approval"/>
<valueString
value="7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya"/>
</property>
<concept>
<code value="SEVCO:01016"/>
<display value="Uncontrolled cohort design"/>
<definition
value="A non-comparative study design in which two or more participants are evaluated in a single group (or cohort)."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Single cohort design"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Case series design"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Non-controlled cohort design"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Michael Panzer"/>
</property>
<property>
<code value="approval"/>
<valueString
value="7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya"/>
</property>
</concept>
<concept>
<code value="SEVCO:01017"/>
<display value="Case report"/>
<definition
value="A non-comparative study design in which a single participant is evaluated."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Case study"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Michael Panzer"/>
</property>
<property>
<code value="approval"/>
<valueString
value="7/7 as of 8/23/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Leo Orozco, Alejandro Piscoya"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:01022"/>
<display value="Population-based design"/>
<definition
value="A study design in which the unit of observation is a population or community."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Population study"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Harold Lehmann"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The term ‘population-based study’ is generally used for an observational comparative study design in which populations are compared."/>
</property>
<property>
<code value="approval"/>
<valueString
value="5/5 as of 10/18/2021: Cheow Peng Ooi, Janice Tufte, Robin Ann Yurk, Eric Harvey, Joanne Dehnbostel"/>
</property>
<concept>
<code value="SEVCO:01044"/>
<display value="Ecological design"/>
<definition
value="A study design in which the unit of observation is a population or community defined by social relationships or physical surroundings."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Ecologic study"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Harold Lehmann"/>
</property>
<property>
<code value="approval"/>
<valueString
value="5/5 as of 10/18/2021: Cheow Peng Ooi, Janice Tufte, Robin Ann Yurk, Eric Harvey, Joanne Dehnbostel"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="6 to 1 in 2021-09-20 vote with 7 participants (Ecological design = A comparative study design in which populations are compared. An ecologic study is a non individual-human study in which the unit of observation is a population or community.) - Robin Ann Yurk, Janice Tufte, Eric Harvey, Jesus Lopez-Alcalde, Mario Tristan, Sorana D Bolboaca, Paola Rosati, 8 to 1 vote on 2021-09-27 with 9 participants (Ecological design [Population-based design, Ecologic study, Population study] = A comparative study design in which populations are compared. An ecologic study is a non-individual study in which the unit of observation is a population or community.) - Jesus Lopez-Alcalde, Asiyah Lin, Eric Harvey, Bhagvan Kommadi, Alejandro Piscoya, Robin Ann Yurk, Mario Tristan, Paola Rosati, Janice Tufte"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-09-20 comment: I miss here the explicit declaration that ecological studies are observational. A cluster trial can randomise communities and is not an ecological study. Besides, and I may be worng, but an ecological study may include non-humans, for example, ecological study of air contamination levels in Spain compared to Italy. 2021-09-27 comment: The differences of ecologic studies and other population based studies are not reflected. consider adding "Variables in an ecologic analysis may be aggregate measures, environmental measures, or global measures.""/>
</property>
</concept>
</concept>
</concept>
<concept>
<code value="SEVCO:00998"/>
<display value="Study Design Process"/>
<definition
value="A specification of a sequence of actions for a component or part of a study design."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Study Design Component"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="Study design is defined as a plan specification for how and what kinds of data will be gathered as part of an investigation which may produce testable explanations, conclusions and predictions or test a hypothesis."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-03-22 vote 5-0 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Robin Ann Yurk"/>
</property>
<concept>
<code value="SEVCO:01027"/>
<display value="Cross sectional data collection"/>
<definition
value="A study design process in which data is collected at a single point in time."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Cross-sectional study"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 9/12/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Paola Rosati, Robin Ann Yurk, MD, MPH, Mario Tristan"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="The word "feature" was added to the definition on March 7, 2022 to match the change in hierarchical terms."/>
</property>
</concept>
<concept>
<code value="SEVCO:01028"/>
<display value="Longitudinal data collection"/>
<definition
value="A study design process in which data is collected at two or more points in time."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Longitudinal study"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 9/12/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Paola Rosati, Robin Ann Yurk, MD, MPH, Mario Tristan"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="The word "feature" was added to the definition on March 7, 2022 to match the change in hierarchical terms."/>
</property>
<concept>
<code value="SEVCO:01018"/>
<display value="Time series design"/>
<definition
value="A longitudinal data collection which includes a set of time-ordered observations."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Multiple time point comparison"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 9/12/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Paola Rosati, Robin Ann Yurk, MD, MPH, Mario Tristan"/>
</property>
<concept>
<code value="SEVCO:01019"/>
<display value="Before and after comparison"/>
<definition
value="A time series design which includes comparisons of observations before and after an event or exposure."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Before and After design"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Pre-Post Design"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Before-After Comparison"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 9/12/2021: Eric Harvey, Bhagvan Kommadi, Brian S. Alper, Cheow Peng Ooi, Janice Tufte, Paola Rosati, Robin Ann Yurk, MD, MPH, Mario Tristan"/>
</property>
</concept>
</concept>
</concept>
<concept>
<code value="SEVCO:01045"/>
<display value="Primary data collection"/>
<definition
value="A study design process in which the data are recorded and collected during the study for the purpose of the same study."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Study-generated data collection"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Paul Whaley, Mario Tristan"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-02-22 vote 7-1 by Paola Rosati, Sunu Alice Cherian, Joanne Dehnbostel, Sumalatha A, Rebecca Baker, Robin Ann Yurk, Janice Tufte, Harold Lehmann
2022-03-01 vote 3-3 by Joanne Dehnbostel, Robin Ann Yurk, Paul Whaley, Nisha Mathew, Paola Rosati, Sunu Alice Cherian
2022-03-15 vote 5-1 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte
2022-03-22 vote 5-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Robin Ann Yurk, nelle.stocquart"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-22 comments: Definition: Data recorded and collected during the study.
For parallelism with "secondary data collection," perhaps write, "for the purpose of the current study."2022-03-01 comments:
A data collection technique in which the data are collected and recorded during the study for the purpose of the same study.
For the term definition---I would edit so it reads...A study design in which the data are collected and recorded to answer a new research question.
Data collection is not study design; it can be called a technique. A data collection technique in which data is recorded and collected during the study for the purpose of the same study.
2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process
"in which" sounds strange for a "feature." ("Color is a feature in which..." does not sound right.) Perhaps a...feature regarding how data are recorded..."?
2022-03-22 comment: Suggest modify definition or create a comment for application so it reads: A Study design method in which the data are collected for original research to answer new research questions."/>
</property>
<property>
<code value="comment"/>
<valueString
value="The study design process includes the source and method for data collection. When the data are collected for original research to answer the original research questions, this is called primary data collection."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-03-29 vote 6-0 by Paul Whaley, Robin Ann Yurk, Mario Tristan, Jesus Lopez-Alcalde, Brian S. Alper, Cauê Monaco"/>
</property>
</concept>
<concept>
<code value="SEVCO:01026"/>
<display value="Real world data collection"/>
<definition
value="A study design process in which the study data are obtained from a source of data collected during a routine process in the natural environment rather than using a process designed or controlled by the researcher."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="RWD collection"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Paul Whaley, Mario Tristan"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-02-22 vote 7-1 by Paola Rosati, Sunu Alice Cherian, Joanne Dehnbostel, Sumalatha A, Rebecca Baker, Robin Ann Yurk, Janice Tufte, Harold Lehmann
2022-03-15 vote 5-1 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte
2022-03-22 vote 4-2 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Robin Ann Yurk, nelle.stocquart
2022-03-29 vote 4-1 by Paul Whaley, Robin Ann Yurk, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco
2022-04-05 vote 6-0 by Cauê Monaco, Harold Lehmann, Mario Tristan, Robin Ann Yurk, Jesus Lopez-Alcalde, Nisha Mathew
THEN THE TERM CHANGED to Real World Data Collection
2022-04-19 vote 3-1 by Cauê Monaco, Robin Ann Yurk, Jesus Lopez-Alcalde, Harold Lehmann"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-22 comment: Definition: Data gathered from studies, surveys, experiments that have been done by other people for other studies
2022-03-15 comment: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process
2022-03-22 comments: The proposed definition only details the source of the data but not the data collection itself. I propose: "A study design process in which the data are collected from data collected for a purpose other than the current study".
Suggest modify definition or create a comment for application so it reads: A study design method in which the previously collected data is used to answer new and additional research questions. Some examples of the types of studies are retrospective studies, etc.
2022-03-29 comment: In the comment for application--suggestion. Delete phrase When data are collected. I would combined sentence When data are used in the form of analysis and interpretation from original research to answer additional research questions separate from the original research.
2022-04-12 comments: For Term definition: Suggest revising definition to A study design process in which the study data are obtained from data collected for recording data for business purposes.
Comment for application: Add this statement, There are different categories of research such as business research, marketing research, insurance research etc.
"data are obtained from data collected" may be changed to "data are obtained from a source for data collection"
2022-04-19 comment: Suggest edit the term definition. The alternative term and comment for application are fine. There are different kinds of research business research that can be classified as real world data. The term definition should read....A study design in which the study data processes are obtained from a natural environment rather than controlled research."/>
</property>
<property>
<code value="comment"/>
<valueString
value="Real world data collection occurs when the study uses data obtained from a source that was not created for research as a primary purpose. A study can involve both primary data collection (with some data collected by a process created for the purpose of the study investigation) and real world data collection (with some data collected from a process created for a routine business or operational purpose). If a study involves both primary data collection and real world data collection, both terms can be applied."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-06 vote 7-0 by Mario Tristan, Robin Ann Yurk, Eric M Harvey, nisha mathew, Paola Rosati, Harold Lehmann, Janice Tufte"/>
</property>
<concept>
<code value="SEVCO:01039"/>
<display value="Real world data collection from healthcare records"/>
<definition
value="Real world data collection from data obtained routinely for a purpose of recording healthcare delivery in a record controlled by a healthcare professional."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="RWD collection from clinical care records"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="RWD from health care records"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="RWD from healthcare delivery records"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Real world data collection from medical records"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Ilkka Kunnamo"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-03-15 vote 4-2 by Mario Tristan, Paul Whaley, Nisha Mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte
2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart
2022-03-29 vote 3-1 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco
2022-04-05 vote 6-1 by Cauê Monaco, Paola Rosati, Harold Lehmann, Mario Tristan, Robin Ann Yurk, Jesus Lopez-Alcalde, Nisha Mathew"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process
would proposed, "...for the purpose..." (as in primary data collection)
Seems like we should add that the original data is then used for a secondary research purpose in the definition, not only explain in alternative terms
2022-03-22 comment: Ídem: Proposal: "A study design process in which the data are collected from data collected for a purpose of recording healthcare delivery in a record controlled by a healthcare professional."
2022-03-29 comment: "medical records" and "health records" seem to be much more widely used expressions than "healthcare delivery records"
2022-04-05 comments: Suggest make a comment or distinction in the term definition that the primary data collected is categorized as real world data for the purpose of delivering professional healthcare services. The data set can be used for secondary data collection."/>
</property>
<property>
<code value="comment"/>
<valueString
value="This term is used when the original data collection (primary data collection) is done for the purpose of delivering professional healthcare services. The secondary use of this data (sometimes called 'real world data') for research is then called secondary data collection."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-04-19 vote 5-0 by Cauê Monaco, Jesus Lopez-Alcalde, Harold Lehmann, Robin Ann Yurk, Muhammad Afzal"/>
</property>
</concept>
<concept>
<code value="SEVCO:01050"/>
<display
value="Real world data collection from personal health records"/>
<definition
value="Real world data collection from data obtained routinely for a purpose of recording data related to personal health in a record controlled by the person, guardian, or caretaker."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Real world data collection from personal care records"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="RWD collection from personal medical records"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Ilkka Kunnamo"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-03-15 vote 4-2 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte
2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart
2022-03-29 vote 5-0 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco, Brian S. Alper
THEN TERM CHANGED 2022-04-05"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process
Might suggest "the purpose," again
add in the definition that the original data is then used for a secondary purpose
2022-03-22 comment: Ídem: Proposal: "A study design process in which the data are collected from data collected for a purpose of recording data related to personal health in a record controlled by the person, guardian, or caretaker.""/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-04-19 vote 5-0 by Cauê Monaco, Jesus Lopez-Alcalde, Harold Lehmann, Brian S. Alper, Muhammad Afzal"/>
</property>
</concept>
<concept>
<code value="SEVCO:01040"/>
<display
value="Real world data collection from healthcare financing records"/>
<definition
value="Real world data collection from data obtained routinely for a purpose of recording healthcare financing."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="RWD collection from healthcare financing records"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-03-15 vote 4-2 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte
2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart
2022-03-29 vote 5-0 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco, Brian S. Alper
THEN TERM CHANGED 2022-04-05"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process
"the purpose"add original financial data is then used for secondary analysis etc
2022-03-22 comment: Ídem. "A study design process in which the data are collected from data collected for a purpose of recording healthcare financing""/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-04-19 vote 5-0 by Cauê Monaco, Jesus Lopez-Alcalde, Harold Lehmann, Brian S. Alper, Muhammad Afzal"/>
</property>
</concept>
<concept>
<code value="SEVCO:01048"/>
<display value="Real world data collection from testing procedures"/>
<definition
value="Real world data collection from data obtained routinely for a purpose of testing, such as diagnostic testing or screening examination."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="RWD collection from testing procedures"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-03-15 vote 4-2 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte
2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart
2022-03-29 vote 5-0 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco, Brian S. Alper
THEN TERM CHANGED 2022-04-05"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process
"the purpose"and then used for secondary research purposes
2022-03-22 comment: Ídem. "A study design process in which the data are collected from data collected for a purpose of testing, such as diagnostic testing or screening examination""/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-04-19 vote 5-0 by Cauê Monaco, Jesus Lopez-Alcalde, Harold Lehmann, Brian S. Alper, Muhammad Afzal"/>
</property>
<concept>
<code value="SEVCO:01046"/>
<display
value="Real world data collection from monitoring procedures"/>
<definition
value="Real world data collection from data obtained routinely for a purpose of repeated testing."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="RWD collection from monitoring procedures"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-03-15 vote 5-1 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte
2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart
2022-03-29 vote 5-0 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco, Brian S. Alper
THEN TERM CHANGED 2022-04-05"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process
"the purpose"2022-03-22 comment: Ídem. "A study design process in which the data are collected from data collected for a purpose of repeated testing.""/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-04-19 vote 5-0 by Cauê Monaco, Jesus Lopez-Alcalde, Harold Lehmann, Brian S. Alper, Muhammad Afzal"/>
</property>
</concept>
</concept>
</concept>
<concept>
<code value="SEVCO:01049"/>
<display value="Secondary data collection from prior research"/>
<definition
value="A study design process in which the data are collected from data obtained during a different study than the current study."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-03-15 vote 5-1 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte
2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-03-15 comments: ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process
2022-03-22 comments: Ídem. "A study design process in which the data are collected from data collected during a different study than the current study"When does this recording happen?"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-03-29 vote 5-0 by Mario Tristan, Paul Whaley, Cauê Monaco, Joanne Dehnbostel, Harold Lehmann"/>
</property>
</concept>
<concept>
<code value="SEVCO:01042"/>
<display value="Secondary data collection from a registry"/>
<definition
value="A study design process in which the data are collected from a system organized to obtain and maintain uniform data for discovery and analysis, and this system is organized prior to the current study."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Ilkka Kunnamo"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value=""For the purposes of this guide, a patient registry is an organized system that uses observational study methods to collect uniform data (clinical and other) to evaluate specified outcomes for a population defined by a particular disease, condition, or exposure, and that serves one or more predetermined scientific, clinical, or policy purposes" -- in https://effectivehealthcare.ahrq.gov/sites/default/files/pdf/registries-guide-3rd-edition_research.pdf"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-03-15 vote 3-2 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann
2022-03-22 vote 4-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-03-15 comments: the term discovery is not suitable. Can we have some other term?
ok, if 'study design feature' is understood as 'part specified by study design plan/protocol', but sounds like a process
2022-03-22 comments: Idem (same as above). "A study design process in which the data are collected from data collected in a system organized to obtain and maintain uniform data for discovery and analysis"
The definition needs to be more: When did this happen? Before the study starts?"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-03-29 vote 5-0 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco, Brian S. Alper"/>
</property>
</concept>
<concept>
<code value="SEVCO:01051"/>
<display value="Multisite data collection"/>
<definition
value="A study design process in which data are collected from two or more geographic locations."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Mario Tristan"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-05-06 vote 6-1 by Mario Tristan, Robin Ann Yurk, Eric M Harvey, nisha mathew, Paola Rosati, Harold Lehmann, Janice Tufte"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-04-26 comment: As stated, this term has too much overlap with "Multicentric" Why do we need this term?"/>
</property>
<property>
<code value="comment"/>
<valueString
value="For studies conducted across multiple contexts (administrative or logistical) that are distinct from geographic locations, potentially introducing greater variability beyond multisite data collection, use the term Multicentric."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-10 vote 7-0 by Mario Tristan, Robin Ann Yurk, Eric M Harvey, nisha mathew, Paola Rosati, Harold Lehmann, Janice Tufte"/>
</property>
</concept>
<concept>
<code value="SEVCO:01086"/>
<display value="Quantitative analysis"/>
<definition
value="A study design process in which data are analyzed with mathematical or statistical methods and formulas."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Khalid Shahin"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-05-17 vote 8-1 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde
2022-05-24 vote 5-1 by Robin Ann Yurk, nelle.stocquart@kce.fgov.be, Eric M Harvey, Mario Tristan, Harold Lehmann, Jesus Lopez-Alcalde"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-05-17 comments: Suggest including examples of quantitative analysis so as to improve your definition as there are many categories of quantitative methods: ie survey methods, logistic regression,...etc
Quantitative and qualitative have categorical results I believe
2022-05-24 comment: An analytic approach using statistical methods and formulas to report the data for interpretation
2022-05-26 comment: I would leave the description of a qualitative analysis out of the comment for application"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The distinction of quantitative vs. qualitative analysis refers to whether mathematical processing is involved, whether or not the analysis includes numerical variables. Processing a categorical variable (e.g. values of happy, sad, or jealous as a response to "How are you feeling?") to produce numerical results (e.g. 30% happy, 50% sad, 20% surprised) would be classified as a Quantitative analysis."/>
</property>
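<!-- Editorial illustration (not part of the source CodeSystem): a hypothetical worked example of the comment above, using assumed counts. If 10 responses to "How are you feeling?" were coded as 3 happy, 5 sad, and 2 jealous, computing the proportions (30%, 50%, 20%) is mathematical processing of a categorical variable and would therefore be classified under Quantitative analysis. -->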
<property>
<code value="approval"/>
<valueString
value="2022-06-07 vote 5-0 by Brian S. Alper, Paola Rosati, Eric M Harvey, Mario Tristan, Harold Lehmann"/>
</property>
</concept>
<concept>
<code value="SEVCO:01087"/>
<display value="Qualitative analysis"/>
<definition
value="A study design process in which data are analyzed, without primary reliance on mathematical or statistical techniques, by coding and organizing data to provide interpretation or understanding of experiences or hypotheses."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Khalid Shahin"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-05-17 vote 5-4 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde
2022-05-24 vote 4-1 by Robin Ann Yurk, Eric M Harvey, Mario Tristan, Harold Lehmann, Jesus Lopez-Alcalde"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-05-17 comments: What about ordinal data such as low/medium/high? I think I would view that as qualitative.
Suggest revise definition to include examples of the analysis methods as alternative terms or comment for applications: ie focus groups. There are many new software tools which apply quantitative methods to qualitative studies.
Quantitative and qualitative have categorical results I believe
Disagree - (Sorry, maybe you already know my comment here ;>). From my experience, qualitative analysis produces more than descriptive or categorical results, and uses a range of essential complex methodologies for producing results unattainable from trials. Some methods are inductive, others are deductive, or a mix of both. This modifies the results achievable. For example, phenomenology from categorical codes produces new understanding of people's lived experiences (deemed robust, even from a small but convenient sample of people), whereas grounded theory, from descriptive and categorical data results, discovers or creates novel theories, crucial for subsequent research scrutiny, even for a trial.
I would suggest defining qualitative analysis differently = A study design process in which data, analysed and coded to produce descriptive and categorical results, lead to new understanding of people's lived experiences or new theories, unattainable from quantitative studies, essential for future trials.
In my opinion, descriptive numerical results come from quantitative analysis also (for example, incidence of SARS-CoV-2 per 100,000 inhabitants). I am not an expert in qualitative research but I guess it tackles phenomena which can be observed but not measured.
2022-05-24 comment: Qualitative analysis provides a description or summary to understand exploratory experiences and patterns, themes in the data which can provide the framework for additional data interpretation through other analysis such as quantitative analysis. An example of a qualitative method is focus groups. Technology exists such as natural language processing or other software to report the analysis.
2022-05-26 comment: I would leave the description for a quantitative analysis out of the definition. I would also delete the example of feelings as this can be quantified through satisfaction research which is a quantitative analysis. I would give an example of focus groups or natural language processing. The method involves identifying themes in narrative text."/>
</property>
<property>
<code value="comment"/>
<valueString
value="The distinction of quantitative vs. qualitative analysis refers to whether mathematical processing is involved, whether or not the analysis includes numerical variables. Processing a categorical variable (e.g. values of happy, sad, or jealous as a response to "How are you feeling?") to produce numerical results (e.g. 30% happy, 50% sad, 20% surprised) would be classified as a Quantitative analysis. Processing the transcripts of interviews to categorize phrases and report themes identified across interviews would be classified as a Qualitative analysis. Qualitative analysis techniques may include phenomenology development from categorical codes, and may result in discovery or creation of theories that are unattainable through quantitative analysis."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-07 vote 5-0 by Brian S. Alper, Paola Rosati, Eric M Harvey, Mario Tristan, Harold Lehmann"/>
</property>
</concept>
<concept>
<code value="SEVCO:01060"/>
<display value="Blinding of study participants"/>
<definition
value="A study design process in which study participants are not informed of their intervention assignment."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Masking of study participants"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="Masking of study participants involves actions to conceal information that could lead to their awareness of their intervention assignment, such as provision of placebo or simulated interventions that mimic the target interventions."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-08-23 vote 6-0 by Mario Tristan, Cauê Monaco, Janice Tufte, Philippe Rocca-Serra, Eric Harvey, Robin Ann Yurk"/>
</property>
</concept>
<concept>
<code value="SEVCO:01061"/>
<display value="Blinding of intervention providers"/>
<definition
value="A study design process in which the people administering the intervention are not informed of the intervention assignment."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Masking of intervention providers"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Blinding of intervention deliverers"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Masking of intervention deliverers"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="Masking of intervention providers involves actions to conceal information that could lead to their awareness of the intervention assigned to individual study participants, such as provision of placebo interventions that mimic the target interventions.
The terms 'double-blinding' and 'triple-blinding' are not clearly and consistently defined terms but typically suggest blinding of intervention providers."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-08-23 vote 6-0 by Mario Tristan, Cauê Monaco, Janice Tufte, Philippe Rocca-Serra, Eric Harvey, Robin Ann Yurk"/>
</property>
</concept>
<concept>
<code value="SEVCO:01062"/>
<display value="Blinding of outcome assessors"/>
<definition
value="A study design process in which the people determining the outcome are not informed of the intervention assignment."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Masking of outcome assessors"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="Masking of outcome assessors involves actions to conceal information that could lead to their awareness of the intervention assigned to individual study participants to minimize the influence of such awareness on the determination of outcome measurement values.
The terms 'triple-blinding' and 'quadruple-blinding' are not clearly and consistently defined terms but may suggest blinding of outcome assessors."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-08-23 vote 5-0 by Mario Tristan, Cauê Monaco, Janice Tufte, Philippe Rocca-Serra, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:01063"/>
<display value="Blinding of data analysts"/>
<definition
value="A study design process in which the people managing or processing the data and statistical analysis are not informed of the intervention assignment."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Masking of data analysts"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="The term 'data analysts' is meant to include any person who works with the data at any point between data collection and the reporting of analyzed results.
Masking of data analysts involves actions to conceal information that could lead to their awareness of the intervention assigned to individual study participants, such as noninformative labeling used to represent the study groups.
The terms 'triple-blinding' and 'quadruple-blinding' are not clearly and consistently defined terms but may suggest blinding of data analysts."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Janice Tufte, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-08-23 vote 5-0 by Mario Tristan, Cauê Monaco, Janice Tufte, Philippe Rocca-Serra, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:01064"/>
<display value="Allocation concealment"/>
<definition
value="A study design process in which all parties influencing study enrollment and allocation to study groups are unaware of the group assignment for the study participant at the time of enrollment and allocation."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Blinding of randomization assignment"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Masking of randomization assignment"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="Allocation concealment occurs before and during the enrollment process and refers to limiting awareness of assignment during the process of recruitment and assignment to groups. Other blinding and masking terms refer to limiting awareness of the assignment during and after enrollment."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Harold Lehmann"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-08-30 vote 8-0 by Janice Tufte, nisha mathew,: Philippe Rocca-Serra, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Cauê Monaco, Eric Harvey"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00999"/>
<display value="Study Design Feature"/>
<definition value="An aspect or characteristic of a study design."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Study Design Characteristic"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Study Design Aspect"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Paul Whaley, Janice Tufte"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Study design is defined as a plan specification for how and what kinds of data will be gathered as part of an investigation which may produce testable explanations, conclusions and predictions or test a hypothesis."/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-03-15 comments: I agree with the definition but feel this is meta-vocabulary that helps us talk about elements of study design that are not part of the code system itself. So I don't know if it should be included in the code system as a code, or if we should be considering some other means for defining these terms (e.g. in documentation or guidance about SEVCO).
not a fan of the synonym "study design factor" as it could cause confusion with "Study Factor", Independent Variable. How different is Study Design from Study Protocol? "Study design planned process" could cover the following subtypes
For the comment for application include ...as a technical plan specification....
2022-03-22 comment: The definition of "Study design" seems to exclude the "statistical analysis". Am I right?"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-03-15 vote 7-0 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte, Robin Ann Yurk (but then the definition changed with the creation of Study Design Process)
2022-03-22 vote 5-1 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, nelle.stocquart, Robin Ann Yurk"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-03-29 vote 8-0 by Paul Whaley, Mario Tristan, Jesus Lopez-Alcalde, Cauê Monaco, Joanne Dehnbostel, Philippe Rocca-Serra, Robin Ann Yurk, nelle.stocquart"/>
</property>
<concept>
<code value="SEVCO:01043"/>
<display value="Multicentric"/>
<definition
value="A study design feature in which two or more institutions are responsible for the conduct of the study."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Multi-institutional study"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Multicenter trial"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Multicenter study"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte"/>
</property>
<property>
<code value="comment"/>
<valueString
value="This term may be used for studies conducted across multiple contexts (administrative or logistical) that are distinct from geographic locations, potentially introducing greater variability beyond multisite data collection."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-03-15 vote 5-1 by Mario Tristan, Paul Whaley, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Robin Ann Yurk
2022-04-26 vote 3-0 by Eric M Harvey, Robin Ann Yurk, Mario Tristan"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-03-15 comments: Suggest add to multiple contexts (research)
a multicenter study is_a study. 'multicentric' would be a subtype of study_design_feature.
a concern here is that the current definition conflates 2 entities: a study and a characteristic of that study.
At the end of the day, it depends on how the modeling will be made, e.g (study) (has_some_study_design_feature) (type of study_design_feature)
Or should it be "Multicenter data collection" ?"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-06 vote 6-0 by Mario Tristan, Robin Ann Yurk, Eric M Harvey, nisha mathew, Paola Rosati, Harold Lehmann"/>
</property>
</concept>
<concept>
<code value="SEVCO:01052"/>
<display value="Includes patient-reported outcome"/>
<definition
value="A study design feature in which one or more outcomes are reported directly from the patient without interpretation by a clinician or researcher."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="PRO"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Includes patient-reported outcomes"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="Examples of patient-reported outcomes include symptoms, pain, quality of life, satisfaction with care, adherence to treatment, and perceived value of treatment. Data collection methods including surveys and interviews may obtain patient-reported outcomes. Reports derived from wearable devices would not typically include patient-reported outcomes. Such data may be coded with 'Real world data collection from monitoring procedures' (SEVCO:01046)."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="The U.S. Food and Drug Administration (FDA) defines a patient-reported outcome (PRO) as “any report of the status of a patient’s health condition that comes directly from the patient, without interpretation of the patient’s response by a clinician or anyone else [1].” -- from https://dcricollab.dcri.duke.edu/sites/NIHKR/KR/PRO%20Resource%20Chapter.pdf"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-04-26 vote 3-0 by Eric M Harvey, Robin Ann Yurk, Mario Tristan"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-04-26 comment: Suggest adding to the comment for application: data methods to collect Patient Reported Outcomes such as survey data.
2022-05-06 comments: Perhaps direct the reader to "Patient generated health data" or whatever else is the SEVCO term for "wearables" or other data sources (e.g., bluetooth scale).
Suggest adding to the comment for application: data methods to collect Patient Reported Outcomes such as survey data.
2022-06-07 preferred term changed from "Patient-reported outcome" to "Includes patient-reported outcome" to maintain consistency with sibling concepts"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-06 vote 7-0 by Mario Tristan, Robin Ann Yurk, Eric M Harvey, nisha mathew, Paola Rosati, Harold Lehmann, Janice Tufte"/>
</property>
</concept>
<concept>
<code value="SEVCO:01053"/>
<display value="Includes patient-centered outcome"/>
<definition
value="A study design feature in which one or more measures are outcomes that patients directly care about, i.e. outcomes that are directly related to patients' experience of their life."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Includes patient-oriented outcome"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Includes patient-important outcome"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Includes patient-relevant outcome"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Includes clinical outcome"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Includes patient-centered outcome measure"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Includes clinical outcome measure"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="In healthcare research, outcomes are effects on patients or populations, including changes to health status, behavior, or knowledge as well as patient satisfaction and quality of life. A patient-centered outcome qualifies the type of outcome as that which patients directly care about, i.e. outcomes that are directly related to patients' experience of their life. Examples of patient-centered outcomes include mortality, morbidity, symptoms, and quality of life. Some use 'clinical outcome' as synonymous with 'patient-centered outcome' while some use 'clinical outcome' to represent outcomes that would assessed as part of healthcare practice."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Mario Tristan, Khalid Shahin"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-05-06 vote 5-2 by Mario Tristan, Robin Ann Yurk, Eric M Harvey, nisha mathew, Paola Rosati, Harold Lehmann, Janice Tufte
2022-05-17 vote 7-2 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde, Janice Tufte"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-04-26 comment: Suggest Adding to comment for application: Population Statistics such as mortality, morbidity. Development of clinical outcomes is based on using a framework such as the Donabedian model: Structure, Process, Outcomes where outcomes have some relationship to structural or process measures in clinical care.....
2022-05-06 comment: I understand the goal of "quantity or quality of life," but I think it's too abstract--and limiting ("Quantity of life" is limited to life expectancy). I haven't reviewed other definitions, but the flavor is, "outcomes that patients care about." ("Function" is left off the list of "examples", albeit there is a large overlap with "morbidity," "symptoms," and "quality of life.") (See the Comments for Application for Surrogate Outcome!)
2022-05-17 comments: The definition seems to be in the comment: "A clinical outcome qualifies the type of outcome as that which patients directly care about." The definition as proposed doesn't really make sense to me.
The definition, alternative terms and comment for application are correct. However, it is more specific to patient reported outcomes. Clinical outcomes are more broad and also includes: physiologic measures, condition specific measures.....etc. Clinical outcomes can be structural, process or outcomes in the donabedian framework and or combined as composite outcomes.
While patient centered outcomes are typically considered clinical outcomes, they also indicate the observed outcomes by the clinician but not so much by the patient.
2022-05-26 comment: Suggest revise term definition so it is more inclusive of all healthcare or clinical outcomes, such as mortality, morbidity, physiologic measures, symptoms, experiences. The term is not a study design but a measure.
For example:
A healthcare measure which captures results from healthcare populations, settings structures, processes, and patients directly related to their care with healthcare settings, people, providers, and interventions.
Insert other alternative terms: Morbidity, Mortality, Symptoms, Experience of Care, Health Status, Quality of life. Suggest delete Patient Oriented Outcome, Patient Important Outcome, Patient Relevant Outcome, Patient Centered Outcome, Includes clinical outcomes."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-07 vote 5-0 by Brian S. Alper, Paola Rosati, Eric M Harvey, Mario Tristan, Harold Lehmann"/>
</property>
</concept>
<concept>
<code value="SEVCO:01054"/>
<display value="Includes disease-oriented outcome"/>
<definition
value="A study design feature in which one or more measures are outcomes that relate to a health or illness condition but are not outcomes which patients directly care about."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Includes disease-oriented outcome measure"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Includes surrogate outcome"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Includes surrogate outcome measure"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="In healthcare research, outcomes are effects on patients or populations, including changes to health status, behavior, or knowledge as well as patient satisfaction and quality of life. A patient-centered outcome qualifies the type of outcome as that which patients directly care about. Examples of patient-centered outcomes include mortality, morbidity, symptoms, and quality of life. A disease-oriented outcome qualifies the type of outcome as that which patients do not directly care about. Examples of disease-oriented outcomes include laboratory test measurements, imaging study findings, and calculated risk estimates. In this context, disease-oriented outcomes may be used as surrogate outcomes or proxy outcomes for ultimate effects on patient-centered outcomes, but do not provide direct evidence of effects on patient-centered outcomes."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Khalid Shahin"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-05-06 vote 4-2 by Mario Tristan, Robin Ann Yurk, nisha mathew, Paola Rosati, Harold Lehmann, Janice Tufte
2022-05-17 vote 6-2 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-05-06 comments: I would refer to whatever "clinical outcome" is defined as: "...indirect measures of clinical outcomes". The Comment for Application is redefining Clinical Outcome. I would spend that space pointing out that some surrogates are predictive (e.g., cholesterol levels, for MIs) and others are after the fact (e.g., sales of orange juice for treating the flu).
Suggestion--look at the wikipedia definition, then explore other mapping definitions. The current term definition and comment for application need improvement.
"In clinical trials, a surrogate endpoint is a measure of effect of a specific treatment that may correlate with a real clinical endpoint but does not necessarily have a guaranteed relationship. The National Institutes of Health defines surrogate endpoint as "a biomarker intended to substitute for a clinical endpoint". wikipedia...
2022-05-17 comments: Maybe edit to "An indirect measure of quantity or quality of life, presumed or believed to have an effect on clinical outcomes."
I would focus on revising and defining surrogate first and then include a broad definition, not just specific to clinical outcomes.
2022-05-26 comment: Surrogate Outcome is a proxy measure for capturing the outcome of interest.
Alternative Terms: delete disease oriented and surrogate outcome measure. Suggest add: Proxy Outcome Measure.
Comment for application: Delete first 3 sentences.
Edit the last sentence so it reads: A surrogate outcome is a measure which captures an approximate measure. Examples of surrogate outcomes include survey measures such as rating scales for a child by the parent or teacher. Geriatric rating scales from paid or professional caregivers for a seriously ill or geriatric patient are other examples."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-07 vote 5-0 by Brian S. Alper, Paola Rosati, Eric M Harvey, Mario Tristan, Harold Lehmann"/>
</property>
</concept>
<concept>
<code value="SEVCO:01085"/>
<display value="Includes process measure"/>
<definition
value="A study design feature in which one or more outcomes are actions or behaviors of a healthcare professional or care team."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Includes process outcome"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Includes process outcome measure"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Khalid Shahin, Kenneth Wilkins"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-05-17 vote 7-1 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde
2022-05-17 vote 8-2 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde, nelle.stocquart@kce.fgov.be, Mario Tristan
2022-05-31 vote 8-2 by Paul Whaley, Harold Lehmann, Robin Ann Yurk, Paola Rosati, Eric Harvey, raradhikaag@gmail.com, Cauê Monaco, Jesus Lopez-Alcalde, nelle.stocquart@kce.fgov.be, Mario Tristan"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-05-17 comment: Process Measure is included in the donabedian framework of structure, process, outcomes. Do you want to define just for healthcare process measure versus keep the definition broad to include such as a series of steps or tasks providing a measurement pathway for any industry and the examples in healthcare processes are....
2022-05-24 comments: repeat 2022-05-17 comment plus:
you need to provide more info, it is not clear as such
2022-05-31 comments:
you need to provide more info, it is not clear as such
Add comment for application with examples: A process measure captures the steps to care such as
Lab test orders, Referrals.... The literature defines a process measure in the donabedian framework of structure, process, outcomes."/>
</property>
<property>
<code value="comment"/>
<valueString
value="A process outcome measure is a measure of change in actions or behaviors conducted in the process of healthcare delivery or clinical care, such as obtaining laboratory tests or referrals for follow-up care."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-07 vote 5-0 by Brian S. Alper, Paola Rosati, Eric M Harvey, Mario Tristan, Harold Lehmann"/>
</property>
</concept>
<concept>
<code value="SEVCO:01089"/>
<display value="Study Goal"/>
<definition
value="A study design feature specifying the intent of the study."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Study Aim"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-21 vote 5-0 by Janice Tufte, Louis Leff, Mario Tristan, Eric M Harvey, Muhammad Afzal"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-06-21 comment: Another alternate term could be "Study Objective""/>
</property>
<concept>
<code value="SEVCO:01096"/>
<display value="Evaluation Goal"/>
<definition
value="A study goal to assess the efficiency, effectiveness, and impact of a given program, process, person or piece of equipment."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Evaluation Study Goal"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Evaluation Aim"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="Medical Subject Heading (MESH): this heading is used as a Publication Type; for original report of the conduct or results of a specific evaluation study; a different heading EVALUATION STUDIES AS TOPIC is used for general design, methodology, economics, etc. of evaluation studies
Scope Note
Works consisting of studies determining the effectiveness or utility of processes, personnel, and equipment. https://meshb.nlm.nih.gov/record/ui?ui=D023362"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Intended to include all forms of evaluation study. (Child concepts for program, process, personnel and equipment evaluations may be added later.)"/>
</property>
<property>
<code value="editors"/>
<valueString value="Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-07-12 vote 6-0 by Philippe Rocca-Serra, Jesus Lopez-Alcalde, Paola Rosati, Harold Lehmann, Eric Harvey, Janice Tufte"/>
</property>
</concept>
<concept>
<code value="SEVCO:01097"/>
<display value="Derivation Goal"/>
<definition
value="A study goal with the intent to generate a predictive algorithm."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Derivation Study Goal"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-07-19 vote 8-0 by Paola Rosati, Cauê Monaco, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:01098"/>
<display value="Validation Goal"/>
<definition
value="A study goal with the intent to determine the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Validation Study Goal"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="https://meshb.nlm.nih.gov/record/ui?ui=D023361
MeSH Heading: Validation Study
Annotation:
This heading is used as a Publication Type for original report of the conduct or results of a specific validation study. A different heading VALIDATION STUDIES AS TOPIC is used for general design, methodology, economics, etc. of validation studies. CATALOGER: Do not use
Scope Note:
Works consisting of research using processes by which the reliability and relevance of a procedure for a specific purpose are established.
Entry Term(s):
Validation Studies"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Procedures that may be assessed in validation studies include predictive algorithms, measurement instruments, and educational materials. Internal validation is tested in populations from the source used for derivation of the procedure. External validation is tested in populations that differ from the source used for derivation of the procedure."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Harold Lehmann"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-07-19 vote 8-1 by Paola Rosati, Cauê Monaco, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Eric Harvey, Janice Tufte"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-07-19 comment: "Relevance" is a value judgment that is not the hallmark of a validation study. (It requires elicitation of this judgment from experts or potential users.)
Accuracy, while difficult to measure, is certainly a validation aspiration (goal). Thus, validation of instruments assesses their sensitivity and specificity (measures of "accuracy").
Perhaps a broader goal is "performance", which would include accuracy but also applicability across sites or other external contexts.
Also, typo: "*from* the source used...""/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-07-26 vote 6-0 by Jesus Lopez-Alcalde, Harold Lehmann, Paola Rosati, Eric Harvey, Janice Tufte, Mario Tristan"/>
</property>
</concept>
<concept>
<code value="SEVCO:01088"/>
<display value="Comparison Goal"/>
<definition
value="A study design feature in which the study intent is to compare two or more interventions or exposures."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Comparative Analysis Goal"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="MeSH term "Equivalence Trial" https://www.ncbi.nlm.nih.gov/mesh/2023172 Trial that aims to show a new treatment is no better and no worse than the standard treatment.
Year introduced: 2018
Do not include MeSH terms found below this term in the MeSH hierarchy.
Tree Number(s): V03.175.250.500.500.125
MeSH Unique ID: D000073843
Entry Terms:
Non-Inferiority Trial
Noninferiority Trial
Superiority Trial
Equivalence Clinical Trial"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-21 vote 5-0 by Janice Tufte, Louis Leff, Mario Tristan, Eric M Harvey, Muhammad Afzal"/>
</property>
<concept>
<code value="SEVCO:01091"/>
<display value="Comparative Effectiveness Goal"/>
<definition
value="A study design feature in which the study intent is to compare two or more interventions with respect to benefits and/or harms."/>
<property>
<code value="comment"/>
<valueString
value="In 2009, the Institute of Medicine committee defined comparative effectiveness research (CER) as: "Comparative effectiveness research is the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition or to improve the delivery of care. The purpose of CER is to assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and population levels.""/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Khalid Shahin"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="Defining comparative effectiveness research (CER) was the first order of business for the Institute of Medicine Committee on Initial Priorities for CER. The Institute of Medicine committee approached the task of defining CER by identifying the common theme in the 6 extant definitions. The definition follows: "Comparative effectiveness research is the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition or to improve the delivery of care. The purpose of CER is to assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and population levels." https://pubmed.ncbi.nlm.nih.gov/20473202/"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-21 vote 5-0 by Janice Tufte, Louis Leff, Mario Tristan, Eric M Harvey, Muhammad Afzal"/>
</property>
<concept>
<code value="SEVCO:01090"/>
<display value="Comparative Efficacy Goal"/>
<definition
value="A study design feature in which the study intent is to compare two or more interventions with respect to effectiveness in ideal conditions."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Khalid Shahin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-28 vote 8-0 by Jesus Lopez-Alcalde, Philippe Rocca-Serra, Harold Lehmann, Muhammad Afzal, Janice Tufte, Louis Leff, Mario Tristan, Eric M Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-06-28 comment: what does "in ideal conditions" really mean? is it necessary ?"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Efficacy is defined as effectiveness in ideal conditions. In this context, an efficacy goal is a type of effectiveness goal. Efficacy is used to distinguish the context from effectiveness in 'real-world' settings."/>
</property>
</concept>
<concept>
<code value="SEVCO:01092"/>
<display value="Comparative Safety Goal"/>
<definition
value="A study design feature in which the study intent is to compare two or more interventions with respect to harms."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Khalid Shahin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-28 vote 8-0 by Jesus Lopez-Alcalde, Philippe Rocca-Serra, Harold Lehmann, Muhammad Afzal, Janice Tufte, Louis Leff, Mario Tristan, Eric M Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-06-28 comment: no need to be "in ideal conditions" ? see related comment on. "comparative efficacy goal" class textual definition"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:01093"/>
<display value="Equivalence Goal"/>
<definition
value="A study goal with the intent to compare two or more interventions or exposures and determine that any difference in effects is within a prespecified range representing absence of a meaningful difference."/>
<property>
<code value="external-definitions"/>
<valueString
value="MeSH term "Equivalence Trial" https://www.ncbi.nlm.nih.gov/mesh/2023172 Trial that aims to show a new treatment is no better and no worse than the standard treatment.
Year introduced: 2018
Do not include MeSH terms found below this term in the MeSH hierarchy.
Tree Number(s): V03.175.250.500.500.125
MeSH Unique ID: D000073843
Entry Terms:
Non-Inferiority Trial
Noninferiority Trial
Superiority Trial
Equivalence Clinical Trial"/>
</property>
<property>
<code value="comment"/>
<valueString
value="An Equivalence Goal is only applicable with a Comparative study design.
The prespecified range representing absence of a meaningful difference may be defined with an equivalence margin."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-07-12 vote 4-2 by Philippe Rocca-Serra, Jesus Lopez-Alcalde, Paola Rosati, Janice Tufte, Harold Lehmann, Eric Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-07-12 comments: harmonize the definition to match the pattern used for the other terms. e.g. "evaluation goal" is a study goal in which the objective is to assess the efficience, effectivement and impact of a given process, process, person or piece of equipment'
so Equivalence Goal is a study goal in which the study intent is to compare two or more interventions or exposures and determine that any difference in effects is within a prespecified range representing absence of a meaningful difference
I think this definition is unclear. Is the equivalence goal an aim of a study? Which kind of study? My understanding is: Given a prespecified range (of results?) showing an absence of a meaningful (for which kind of subjects/previous research?) difference between two interventions/exposures, the equivalence goal assesses that there is no difference in effects. Is this the meaning of this definition? Which kind of study could give a valid result in terms of equivalence? An RCT?"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-07-19 vote 8-0 by Paola Rosati, Cauê Monaco, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:01094"/>
<display value="Non-inferiority Goal"/>
<definition
value="A study goal with the intent to compare two or more interventions or exposures and determine that any difference in effects is below a prespecified value representing a threshold between a meaningful difference and absence of a meaningful difference."/>
<property>
<code value="external-definitions"/>
<valueString
value="MeSH term "Equivalence Trial" https://www.ncbi.nlm.nih.gov/mesh/2023172 Trial that aims to show a new treatment is no better and no worse than the standard treatment.
Year introduced: 2018
Do not include MeSH terms found below this term in the MeSH hierarchy.
Tree Number(s): V03.175.250.500.500.125
MeSH Unique ID: D000073843
Entry Terms:
Non-Inferiority Trial
Noninferiority Trial
Superiority Trial
Equivalence Clinical Trial"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A Non-inferiority Goal is only applicable with a Comparative study design.
The threshold between a meaningful difference and absence of a meaningful difference may be called a non-inferiority margin."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-07-12 vote 3-2 by Philippe Rocca-Serra, Jesus Lopez-Alcalde, Paola Rosati, Harold Lehmann, Eric Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-07-12 comments: so Non-Inferiorty Goal is a study goal in which....
I have the same doubts already given for the equivalence goal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-07-19 vote 8-0 by Paola Rosati, Cauê Monaco, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:01095"/>
<display value="Superiority Goal"/>
<definition
value="A study goal with the intent to compare two or more interventions or exposures and detect a difference in effects."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Superiority study goal"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="MeSH term "Equivalence Trial" https://www.ncbi.nlm.nih.gov/mesh/2023172 Trial that aims to show a new treatment is no better and no worse than the standard treatment.
Year introduced: 2018
Do not include MeSH terms found below this term in the MeSH hierarchy.
Tree Number(s): V03.175.250.500.500.125
MeSH Unique ID: D000073843
Entry Terms:
Non-Inferiority Trial
Noninferiority Trial
Superiority Trial
Equivalence Clinical Trial"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A Superiority Goal is only applicable with a Comparative study design.
A superiority study goal may be exploratory (to detect a difference) or confirmatory (to establish that a difference exists with a degree of certainty).
A superiority goal is not the opposite of a non-inferiority goal. A superiority goal uses a threshold of zero difference while a non-inferiority goal uses a threshold of a meaningful difference.
Some superiority comparisons are conducted following determination of non-inferiority.
Placebo-controlled trials are typically superiority studies.
Superiority, as commonly used, is 'statistical superiority,' with null used as the threshold of effect. An approach representing 'clinical superiority' would use the non-inferiority margin as the threshold of effect."/>
</property>
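<!-- Editorial illustration (not part of the source CodeSystem): a hypothetical worked example of the threshold distinction described in the comment above, using an assumed margin of 5 percentage points. For a difference in cure rates (new minus standard): a superiority goal concludes a difference exists if the confidence interval excludes 0; a non-inferiority goal concludes the new intervention is not meaningfully worse if the interval lies entirely above -5; an equivalence goal concludes absence of a meaningful difference if the interval lies entirely within -5 to +5. -->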
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Harold Lehmann"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-07-12 vote 4-1 by Philippe Rocca-Serra, Jesus Lopez-Alcalde, Paola Rosati, Harold Lehmann, Eric Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-07-12 comment: so Superiority Goal is a study goal in which...
2022-07-19 comment: alter definition to "...and detect *meaningful* difference in effects" (in order to be consistent with Equivalence and Non-inferiority Study Goals')"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-07-19 vote 9-0 by Paola Rosati, Cauê Monaco, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Eric Harvey, Janice Tufte"/>
</property>
</concept>
</concept>
</concept>
<concept>
<code value="SEVCO:01100"/>
<display value="Allocation ratio"/>
<definition
value="A study design feature describing the intended relative proportion of assignment across groups."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The allocation ratio may be expressed as Treatment:Control, e.g., 2:1, or, in the case of two treatment arms and one control, e.g. 2:2:1."/>
</property>
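<!-- Editorial illustration (not part of the source CodeSystem): a hypothetical worked example of the comment above, using an assumed enrollment of 300 participants. With a 2:1 Treatment:Control allocation ratio, randomization would be expected to assign approximately 200 participants to treatment and 100 to control; with a 2:2:1 ratio across two treatment arms and one control arm, approximately 120, 120, and 60 respectively. -->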
<property>
<code value="approval"/>
<valueString
value="2023-04-10 vote 5-0 by Harold Lehmann, Joanne Dehnbostel, Eric Harvey, Janice Tufte, Jesus Lopez-Alcalde"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-04-10 comment Perhaps add to Comment for Application something like, "The allocation ratio is usually expressed as Treatment:Control, e.g., 2:1 or 2:2:1, in the case of two treatment arms.""/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00001"/>
<display value="Bias"/>
<definition
value="A systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation])."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="False certainty"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Philippe Rocca-Serra, Joanne Dehnbostel, Mario Tristan, Harold Lehmann; Kenneth Wilkins, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 2021-02-26: , Harold Lehmann, Khalid Shahin, Eric Harvey, Jesús López-Alcalde, Joanne Dehnbostel, Muhammad Afzal, Paola Rosati, Eric Au, 5/5 for second sentence as of 8/30/2021: Eric Harvey, Harold Lehmann, Mario Tristan, Bhagvan Kommadi, Janice Tufte"/>
</property>
<concept>
<code value="SEVCO:00002"/>
<display value="Selection Bias"/>
<definition
value="A bias resulting from methods used to select subjects or data, factors that influence initial study participation, or differences between the study sample and the population of interest"/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Asiyah Yu Lin, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 3/5/2021 Eric Au, Alejandro Piscoya, Mario Tristan, Brian Alper, Zbys Fedorowicz, Bhagvan Kommadi, Eric Harvey, Muhammad Afzal"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Selection bias can occur before the study starts (inherent in the study protocol) or after the study starts (during study execution)."/>
</property>
<concept>
<code value="SEVCO:00003"/>
<display value="Participant Selection Bias"/>
<definition
value="A selection bias resulting from methods used to select participating subjects, factors that influence initial study participation, or differences between the study participants and the population of interest"/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Asiyah Yu Lin, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="10/10 as of 3/22/2021 Harold Lehmann, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Ahmad Sofi-Mahmudi, Tatyana Shamliyan, Muhammad Afzal, Paola Rosati, Joanne Dehnbostel, Marc Duteau"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-03-08 vote 7-2 on "A selection bias where key characteristics of the participants differ systematically from the population of interest." by Harold Lehmann, Philippe Rocca-Serra, Joanne Dehnbostel, 2021-03-19 vote 10-1 on "A bias resulting from methods used to select participating subjects, factors that influence initial study participation, or differences between the study participants and the population of interest" by Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Asiyah Yu Lin"/>
</property>
<concept>
<code value="SEVCO:00004"/>
<display value="Inappropriate selection criteria"/>
<definition
value="A selection bias resulting from inclusion and exclusion criteria used to select participating subjects that could result in differences between the study participants and the population of interest."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Selection bias due to inappropriate selection criteria"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Asiyah Yu Lin, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="10/10 as of 3/22/2021 Harold Lehmann, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Ahmad Sofi-Mahmudi, Tatyana Shamliyan, Muhammad Afzal, Paola Rosati, Joanne Dehnbostel, Marc Duteau"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-03-19 vote 9-2 on "A bias resulting from inclusion and exclusion criteria used to select participating subjects that could make the included participants unrepresentative of the population of interest." by Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Asiyah Yu Lin"/>
</property>
</concept>
<concept>
<code value="SEVCO:00005"/>
<display value="Inappropriate sampling strategy"/>
<definition
value="A selection bias resulting from the sampling frame, sampling procedure, or methods used to recruit participating subjects that could result in differences between the study participants and the population of interest."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Biased sampling strategy"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Inappropriate sample frame"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Inappropriate sampling frame"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Inappropriate sampling procedure"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Selection bias due to inappropriate sampling strategy"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="10/10 as of 3/22/2021 Harold Lehmann, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Ahmad Sofi-Mahmudi, Tatyana Shamliyan, Muhammad Afzal, Paola Rosati, Joanne Dehnbostel, Marc Duteau"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-03-19 vote 9-2 on "A bias resulting from the sample frame, sampling procedure, or methods used to recruit participating subjects that could make the included participants unrepresentative of the population of interest." by Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra"/>
</property>
<concept>
<code value="SEVCO:00014"/>
<display
value="Inappropriate data source for participant selection"/>
<definition
value="Participant selection bias due to inappropriate data source for sampling frame."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Participant selection bias due to inappropriate data source for sampling frame"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin, Muhammad Afzal, Bhagvan Kommadi"/>
</property>
<property>
<code value="approval"/>
<valueString
value="6/6 as of 4/12/2021: KM Saif-Ur-Rahman, Bhagvan Kommadi, Joanne Dehnbostel, Paola Rosati, Jesús López-Alcalde, Tatyana Shamliyan"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00006"/>
<display value="Non-representative sample"/>
<definition
value="A selection bias due to differences between the included participants and the population of interest that distorts the research results (estimation of effect, association, or inference), limiting external validity or applicability."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Selection bias due to non-representative sample"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Unrepresentative sample"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Nonrepresentative sample"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="10/10 as of 3/22/2021 Harold Lehmann, Eric Harvey, Bhagvan Kommadi, Jesus Lopez-Alcalde, Ahmad Sofi-Mahmudi, Tatyana Shamliyan, Muhammad Afzal, Paola Rosati, Joanne Dehnbostel, Marc Duteau"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-03-19 vote 10-1 on "Differences between the included participants and the population of interest that distorts the research results (estimation of effect, association, or inference), limiting external validity or applicability." by Brian S. Alper, Tatyana Shamliyan, Bhagvan Kommadi, Muhammad Afzal, Khalid Shahin, Harold Lehmann, Philippe Rocca-Serra"/>
</property>
<concept>
<code value="SEVCO:00008"/>
<display value="Inadequate enrollment of eligible subjects"/>
<definition
value="A selection bias in which insufficient enrollment of eligible subjects results in differences (recognized or unrecognized) between the included participants and the population of interest that distorts the research results."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Selection bias due to inadequate enrollment"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Non-representative sample due to inadequate enrollment"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Philippe Rocca-Serra, Marc Duteau, Khalid Shahin, Asiyah Yu Lin, Muhammad Afzal, Tatyana Shamliyan"/>
</property>
<property>
<code value="approval"/>
<valueString
value="11/11 as of 3/29/2021: Alejandro Piscoya, Eric Harvey, Bhagvan Kommadi, Ahmad Sofi-Mahmudi, Eric Au, Joanne Dehnbostel, Marc Duteau, Brian S. Alper, Jesús López-Alcalde, Tatyana Shamliyan, Paola Rosati"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-03-26 vote 8-2 on "Inadequate enrollment = A selection bias due to a rate of study entry among eligible subjects that is not sufficient for the included sample to be considered representative of the population of interest." by Harold Lehmann, Tatyana Shamliyan, Muhammad Afzal, Eric Au, Paola Rosati, Mario Tristan, Alejandro Piscoya, Bhagvan Kommadi, Jesús López-Alcalde, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00012"/>
<display
value="Non-representative sample due to timing or duration of exposure"/>
<definition
value="A selection bias in which the timing or duration of exposure influences the outcome, and the timing or duration of exposure in the sample does not represent that of the population of interest. This selection bias may occur when the selection for study participation is not coincident with the initiation of the exposure or intervention under investigation."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Mismatch in start of intervention and start of follow-up"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="9/9 as of 4/9/2021: Alejandro Piscoya, KM Saif-Ur-Rahman, Bhagvan Kommadi, Eric Harvey, Joanne Dehnbostel, Paola Rosati, Harold Lehmann, Jesús López-Alcalde, Tatyana Shamliyan"/>
</property>
<concept>
<code value="SEVCO:00013"/>
<display value="Depletion of susceptibles"/>
<definition
value="A non-representative sample due to exclusion of susceptible participants who have already had an outcome due to prior exposure. For example, the inclusion of prevalent users of a medication misrepresents the initial adverse effects rate by excluding persons who do not tolerate the medication."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Prevalent user bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Non-representative sample due to depletion of susceptibles"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="9/9 as of 4/9/2021: Alejandro Piscoya, KM Saif-Ur-Rahman, Bhagvan Kommadi, Eric Harvey, Joanne Dehnbostel, Paola Rosati, Harold Lehmann, Jesús López-Alcalde, Tatyana Shamliyan"/>
</property>
</concept>
</concept>
</concept>
<concept>
<code value="SEVCO:00009"/>
<display
value="Post-baseline factors influence enrollment selection"/>
<definition
value="A selection bias in which factors observed after study entry, baseline, or start of follow-up influence enrollment"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Participant selection bias due to post-baseline factors"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Philippe Rocca-Serra, Marc Duteau, Khalid Shahin, Asiyah Yu Lin, Harold Lehmann, Mario Tristan"/>
</property>
<property>
<code value="approval"/>
<valueString
value="9/9 as of 4/5/2021: Alejandro Piscoya, KM Saif-Ur-Rahman, Bhagvan Kommadi, Eric Harvey, Joanne Dehnbostel, Mario Tristan, Harold Lehmann, Jesús López-Alcalde, Tatyana Shamliyan"/>
</property>
<concept>
<code value="SEVCO:00212"/>
<display
value="Participant selection bias due to early study termination"/>
<definition
value="A selection bias due to premature closure of study enrollment."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Early termination bias affecting enrollment"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="'Early termination bias affecting enrollment' is a type of 'Post-baseline factors influence enrollment selection' which is defined as 'A selection bias in which factors observed after study entry, baseline, or start of follow-up influence enrollment.'
To express bias related to making the decision to terminate a study, use 'Early Study Termination Bias'."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Paul Whaley, Kenneth Wilkins, Muhammad Afzal"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-03-25 vote 7-1 by Muhammad Afzal, Paul Whaley, Mario Tristan, Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-03-25 comment: Recommend simplifying the term and then add your test to the term definition.
For example edit term to Early Study Termination Bias. Term definition should read. Selection Bias due to premature closing of a study enrollment for the participants...."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-04-08 vote 12-0 by Muhammad Afzal, Paul Whaley, Mario Tristan, Joanne Dehnbostel, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, nelle.stocquart, nisha mathew, Harold Lehmann, Cauê Monaco"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00010"/>
<display
value="Factor associated with exposure influences enrollment selection"/>
<definition
value="A selection bias in which a factor associated with the exposure under investigation influences study enrollment"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Participant selection bias due to factor associated wiith exposure"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Khalid Shahin, Harold Lehmann, Mario Tristan, Bhagvan Kommadi, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="9/9 as of 4/5/2021: Alejandro Piscoya, KM Saif-Ur-Rahman, Bhagvan Kommadi, Eric Harvey, Joanne Dehnbostel, Mario Tristan, Harold Lehmann, Jesús López-Alcalde, Tatyana Shamliyan"/>
</property>
</concept>
<concept>
<code value="SEVCO:00011"/>
<display
value="Factor associated with outcome influences enrollment selection"/>
<definition
value="A selection bias in which a factor associated with the outcome under investigation influences study enrollment"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Participant selection bias due to factor associated wiith outcome"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Khalid Shahin, Harold Lehmann, Mario Tristan, Bhagvan Kommadi, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="9/9 as of 4/5/2021: Alejandro Piscoya, KM Saif-Ur-Rahman, Bhagvan Kommadi, Eric Harvey, Joanne Dehnbostel, Mario Tristan, Harold Lehmann, Jesús López-Alcalde, Tatyana Shamliyan"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00015"/>
<display value="Study Selection Bias"/>
<definition
value="A selection bias resulting from factors that influence study selection, from methods used to include or exclude studies for evidence synthesis, or from differences between the study sample and the population of interest"/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin, Muhammad Afzal, Philippe Rocca-Serra"/>
</property>
<property>
<code value="approval"/>
<valueString
value="6/6 as of 4/26/2021: Eric Harvey, Bhagvan Kommadi, Harold Lehmann, Mario Tristan, Jesús López-Alcalde, Tatyana Shamliyan"/>
</property>
<concept>
<code value="SEVCO:00262"/>
<display value="Bias in Study Eligibility Criteria"/>
<concept>
<code value="SEVCO:00273"/>
<display value="Study eligibility criteria not prespecified"/>
</concept>
<concept>
<code value="SEVCO:00274"/>
<display
value="Study eligibility criteria not appropriate for review question"/>
</concept>
<concept>
<code value="SEVCO:00275"/>
<display value="Study eligibility criteria ambiguous"/>
</concept>
<concept>
<code value="SEVCO:00276"/>
<display
value="Study eligibility criteria limits for study characteristics not appropriate"/>
</concept>
<concept>
<code value="SEVCO:00277"/>
<display
value="Study eligibility criteria limits for information sources not appropriate"/>
</concept>
</concept>
<concept>
<code value="SEVCO:00263"/>
<display value="Database search sources not appropriate"/>
</concept>
<concept>
<code value="SEVCO:00264"/>
<display value="Non-database search sources inadequate"/>
</concept>
<concept>
<code value="SEVCO:00265"/>
<display value="Search strategy not sensitive"/>
</concept>
<concept>
<code value="SEVCO:00266"/>
<display
value="Search strategy limits for information sources not appropriate"/>
</concept>
<concept>
<code value="SEVCO:00267"/>
<display value="Study eligibility criteria not adhered to"/>
</concept>
<concept>
<code value="SEVCO:00268"/>
<display value="Error in study selection not minimized"/>
</concept>
<concept>
<code value="SEVCO:00269"/>
<display value="Language bias"/>
</concept>
<concept>
<code value="SEVCO:00270"/>
<display value="Geography bias"/>
</concept>
<concept>
<code value="SEVCO:00271"/>
<display value="One-sided reference bias"/>
</concept>
<concept>
<code value="SEVCO:00272"/>
<display value="Publication Bias affecting study selection"/>
</concept>
</concept>
</concept>
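<!-- Illustrative fragment (not part of this CodeSystem resource): a minimal sketch of a hypothetical
ValueSet compose rule that would select "Selection Bias" (SEVCO:00002) and all of its descendants
using the standard is-a filter. The system URL is a placeholder for this code system's canonical URL. -->
<!--
<compose>
  <include>
    <system value="https://example.org/CodeSystem/sevco"/>
    <filter>
      <property value="concept"/>
      <op value="is-a"/>
      <value value="SEVCO:00002"/>
    </filter>
  </include>
</compose>
-->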
<concept>
<code value="SEVCO:00016"/>
<display value="Confounding Covariate Bias"/>
<definition
value="A situation in which the effect or association between an exposure and outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Philippe Rocca-Serra, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-07-14 vote 5-0 by Paul Whaley, Harold Lehmann, Cauê Monaco, Jesus Lopez-Alcalde, Paola Rosati"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-05-07 vote 4-2 on "Comparator Bias = A bias resulting from differences (other than in variables directly involved in the analysis) between the groups being compared." by KM Saif-Ur-Rahman, Harold Lehmann, Alejandro Piscoya, Paola Rosati, Tatyana Shamliyan, Bhagvan Kommadi
2021-05-10 vote 11-1 on "Confounding Covariate Bias = A bias resulting from differences in covariates (variables other than the exposure and outcome) between the groups being compared." by Eric Harvey, KM Saif-Ur-Rahman, Janice Tufte, Bhagvan Kommadi, Paola Rosati, Alejandro Piscoya, Harold Lehmann, Ahmad Sofi-Mahmudi, Eric Au, Jesus Lopez-Alcalde, Tatyana Shamliyan, Joanne Dehnbostel
AGREEMENT VOTE 8/8 as of 5/17/2021: Tatyana Shamliyan, Janice Tufte, Mario Tristan, Bhagvan Kommadi, Jesús López-Alcalde, Isaac Fwemba, Eric Harvey, Paola Rosati
On 2023-06-16 the Steering Group corrected a technical error in the definition (between A or B ... corrected to ... between A and B), and added a Comment for Application, so re-opened the term for vote."/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="A bias resulting from differences (other than in variables directly involved in the analysis) between the groups being compared. ---led to --- Which differences do you mean between the groups? This definition seems unclear. Defining a Comparator bias means to addresss some possible specific explanation. Or it is preferable to delete this bias. The definition is for selection bias resulting from nonrandom allocation of participants to interventions. Random allocation of trial participants to interentions would reduce this bias. Comprator seletion would not. A bias resulting from differences in covariates (variables other than the exposure and outcome) between the groups being compared -- led to I agree with the definition but I suggest detailing that the covariate is associated to the outcome"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Association of any two variables includes direct associations and indirect associations through each of the variables having direct associations with a third variable."/>
</property>
<concept>
<code value="SEVCO:00032"/>
<display value="Allocation Bias"/>
<definition
value="A confounding covariate bias resulting from methods for assignment of the independent variable by the investigator to evaluate a response or outcome."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 5/17/2021: Tatyana Shamliyan, Janice Tufte, Mario Tristan, Bhagvan Kommadi, Jesús López-Alcalde, Isaac Fwemba, Eric Harvey, Paola Rosati"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-05-07 vote 5-1 on "Comparator Selection Bias = A comparator bias resulting from methods for selection of or allocation to groups for comparative analysis that have the potential to introduce differences (other than in variables directly involved in the analysis) between the groups being compared." by KM Saif-Ur-Rahman, Harold Lehmann, Alejandro Piscoya, Paola Rosati, Tatyana Shamliyan, Bhagvan Kommadi, 2021-05-10 vote 11-1 on "Allocation Bias = A confounding covariate bias resulting from methods for assignment of exposures in an interventional study." by Eric Harvey, KM Saif-Ur-Rahman, Janice Tufte, Bhagvan Kommadi, Paola Rosati, Alejandro Piscoya, Harold Lehmann, Ahmad Sofi-Mahmudi, Eric Au, Jesus Lopez-Alcalde, Tatyana Shamliyan, Joanne Dehnbostel"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="A comparator bias resulting from methods for selection of or allocation to groups for comparative analysis that have the potential to introduce differences (other than in variables directly involved in the analysis) between the groups being compared. -- led to--- Selection of comparators would not reduce differences between compared groups. A confounding covariate bias resulting from methods for assignment of exposures in an interventional study. --led to-- In my opinion, in an interventional study the investigator assigns the intervention, not the exposures. The differences in the covariates results from the methods for the assignment of the intervention. For example not concealed allocation."/>
</property>
<concept>
<code value="SEVCO:00031"/>
<display value="Inadequate allocation concealment"/>
<definition
value="An allocation bias resulting from awareness of the assigned intervention before study enrolment and intervention assignment"/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Philippe Rocca-Serra"/>
</property>
<property>
<code value="approval"/>
<valueString value="10/10 as of 6/11/2021: Names not captured"/>
</property>
</concept>
<concept>
<code value="SEVCO:00278"/>
<display value="Bias due to non-randomized allocation"/>
<definition
value="An allocation bias resulting from a process of assigning participants or subjects to different groups or conditions which is not random."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Nonrandom allocation bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Non-random allocation bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Bias due to non-random allocation"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A confounding covariate bias is defined as a situation in which the effect or association between an exposure and outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared.
Allocation bias is defined as a confounding covariate bias resulting from *methods for assignment* of the independent variable by the investigator to evaluate a response or outcome.
Methods for assignment that are not random may introduce confounding with measured or unmeasured variables.
Non-random methods of generation of an allocation sequence may introduce a confounding covariate bias through associations with one or more non-random variables related to sequence generation. A non-random allocation sequence may be described as a predictable sequence in mathematical terms.
The SEVCO term [Quasi-Randomized assignment](https://fevir.net/resources/CodeSystem/27270#SEVCO:01004) is defined as an interventional study design with a method of allocation that is not limited to random chance but is intended to produce similar baseline groups for experimentation. Although Quasi-Randomized assignment is "intended to produce similar baseline groups" the term is classified as a type of [Non-randomized assignment](https://fevir.net/resources/CodeSystem/27270#SEVCO:01005). Examples of non-random methods (which may be called 'partially randomized' or 'quasi-random') include every other participant, day of the week, even/odd identification number, birth date, etc."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-05-12 vote 4-1 by Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Jesus Lopez-Alcalde, Harold Lehmann
2023-05-26 vote 5-1 by Harold Lehmann, Jesus Lopez-Alcalde, Sunu Alice Cherian, Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel
2023-06-09 vote 4-1 by Eric Harvey, Cauê Monaco, Paul Whaley, Sunu Alice Cherian, Harold Lehmann
2023-06-16 vote 3-2 by Eric Harvey, Paul Whaley, Sunu Alice Cherian, Harold Lehmann, Paola Rosati
2023-07-14 vote 4-1 by Jesus Lopez-Alcalde, Paul Whaley, Harold Lehmann, Cauê Monaco, Paola Rosati"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-05-12 comment: Does this term definition actually define "inadequate" = potentially predictable sequence ? seems like an innapropriate allocation bias perhaps
2023-05-26 comment: Defintion: Methods of allocating study participants to treatment comparison groups that are not random, but are intended to produce similar groups.
Alternative terms: Quasi random allocation
2023-06-09 comment: The comment for application describes "unrecognised associations", but the definition talks about "potentially predictable", which implies exploiting a recognised association to break blinding. I am not sure it can be both of these.
2023-06-16 comments:
Type of bias that arises in research studies when the process of assigning participants or subjects to different groups or conditions is not random.
I think I remember my original concern now - in the definition, the problem is not that the sequence is predictable, it is that the sequence is associated with another variable, thus introducing this other variable as a confounder. Unless it is about the investigator being able to break blinding, in which case the concept of the sequence being predictable is important.
2023-07-14 comment: I think non-random methods are those clearly non-random, such as allocation by provider's preferences. However, quasi-random methods are those that apply a method that attempts to be random but that it isn't. Example: day of the week.
2023-07-28 comment: For consistency, should we call it, "Confounding Bias due to non-randomized allocation"?"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-07-28 vote 5-0 by Brian S. Alper, Paul Whaley, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00033"/>
<display value="Comparator Selection Bias"/>
<definition
value="A confounding covariate bias resulting from methods used to select participating subjects, or factors that influence study participation, for the comparator group."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Comparison Group Selection Bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Comparator Group Selection Bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Comparison Selection Bias"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 5/17/2021: Tatyana Shamliyan, Janice Tufte, Mario Tristan, Bhagvan Kommadi, Jesús López-Alcalde, Isaac Fwemba, Eric Harvey, Paola Rosati"/>
</property>
<property>
<code value="comment"/>
<valueString
value="This situation is more commonly related to observational research."/>
</property>
</concept>
<concept>
<code value="SEVCO:00034"/>
<display value="Confounding difference"/>
<definition
value="A confounding covariate bias in which the unequal distribution of a potentially distorting variable is recognized."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Recognized Difference with Potential for Confounding"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Recognized confounding difference"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Philippe Rocca-Serra"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-05-07 vote 5-1 on "Recognized Difference with Potential for Confounding = A comparator bias resulting from known differences (other than in variables directly involved in the analysis) between the groups being compared." by KM Saif-Ur-Rahman, Harold Lehmann, Alejandro Piscoya, Paola Rosati, Tatyana Shamliyan, Bhagvan Kommadi, , 2021-05-24 vote 6-1 on "A confounding covariate bias in which the unequal distribution of a potentially distorting variable is recognized." by Harold Lehmann, Eric Harvey, KM Saif-Ur-Rahman, Bhagvan Kommadi, janice tufte, Paola Rosati, Jesus Lopez-Alcalde"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="A comparator bias resulting from known differences (other than in variables directly involved in the analysis) between the groups being compared. -- led to-- This defintion seems tricky. If you find any diference between groups that can go astray with analysis you simply address the potention for confounding explicitly in the discussion session of yoru protocol/paper The potnetial for confounding needs to be consideriend in the protocol, and specifically addresssed int eh post-analysis to avoid any further bias. The term comparator bias is misleading since differnece between groups would not be reduced by selecting different comparators. If this is recognized and adjusted for, is it still a bias? Seems that we need to address this circumstance."/>
</property>
<property>
<code value="comment"/>
<valueString
value="The potentially distorting variable is a covariate, and not the exposure or the outcome. Even if adjusted for in the analysis, a risk of bias can be present."/>
</property>
</concept>
<concept>
<code value="SEVCO:00280"/>
<display value="Confounding by time of observation"/>
<definition
value="A confounding covariate bias in which the distorting variable is the time at which the outcome is measured or observed."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Confounding by follow-up time"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="A confounding covariate bias is defined as a situation in which the effect or association between an exposure and outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared.
The time at which the outcome is measured or observed may be absolute (e.g. a specific date) or relative (e.g. 3 months after study enrollment).
To understand "confounding by time of observation" consider the following example:
An observational study is comparing patients with asthma taking Superdrug and patients with asthma not taking Superdrug. The outcome of interest is mortality. The patients taking Superdrug are observed for their full duration of exposure to Superdrug. For comparison, the control group not receiving Superdrug is measured during a 1-year calendar period.
For the mortality outcome comparing Superdrug vs. no Superdrug, the time of observation for the control group is consistently 1 year but for the Superdrug group the time of observation varies for each patient. This comparison is confounded by the time of observation."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Paul Whaley"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-06-09 vote 3-1 by Eric Harvey, Cauê Monaco, Paul Whaley, Sunu Alice Cherian
2023-06-16 vote 4-1 by Paola Rosati, Eric Harvey, Paul Whaley, Sunu Alice Cherian, Harold Lehmann
2023-07-14 vote 7-0 by Muhammad Afzal, Joanne Dehnbostel, Khalid Shahin, Jesus Lopez-Alcalde, Paul Whaley, Harold Lehmann, Cauê Monaco
THEN REOPENED
2023-08-04 due to comment that suggests removing parenthetical from definition"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-06-09 comment: The comment for application is not sufficiently informative. I am also not sure I understand what the definition means - what is the importance of recognition of unequal distribution of follow-up time?
2023-06-16 comments: A confounding that occurs when the relationship between an exposure or intervention and an outcome is confounded by the time at which the outcome is measured or observed.
Alternate terms: time-varying confounding
Comment for application: This occurs when both the exposure and the outcome change over time, and there are other time-dependent factors that influence the outcome
The Comment for Application seems to be repeating the definition of the parent term. I thought we usually add details specific to the current term."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-06 vote 5-0 by Jesus Lopez-Alcalde, Eric Harvey, Paul Whaley, Harold Lehmann, Mario Tristan"/>
</property>
</concept>
<concept>
<code value="SEVCO:00281"/>
<display value="Lead time bias"/>
<definition
value="A confounding covariate bias in which the distorting variable is the length of time that the participant has had the condition of interest at study enrollment."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Starting time bias"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="A Confounding Covariate Bias is defined as a situation in which the effect or association between an exposure or outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared.
A lead time bias is often manifest as a distortion overestimating the apparent time surviving with a disease caused by bringing forward the time of its diagnosis (https://catalogofbias.org/biases/lead-time-bias/).
Lead time bias is a type of bias that occurs in medical screening or diagnostic tests when the early detection of a disease or condition artificially appears to improve survival or prognosis, even if it does not actually provide a true benefit in terms of overall health outcomes.
Lead time refers to the amount of time between the detection of a disease through early screening or diagnostic testing and the time when the disease would have been clinically detected without screening."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Paul Whaley"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="Lead time bias
A distortion overestimating the apparent time surviving with a disease caused by bringing forward the time of its diagnosis
https://catalogofbias.org/biases/lead-time-bias/"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-06-09 vote 3-1 by Eric Harvey, Cauê Monaco, Paul Whaley, Sunu Alice Cherian
2023-06-16 vote 3-2 by Eric Harvey, Paul Whaley, Sunu Alice Cherian, Harold Lehmann, Paola Rosati
2023-07-14 vote 6-0 by Muhammad Afzal, Joanne Dehnbostel, Jesus Lopez-Alcalde, Paul Whaley, Harold Lehmann, Cauê Monaco
THEN REOPENED
2023-08-04 due to comment that suggests removing parenthetical from definition"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-06-09 comment: I am not sure I can successfully parse the syntax of the definition. While I think I understand what is meant, I feel it could be phrased more clearly.
2023-06-16 comments: Lead time bias is a type of bias that occurs in medical screening or diagnostic tests when the early detection of a disease or condition artificially appears to improve survival or prognosis, even if it does not actually provide a true benefit in terms of overall health outcomes
Comment for application: Lead time refers to the amount of time between the detection of a disease through early screening or diagnostic testing and the time when the disease would have been clinically detected without screening.
This definition seems difficult to understand: does it convey that lead time bias is related to the potentially distorting variable of the length of time chosen in the study in which some participants could have confounding differences between their diagnosis of the condition of interest and the time of enrolment?
I have some problem in understanding, sorry."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-08-18 vote 5-0 by Paul Whaley, Eric Harvey, Mario Tristan, Cauê Monaco, Harold Lehmann"/>
</property>
</concept>
<concept>
<code value="SEVCO:00282"/>
<display value="Confounding influencing adherence to intervention"/>
<definition
value="A confounding covariate bias in which the distorting variable is associated with deviations from the intended intervention."/>
<property>
<code value="comment"/>
<valueString
value="A confounding covariate bias is defined as a situation in which the effect or association between an exposure or outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared.
For 'Confounding influencing adherence to intervention', the association of the distorting variable and the exposure is specific to deviations from the intended exposure (intended intervention). Deviations from the intended intervention may include deviations from the intervention protocol or lack of adherence. Lack of adherence includes imperfect compliance, cessation of intervention, crossovers to the comparator intervention and switches to another active intervention.
The term 'Confounding influencing adherence to intervention' is distinct from 'Performance Bias' (including 'Nonadherence of participants' or 'Imbalance in deviations from intended interventions') in that an additional variable (the distorting variable or confounding covariate) is acting as a confounder, while the 'Performance Bias' may occur with or without any differences in a third variable."/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="trigger question from ROBINS-I: 1.3. Were intervention discontinuations or switches likely to be related to factors that are prognostic for the outcome?"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-06-02 comment from steering group: need to see the background to ROBINS-I to understand context for this term
2023-07-28 comment: I think the definition is good but the comment for application should specifically address this term and not just duplicate the definition of confounding covariate bias.
2023-08-11 comment: Is this the same as compliance bias, or is compliance bias ("https://catalogofbias.org/biases/compliance-bias/") a subtype of this? If "compliance bias" is a synonym, it should be added as such. If not, it should be added as a separate term"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian Alper, Joanne Dehnbostel, Harold Lehmann, Paul Whaley, Kenneth Wilkins"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-07-28 vote 3-1 by Eric Harvey, Harold Lehmann, Jesus Lopez-Alcalde, Paul Whaley
2023-08-04 vote 5-0 by Joanne Dehnbostel, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann, Mario Tristan BUT comment suggests removing parenthetical from definition
2023-08-11 vote 3-1 by Mario Tristan, Cauê Monaco, Eric Harvey, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-09-29 vote 5-0 by Joanne Dehnbostel, Harold Lehmann, Paul Whaley, Eric Harvey, Mario Tristan"/>
</property>
</concept>
<concept>
<code value="SEVCO:00284"/>
<display value="Confounding by indication"/>
<definition
value="A confounding covariate bias in which the distorting variable is the reason for receiving an exposure."/>
<property>
<code value="external-definitions"/>
<valueString
value="Confounding by indication
A distortion that modifies an association between an exposure and an outcome, caused by the presence of an indication for the exposure that is the true cause of the outcome.
from https://catalogofbias.org/biases/confounding-by-indication/"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A Confounding Covariate Bias is defined as a situation in which the effect or association between an exposure or outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared.
The term 'indication' is derived from the medical community, in which the reason that an intervention is provided is called the indication. A reason for not providing an intervention may be called a 'contraindication' rather than 'indication to not provide'.
For example, people exposed to chemotherapy have higher mortality. This observation can easily be confounded by people exposed to chemotherapy having a higher rate of cancer (as the reason for receiving the chemotherapy)."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-08-18 vote 5-0 by Paul Whaley, Eric Harvey, Mario Tristan, Cauê Monaco, Joanne Dehnbostel"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-05-12 vote 5-0 by Muhammad Afzal, Brian S. Alper, Joanne Dehnbostel, Jesus Lopez-Alcalde, Harold Lehmann
BUT THEN TERM CHANGED WITH HIERARCHY CHANGE on 2023-06-30
2023-07-14 vote 2-1 by Jesus Lopez-Alcalde, Paul Whaley, Harold Lehmann
2023-07-28 vote 3-1 by Paul Whaley, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann
2023-08-04 vote 5-0 by Joanne Dehnbostel, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann, Mario Tristan BUT comment suggests removing parenthetical from definition"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-05-12 comment: For Comment for Application, I thought we usually put the definition of the parent term first, and the comments about this child. So I would arrange the current 1, 2, 3 paragraphs as 2, 1, 3.
And I think what is now the first paragraph should start with, "A confounding different bias..."
2023-07-01 comment: I would add to the definition "or lack of". Thus: "A confounding covariate bias in which the confounder (distorting variable) is the reason for (or for lack of) an intended exposure.
2023-07-14 comment: I think the definition is good but the comment for application should specifically address this term in more detail than providing a definition for "indication". It is a complex concept and I am not sure I understand what is happening with this bias.
2023-07-28 comment: I still feel that an example of how the reason for receiving an exposure can end up being a confounder would be helpful."/>
</property>
</concept>
<concept>
<code value="SEVCO:00388"/>
<display value="Confounding by contraindication"/>
<definition
value="A confounding covariate bias in which the distorting variable is the reason for not receiving an exposure."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Caue Monaco"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A Confounding Covariate Bias is defined as a situation in which the effect or association between an exposure or outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared.
The term 'indication' is derived from the medical community, in which the reason that an intervention is provided is called the indication. A reason for not providing an intervention may be called a 'contraindication' rather than 'indication to not provide'.
For example, people with cancer exposed to surgery for curative resection have lower mortality than other people with cancer. This observation can easily be confounded by people exposed to surgery for curative resection having a lower rate of metastatic cancer (which is a contraindication to such a surgery)."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-07-28 vote 3-1 by Paul Whaley, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann
2023-08-04 vote 5-0 by Joanne Dehnbostel, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann, Mario Tristan BUT comment suggests removing parenthetical from definition"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-07-28 comment: I still feel that an example of how the reason for receiving an exposure can end up being a confounder would be helpful."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-08-18 vote 5-0 by Paul Whaley, Eric Harvey, Mario Tristan, Cauê Monaco, Joanne Dehnbostel"/>
</property>
</concept>
<concept>
<code value="SEVCO:00390"/>
<display value="time-varying confounding affected by past exposure"/>
<definition
value="A confounding covariate bias in which the distorting variable is itself influenced by the exposure."/>
<property>
<code value="comment"/>
<valueString
value="Confounding Covariate Bias is defined as a situation in which the effect or association between an exposure and outcome is distorted by another variable. For confounding covariate bias to occur the distorting variable must be (1) associated with the exposure and the outcome, (2) not in the causal pathway between exposure and outcome, and (3) unequally distributed between the groups being compared.
To distinguish "confounding by time of observation" from "time-varying confounding affected by past exposure" consider the following example:
An observational study is comparing patients with asthma taking Superdrug and patients with asthma not taking Superdrug. The outcome of interest is mortality, both for association with the dose of Superdrug and compared to not receiving Superdrug. For comparison, the control group not receiving Superdrug is measured during a 1-year calendar period.
For the mortality outcome comparing Superdrug vs. no Superdrug, the time of observation for the control group is consistently 1 year but for the Superdrug group the time of observation varies for each patient. This comparison is confounded by the time of observation.
For the mortality outcome comparing high-dose vs. low-dose Superdrug, the confounding variable of asthma exacerbation rate is complicated in several ways. First, the asthma exacerbation rate is associated with the outcome (mortality) independently of the effects of Superdrug. Second, the asthma exacerbation rate may influence the exposure (the dose of Superdrug, which is increased if asthma exacerbations are frequent), and the exposure (a higher dose of Superdrug) may influence the confounder (reducing the asthma exacerbation rate). This comparison of high-dose vs. low-dose Superdrug for effects on mortality is distorted by time-varying confounding affected by past exposure."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Harold Lehmann, Caue Monaco"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-09-01 comment (with No vote): This term seems unnecessary. Describes a bias rarely seen."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-06 vote 5-0 by Jesus Lopez-Alcalde, Eric Harvey, Paul Whaley, Harold Lehmann, Mario Tristan"/>
</property>
</concept>
</concept>
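<!-- Illustrative fragment (not part of this CodeSystem resource): a minimal sketch of a
terminology-server $lookup request that would return the definition and the "comment" property
recorded above for "Confounding Covariate Bias" (SEVCO:00016). The base URL and system URL are
placeholders, not asserted endpoints.
GET [base]/CodeSystem/$lookup?system=https://example.org/CodeSystem/sevco&code=SEVCO:00016&property=comment
-->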
<concept>
<code value="SEVCO:00017"/>
<display value="Performance Bias"/>
<definition
value="A bias resulting from differences between the received exposure and the intended exposure."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Study Exposure Adherence Bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Intervention Adherence Bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Compliance Bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Performance Adherence Bias"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Philippe Rocca-Serra"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-05-24 vote 5-2 on "A bias resulting from differences between the received exposure and the intended exposure. Such differences could be the administration of additional interventions that are inconsistent with the study protocol, or non-adherence by the interventionalists or study participants to their assigned intervention. " by Harold Lehmann, Eric Harvey, KM Saif-Ur-Rahman, Bhagvan Kommadi, janice tufte, Paola Rosati, Jesus Lopez-Alcalde"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="Definition of performance bias should be modified, Performance bias should involve the blinding at participant level and implementer level in definition.I would add that the differences must be present between the study arms In a RCT with an active control (for example drug A vs drug B) both study arms may have had low adherence but if these deviations from the protocol occurred homogeneously accross arms the effect estimate may not be distorted (biased). As a reviewer, I would not penalise this estimate due to high risk of performance bias. So, concerning the definition, I would propose "A bias resulting from differences accross the study arms between the [...]""/>
</property>
<property>
<code value="comment"/>
<valueString
value="Such differences could be the administration of additional interventions that are inconsistent with the study protocol, or non-adherence by the interventionalists or study participants to their assigned intervention. Such differences may occur based on assignment to intervention or may occur due to adherence to intervention."/>
</property>
<concept>
<code value="SEVCO:00035"/>
<display value="Inadequate blinding of participants"/>
<definition
value="A performance bias due to awareness of the allocated intervention by participants"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Inadequate masking of participants"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Lack of blinding of participants"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi, Philippe Rocca-Serra"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 6/14/2021: Eric Harvey, Eric Au, Bhagvan Kommadi, Ahmad Sofi-Mahmudi, Erfan Shamsoddin, Janice Tufte, Joanne Dehnbostel, Leo Orozco,"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-06-07 vote 7-1 on "Inadequate blinding of participants = A performance bias due to awareness of the allocated intervention by participants" by KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati, 2021-06-11 vote 9-1 on same"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="Need to distinguish blinding of intervention from blinding of allocation Inadequate blinding of participants does not always imply bias. Besides, it can also imply detection bias in patient reported outcomes"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Inadequate blinding of participants is applied when there is awareness of assigned intervention AFTER intervention assignment. If there is awareness BEFORE study enrolment and intervention assignment, this would be Inadequate allocation concealment.
The term "Inadequate blinding of participants" is used to denote the TYPE of bias. Separate terms for the RATING of risk of bias are used to report the likelihood of the presence and influence of the type of bias."/>
</property>
</concept>
<concept>
<code value="SEVCO:00036"/>
<display value="Inadequate blinding of intervention deliverers"/>
<definition
value="A performance bias due to awareness of the allocated intervention by individuals providing or delivering the intervention"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Inadequate masking of intervention deliverers"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Lack of blinding of intervention deliverers"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi, Philippe Rocca-Serra"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 6/14/2021: Eric Harvey, Eric Au, Bhagvan Kommadi, Ahmad Sofi-Mahmudi, Erfan Shamsoddin, Janice Tufte, Joanne Dehnbostel, Leo Orozco,"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-06-07 vote 7-1 on "Inadequate blinding of participants = A performance bias due to awareness of the allocated intervention by individuals providing or delivering the intervention" by KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati, 2021-06-11 vote 9-1 on same"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="Need to distinguish blinding of intervention from blinding of allocation; Should we use the term interventionalist or interventionist? Inadequate blinding of intervention deliverers does not always imply Performance bias"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Inadequate blinding of intervention deliverers is applied when there is awareness of assigned intervention AFTER intervention assignment. If there is awareness BEFORE study enrolment and intervention assignment, this would be Inadequate allocation concealment.
The term noted here is used to denote the TYPE of bias. Separate terms for the RATING of risk of bias are used to report the likelihood of the presence and influence of the type of bias."/>
</property>
</concept>
<concept>
<code value="SEVCO:00037"/>
<display value="Deviation from study intervention protocol"/>
<definition
value="A performance bias in which the intervention received differs from the intervention specified in the study protocol"/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati"/>
</property>
</concept>
<concept>
<code value="SEVCO:00038"/>
<display value="Deviation from standard of care"/>
<definition
value="A performance bias in which the intervention or exposure received differs from the from the usual practice or expected care"/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati"/>
</property>
</concept>
<concept>
<code value="SEVCO:00039"/>
<display value="Nonadherence of implementation"/>
<definition
value="A performance bias in which the intervention deliverers do not completely adhere to the expected intervention"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Nonadherence of interventionalist"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati"/>
</property>
<property>
<code value="expert-comments"/>
<valueString value="interventionist vs. intervention deliverer"/>
</property>
</concept>
<concept>
<code value="SEVCO:00040"/>
<display value="Nonadherence of participants"/>
<definition
value="A performance bias in which the participants do not completely adhere to the expected intervention or exposure"/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati"/>
</property>
<property>
<code value="expert-comments"/>
<valueString value="is known or unknown"/>
</property>
</concept>
<concept>
<code value="SEVCO:00041"/>
<display value="Imbalance in deviations from intended intervention"/>
<definition
value="A performance bias in which the degree of performance bias is unequally distributed between groups being compared"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Asymmetry in adherence between groups"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Erfan Shamsoddin, Bhagvan Kommadi"/>
</property>
<property>
<code value="approval"/>
<valueString
value="8/8 as of 6/7/2021: KM Saif-Ur-Rahman, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00019"/>
<display value="Attrition Bias"/>
<definition
value="A bias due to absence of expected participation or data collection after selection for study inclusion."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Missing data bias"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Philippe Rocca-Serra, Mario Tristan, Janice Tufte, Erfan Shamsoddin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="13/13 as of 6/18/2021: Eric Au, Harold Lehmann, Erfan Shamsoddin, Ahmad Sofi-Mahmudi, Mario Tristan, Eric Harvey, Sebastien Bailly, Bhagvan Kommadi, Leo Orozco, Alejandro Piscoya, Jesús López-Alcalde, Janice Tufte, Paola Rosati"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="6/14/2021-06-14 vote 7-1 on "Attrition Bias = A bias due to absence of expected participation or data collection after study enrollment." by, Eric Harvey, Eric Au, Bhagvan Kommadi, Ahmad Sofi-Mahmudi, Erfan Shamsoddin, Janice Tufte, Joanne Dehnbostel, Leo Orozco,"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="The phrase "after study enrolment" might be confusing. Does enrolment apply to retrospective observational studies?"/>
</property>
<concept>
<code value="SEVCO:00286"/>
<display value="Attrition bias due to participant attrition"/>
<definition
value="A bias due to absence of expected participation due to participant dropout, withdrawal or non-participation after selection for study inclusion."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-04-28 comment: I would not detail that 20%: it is misleading and not evidence-based"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-05-12 vote 6-0 by Muhammad Afzal, Brian S. Alper, Joanne Dehnbostel , Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00287"/>
<display value="Attrition bias due to missing data"/>
<definition
value="A bias due to data loss or absence of data collection from participants after selection for study inclusion."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-04-28 comment: I would not detail that 20%: it is misleading and not evidence-based"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-05-12 vote 6-0 by Muhammad Afzal, Brian S. Alper, Joanne Dehnbostel , Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
<concept>
<code value="SEVCO:00386"/>
<display value="Attrition bias due to missing outcome data"/>
<definition
value="An attrition bias due to missing data specific to the dependent variable."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-05-19 5-1 Muhammad Afzal, Janice Tufte, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann, Joanne Dehnbostel
2023-06-09 vote 3-1 by Eric Harvey, Cauê Monaco, Paul Whaley, Harold Lehmann"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="The information in the parentheses, "(or data on an independent variable)," is unclear in its intended meaning. To improve clarity, we could revise the definition.
2023-06-09 comment: The definition is too difficult to parse, and probably too similar to the preferred term. The comment for application is also very difficult to read."/>
</property>
<property>
<code value="comment"/>
<valueString
value="In a situation of repeated measures outcomes, attrition bias due to missing outcome data can occur if one or more measurements are missing."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-06-16 vote 5-0 by Joanne Dehnbostel, Eric Harvey, Paul Whaley, Sunu Alice Cherian, Harold Lehmann"/>
</property>
</concept>
<concept>
<code value="SEVCO:00288"/>
<display value="Attrition bias due to missing exposure data"/>
<definition
value="An attrition bias due to missing data specific to the independent variable(s) of primary interest, such as exposure or intervention."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Attrition bias due to missing intervention data"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin, Paul Whaley"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-05-19 5-1 Muhammad Afzal, Janice Tufte, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann, Joanne Dehnbostel"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="The information in the parentheses, "(or data on an independent variable)," is unclear in its intended meaning. To improve clarity, we could revise the definition.
2023-06-09 comment: This needs a comment for application, but the definition is clearer than for "attrition bias due to missing outcome data"."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-06-09 vote 6-0 by Cauê Monaco, Eric Harvey, Paul Whaley, Harold Lehmann, Jesus Lopez-Alcalde, Sunu Alice Cherian"/>
</property>
<property>
<code value="comment"/>
<valueString
value="If coding a bias related to the classification of exposure, misclassification of exposure may be coded as Exposure Detection Bias, but if the data is excluded from analysis it may then be coded as Attrition bias due to missing exposure data."/>
</property>
</concept>
<concept>
<code value="SEVCO:00289"/>
<display value="Attrition bias due to missing modifier data"/>
<definition
value="An attrition bias due to missing data specific to a confounder or effect modifier"/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The term modifier is intended to be broad, including variables used for modeling interactions, stratification factors to account for effect modification, or other variables such as mediators that need to be accounted for when modeling the relationship between the outcome and exposure."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-05-12 vote 5-0 by Joanne Dehnbostel, Janice Tufte, Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00387"/>
<display
value="Attrition bias due to missing data about attrition"/>
<definition
value="An attrition bias due to missing data specific to the extent of or reasons for missing data."/>
<property>
<code value="comment"/>
<valueString
value="Attrition bias due to missing data is defined as a bias due to data loss or absence of data collection from participants after selection for study inclusion. Data about the amount of missing data and data about the reasons for missing data are types of data that can also be missing. For example, in a time-to-event study, the reason a participant is censored might be missing and missing such data may interfere with distinguishing informative from non-informative censoring."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-06-16 vote 5-0 by Joanne Dehnbostel, Eric Harvey, Paul Whaley, Sunu Alice Cherian, Harold Lehmann"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-06-16 comments: Time-to-event should be hyphenated
Funnily enough, this came up straight after our call in relation to another bias project I am working on, so I would consider this addition useful!"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00290"/>
<display value="Imbalance in missing data"/>
<definition
value="An attrition bias in which the degree of missing data is unequally distributed between groups being compared."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-05-12 vote 5-0 by Joanne Dehnbostel, Janice Tufte, Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00291"/>
<display value="Inadequate response rate"/>
<definition
value="An attrition bias in which the reason for absence of data collection is a low response rate to data collection surveys."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-05-12 vote 5-0 by Joanne Dehnbostel, Janice Tufte, Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
</concept>
</concept>
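<!-- Illustrative sketch only (assumption, not part of the defined code system content):
     one way a concept from this code system, such as SEVCO:00019 (Attrition Bias),
     might be referenced as a Coding in a resource recording a risk-of-bias assessment.
     The containing resource and element context are assumed, not specified here.
<coding>
  <system value="https://fevir.net/resources/CodeSystem/181513"/>
  <code value="SEVCO:00019"/>
  <display value="Attrition Bias"/>
</coding>
-->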
<concept>
<code value="SEVCO:00020"/>
<display value="Detection Bias"/>
<definition
value="A bias due to distortions in any process involved in the determination of the recorded values for a variable."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Philippe Rocca-Serra, Mario Tristan, Harold Lehmann, Janice Tufte, Muhammad Afzal; Paul Whaley"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="6/14/2021-06-14 vote 7-1 on "Detection Bias = A bias due to distortions in how variable values (data) are determined (measured, classified or ascertained)." by, Eric Harvey, Eric Au, Bhagvan Kommadi, Ahmad Sofi-Mahmudi, Erfan Shamsoddin, Janice Tufte, Joanne Dehnbostel, Leo Orozco
AGREEMENT REACHED 10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper --- for DEFINITION OF: A bias due to distortions in how variable values (data) are determined. COMMENT FOR APPLICATION: Determination may include ascertainment or assessment (classification or measurement).
2022-10-14 vote 3-1 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paul Harris
2022-01-21 vote 6-1 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paul Harris, Paul Whaley, Alejandro Piscoya, Philippe Rocca-Serra"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="We need to state that this bias relates to the "outcome" -- The ROB-1 says the term "outcome assessment" as an alternative for detection bias. The ROBINS-1 says that "Non-differential misclassification is unrelated to the outcome and will usually bias the estimated effect of intervention towards the null". Still though, this leads to inadvertent deviations in the outcome assessment. I would suggest to at least state that this bias relates to outcome assessment. I remember Joanne saying that we will add a few "child concepts" later on and if that is the case here, then it is fine. Nevertheless, the RoB2 suggests not to use these terms to prevent "confusion" and does not actually agree with these sub-classifications (the fist page of the introduction section). Alternative terms according to (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5130591/): "Observer Bias", "Ascertainment Bias", or "Assessment Bias" 8/27/21 comment: Detection bias is no included in the list the more problematic Cochrane ROB1 however (Jørgensen et al. Systematic Reviews (2016) describe alll the domains of ROB1 "frequently implemented in a non-recommended way" The description in general is clear.
2022-10-14 comments: Do we need "Outcome Detection Bias" in addition to "Detection Bias"?
Blinding or masking may be used to reduce the risk of distorted outcome measurement(s).
2022-01-21 comment: I am not sure whether to vote yes or no: I understand the definition because I have been following our discussions and it is consistent with the bias model we have developed, but I worry that this definition may not be consistently understood or applied by a user of SEVCO - I feel there is too much unspoken metaphysical baggage that is coherent and correct but not useful.
2022-01-28 comment: Not perfect but good enough to live with. Could maybe improve on ascertainment component of the comment for application."/>
</property>
<property>
<code value="comment"/>
<valueString
value="Detection of the value of the variable comprises three processes involved in the determination of the recorded values for the variable: ascertainment (providing the opportunity for assessment), assessment (measurement and/or classification), and documentation (recording of data values for analysis)."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-28 vote 9-0 by Mario Tristan, Janice Tufte, Robin Ann Yurk, Brian S. Alper, C P Ooi, Harold Lehmann, Paola Rosati, Jesus Lopez-Alcalde, Paul Whaley"/>
</property>
<concept>
<code value="SEVCO:00042"/>
<display value="Outcome Detection Bias"/>
<definition
value="A detection bias due to distortions in how an outcome is determined."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Philippe Rocca-Serra, Mario Tristan, Janice Tufte, Harold Lehmann, Erfan Shamsoddin, Muhammad Afzal, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper"/>
</property>
<concept>
<code value="SEVCO:00047"/>
<display
value="Cognitive Interpretive Bias for outcome determination"/>
<definition
value="An outcome detection bias due to the subjective nature of human interpretation."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Perception bias for outcome determination"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Subjective interpretive bias for outcome determination"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Subjectivity bias for outcome determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper, 7/7 for alternative terms on 9/24/21: Janice/Brian/Eric/Paola/Jesus/Bhagvan/Mario"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="8/27/21 comment: This bias is difficult to manage and avoid it."/>
</property>
<concept>
<code value="SEVCO:00048"/>
<display
value="Bias due to lack of masking for outcome determination"/>
<definition
value="A cognitive interpretive bias for outcome determination due to awareness of the participant's status with respect to the exposure of interest."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Bias due to lack of blinding during outcome assessment"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Bias due to lack of masking during outcome assessment"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Awareness bias for outcome determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Paul Whaley, Kenneth Wilkins"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="8/27/2021 vote 9-1 on "Lack of blinding during outcome assessment = A cognitive interpretive bias for outcome determination due to the outcome assessor’s awareness of the participant's status with respect to the exposure of interest." by, Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper
earlier term approved 5/5 as of 8/30/2021: Eric Harvey, Harold Lehmann, Mario Tristan, Bhagvan Kommadi, Janice Tufte"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-08-27 comment: In my opinion "lack of blinding during outcome assessment" does not always imply bias for outcome determination (for example, for hard outcomes, such as analytic parameters, or all-cause mortality)
2022-03-18 comment: I would consider editing the term definition to ...lack of blinding."/>
</property>
<property>
<code value="comment"/>
<valueString
value="Lack of blinding or masking is not automatically a bias, but if awareness of exposure status systematically distorts the outcome determination then a 'Bias due to lack of masking for outcome determination' exists."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-03-18 vote 5-0 by Mario Tristan, Robin Ann Yurk, Paola Rosati, Nisha Mathew, Brian S. Alper"/>
</property>
</concept>
<concept>
<code value="SEVCO:00049"/>
<display value="Observer bias for outcome determination"/>
<definition
value="A cognitive interpretive bias for outcome determination due to subjective interpretations in the process of observing and recording information."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="5/5 as of 8/30/2021: Eric Harvey, Harold Lehmann, Mario Tristan, Bhagvan Kommadi, Janice Tufte,"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="8/27/2021 vote 9-1 on "Observer bias for outcome determination = A cognitive interpretive bias for outcome determination due to subjective interpretations in the process of observing and recording information." by, Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-08-27 comment: This situation seems to be covered by "Lack of blinding for outcome determination" and "Outcome ascertainment bias". I would suggest deleting this term to remove the overlap."/>
</property>
<property>
<code value="comment"/>
<valueString
value="Multiple types of bias can overlap. Observer bias is different than lack of blinding with respect to the exposure. Observer bias is about the influence of the observer's interpretation of what they are observing, whether or not the observer is aware of the participant's exposure."/>
</property>
<concept>
<code value="SEVCO:00052"/>
<display value="Confirmation bias for outcome determination"/>
<definition
value="An observer bias for outcome determination due to previous opinions or knowledge of a subject’s prior exposures or assessments."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Diagnostic suspicion bias for outcome determination"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Previous opinion bias for outcome determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins, Mario Tristan"/>
</property>
<property>
<code value="approval"/>
<valueString
value="5/5 as of 9/17/2021: Eric Harvey, Paola Rosati, Alejandro Piscoya, Bhagvan Kommadi, Janice Tufte,"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00050"/>
<display value="Recall bias for outcome determination"/>
<definition
value="A cognitive interpretive bias for outcome determination due to differences in accuracy or completeness of recall of past events or experiences."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper"/>
</property>
</concept>
<concept>
<code value="SEVCO:00051"/>
<display value="Apprehension bias for outcome determination"/>
<definition
value="A cognitive interpretive bias for outcome determination due to a study participant's responding or behaving differently when aware of being observed."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Hawthorne effect for outcome determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Mario Tristan"/>
</property>
<property>
<code value="approval"/>
<valueString
value="5/5 as of 8/30/2021: Eric Harvey, Harold Lehmann, Mario Tristan, Bhagvan Kommadi, Janice Tufte,"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="8/27/2021 vote 8-2 on "Apprehension bias for outcome determination = A cognitive interpretive bias for outcome determination due to study participants’ awareness of being observed resulting in different responses or behaviors." by, Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-08-27 comments: A cognitive interpretive bias for outcome determination due to study participants’ awareness of being observed and resulting in different responses or behaviors. (just a slight rewording - the existing wording doesn't read well to me) This definition seems to refer to performance bias. The key is that [...] results in different responses or behaviours concerning the outcome determination."/>
</property>
</concept>
<concept>
<code value="SEVCO:00053"/>
<display
value="Hypothetical assessment bias for outcome determination"/>
<definition
value="A cognitive interpretive bias for outcome determination due to a difference between an individual’s report of an imagined or hypothetical response from their actual response. The response may be a behavior or valuation."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Subjunctivity bias for outcome determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins, Mario Tristan"/>
</property>
<property>
<code value="approval"/>
<valueString
value="7/7 as of 9/24/21: , Janice Tufte, Brian S. Alper, Eric Harvey, Paola Rosati, Jesus Lopez-Alcalde, Bhagvan Kommadi, Mario Tristan"/>
</property>
</concept>
<concept>
<code value="SEVCO:00054"/>
<display value="Mimicry bias for outcome determination"/>
<definition
value="A cognitive interpretive bias for outcome determination due to a misinterpretation of observations that resemble the outcome."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="7/7 as of 9/24/21: , Janice Tufte, Brian S. Alper, Eric Harvey, Paola Rosati, Jesus Lopez-Alcalde, Bhagvan Kommadi, Mario Tristan"/>
</property>
</concept>
<concept>
<code value="SEVCO:00057"/>
<display value="Unacceptability bias for outcome determination"/>
<definition
value="A cognitive interpretive bias for outcome determination due to distortions in response, response values, or recording of responses resulting from perception of the social unacceptability of an outcome."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Unacceptable disease bias for outcome determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Ken Wilkins, Lisa Schilling"/>
</property>
<property>
<code value="approval"/>
<valueString
value="5/5 as of 10/1/21: , Joanne Dehnbostel, Brian S. Alper, Eric Harvey, Alejandro Piscoya, Bhagvan Kommadi,"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00058"/>
<display value="Outcome Ascertainment Bias"/>
<definition
value="An outcome detection bias due to distortions in how the data are collected."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Ascertainment Bias for outcome determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper"/>
</property>
<concept>
<code value="SEVCO:00097"/>
<display
value="Nonrepresentative observation period for outcome of interest"/>
<definition
value="An outcome ascertainment bias due to differences in the period used for observation of the outcome and the period for the outcome of interest."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Misaligned follow up period for outcome of interest"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="10/15/2021 vote 5-2 on "Inappropriate follow up period for outcome of interest = An outcome ascertainment bias due to differences in the time period used for observation of the outcome and the true time period for outcome occurrence." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
10/25/21 vote 3-1 on "Misaligned follow up period for outcome of interest = An outcome ascertainment bias due to differences in the time period used for observation of the outcome and the true time period for outcome occurrence." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-10-15 comments: I wonder if we need to address interim analysis validity. What about adding to inappropriate 'unreliable'?; Change word Inappropriate to Different
2021-10-25 comments: It is unclear what you mean by 'and the true time period for outcome occurrence'. On the other hand, I propose using 'period' instead of 'time period'"/>
</property>
<property>
<code value="approval"/>
<valueString
value="10/29/2021 vote 6-0 by Cheow Peng Ooi, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde, Eric Harvey, Janice Tufte"/>
</property>
</concept>
<concept>
<code value="SEVCO:00098"/>
<display
value="Nonrepresentative context for outcome ascertainment"/>
<definition
value="An outcome ascertainment bias due to differences in the context in which the outcome is observed and the intended context for the outcome of interest."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="10/15/2021 vote 6-1 on "Unreliable method for outcome ascertainment = An outcome ascertainment bias due to methods of data collection that result in inconsistent data values." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
10/25/21 vote 3-1 on "Undependable method for outcome of interest = An outcome ascertainment bias due methods of data collection that result in inconsistent or incorrect data values." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey
10/29/2021 vote 5-1 by Cheow Peng Ooi, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde, Eric Harvey, Janice Tufte
11/22/2021 vote 6-1
2021-12-03 vote for prior term 7-0 by Philippe Rocca-Serra, Janice Tufte, Mario Tristan, Harold Lehmann, Paul Whaley, Joanne Dehnbostel, C Ooi"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-10-15 comments: the word Unreliable is misleading as more applicable to measurement error than bias
2021-10-25 comments: I do not fully understand the difference between the second and the third definitions
2021-10-29 comments: Suggest Incorrect or inconsistent method.
2021-11-22 comments: The term 'inconsistent' may be more appropriate -- steering group discussion to move the "Comment for application" property higher on the page and see if this comment will resolve the concern"/>
</property>
<property>
<code value="comment"/>
<valueString
value="This term is used when the context used for outcome ascertainment is incorrect, insensitive, or nonspecific. If the context (whether representative or not) is applied inconsistently, then use the term "Inconsistency in outcome ascertainment""/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-03-18 vote 5-0 by Mario Tristan, Robin Ann Yurk, Paola Rosati, Nisha Mathew, Brian S. Alper"/>
</property>
</concept>
<concept>
<code value="SEVCO:00099"/>
<display value="Inconsistency in outcome ascertainment"/>
<definition
value="An outcome ascertainment bias due to differences within or between groups in how the data are collected."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Imbalance in outcome ascertainment"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley, Harold Lehmann"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="10/15/2021 vote 6-1 on "Imbalance in application of outcome ascertainment = An outcome ascertainment bias due to differences within or between groups in how the data are collected." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
10/25/21 vote 3-1 on "Imbalance in application of outcome ascertainment = An outcome ascertainment bias due to differences within or between groups in how the data are collected." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey
10/29/2021 vote on prior term 6-0 by Cheow Peng Ooi, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde, Eric Harvey, Janice Tufte"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-10-15 comments: Imbalance is misleading as more applicable to measurement error?
2021-10-25 comment: Suggestion, replace imbalance with Variation or Heterogeneity
2021-10-29 comment: Alternative Terms: Variation or Heterogeneity --> converted 2021-10-29 to suggested addition of alternative term "Variation in application of outcome ascertainment" by Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Harold Lehmann, Mario Tristan, Bhagvan Kommadi"/>
</property>
<property>
<code value="comment"/>
<valueString
value="This term is used when the context (whether representative or not) is applied inconsistently. If the context used for outcome ascertainment is incorrect, insensitive, or nonspecific, then use the term "Nonrepresentative context for outcome ascertainment""/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-03-18 vote 5-0 by Mario Tristan, Robin Ann Yurk, Paola Rosati, Nisha Mathew, Brian S. Alper"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00059"/>
<display value="Outcome Measurement Bias"/>
<definition
value="An outcome detection bias due to distortions in how the observed outcomes are measured."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Measurement Bias for outcome determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal; Paul Whaley"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="PRIOR AGREEMENT 10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper WITH DEFINTION: An outcome detection bias due to distortions in how the data are measured."/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-01-11 comment: Outcome Measurement Bias has a similar term definition as Outcome Classification Bias. May need to add an additional comment for application from T&O discussion."/>
</property>
<property>
<code value="comment"/>
<valueString
value="If one is addressing a bias in the instruments or processes used to measure the observed outcome, use Outcome Measurement Bias. If one is addressing how the measured outcome is categorized, use Outcome Classification Bias."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-21 vote 7-0 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paul Harris, Paul Whaley, Alejandro Piscoya, Philippe Rocca-Serra"/>
</property>
<concept>
<code value="SEVCO:00100"/>
<display value="Inappropriate method for outcome measurement"/>
<definition
value="An outcome measurement bias due to use of an incorrect method or protocol."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Incorrect outcome measurement method"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Inappropriate outcome measurement method"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="10/15/2021 vote 6-1 on "Outcome measurement method inappropriate = An outcome measurement bias due to use of an incorrect method or protocol." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
10/25/21 vote 2-2 on "Outcome measurement method inappropriate = An outcome measurement bias due to use of an incorrect method or protocol." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-10-15 comments: I would change word inappropriate to different as the bias is from difference in comparison not flaws or errors in scientific methods.
2021-10-25 comments: suggest replace with incorrect method; Should not be 'Inappropriate outcome measurement method' (instead of placing the adjective at the end?)
2022-03-11 Preferred term revised (and alternative term added) to match corresponding changes in Exposure Detection Bias"/>
</property>
<property>
<code value="approval"/>
<valueString
value="7 of 7 on 2021-11-05: Bhagvan Kommadi, Paola Rosati, Paul Whaley, Janice Tufte, Alejandro Piscoya, Robin Ann Yurk, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00101"/>
<display
value="Insensitive measure bias for outcome determination"/>
<definition
value="An outcome measurement bias due to use of a method that does not reliably detect the outcome when the outcome is present."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Inadequate sensitivity for outcome measure"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Use of an inadequately sensitive outcome measure is likely to result in false negative findings."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="10/15/2021 vote 6-1 on "Insensitive measure bias for outcome determination = An outcome measurement bias due to use of a method that does not reliably detect the outcome when the outcome is present." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
10/25/21 vote 3-1 on "Insensitive measure bias for outcome determination =An outcome measurement bias due to use of a method that does not reliably detect the outcome when the outcome is present." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-10-15 comments: Change word Insensitive to Sensitivity measure bias as double negative in phrase
2021-10-25 comment: False Negative measure Bias or Unreliable measure bias"/>
</property>
<property>
<code value="approval"/>
<valueString
value="7 of 7 on 2021-11-05: Bhagvan Kommadi, Paola Rosati, Paul Whaley, Janice Tufte, Alejandro Piscoya, Robin Ann Yurk, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00211"/>
<display
value="Nonspecific measure bias for outcome determination"/>
<definition
value="An outcome measurement bias due to use of a method that falsely detects the outcome when the outcome is absent."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Inadequate specificity for outcome measure"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Use of an inadequately specific outcome measure is likely to result in false positive findings."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="10/15/2021 vote 6-1 on "Nonspecific measure bias for outcome determination = An outcome measurement bias due to use of a method that falsely detects the outcome when the outcome is absent." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
10/25/21 vote 3-1 on "Nonspecific measure bias for outcome determination = An outcome measurement bias due to use of a method that falsely detects the outcome when the outcome is absent." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-10-15 comments: I would change to Specificity measurement bias. Remove word falsely from the definition as it implies problems with scientific methods
2021-10-25 comment: Suggest use False Positive Measure Bias"/>
</property>
<property>
<code value="approval"/>
<valueString
value="7 of 7 on 2021-11-05: Bhagvan Kommadi, Paola Rosati, Paul Whaley, Janice Tufte, Alejandro Piscoya, Robin Ann Yurk, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00103"/>
<display
value="Inappropriate application of method for outcome measurement"/>
<definition
value="An outcome measurement bias due to inappropriate application of the method or protocol."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="10/15/2021 vote 6-1 on "Outcome measurement conduct inappropriate = An outcome measurement bias due to incorrect application of the method or protocol." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
10/25/21 vote 2-2 on "Outcome measurement conduct inappropriate = An outcome measurement bias due to incorrect application of the method or protocol." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey
2021-11-05 vote 6-1 on "Inappropriate outcome measurement conduct = An outcome measurement bias due to incorrect application of the method or protocol." by Bhagvan Kommadi, Paola Rosati, Paul Whaley, Janice Tufte, Alejandro Piscoya, Robin Ann Yurk, Eric Harvey
2022-03-18 vote 4-1 by Mario Tristan, Robin Ann Yurk, Paola Rosati, Nisha Mathew, Brian S. Alper
2022-03-25 vote 7-1 by Mario Tristan, Robin Ann Yurk, Paola Rosati, Philippe Rocca-Serra, Brian S. Alper, Jesus Lopez-Alcalde, Paul Whaley, Muhammad Afzal"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-10-15 comments: I would eliminate this definition - as suggests flawed study design..
2021-10-25 comments: replace inappropriate with incorrect; Should not be 'Inappropriate outcome measurement conduct' (instead of placing the adjective at the end?)
2021-11-05 comment: There is enormous overlap with this term and "Inappropriate outcome measurement method", so this one should be eliminated
As of 2021-11-05 this term is not being prepared for vote. The current ROB tools do not distinguish the inappropriate conduct (used in QUADAS-2) from inadequate method (used in most other ROB tools) in the same tool, so the demand for this term is uncertain and thus not applied for version 1 of the Code System.
On 2022-03-11 we revised this term to match corresponding changes that passed for Exposure Detection Bias.
2022-03-18 comment: Suggest edit alternative term from conduct to process
2022-03-25 comment: Recommend edit term definition so it reads: Outcome Measurement method Bias.
Suggest reviewing your complete taxonomy of terms and identify similarities or duplicate terms and potentially integrating terms by keeping as primary term versus adding to alternate term for prior vote with similar term definition or statements."/>
</property>
<property>
<code value="comment"/>
<valueString
value="An inappropriate application of the method or protocol suggests error is introduced by the process of measurement, as distinct from the method or protocol used for measurement (which would be an Inappropriate method for outcome measurement)."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-04-08 vote 11-1 (no rationale provided for the negative vote) by Muhammad Afzal, Paul Whaley, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, nelle.stocquart, nisha mathew, Harold Lehmann, Cauê Monaco"/>
</property>
</concept>
<concept>
<code value="SEVCO:00104"/>
<display value="Inconsistency in outcome measurement"/>
<definition
value="An outcome measurement bias due to differences within groups in how the observed outcomes are measured."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley, Robin Ann Yurk, Harold Lehmann"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="10/15/2021 vote 6-1 on "Imbalance in application of outcome measurement = An outcome measurement bias due to differences within or between groups in how the data are measured." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
10/25/21 vote 3-1 on "Imbalance in application of outcome measurement = An outcome measurement bias due to differences within or between groups in how the data are measured." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey
2021-11-05 vote 6-1 on "Inconsistency in application of outcome measurement = An outcome measurement bias due to differences within or between groups in how the data are measured." by Bhagvan Kommadi, Paola Rosati, Paul Whaley, Janice Tufte, Alejandro Piscoya, Robin Ann Yurk, Eric Harvey
2021-11-22 vote 3-2 on "Inconsistency in application of outcome measurement" = "An outcome measurement bias due to differences within or between groups in how the data are measured."2021-12-10 vote 5-1 by Joanne Dehnbostel, Janice Tufte, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde, Paul Whaley
PRIOR AGREEMENT 2021-12-17 vote 6-0 by Robin Ann Yurk, Janice Tufte, Paul Whaley, Mario Tristan, C P Ooi, Jesus Lopez-Alcalde FOR DEFINITION: An outcome measurement bias due to differences within groups in how the data are measured. AND COMMENT FOR APPLICATION: "How the data are measured" may refer to the methods applied for data measurement or the application of those methods."/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-10-15 comments: I would eliminate this definition
2021-10-25 comment: Replace Imbalance with Heterogeneity
2021-11-05 comment: This is a specific type of "Inappropriate outcome measurement method" so this term should be moved into that position or eliminated (are we really going to describe all of the inappropriate methods?) [[discussed in COKA WG and noted that ROB2 has separate questions 4.1 and 4.2 for these terms so we need to support that]
2021-11-22 comments: "The wording 'inconsistent method of outcome measurement' may better reflect the definition" and "May be pedantic, but is it data that are measured, or the outcome as a variable (that results in data)? I also wonder if we mean differences within groups - some variation would be expected, but what matters is if the variation results in systematic error in measuring the variable between groups. If we feel that e.g. a study design where two different ways of measuring outcome were implemented within groups, but this did not lead to bias across the exposure and control arms, then I would vote yes (pending clarification of "data")."2021-12-10 comment: It seems to not quite be correctly written. The two choices for definition are differently phrased ("application of methods" / "methods applied")even though I think they are supposed to refer to across groups or within groups, but both refer to within groups, so I am not sure how to interpret this.
2022-01-21 comment: As a comment: Is this term redundant, if the two child terms are the complete set of options for inconsistency in outcome measurement?"/>
</property>
<property>
<code value="comment"/>
<valueString
value=""How the observed outcomes are measured" may refer to the methods applied for measurement or the application of those methods."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-21 vote 6-0 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paul Whaley, Alejandro Piscoya, Philippe Rocca-Serra"/>
</property>
<concept>
<code value="SEVCO:00243"/>
<display
value="Inconsistency in instruments used for outcome measurement"/>
<definition
value="An outcome measurement bias due to differences within groups in the instruments used for measurement."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-12-17 vote 5-1 by Robin Ann Yurk, Janice Tufte, Paul Whaley, C P Ooi, Paola Rosati, Jesus Lopez-Alcalde
2022-01-07 vote 5-1 by Robin Ann Yurk, Janice Tufte, Paul Whaley, Paola Rosati, Harold Lehmann, Mario Tristan"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-12-17 comment: To me there is no semantic difference between this definition and the other subordinate term for inconsistency in outcome measurement (application of methods ≅ methods applied)
2022-01-07 comment: I would approve this, except I am still not sure that one can measure data ("facts and statistics collected together for reference or analysis"). One can collect data, or measure a variable, but I don't think one can collect data."/>
</property>
<property>
<code value="comment"/>
<valueString
value="Instruments used for measurement may include devices, surveys, and technologies. The concepts of "instruments used for measurement" is distinct from "process used for measurement" which may include protocols, techniques, and variations in context."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-21 vote 7-0 by Andrew Beck, Harold Lehmann, Robin Ann Yurk, Paul Whaley, Janice Tufte, Alejandro Piscoya, Philippe Rocca-Serra"/>
</property>
</concept>
<concept>
<code value="SEVCO:00244"/>
<display
value="Inconsistency in processes used for outcome measurement"/>
<definition
value="An outcome measurement bias due to differences within groups in the processes by which the instruments are used for measurement."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-12-17 vote 5-1 by Robin Ann Yurk, Janice Tufte, Paul Whaley, C P Ooi, Paola Rosati, Jesus Lopez-Alcalde
2022-01-07 vote 5-1 by Robin Ann Yurk, Janice Tufte, Paul Whaley, Paola Rosati, Harold Lehmann, Mario Tristan"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-12-17 comment: To me there is no semantic difference between this definition and the other subordinate term for inconsistency in outcome measurement (application of methods ≅ methods applied)
2022-01-07 comment: I would approve this, except I am still not sure that one can measure data ("facts and statistics collected together for reference or analysis"). One can collect data, or measure a variable, but I don't think one can collect data."/>
</property>
<property>
<code value="comment"/>
<valueString
value="The processes used for measurement may include protocols, techniques, and variations in context. The concept of "processes used for measurement" is distinct from "instruments used for measurement" which may include devices, surveys, and technologies."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-21 vote 7-0 by Andrew Beck, Harold Lehmann, Robin Ann Yurk, Paul Whaley, Janice Tufte, Alejandro Piscoya, Philippe Rocca-Serra"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00240"/>
<display value="Imbalance in outcome measurement"/>
<definition
value="An outcome measurement bias due to differences between groups in how the observed outcomes are measured."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Paul Whaley, Robin Ann Yurk, Janice Tufte, Harold Lehmann, Mario Tristan, Kenneth Wilkins, Muhammad Afzal"/>
</property>
<property>
<code value="comment"/>
<valueString
value=""How the observed outcomes are measured" may refer to the methods applied for measurement or the application of those methods."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-12-10 vote 5-0 by Janice Tufte, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde, Paul Whaley but steering group decided to make changes consistent with changes to Inconsistency in outcome measurement.
PRIOR AGREEMENT 2021-12-17 vote 5-0 by Robin Ann Yurk, Janice Tufte, Paul Whaley, C P Ooi, Paola Rosati FOR DEFINITION: An outcome measurement bias due to differences between groups in how the data are measured. WITH COMMENT FOR APPLICATION: "How the data are measured" may refer to the methods applied for data measurement or the application of those methods."/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-12-10 comment: Referring back to my comment on the inconsistency in method, I realise I hadn't read it quite right. In both cases, they maybe aren't quite as easy to parse as would be ideal but I can't think of a better definition. Maybe a use note to refer to how the terms are similar and clarify when one vs. the other should be used?
2022-01-21 comments: The term definition and comment is the same for Inconsistency in outcome measurement bias. Suggest combining the two terms by listing one as an alternative term. (yellow highlighting in messaging applied to show the differences in the terms)
As a comment: Is this term redundant, if the two child terms are the complete set of options for inconsistency in outcome measurement?"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-21 vote 6-0 by Harold Lehmann, Robin Ann Yurk, Paul Whaley, Janice Tufte, Alejandro Piscoya, Philippe Rocca-Serra"/>
</property>
<concept>
<code value="SEVCO:00245"/>
<display
value="Imbalance in instruments used for outcome measurement"/>
<definition
value="An outcome measurement bias due to differences between groups in the instruments used for measurement."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-12-17 vote 4-1 by Janice Tufte, Paul Whaley, C P Ooi, Paola Rosati, Jesus Lopez-Alcalde
2022-01-07 vote 5-1 by Robin Ann Yurk, Janice Tufte, Paul Whaley, Paola Rosati, Harold Lehmann, Mario Tristan"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-12-17 comment: To me there is no semantic difference between this definition and the other subordinate term for inconsistency in outcome measurement (application of methods ≅ methods applied)
2022-01-07 comment: I would approve this, except I am still not sure that one can measure data ("facts and statistics collected together for reference or analysis"). One can collect data, or measure a variable, but I don't think one can collect data."/>
</property>
<property>
<code value="comment"/>
<valueString
value="Instruments used for measurement may include devices, surveys, and technologies. The concepts of "instruments used for measurement" is distinct from "process used for measurement" which may include protocols, techniques, and variations in context."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-21 vote 7-0 by Harold Lehmann, Robin Ann Yurk, Paul Whaley, Janice Tufte, Alejandro Piscoya, Philippe Rocca-Serra, Andrew Beck"/>
</property>
</concept>
<concept>
<code value="SEVCO:00246"/>
<display
value="Imbalance in processes used for outcome measurement"/>
<definition
value="An outcome measurement bias due to differences between groups in the processes by which the instruments are used for measurement."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-12-17 vote 4-1 by Janice Tufte, Paul Whaley, C P Ooi, Paola Rosati, Robin Ann Yurk
2022-01-07 vote 5-1 by Robin Ann Yurk, Janice Tufte, Paul Whaley, Paola Rosati, Harold Lehmann, Mario Tristan"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-12-17 comment: To me there is no semantic difference between this definition and the other subordinate term for inconsistency in outcome measurement (application of methods ≅ methods applied)
2022-01-07 comment: I would approve this, except I am still not sure that one can measure data ("facts and statistics collected together for reference or analysis"). One can collect data, or measure a variable, but I don't think one can collect data."/>
</property>
<property>
<code value="comment"/>
<valueString
value="The processes used for measurement may include protocols, techniques, and variations in context. The concept of "processes used for measurement" is distinct from "instruments used for measurement" which may include devices, surveys, and technologies."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-21 vote 7-0 by Harold Lehmann, Robin Ann Yurk, Paul Whaley, Janice Tufte, Alejandro Piscoya, Philippe Rocca-Serra, Andrew Beck"/>
</property>
</concept>
</concept>
</concept>
<concept>
<code value="SEVCO:00060"/>
<display value="Outcome Classification Bias"/>
<definition
value="An outcome detection bias due to distortions in how the observed outcomes are classified."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Misclassification Bias for outcome determination"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Outcome Misclassification Bias"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="PRIOR AGREEMENT 10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper, 7/7 for renaming on 9/24/21: Janice/Brian/Eric/Paola/Jesus/Bhagvan/Mario FOR DEFINITION: An outcome detection bias due to distortions in how the data are classified."/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-01-11 comment: Outcome Classification Bias has a similar term definition as Outcome Measurement Bias. May need to add an additional comment for application from T&O discussion."/>
</property>
<property>
<code value="comment"/>
<valueString
value="If one is addressing a bias in the instruments or processes used to measure the observed outcome, use Outcome Measurement Bias. If one is addressing how the measured outcome is categorized, use Outcome Classification Bias."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-21 vote 7-0 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, Paul Harris, Paul Whaley, Alejandro Piscoya, Philippe Rocca-Serra"/>
</property>
<concept>
<code value="SEVCO:00061"/>
<display value="Outcome Classification System Bias"/>
<definition
value="An outcome classification bias resulting from the definition or threshold used for outcome classification."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Definition Bias for outcome determination"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Outcome definition bias"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan"/>
</property>
<property>
<code value="approval"/>
<valueString
value="5/5 as of 9/17/2021: Eric Harvey, Paola Rosati, Alejandro Piscoya, Bhagvan Kommadi, Janice Tufte,"/>
</property>
<property>
<code value="comment"/>
<valueString
value="An outcome classification system bias suggests an internal validity problem in which the definition or threshold used for outcome classification does not represent the outcome of interest. If considering an external validity problem, the "Wrong question bias" (term not yet defined) may be used. An outcome classification system bias is present when there are differences between the outcome of interest and the definition or threshold used for outcome classification."/>
</property>
<concept>
<code value="SEVCO:00105"/>
<display
value="Nonrepresentative definition for outcome classification"/>
<definition
value="An outcome classification system bias due to a mismatch between the outcome of interest and the definition or threshold used for outcome measurement."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="10/15/2021 vote 6-1 on "Nonrepresentative definition for outcome classification = An outcome classification system bias due to a definition or threshold that does not represent the outcome of interest." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
2021-11-29 vote 6-1 on "Nonrepresentative definition for outcome classification" = "An outcome classification system bias due to a definition or threshold that does not represent the outcome of interest." by Harold Lehmann, Paul Whaley, Janice Tufte, C P Ooi, Joanne Dehnbostel, Philippe Rocca-Serra, Robin Ann Yurk"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-10-15 comments: I would eliminate this definition
2021-11-29 comments: ("represent in its entirety" instead? A definition could *partially* represent the outcome of interest, so perhaps we want to make clear that this bias is invoked only for something that is more than "partial"?) "Represent" feels ambiguous, would it be useful to clarify what is meant here? Is it that it includes outcomes in addition to that of interest, and/or excludes outcomes that are of interest? Maybe that doesn't make things clearer."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2021-12-10 vote 5-0 by Paul Whaley, Janice Tufte, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde"/>
</property>
<concept>
<code value="SEVCO:00108"/>
<display
value="Surrogate marker bias for outcome classification"/>
<definition
value="An outcome classification system bias due to use of a definition that is proxy for the outcome rather than direct observation of the outcome."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Substitution bias for outcome classification"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="10/15/2021 vote 6-1 on "Surrogate marker bias for outcome classification = A nonrepresentative definition for outcome classification due to use of a factor associated with the outcome rather than a direct observation of the outcome." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
2021-11-29 vote 3-3 on "Surrogate marker bias for outcome classification" = "A nonrepresentative definition for outcome classification due to use of a proxy for the outcome rather than a direct observation of the outcome.""/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-10-15 comments: I would edit the definition: An outcome classification system bias due to use of a definition that is proxy rather than direct observation of the outcome. {{Definition changed as result of this comment}}
2021-11-29 comments: The 10/15 comments stated that the definition should start with "An outcome classification system bias...."; but this definition does not. A little pickier, I might say, "result from use of a definition" rather than "due to". The latter sounds like the bias will always occur; the former, that there is a bias as a result, in this instance.
I'm not sure I fully understand this definition. A surrogate would generally be used in place of an outcome that cannot readily be observed in a research setting. I am not sure how this can be a classification error (the surrogate is what the surrogate is). I can, however, see how it could be an error in inference (assuming that because the exposure affects the surrogate, then the exposure also affects the outcome of actual interest). Is this a helpful way of thinking about this, or would it just be over-complicating matters?
This suggested definition is more appropriate: An outcome classification system bias due to use of a definition that is proxy rather than direct observation of the outcome"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2021-12-10 vote 5-0 by Paul Whaley, Janice Tufte, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00106"/>
<display value="Post-hoc definition of outcome"/>
<definition
value="An outcome classification system bias due to defining the outcome after interacting with the study data."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Post-hoc outcome definition"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Definition for outcome classification not prespecified"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="No prespecified definition for outcome classification"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="10/15/2021 vote 6-1 on "Definition not prespecified for outcome classification = An outcome classification system bias due to absence of a predetermined definition." by Eric Harvey, Paola Rosati, Alejandro Piscoya, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte, Brian S. Alper
10/25/21 vote 3-1 on "Definition not prespecified for outcome classification = An outcome classification system bias due to absence of a predetermined definition." by Paola Rosati, Robin Ann Yurk, Jesus Lopez-Alcalde, Eric Harvey
2021-12-03 vote 5-2 by Harold Lehmann, Paul Whaley, Janice Tufte, C P Ooi, Joanne Dehnbostel, Philippe Rocca-Serra, Robin Ann Yurk
2021-12-10 vote 2-2 by Paul Whaley, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-10-15 comments: I would eliminate this definition as suggest flawed study design
2021-10-25 comment: I would phrase 'Not prespecified definition for outcome classification'
2021-12-03 comments: It feels uninformative to define "not prespecified" as "not predetermined". I wonder if "predetermined" can be clarified - presumably, the issue here is that the outcome is defined post-hoc, after data collection, so that outcome ends up being defined around the data rather than specified in advance of conduct of the research. // Rephrasing to this 'No prespecified definition for outcome classification' may be clearer and easier to understand.
2021-12-10 comments: Consider removing term. As methods are permitted to be revised for a variety of reasons with new definitions but would be described in methods or a revised protocol. If truly post-hoc after a data set is closed then there are different issues for discussion. /// Suggest changing "due to determination of the outcome definition" to "due to outcome being defined""/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-07 vote 9-0 by Robin Ann Yurk, Janice Tufte, Paul Whaley, C P Ooi, Paola Rosati, Jesus Lopez-Alcalde, Harold Lehmann, Joanne Dehnbostel, Mario Tristan"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00062"/>
<display value="Outcome Classification Process Bias"/>
<definition
value="An outcome classification bias resulting from the application of the method used for outcome classification."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Classification Process Bias for outcome determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan"/>
</property>
<property>
<code value="approval"/>
<valueString
value="7/7 as of 9/24/21: , Janice Tufte, Brian S. Alper, Eric Harvey, Paola Rosati, Jesus Lopez-Alcalde, Bhagvan Kommadi, Mario Tristan"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="4-1 vote as of 9/17/2021 regarding Outcome Classification Process Bias (SEVCO:00062) (Classification process bias for outcome determination) [Draft Term] = An outcome misclassification bias resulting from the application of the method used for outcome classification.: Eric Harvey, Paola Rosati, Alejandro Piscoya, Bhagvan Kommadi, Janice Tufte,"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="comment: "This might be related to outcome classification bias (child relationship)""/>
</property>
</concept>
<concept>
<code value="SEVCO:00063"/>
<display value="Incorporation Bias for outcome determination"/>
<definition
value="An outcome classification bias due to the inclusion of the exposure under investigation in the method or process used for outcome classification."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan"/>
</property>
<property>
<code value="approval"/>
<valueString
value="5/5 as of 9/17/2021: Eric Harvey, Paola Rosati, Alejandro Piscoya, Bhagvan Kommadi, Janice Tufte,"/>
</property>
<property>
<code value="comment"/>
<valueString
value="In predictive model research, incorporation bias for outcome determination occurs if the predictor (explanatory variable) is included in the outcome definition."/>
</property>
</concept>
</concept>
</concept>
<concept>
<code value="SEVCO:00043"/>
<display value="Exposure Detection Bias"/>
<definition
value="A detection bias due to distortions in how an exposure of interest is determined."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Philippe Rocca-Serra, Mario Tristan, Janice Tufte, Harold Lehmann, Erfan Shamsoddin, Muhammad Afzal, Kenneth Wilkin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context."/>
</property>
<concept>
<code value="SEVCO:00055"/>
<display
value="Cognitive Interpretive Bias for exposure determination"/>
<definition
value="An exposure detection bias due to the subjective nature of human interpretation."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Perception bias for exposure determination"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Subjective interpretive bias for exposure determination"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Subjectivity bias for exposure determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context. The human interpretation can be that of the observer or participant."/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-04 comment: Consistency of phrasing with other definitions ("bias due to distortions in..."), need comment for application."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-04 vote 6-0 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper"/>
</property>
<concept>
<code value="SEVCO:00056"/>
<display
value="Bias due to lack of masking for exposure determination"/>
<definition
value="A cognitive interpretive bias for exposure determination due to awareness of the participant's status with respect to the outcome of interest or other relevant exposures."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Bias due to lack of blinding during exposure assessment"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Bias due to lack of masking during exposure assessment"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Awareness bias for exposure determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins, Paul Whaley"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context. Lack of blinding is not automatically a bias, but if awareness of some data systematically distorts the exposure determination then a 'Bias due to lack of masking for exposure determination' exists."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-02-04 vote 5-1 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper
2022-02-11 vote 8-1 by Mario Tristan, Paul Whaley, Sunu Alice Cherian, Janice Tufte, Robin Ann Yurk, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer, Jesus Lopez-Alcalde
2022-02-18 vote 10-3 by Rebecca Baker, Brian S. Alper, Mario Tristan, Paul Whaley, Sunu Alice Cherian, Janice Tufte, Robin Ann Yurk, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer, Jesus Lopez-Alcalde, Joanne Dehnbostel, Sumalatha A"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-04 comment: Is it just awareness of the participant's status with respect solely to the outcome of interest? I could imagine being aware of e.g. socioeconomic status rather than outcome, and this potentially having an influence on exposure assessment. Blinding I think is supposed to be to as many characteristics of the participant as possible.
2022-02-11 comment: This definition assumes that the "Lack of blinding for exposure determination" always associates bias, which may not be the case. For example, if we want to assess the role of sex as a prognostic factor for ICU admission, the participant may not be blinded but this does not cause bias in his/her prognostic factor determination (sex)
2022-02-18 comments: As "lack of blinding" is contributing to but not the bias itself, perhaps rename to Awareness bias for exposure determination
This definition assumes that the "Lack of blinding for exposure determination" always associates bias, which may not be the case. For example, if we want to assess the role of sex as a prognostic factor for ICU admission, the participant may not be blinded but this does not cause bias in his/her prognostic factor determination (sex)
Not much difference between existing and new terminology
2022-02-25 comment: Suggest removing Lack of blinding during exposure assessment from alternative term and just list the other 3 alternative terms. The comment is based on your comment for application description."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-25 vote 8-0 by Robin Ann Yurk, Sunu Alice Cherian, Paola Rosati, Harold Lehmann, Joanne Dehnbostel, Janice Tufte, nisha mathew, Paul Whaley"/>
</property>
</concept>
<concept>
<code value="SEVCO:00238"/>
<display value="Observer bias for exposure determination"/>
<definition
value="A cognitive interpretive bias for exposure determination due to subjective interpretations in the process of observing and recording information."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
Multiple types of bias can overlap. Observer bias is different than lack of blinding with respect to the outcome. Observer bias is about the influence of the observer's interpretation of what they are observing, whether or not the observer is aware of the participant's outcome."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-04 vote 6-0 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper"/>
</property>
<concept>
<code value="SEVCO:00239"/>
<display value="Confirmation bias for exposure determination"/>
<definition
value="An observer bias for exposure determination due to previous opinions or knowledge of a subject’s prior exposures or assessments."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Diagnostic suspicion bias for exposure determination"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Previous opinion bias for exposure determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins, Mario Tristan"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-04 vote 5-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00214"/>
<display value="Recall bias for exposure determination"/>
<definition
value="A cognitive interpretive bias for exposure determination due to differences in accuracy or completeness of recall of past events or experiences."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-04 vote 5-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Paul Whaley"/>
</property>
</concept>
<concept>
<code value="SEVCO:00215"/>
<display value="Apprehension bias for exposure determination"/>
<definition
value="A cognitive interpretive bias for exposure determination due to a study participant's responding or behaving differently when aware of being observed."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Hawthorne effect for exposure determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Mario Tristan"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-04 vote 5-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Paul Whaley"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-04 comment: What about using Hawthorne Effect for term definition and Apprehension Bias for alternative term"/>
</property>
</concept>
<concept>
<code value="SEVCO:00216"/>
<display
value="Hypothetical assessment bias for exposure determination"/>
<definition
value="A cognitive interpretive bias for exposure determination due to a difference between an individual’s report of an imagined or hypothetical response from their actual response."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Subjunctivity bias for exposure determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins, Mario Tristan"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context. The response may be a behavior or valuation. An individual's response to "What would you do?" or "What would you have done?" (an imagined or hypothetical response) may be different than the individual's response to "What did you do?" or observation of the individual's behavior (a reporting of an actual response). This bias is relevant for preference studies."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-02-04 vote 4-1 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Paul Whaley"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-04 comments: Is there a spelling error in Subjunctivity?
A minor issue - would the sentence "The response may be a behavior or valuation." be better placed in the comment for application (otherwise, would vote yes)
2022-02-11 comment: I would add a comment for application for the word hypothetical"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-11 vote 9-0 by Mario Tristan, Paul Whaley, Sunu Alice Cherian, Robin Ann Yurk, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer, Jesus Lopez-Alcalde, Janice Tufte"/>
</property>
</concept>
<concept>
<code value="SEVCO:00217"/>
<display value="Mimicry bias for exposure determination"/>
<definition
value="A cognitive interpretive bias for exposure determination due to a misinterpretation of observations that resemble the exposure."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context. Other terms (Exposure Ascertainment Bias, Exposure Measurement Bias, Exposure Classification Bias) may be used to describe the process in Exposure Detection in which the bias occurs. The term 'Mimicry bias for exposure determination' is used to represent the type of cognitive interpretive bias occurring in this process."/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-18 comments: Suggest insert alternative term: Duplicate
I'm not quite sure this is clear enough, though I don't have any concrete suggestions for improvement. It might be that I am not familiar enough with the issue in question to interpret the definition. Reading around this a bit, it resembles a misclassification type bias (for a given set of observations, the observer takes X to be cause when the true cause is Y). Given our model for bias (see our flow diagram), might it be better defined in those terms? -- RESOLVED IN GROUP DISCUSSION"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-18 vote 11-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Paul Whaley, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Joanne Dehnbostel, Sumalatha A"/>
</property>
</concept>
<concept>
<code value="SEVCO:00218"/>
<display value="Unacceptability bias for exposure determination"/>
<definition
value="A cognitive interpretive bias for exposure determination due to distortions in response, response values, or recording of responses resulting from perception of the social unacceptability of an exposure."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Unacceptable disease bias for exposure determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Janice Tufte, Joanne Dehnbostel, Ken Wilkins, Lisa Schilling"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-04 vote 5-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Paul Whaley"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00219"/>
<display value="Exposure Ascertainment Bias"/>
<definition
value="An exposure detection bias due to distortions in how the data are collected."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Ascertainment Bias for exposure determination"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Data collection bias for exposure determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-04 vote 5-0 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-04 comment: Suggest modify alternative term to Data Collection Bias"/>
</property>
<concept>
<code value="SEVCO:00220"/>
<display
value="Nonrepresentative observation period for exposure of interest"/>
<definition
value="An exposure ascertainment bias due to differences in the time period used for observation of the exposure and the intended time period for the exposure of interest."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Misaligned follow up period for exposure of interest"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-02-04 vote 4-1 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-04 comment: I think this is about right but it could perhaps be tidied up a bit, e.g. using "time period" in both instances of "period""/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-11 vote 9-0 by Mario Tristan, Paul Whaley, Sunu Alice Cherian, Robin Ann Yurk, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer, Jesus Lopez-Alcalde, Janice Tufte"/>
</property>
</concept>
<concept>
<code value="SEVCO:00221"/>
<display
value="Nonrepresentative context for exposure ascertainment"/>
<definition
value="An exposure ascertainment bias due to differences in the context in which the exposure is observed and the intended context for the exposure of interest."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
This term is used when the context used for exposure ascertainment is incorrect, insensitive, or nonspecific. If the context (whether representative or not) is applied inconsistently, then use the term "Inconsistency in exposure ascertainment""/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-02-18 vote 10-1 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Joanne Dehnbostel, Sumalatha A"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-18 comments: Comment for application. I would delete sentence: If the method (whether dependable or undependable) is applied inconsistently then use the term inconsistency in application of exposure of ascertainment.
I'm not sure if "undependable" is the word we really want to use. Also, (1) no method for exposure ascertainment will give a strictly "correct" result, (2) inconsistency can result in random error and imprecision, not necessarily bias, (3) we are presumably worried about consistency over- or under-reading of a measurement method compared to some (possibly hypothetical) gold standard? Overall, it feels like there is more to discuss here.
2022-02-25 comment: I would delete or edit the current alternative term and replace with insensitive, or nonspecific context for exposure ascertainment."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-25 vote 8-0 by Robin Ann Yurk, Sunu Alice Cherian, Paola Rosati, Harold Lehmann, Joanne Dehnbostel, Janice Tufte, nisha mathew, Paul Whaley"/>
</property>
</concept>
<concept>
<code value="SEVCO:00222"/>
<display value="Inconsistency in exposure ascertainment"/>
<definition
value="An exposure ascertainment bias due to differences within or between groups in how the data are collected."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Imbalance in exposure ascertainment"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
This term is used when the context (whether representative or not) is applied inconsistently. If the context used for exposure ascertainment is incorrect, insensitive, or nonspecific, then use the term "Nonrepresentative context for exposure ascertainment""/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-02-18 vote 8-1 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Joanne Dehnbostel, Sumalatha A"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-18 comments: I would add comment for application from previous term.
If the method (whether dependable or undependable) is applied inconsistently then use the term inconsistency in application of exposure of ascertainment.
I don't really understand the term "Inconsistency in application of exposure ascertainment" - I am not clear what the nouns and verbs actually are here, nor what they refer to. I have been involved in the discussion of the underlying bias model and I still don't grasp the meaning here.
2022-02-25 comment: I would remove alternative term."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-25 vote 8-0 by Robin Ann Yurk, Sunu Alice Cherian, Paola Rosati, Harold Lehmann, Joanne Dehnbostel, Janice Tufte, nisha mathew, Paul Whaley"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00223"/>
<display value="Exposure Measurement Bias"/>
<definition
value="An exposure detection bias due to distortions in how the observed exposures are measured."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Measurement Bias for exposure determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
If one is addressing a bias in the instruments or processes used to measure the observed exposure, use Exposure Measurement Bias. If one is addressing how the measured exposure is categorized, use Exposure Classification Bias."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-04 vote 5-0 by Jesus Lopez-Alcalde, Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte"/>
</property>
<concept>
<code value="SEVCO:00224"/>
<display value="Inappropriate method for exposure measurement"/>
<definition
value="An exposure measurement bias due to use of an incorrect method or protocol."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Incorrect exposure measurement method"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Inappropriate exposure measurement method"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-02-25 vote 11-2 by Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Jesus Lopez-Alcalde, Sumalatha A, Joanne Dehnbostel, Paola Rosati, nisha mathew"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-25 comments: I would list measurement methods as examples under comment for application, such as pharma, survey...
I am not sure of the difference between this bias and "Undependable method for exposure ascertainment" bias. It also seems to me that "inappropriate" is a subjective term so I am not sure how it should be applied. [Side note: in the ballot, it might be useful to have terms arranged as they are in the SEVCO hierarchy, as this might be causing some of the confusion I am experiencing.]
The previous term convey almost similar meaning"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-03-11 vote 5-0 by Janice Tufte, Harold Lehmann, Philippe Rocca-Serra, nisha mathew, Paul Whaley"/>
</property>
</concept>
<concept>
<code value="SEVCO:00225"/>
<display
value="Insensitive measure bias for exposure determination"/>
<definition
value="An exposure measurement bias due to use of a method that does not reliably detect the exposure when the exposure is present."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Inadequate sensitivity for exposure measure"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Inadequate sensitivity for exposure determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context. Use of an inadequately sensitive exposure measure is likely to result in false negative findings."/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-18 comments: Suggest use term as Sensitivity Measure bias for exposure determination and insensitive measure bias for exposure determination for alternate term.
"Sensitivity" is not, in my experience, viewed exclusively in terms of measurement. Some experimental models cannot show the exposure (or outcome) because they are incapable of it, however it is measured in situ. For example, if the exposure was measured via presence of a metabolite, but the participant was not able to produce the metabolite, then the experiment would be insensitive regardless of measurement method. I am not sure this affects us here, but does it suggest a need for us to handle sensitivity in a comprehensive fashion? (Perhaps also specificity?)
As a side note, defining sensitivity well could be important for progress on risk of bias assessment methods used by EPA, who currently have assessment of "sensitivity" as a separate issue entirely outside of risk of bias assessment.
NEGATIVE VOTE CHANGED TO POSITIVE DURING DISCUSSION 2022-02-25"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-25 vote 13-0 by Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Jesus Lopez-Alcalde, Sumalatha A, Joanne Dehnbostel, Paola Rosati, nisha mathew"/>
</property>
</concept>
<concept>
<code value="SEVCO:00226"/>
<display
value="Nonspecific measure bias for exposure determination"/>
<definition
value="An exposure measurement bias due to use of a method that falsely detects the exposure when the exposure is absent."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Inadequate specificity for exposure measure"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context. Use of an inadequately specific exposure measure is likely to result in false positive findings."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-04 vote 5-0 by Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Jesus Lopez-Alcalde"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-04 comment: Suggest use Specificity measure bias for exposure determination and non-specific measure bias for exposure determination for alternative term."/>
</property>
</concept>
<concept>
<code value="SEVCO:00228"/>
<display
value="Inappropriate application of method for exposure measurement"/>
<definition
value="An exposure measurement bias due to inappropriate application of the method or protocol."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Inappropriate exposure measurement conduct"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley"/>
</property>
<property>
<code value="change-for-vote"/>
<valueString
value="Noted for Outcome Detection Bias: As of 2021-11-05 this term is not being prepared for vote. The current ROB tools do not distinguish the inappropriate conduct (used in QUADAS-2) from inadequate method (used in most other ROB tools) in the same tool, so the demand for this term is uncertain and thus not applied for version 1 of the Code System."/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context. An inappropriate application of the method or protocol suggests error is introduced by the process of measurement, as distinct from the method or protocol used for measurement (which would be an Inappropriate method for exposure measurement)."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-02-25 vote 12-1 by Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Jesus Lopez-Alcalde, Sumalatha A, Joanne Dehnbostel, Paola Rosati, nisha mathew"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-11 comments: Add alternate term: Incorrect application of exposure measurement bias.
I think this is OK, but the term should be rewritten so it is easier to read and understand what it means (the syntax is awkward, as it could be read as one adjective and three nouns)
2022-03-11 comment: In documenting this, and the "inappropriate method for exposure measurement", I think it would be helpful to document what we mean by e.g. "method" vs. "application of method". I feel these are meta-terms like "study design feature" that are part of the scaffolding of SEVCO, but not part of SEVCO itself."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-03-11 vote 5-0 by Janice Tufte, Harold Lehmann, Philippe Rocca-Serra, nisha mathew, Paul Whaley"/>
</property>
</concept>
<concept>
<code value="SEVCO:00229"/>
<display value="Inconsistency in exposure measurement"/>
<definition
value="An exposure measurement bias due to differences within groups in how the observed exposures are measured."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context. "How the observed exposures are measured" may refer to the methods applied for measurement or the application of those methods."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-11 vote 9-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Paul Whaley, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer"/>
</property>
<concept>
<code value="SEVCO:00247"/>
<display
value="Inconsistency in instruments used for exposure measurement"/>
<definition
value="An exposure measurement bias due to differences within groups in the instruments for measurement."/>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
Instruments used for measurement may include devices, surveys, and technologies. The concept of "instruments used for measurement" is distinct from "processes used for measurement" which may include protocols, techniques, and variations in context."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer"/>
</property>
</concept>
<concept>
<code value="SEVCO:00248"/>
<display
value="Inconsistency in processes used for exposure measurement"/>
<definition
value="An exposure measurement bias due to differences within groups in the processes by which the instruments are used for measurement."/>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
The processes used for measurement may include protocols, techniques, and variations in context. The concept of "processes used for measurement" is distinct from "instruments used for measurement" which may include devices, surveys, and technologies."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00241"/>
<display value="Imbalance in exposure measurement"/>
<definition
value="An exposure measurement bias due to differences between groups in how the observed exposures are measured."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Paul Whaley, Robin Ann Yurk, Janice Tufte, Harold Lehmann"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context. "How the observed exposures are measured" may refer to the methods applied for data measurement or the application of those methods."/>
</property>
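<!-- Illustrative example (editorial sketch, not part of the voted record): if exposure is captured by self-report for some participants and by pharmacy records for others within the same comparison group, that variation is an inconsistency in exposure measurement (SEVCO:00229); if the intervention group is measured by pharmacy records while the control group is measured by self-report, the difference between groups is an imbalance in exposure measurement (SEVCO:00241). -->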
<property>
<code value="approval"/>
<valueString
value="2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer"/>
</property>
<concept>
<code value="SEVCO:00249"/>
<display
value="Imbalance in instruments used for exposure measurement"/>
<definition
value="An exposure measurement bias due to differences between groups in the instruments used for measurement."/>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
Instruments used for measurement may include devices, surveys, and technologies. The concept of "instruments used for measurement" is distinct from "processes used for measurement" which may include protocols, techniques, and variations in context."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer"/>
</property>
</concept>
<concept>
<code value="SEVCO:00250"/>
<display
value="Imbalance in processes used for exposure measurement"/>
<definition
value="An exposure measurement bias due to differences between groups in the processes by which the instruments are used for measurement."/>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
The processes used for measurement may include protocols, techniques, and variations in context. The concept of "processes used for measurement" is distinct from "instruments used for measurement" which may include devices, surveys, and technologies."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Mario Tristan, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Harold Lehmann, Paul Whaley"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer"/>
</property>
</concept>
</concept>
</concept>
<concept>
<code value="SEVCO:00230"/>
<display value="Exposure Classification Bias"/>
<definition
value="An exposure detection bias due to distortions in how the observed exposures are classified."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Misclassification Bias for exposure determination"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Exposure Misclassification Bias"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
If one is addressing a bias in the instruments or processes used to measure the observed exposure, use Exposure Measurement Bias. If one is addressing how the measured exposure is categorized, use Exposure Classification Bias."/>
</property>
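<!-- Illustrative example (editorial sketch, not part of the voted record): an error in the questionnaire used to capture pack-years of smoking would be an Exposure Measurement Bias, whereas categorizing participants with 5 pack-years as unexposed because of the cut point chosen would be an Exposure Classification Bias. -->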
<property>
<code value="approval"/>
<valueString
value="2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer"/>
</property>
<concept>
<code value="SEVCO:00231"/>
<display value="Exposure Definition Bias"/>
<definition
value="An exposure classification bias resulting from the definition or threshold used for exposure classification."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Definition Bias for exposure determination"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Threshold Bias for exposure determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan, Harold Lehmann, Paul Whaley"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
An exposure definition bias suggests an internal validity problem in which the definition or threshold used for exposure classification does not represent the exposure of interest. If considering an external validity problem, the "Wrong question bias" (term not yet defined) may be used. An exposure definition bias is present when there are differences between the exposure of interest and the definition or threshold used for exposure classification."/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-11 comments: Suggest alternative term: threshold bias for exposure determination.
Suggest remove sentence on external validity problem....
In the comments, "term not yet identified", should be flagged for later replacement."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer [but comment discussion led to new term]"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-18 vote 5-0 by Joanne Dehnbostel, Sumalatha A, Janice Tufte, Harold Lehmann, Paul Whaley"/>
</property>
<concept>
<code value="SEVCO:00232"/>
<display
value="Nonrepresentative definition for exposure classification"/>
<definition
value="An exposure definition bias due to a mismatch between the exposure of interest and the definition or threshold used for exposure measurement."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context."/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-11 comment: Should there be a hyphen between "classification" and "system"? (Is it a system(s) bias or a classification-system bias?) (I think this question applies to several definitions)"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer [but definition changed to match change to parent term]"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-18 vote 6-0 by Joanne Dehnbostel, Alejandro Piscoya, Sumalatha A, Janice Tufte, Harold Lehmann, Paul Whaley"/>
</property>
<concept>
<code value="SEVCO:00233"/>
<display
value="Surrogate marker bias for exposure classification"/>
<definition
value="An exposure definition bias due to use of a definition that is proxy for the exposure rather than direct observation of the exposure."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Substitution bias for exposure classification"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Proxy bias for exposure classification"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context."/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-11 comment: Suggest add alternative term: proxy bias for exposure classification system."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer [but definition changed to match change to parent term]"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-18 vote 5-0 by Joanne Dehnbostel, Sumalatha A, Janice Tufte, Harold Lehmann, Paul Whaley"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00234"/>
<display value="Post-hoc definition of exposure"/>
<definition
value="An exposure definition bias due to definition of the exposure after interacting with the study data."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Definition for exposure classification not prespecified"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Post-hoc exposure definition"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="No prespecified definition for exposure classification"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Paul Whaley"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer [but definition changed to match change to parent term]"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-18 vote 5-0 by Joanne Dehnbostel, Sumalatha A, Janice Tufte, Harold Lehmann, Paul Whaley"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00236"/>
<display
value="Classification process bias for exposure determination"/>
<definition
value="An exposure classification bias resulting from the application of the method used for exposure classification."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan, Harold Lehmann, Paul Whaley"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context.
A classification process bias for exposure determination suggests error is introduced by the process of classification, as distinct from the definition or threshold used (which would be an Exposure Definition Bias)."/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-11 comments: I would provide an example such as survey severity classification example of a method.
(Inconsistent capitalization)"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-02-11 vote 8-0 by Jesus Lopez-Alcalde, Mario Tristan, Robin Ann Yurk, Janice Tufte, Sunu Alice Cherian, Alejandro Piscoya, Harold Lehmann, Yasser Sami Amer [but term changed to parallel changes to Exposure Definition Bias]"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-18 vote 6-0 by Joanne Dehnbostel, Sumalatha A, Robin Ann Yurk, Janice Tufte, Harold Lehmann, Paul Whaley"/>
</property>
</concept>
<concept>
<code value="SEVCO:00237"/>
<display value="Incorporation Bias for exposure determination"/>
<definition
value="An exposure classification bias due to the inclusion of the outcome or other relevant exposures under investigation in the method or process used for exposure classification."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Mario Tristan, Paul Whaley"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The exposure of interest can be an intervention or a prognostic factor, depending on the research context. If the statistical analysis assumes independence of two variables, but one variable incorporates the other variable in its definition, the assumption will be false and the result will be distorted."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-02-25 vote 11-2 by Paul Whaley, Mario Tristan, Robin Ann Yurk, Janice Tufte, Yasser Sami Amer, Harold Lehmann, Alejandro Piscoya, Sunu Alice Cherian, Jesus Lopez-Alcalde, Sumalatha A, Joanne Dehnbostel, Paola Rosati, nisha mathew"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-18 comments: Needs an alternative term or new term definition. I.e. Inclusion Bias for exposure definition for the term. Alternative term; eligibility bias for exposure determination
Definitely needs a comment for application, I can't picture what this means!"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-03-11 vote 5-0 by Janice Tufte, Harold Lehmann, Philippe Rocca-Serra, nisha mathew, Paul Whaley"/>
</property>
</concept>
</concept>
</concept>
<concept>
<code value="SEVCO:00044"/>
<display value="Confounder Detection Bias"/>
<definition
value="A detection bias due to distortions in how the data for a potential confounder are determined."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper"/>
</property>
</concept>
<concept>
<code value="SEVCO:00045"/>
<display value="Detection Bias related to the reference standard"/>
<definition
value="A detection bias due to distortions in how the reference standard result is determined."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Bias for reference standard result determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="10/10 as of 8/27/2021: Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper"/>
</property>
</concept>
<concept>
<code value="SEVCO:00046"/>
<display value="Detection Bias related to the index test"/>
<definition
value="A detection bias due to distortions in how the index test result is determined."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Bias for index test result determination"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="5/5 as of 8/30/2021: Eric Harvey, Harold Lehmann, Mario Tristan, Bhagvan Kommadi, Janice Tufte,"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="8/27/2021 vote 8-1 on "Detection Bias related to the index test (Bias for index text result determination) = A detection bias due to distortions in how the index text result is determined." by, Eric Au, Eric Harvey, Harold Lehmann, Alejandro Piscoya, Mario Tristan, Bhagvan Kommadi, Leo Orozco, Janice Tufte, Jesús López-Alcalde, Brian S. Alper"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-08-27 comment: I think the word "text" should be "test" in the Alternative term and definition. Please consider broadening this term and definition to include distortions in how the index event is determined"/>
</property>
</concept>
<concept>
<code value="SEVCO:00383"/>
<display value="Data Entry Bias"/>
<definition
value="A detection bias due to differences between measured values and recorded values."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Data Recording Bias"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="Data Entry Bias may include distorted results due to errors in transcription, translation, or transposition between the measured value and the recorded value, or between a recorded value and a subsequent recording of the value."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Janice Tufte, Muhammad Afzal, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-08-26 vote 7-0 by nisha mathew, Philippe Rocca-Serra, Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, Cauê Monaco, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00389"/>
<display
value="Inappropriate time interval between predictor assessment and outcome determination"/>
<definition
value="A detection bias involving the time interval between the observation of the predictor and outcome, where the interval used by the study differs from the interval assumed by the predictive model."/>
<property>
<code value="comment"/>
<valueString
value="Nonrepresentative observation period for outcome of interest is defined as an outcome ascertainment bias due to differences in the period used for observation of the outcome and the period for the outcome of interest.
Nonrepresentative observation period for exposure of interest is defined as an exposure ascertainment bias due to differences in the time period used for observation of the exposure and the intended time period for the exposure of interest.
In the context of predictive modeling, the time interval between the exposure (predictor) and the outcome should be representative of the time interval of interest."/>
</property>
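<!-- Illustrative example (editorial sketch, not part of the voted record): a model intended to predict 5-year mortality risk that is developed or validated with only 1 year of outcome follow-up uses a time interval that differs from the interval assumed by the predictive model. -->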
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-10-06 vote 3-1 by Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Paul Whaley"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-10-06 comment: Two problems: (1) I am not sure how the definition equates to the term - in the term, it is about inappropriate time interval, but in the definition it is about the time interval not being that which is intended and representative of application of model. (2) I don't understand what is meant by the phrase "the intended time interval between the predictor and outcome that is representative of the application of the predictive model" - there are too many concepts all at once here, I think?"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-20 vote 5-0 by Muhammad Afzal, Eric Harvey, Harold Lehmann, Louis Leff, Joanne Dehnbostel"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00021"/>
<display value="Analysis Bias"/>
<definition
value="A bias related to the analytic process applied to the data."/>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Philippe Rocca-Serra, Mhuammad Afzal, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="6/6 as of 8/15/2021: Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Mhuammad Afzal, Eric Harvey"/>
</property>
<concept>
<code value="SEVCO:00022"/>
<display value="Bias related to selection of the analysis"/>
<definition
value="An analysis bias due to inappropriate choice of analysis methods before the analysis is applied."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Analysis Selection Bias"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Philippe Rocca-Serra, Mhuammad Afzal, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="6/6 as of 8/15/2021: Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Mhuammad Afzal, Eric Harvey"/>
</property>
<property>
<code value="comment"/>
<valueString
value="An analysis selection after the analysis is applied would be considered a Selective Analysis Reporting Bias."/>
</property>
<concept>
<code value="SEVCO:00376"/>
<display
value="Bias related to selection of the data for analysis"/>
<definition
value="An analysis bias due to inappropriate choice of data included in the analysis before the analysis is applied."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel"/>
</property>
<property>
<code value="comment"/>
<valueString
value="An analysis selection after the analysis is applied would be considered a Selective Analysis Reporting Bias."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-13 vote 6-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Janice Tufte, Mario Tristan, Paola Rosati"/>
</property>
<concept>
<code value="SEVCO:00213"/>
<display
value="Bias due to post-baseline factors influencing selection of the data for analysis"/>
<definition
value="A bias related to selection of the data analysis based on participant characteristics observed after study enrollment."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="ROBINS-I 2.1. Was selection of participants into the study (or into the analysis) based on participant characteristics observed after the start of intervention?"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-13 vote 6-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Janice Tufte, Mario Tristan, Paola Rosati"/>
</property>
</concept>
<concept>
<code value="SEVCO:00312"/>
<display
value="Missing or inadequate intention-to-treat analysis"/>
<definition
value="A bias related to selection of the data analysis in which data are not completely analyzed according to the original assignment to comparison groups in an interventional study."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Inadequate as-randomized analysis"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-05-13 vote 4-1 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati
2022-05-20 vote 9-1 by Joanne Dehnbostel, nelle.stocquart@kce.fgov.be, Eric M Harvey, Jesus Lopez-Alcalde, Paul Whaley, Robin Ann Yurk, Harold Lehmann, raradhikaag@gmail.com, Mario Tristan, Paola Rosati
2022-05-27 vote 4-1 by Robin Ann Yurk, Mario Tristan, Jesus Lopez-Alcalde, Eric M Harvey, Harold Lehmann"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-05-13 comment: Instead of defining "Inadequate intention-to-treat analysis" why not defining waht "intention-to-treat analysis" is?
2022-05-20 comment: Suggest change term name to Intention to Treat Analysis and remove word inadequate from the term as this term includes the limitation of the analysis in the definition.
2022-05-27 comment: Missing Data Analysis: examples are imputation of data according to rules.
Purpose: To provide additional validity that the data are not biased from the missing data. {{2022-05-27 discussion suggests this can be handled by the SEVCO:00307 term [Inappropriate handling of missing data] which is classified as a 'Bias in processing of data'}}"/>
</property>
<property>
<code value="comment"/>
<valueString
value="An intention-to-treat analysis may be defined as analysis of all randomized subjects according to their assigned intervention rather than according to the intervention actually received. There is considerable variation in reported studies with respect to the use of the term 'intention-to-treat analysis' and 'modified intention-to-treat analysis' but if the risk of bias assessment suggests an insufficient accounting for all participants as intended then one may report 'Inadequate intention-to-treat analysis'.
In non-randomized studies, this term may be used to denote missing or inadequate analysis according to the intended treatment, e.g prescribed medication vs. taken medication."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-03 vote 6-0 by Joanne Dehnbostel, Mario Tristan, Eric M Harvey, Harold Lehmann, Brian S. Alper, Jesus Lopez-Alcalde"/>
</property>
</concept>
<concept>
<code value="SEVCO:00313"/>
<display value="Missing or inadequate per-protocol analysis"/>
<definition
value="A bias related to selection of the data analysis in which data are not completely analyzed according to the study protocol."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel, Paul Whaley, Harold Lehmann, Muhammad Afzal"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-05-13 vote 4-1 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati
2022-05-20 vote 7-3 by Joanne Dehnbostel, nelle.stocquart@kce.fgov.be, Eric M Harvey, Jesus Lopez-Alcalde, Paul Whaley, Robin Ann Yurk, Harold Lehmann, raradhikaag@gmail.com, Mario Tristan, Paola Rosati
2022-05-27 vote 4-1 by Robin Ann Yurk, Mario Tristan, Jesus Lopez-Alcalde, Eric M Harvey, Harold Lehmann"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-05-20 comments: I do not fully agree with this definition. I propose following the Cochrane Handbook:
Naïve ‘per-protocol’ analysis: analysis restricted to individuals who adhered to their assigned interventions.
Moreover, there is another analysis that is often biased:
‘As-treated’ analysis: analysis in which participants are analysed according to the intervention they actually received, even if their randomized allocation was to a different treatment group
I would present these as different analyses (not as synonyms)
https://training.cochrane.org/handbook/current/chapter-08
------
I think I see what the definition is saying but it is rather hard to parse.
re: "Inadequate per-protocol analysis" = "A bias related to selection of the data analysis in which data are not completely analyzed according to the assignment to comparison groups according to the interventions received."
Suggest edit term so it reads per protocol analysis and remove the word inadequate. This type of analysis includes the bias in the term already
2022-05-13 comment: Instead of defining "Inadequate per-protocol analysis" why not defining what "per-protocol analysis" is?
2022-05-27 comment: Missing Data Analysis: examples are imputation of data according to rules.
Purpose: To provide additional validity that the data are not biased from the missing data. {{2022-05-27 discussion suggests this can be handled by the SEVCO:00307 term [Inappropriate handling of missing data] which is classified as a 'Bias in processing of data'}}"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A per-protocol analysis may be defined as analysis of participants according to adherence to the assigned intervention (the 'treatment protocol') and/or according to adherence to the data collection protocol. Adherence may refer to adherence by the study participants or study personnel."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-03 vote 6-0 by Joanne Dehnbostel, Mario Tristan, Eric M Harvey, Harold Lehmann, Brian S. Alper, Jesus Lopez-Alcalde"/>
</property>
</concept>
<concept>
<code value="SEVCO:00381"/>
<display value="Missing or inadequate as-treated analysis"/>
<definition
value="A bias related to selection of the data analysis in which data are not completely analyzed according to the interventions actually received."/>
<property>
<code value="comment"/>
<valueString
value="An as-treated analysis may be defined as analysis of subjects according to the intervention actually received rather than their assigned intervention."/>
</property>
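<!-- Hypothetical illustration (editorial sketch, not part of the voted record): if 100 participants are randomized to an intervention and 20 of them actually receive the comparator, an intention-to-treat analysis keeps all 100 in the intervention group as assigned, a per-protocol analysis restricts to the 80 who adhered to the assigned intervention, and an as-treated analysis reassigns the 20 to the comparator group according to what they actually received. -->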
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Paul Whaley, Harold Lehmann, Joanne Dehnbostel"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-05-20 comments (from precursor term of Inadequate per-protocol analysis): I do not fully agree with this definition. I propose following the Cochrane Handbook:
Naïve ‘per-protocol’ analysis: analysis restricted to individuals who adhered to their assigned interventions.
Moreover, there is another analysis that is often biased:
‘As-treated’ analysis: analysis in which participants are analysed according to the intervention they actually received, even if their randomized allocation was to a different treatment group
I would present these as different analyses (not as synonyms)
https://training.cochrane.org/handbook/current/chapter-08
------
I think I see what the definition is saying but it is rather hard to parse.
re: "Inadequate per-protocol analysis" = "A bias related to selection of the data analysis in which data are not completely analyzed according to the assignment to comparison groups according to the interventions received."
Suggest edit term so it reads per protocol analysis and remove the word inadequate. This type of analysis includes the bias in the term already
2022-05-27 comment: Missing Data Analysis: examples are imputation of data according to rules.
Purpose: To provide additional validity that the data are not biased from the missing data. {{2022-05-27 discussion suggests this can be handled by the SEVCO:00307 term [Inappropriate handling of missing data] which is classified as a 'Bias in processing of data'}}"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-03 vote 6-0 by Joanne Dehnbostel, Mario Tristan, Eric M Harvey, Harold Lehmann, Brian S. Alper, Jesus Lopez-Alcalde"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00377"/>
<display
value="Bias related to selection of the variables for analysis"/>
<definition
value="An analysis bias due to inappropriate choice of variables included in the analysis before the analysis is applied."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="comment"/>
<valueString
value="An analysis selection after the analysis is applied would be considered a Selective Analysis Reporting Bias."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-13 vote 5-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati"/>
</property>
<concept>
<code value="SEVCO:00292"/>
<display
value="Bias related to selection of the variables for adjustment for confounding"/>
<definition
value="An analysis bias due to inappropriate choice of the variables for adjustment for confounding before the analysis is applied."/>
<property>
<code value="comment"/>
<valueString
value="An analysis selection after the analysis is applied would be considered a Selective Analysis Reporting Bias."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-13 vote 5-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="This term was determined to also match 'Post-intervention confounding different (draft) Code: SEVCO:00283' which was originally derived from the trigger question from ROBINS-I: 1.6. Did the authors control for any post-intervention variables that could have been affected by the intervention? Detailed analysis found this to be more about improper control of 'confounding variables' that were not truly confounding variables."/>
</property>
<concept>
<code value="SEVCO:00299"/>
<display value="Bias controlling for time-varying confounding"/>
<definition
value="A bias related to selection of the variables for adjustment for confounding in which the confounding is time-dependent."/>
<property>
<code value="comment"/>
<valueString
value="An analysis selection after the analysis is applied would be considered a Selective Analysis Reporting Bias."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Mario Tristan, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-13 vote 6-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati, Janice Tufte"/>
</property>
</concept>
<concept>
<code value="SEVCO:00301"/>
<display value="Inadequate adherence effect analysis"/>
<definition
value="A bias related to selection of the variables for adjustment for confounding by adherence."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Bias controlling for adherence effect"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Bias controlling for confounding by adherence"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="An analysis selection after the analysis is applied would be considered a Selective Analysis Reporting Bias."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-13 vote 6-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati, Janice Tufte"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00302"/>
<display value="Predictors included in outcome definition"/>
<definition
value="An analysis bias due to inappropriate choice of the variables for estimation of association in which one variable is incorporated in the definition of the other variable."/>
<property>
<code value="comment"/>
<valueString
value="Predictors are also called covariates, risk indicators, prognostic factors, determinants, index test results, or independent variables (https://www.acpjournals.org/doi/10.7326/M18-1377).
If a predictor in the model forms part of the definition or assessment of the outcome that the model predicts, the association between predictor and outcome will likely be overestimated, and estimates of model performance will be optimistic; in diagnostic research, this problem is generally called incorporation bias. (https://www.acpjournals.org/doi/10.7326/M18-1377)
When this type of analysis bias is applied to predictive model analyses (in which the predictor is the exposure of interest), this type of bias is equivalent to "Incorporation bias for outcome determination" [SEVCO:00063]"/>
</property>
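<!-- Illustrative example (editorial sketch, not part of the voted record): if a rise in serum creatinine is both a predictor in the model and part of the definition of the acute kidney injury outcome that the model predicts, the predictor is incorporated in the outcome definition and the apparent association will likely be overestimated. -->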
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-07-29 vote 5-0 by Janice Tufte, Philippe Rocca-Serra, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-07-29 comment: should "incorporation bias" be added as 'alternative term' ?"/>
</property>
</concept>
<concept>
<code value="SEVCO:00319"/>
<display
value="Bias related to selection of predictors based on univariable analysis"/>
<definition
value="An analysis bias due to inappropriate choice of the predictor variables for estimation of association in which predictors are selected based on statistically significant univariable associations (without adjustment for other predictors)."/>
<property>
<code value="comment"/>
<valueString
value="Predictors are also called covariates, risk indicators, prognostic factors, determinants, index test results, or independent variables (https://www.acpjournals.org/doi/10.7326/M18-1377)."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="PROBAST (https://www.acpjournals.org/doi/10.7326/M18-1377).:
4.5 Was selection of predictors based on univariable analysis avoided? (Model development studies only)
A data set will often have many features that could be used as candidate predictors, and in many studies researchers want to reduce the number of predictors during model development to produce a simpler model.
In a univariable analysis, individual predictors are tested for their association with the outcome. Researchers often select the predictors with a statistically significant univariable association (for example, P < 0.05) for inclusion in the development of a final prediction model. This method can lead to incorrect predictor selection because predictors are chosen on the basis of their statistical significance as a single predictor rather than in context with other predictors (49, 50, 191). Bias occurs when univariable modeling results in omission of variables from the model, because some predictors are important only after adjustment for other predictors, or because predictors known from previous research to be important did not reach statistical significance in the particular development set (for example, due to small sample size). Also, predictors may be selected on the basis of a spurious (accidental) association with the outcome in the development set.
A better approach to decide on omitting, combining, or including candidate predictors in multivariable modeling is to use nonstatistical methods—that is, methods without any statistical univariable pretesting of the associations between candidate predictors and outcome. Better methods include those based on existing knowledge of previously established predictors in combination with the reliability, consistency, applicability, availability, and costs of predictor measurement relevant to the targeted setting. Well-established predictors and those with clinical credibility should be included and retained in a prediction model regardless of any statistical significance (49, 50, 192). Alternatively, some statistical methods that are not based on prior statistical tests between predictor and outcome can be used to reduce the number of modeled predictors (for example, principal components analysis)."/>
</property>
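<!-- Illustrative example (editorial sketch, not part of the voted record): dropping a clinically established predictor because its univariable P value was 0.20 in a small development set, even though it would contribute to the model after adjustment for other predictors, is an instance of this bias. -->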
<property>
<code value="approval"/>
<valueString
value="2022-07-29 vote 5-0 by Janice Tufte, Philippe Rocca-Serra, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00378"/>
<display
value="Bias related to selection of the analytic framework"/>
<definition
value="An analysis bias due to inappropriate choice of the analytic framework before the analysis is applied."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Bias related to selection of the statistical framework"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Mario Tristan, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="comment"/>
<valueString
value="An analytic framework is the model, scaffolding, or organizational representation of concepts used in analyzing the data. The concepts included in an analytic framework may involve data, variables, formulas, assumptions, and adjustments."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-05-13 vote 6-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Paola Rosati, Janice Tufte [[but then the term changed in webmeeting 2022-05-13]]
2022-05-20 vote 4-2 by Joanne Dehnbostel, Eric M Harvey, Mario Tristan, Jesus Lopez-Alcalde, Paul Whaley, Harold Lehmann"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-05-20 comments:
seems to be entirely too much overlap with the "inappropriate analytic framework" term
I like this term and definition but I am not sure it is adequately differentiated from "inappropriate analytical framework". I think the term needs changing in some way."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-27 vote 5-0 by Mario Tristan, Jesus Lopez-Alcalde, Eric M Harvey, Harold Lehmann, Joanne Dehnbostel"/>
</property>
<concept>
<code value="SEVCO:00297"/>
<display value="Inappropriate statistical model"/>
<definition
value="A bias related to selection of the analytic framework in which the analytic model does not match the dataset characteristics or does not match the intention of the analysis."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Inappropriate analytic model"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="A bias related to selection of the analytic framework is defined as an analysis bias due to inappropriate choice of the analytic framework before the analysis is applied.
An inappropriate statistical model may include one in which there is a mismatch between the realities of the data and the assumptions required for the analytic model. Complexities in the data may include univariate concerns (e.g. skewness or outliers) and multivariate concerns (e.g. curvilinearity, co-linearity, or latent associations between variables)."/>
</property>
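<!-- Illustrative examples (editorial sketch, not part of the voted record): fitting an ordinary linear regression to a heavily skewed outcome with influential outliers, or analyzing clustered observations (for example, repeated measurements within patients) with a model that assumes independent observations, are mismatches between the analytic model and the dataset characteristics. -->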
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-05-20 vote 5-1 by Joanne Dehnbostel, Eric M Harvey, Mario Tristan, Jesus Lopez-Alcalde, Paul Whaley, Harold Lehmann"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-05-20 comment:
I like this term and definition but I am not sure it is adequately differentiated from "Bias related to selection of the analytic framework". I think the term needs changing in some way.
2022-09-30 Steering Group change to Comment to application: comment added to this term instead of creating a new term for 'Inappropriate handling of complexities in the data'"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-27 vote 5-0 by Mario Tristan, Jesus Lopez-Alcalde, Eric M Harvey, Harold Lehmann, Joanne Dehnbostel"/>
</property>
<concept>
<code value="SEVCO:00375"/>
<display value="Inappropriate modeling of censoring"/>
<definition
value="An inappropriate statistical model due to inappropriate accounting for ranges of potential observation in which data observation is not possible."/>
<property>
<code value="external-definitions"/>
<valueString
value="PROBAST 4.6 Were complexities in the data (e.g. censoring, competing risks, sampling of controls) accounted for appropriately?"/>
</property>
<property>
<code value="comment"/>
<valueString
value="An inappropriate statistical model is a bias related to selection of the analytic framework in which the analytic model does not match the dataset characteristics or does not match the intention of the analysis.
The "ranges of potential observation" may include periods of time (temporal ranges within which observation may occur), or ranges of detection with a measurement instrument (ranges of values that could be observed).
The concept of ranges of potential observation in which data observation is "not possible" may include impossibility due to physical realities (such as timing after competing risks or measurement instruments with limited ranges of detection) or impossibility due to administrative decisions (such as the observation period defined by the study protocol)."/>
</property>
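<!-- Illustrative example (editorial sketch, not part of the voted record): if a participant is observed for only 3 of 10 planned years of follow-up, treating that participant as outcome-free for the full 10 years understates the event rate; an approach that accounts for censoring (for example, counting only the 3 person-years at risk or using survival methods) avoids this distortion. -->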
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-10-20 vote 7-0 by Philippe Rocca-Serra, Harold Lehmann, Joanne Dehnbostel, Mario Tristan, Paul Whaley, Janice Tufte, Eric Harvey"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00316"/>
<display
value="Bias due to selection of the statistical significance threshold"/>
<definition
value="An analysis bias resulting from selection of an inappropriate threshold for statistical significance."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Significance-threshold selection bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Statistical significance threshold selection bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Bias related to selection of the threshold for statistical significance"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="The statistical significance threshold is part of the analytic framework. A bias related to selection of the analytic framework is defined as an analysis bias due to inappropriate choice of the analytic framework before the analysis is applied.
In frequentist analysis, statistical significance is the rejection of the null hypothesis based on the p value. In Bayesian analysis, statistical significance is the acceptance of the hypothesis based on the posterior probability."/>
</property>
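<!-- Illustrative example (editorial sketch, not part of the voted record): with a prespecified alpha of 0.05 a result with P = 0.03 is declared statistically significant, while with an alpha of 0.01 it is not; selecting the threshold after seeing the result, or choosing a threshold that does not fit the purpose of the analysis, is the kind of inappropriate selection this term describes. -->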
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel, Paul Whaley"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-06-10 vote 5-1 by Brian S. Alper, Robin Ann Yurk, Paola Rosati, Mario Tristan, Harold Lehmann, Eric M Harvey
2022-06-17 vote 4-1 by Paul Whaley, Muhammad Afzal, Eric M Harvey, Jesus Lopez-Alcalde, Paola Rosati"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-06-10 comment: Consider editing the term definition to just Statistical significance threshold. For the alternative term remove word bias. For the comment for application remove the first sentence about bias.
2022-06-17 comments: I think I get it, but it is a bit tortured and I wonder if a normal user would interpret it correctly or understand it?
I am not sure we can rephrase the concept name making it more compact like "Statistical significance threshold selection bias""/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-24 vote 5-0 by Mario Tristan, Harold Lehmann, Eric Harvey, Janice Tufte, Louis Leff"/>
</property>
<concept>
<code value="SEVCO:00317"/>
<display
value="Bias related to multiple comparison adjustment"/>
<definition
value="An analysis bias resulting from selection of a threshold for statistical significance which does not appropriately account for the effect of multiple comparisons on the statistical probability related to the result."/>
<property>
<code value="comment"/>
<valueString
value="This bias may cause inappropriate rejection of the null hypothesis due to an unmodified threshold for significance in the face of multiple comparisons. This bias may also occur when adjustment for multiple comparisons is inappropriately applied and leads to failure to reject the null hypothesis.
A bias due to selection of the statistical significance threshold is defined as an analysis bias resulting from selection of an inappropriate threshold for statistical significance.
In frequentist analysis, statistical significance is the rejection of the null hypothesis based on the p value. In Bayesian analysis, statistical significance is the acceptance of the hypothesis based on the posterior probability."/>
</property>
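<!-- Worked illustration (editorial sketch, not part of the voted record): if 20 independent comparisons are each tested at an unadjusted threshold of 0.05, the probability of at least one false positive finding is 1 minus 0.95 to the 20th power, approximately 0.64; a Bonferroni-style adjustment would test each comparison at 0.05 divided by 20, or 0.0025. Conversely, applying such an adjustment when only one prespecified comparison is of interest can lead to failure to reject the null hypothesis, as noted above. -->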
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Paul Whaley"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-06-10 vote 3-2 by Brian S. Alper, Robin Ann Yurk, Paola Rosati, Mario Tristan, Harold Lehmann
2022-06-17 vote 4-1 by Paul Whaley, Muhammad Afzal, Eric M Harvey, Jesus Lopez-Alcalde, Paola Rosati"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-06-10 comments: The measure does not have a statistical probability, the finding or result has a statistical probability. Change definition to "A statistical significance threshold selection bias in which the threshold for statistical significance does not account for the effect of multiple comparisons on the statistical probability related to the result."Is this a bias or just an incomplete analysis due to data requirements needed to compute the multiple comparison adjustment.
2022-06-17 comment: Looking at the significance threshold bias terms, the other two refer to selection of the analytic framework, but this one does not. Is there a reason for that?"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-24 vote 5-0 by Muhammad Afzal, Mario Tristan, Harold Lehmann, Eric Harvey, Louis Leff"/>
</property>
</concept>
<concept>
<code value="SEVCO:00382"/>
<display
value="Mismatch of significance threshold and purpose"/>
<definition
value="An analysis bias resulting from selection of a threshold for statistical significance which is inappropriate due to a mismatch between (1) how the statistical probability related to the result is determined and (2) the purpose for categorizing the result as statistically significant."/>
<property>
<code value="comment"/>
<valueString
value="A threshold used for variable selection in regression analysis is often more liberal than a threshold used in hypothesis testing. Similarly a situation regarding safety may tolerate a higher chance of false positive findings so significance threshold may be higher. Some factors to consider include sample size, power of the test, and expected losses from Type I and Type II errors.
In frequentist analysis, statistical significance is the rejection of the null hypothesis based on the p value. In Bayesian analysis, statistical significance is the acceptance of the hypothesis based on the posterior probability."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Mario Tristan, Paul Whaley"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="How to Choose the Level of Significance: A Pedagogical Note -- The level of significance should be chosen with careful consideration of the key factors such as the sample size, power of the test, and expected losses from Type I and II errors. While the conventional levels may still serve as practical benchmarks, they should not be adopted mindlessly and mechanically for every application.
(https://mpra.ub.uni-muenchen.de/66373/1/MPRA_paper_66373.pdf)"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-06-10 vote 2-2 by Brian S. Alper, Robin Ann Yurk, Mario Tristan, Harold Lehmann
2022-06-17 vote 4-1 by Paul Whaley, Muhammad Afzal, Eric M Harvey, Jesus Lopez-Alcalde, Paola Rosati"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-06-10 comments: A mismatch can occur even if the purpose was taken into account. As the term name "Mismatch of significance threshold and purpose" is a match for the definition of the parent term (Statistical significance threshold selection bias) there is a question of whether this term is needed.
Receiver operator curves are traditionally a statistic used to represent the continuum of cut point for the threshold value. The Sensitivity and Specificity can be calculated to evaluate the validity of the threshold cut point.
2022-06-17 comment: Add "Bias related to..." at beginning for consistency with others. What work is "selection of the analytic framework" doing in this definition?"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-24 vote 5-0 by Muhammad Afzal, Mario Tristan, Harold Lehmann, Eric Harvey, Janice Tufte"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00304"/>
<display value="Immortal time bias"/>
<definition
value="A bias related to selection of the analytic framework in which an outcome variable includes an observation period during which the outcome could not have occurred."/>
<property>
<code value="comment"/>
<valueString
value="Consider a study in which a sample is followed from 2000 to 2010. Mortality during this time period is the outcome, and receipt of Superdrug is the exposure.
--If 20 people received Superdrug in 2009 and 5 of them died in the subsequent year, the mortality with Superdrug is 25%.
--If 20 people never received Superdrug and 1 died each year, then by 2010 the mortality without Superdrug is 50%.
Interpreting this result as Superdrug having a 50% relative risk reduction for mortality would be biased (distorted) by not accounting for the 9 years of time (immortal time) that the Superdrug recipients must have survived to be able to receive Superdrug in 2009.
If the outcome variable were defined as mortality 2009-2010, there would be no bias and the result would be a 150% relative risk increase.
If the outcome variable were defined as mortality 2000-2010, there is an immortal time bias (the Superdrug recipients could not have died before receiving Superdrug)."/>
</property>
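<!-- Illustrative sketch (Python), not normative content of this code system; it simply rechecks the
     Superdrug arithmetic from the comment above (assuming the 2009-2010 window spans two years with
     one death per year in the untreated group, as described there).
mortality_exposed = 5 / 20                 # 0.25: 5 of 20 Superdrug recipients die after treatment in 2009
mortality_unexposed_2000_2010 = 10 / 20    # 0.50: one death per year over the full follow-up period
mortality_unexposed_2009_2010 = 2 / 20     # 0.10: same observation window as the exposed group
biased_rr = mortality_exposed / mortality_unexposed_2000_2010     # 0.5, an apparent 50% risk reduction
corrected_rr = mortality_exposed / mortality_unexposed_2009_2010  # 2.5, a 150% relative risk increase
-->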
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Joanne Dehnbostel, Paul Whaley, Janice Tufte"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="Catalog of bias: A distortion that modifies an association between an exposure and an outcome, caused when a cohort study is designed so that follow-up includes a period of time where participants in the exposed group cannot experience the outcome and are essentially 'immortal'.
in https://academic.oup.com/aje/article/167/4/492/233064 :
Immortal time refers to a span of time in the observation or follow-up period of a cohort during which the outcome under study could not have occurred (13, 14). It usually occurs with the passing of time before a subject initiates a given exposure. While a subject is not truly immortal during this time span, the subject necessarily had to remain event free until start of exposure to be classified as exposed. An incorrect consideration of this unexposed time period in the design or analysis will lead to immortal time bias.
in JAMA https://jamanetwork.com/journals/jama/article-abstract/2776315
Such studies may be subject to immortal time bias, meaning that, during the period of observation, there is some interval during which the outcome event cannot occur
in https://watermark.silverchair.com/dyab157.pdf
In particular, incorrect handling of follow-up times in terms of exposure status in the analysis of such studies may introduce immortal time bias (ITB) in favour of the exposed group.2,3 Immortal time refers to a period of time in which, by design, participants in the exposed group cannot experience the outcome. This often happens in pharmacoepidemiologic studies in which treatment is prescribed at variable times (with delay) after disease diagnosis. The bias occurs when the exposed group is considered to be exposed during their entire follow-up time (even during periods in which they are theoretically unexposed) or their unexposed follow-up times are discarded.2,3"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-07-15 vote 5-1 by Mario Tristan, Philippe Rocca-Serra, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann, Janice Tufte"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-07-15 comment: Why the need to specify "in a larger sample" in the second sentence, since there is no assumption about size of the sample in the first assertion?"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-07-22 vote 7-0 by Mario Tristan, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Jesus Lopez-Alcalde, Janice Tuft, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00293"/>
<display value="Inadequate sample size"/>
<definition
value="A bias related to selection of the analytic framework in which the sample size invalidates the assumptions of the analytic framework."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Inadequate numbers for analysis"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="An example of 'Inadequate sample size' is a finding of no effect with inadequate power to detect an effect. Another example of 'Inadequate sample size' is use of a parametric analysis with low numbers, which invalidates the assumptions for use of a parametric analysis."/>
</property>
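<!-- Illustrative sketch (Python), not normative content of this code system. A rough normal-approximation
     power calculation with hypothetical numbers, showing how a "no effect" finding from an underpowered
     comparison exemplifies 'Inadequate sample size'.
from math import sqrt
from scipy.stats import norm
d, n, alpha = 0.5, 20, 0.05                  # standardized effect size, participants per group, two-sided alpha
z_crit = norm.ppf(1 - alpha / 2)
power = norm.cdf(d * sqrt(n / 2) - z_crit)   # roughly 0.35, so a null finding is weak evidence of no effect
-->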
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Janice Tufte, Joanne Dehnbostel, Mario Tristan, Khalid Shahin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-07-22 vote 7-0 by Mario Tristan, Paul Whaley, Philippe Rocca-Serra, Harold Lehmann, Jesus Lopez-Alcalde, Janice Tuft, Eric Harvey"/>
</property>
</concept>
</concept>
</concept>
<concept>
<code value="SEVCO:00294"/>
<display value="Bias related to execution of the analysis"/>
<definition
value="An analysis bias due to inappropriate decisions pertaining to preparation of data for analysis and/or conduct of the analysis."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Bias in processing of data"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte, Harold Lehmann, Paul Whaley, Yuan Gao"/>
</property>
<property>
<code value="comment"/>
<valueString
value=""Bias related to selection of the analysis" is used when the wrong analysis is done (the analysis is planned wrongly). "Bias in processing of data" is used when the analysis is done wrong (the analysis is executed wrongly)."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-11-04 vote 5-0 by Philippe Rocca-Serra, Jesus Lopez-Alcalde, Janice Tufte, Harold Lehmann, Eric Harvey"/>
</property>
<concept>
<code value="SEVCO:00305"/>
<display value="Incomplete analysis"/>
<definition
value="An analysis bias due to absence of a component of the analytic process."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Incomplete analysis process"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="Missing components may include addressing missing data, addressing potential confounders, checking model assumptions, or robustness checks for model misspecification."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Joanne Dehnbostel, Brian S. Alper, Kenneth Wilkins, Muhammad Afzal, Khalid Shahin"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-08-12 vote 4-1 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey
2022-08-19 vote 5-1 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey, Philippe Rocca-Serra
2022-08-25 vote 8-1 by nisha mathew, Jesus Lopez-Alcalde, Cauê Monaco, Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey, Philippe Rocca-Serra"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-08-12 comment: Ambiguous as to whether the data is incomplete or the analytic process incomplete. Also seems to be ambiguous as to whether the analysis is of a selected subset of the existing data (thus relating to selection bias?), or of data that is not representative of the totality of theoretically available data (thus relating to external validity?).
2022-08-19 comment: tension between bias and process. Shouldn't it be "incomplete analysis related bias"? omission seems to indicate a wilful act. "absence" may be more neutral when considering a 'canonical / state of the art / standardised ' protocol.
"An analysis bias due to absence of a component deemed necessary in a state-of- art (possibly regulator-approved ) analytic process.""/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-09-09 vote 6-0 by Philippe Rocca-Serra, Harold Lehmann, Jesus Lopez-Alcalde, Khalid Shahin, Janice Tufte, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00306"/>
<display value="Inappropriate handling of uninterpretable data"/>
<definition
value="An analysis bias due to omission of uninterpretable values, or their replacement with inappropriate values."/>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Joanne Dehnbostel, Brian S. Alper, Kenneth Wilkins, Muhammad Afzal"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-08-12 vote 4-1 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-08-12 comment: I'm not sure I would understand the definition if I had not read the term, suggest rephrasing - "omission of accommodation for" is perhaps the problem part."/>
</property>
<property>
<code value="comment"/>
<valueString
value="Inappropriate values may include use of non-representative imputation treating uninterpretable data like missing data.
In evaluation of diagnostic tests, omission of or inappropriate classification of test results would be Inappropriate handling of uninterpretable data."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-09-16 vote 5-0 by Mario Tristan, Janice Tufte, Eric Harvey, Yaowaluk Ngoenwiwatkul, nisha mathew"/>
</property>
</concept>
<concept>
<code value="SEVCO:00307"/>
<display value="Inappropriate handling of missing data"/>
<definition
value="An analysis bias due to use of non-representative values in place of missing data."/>
<property>
<code value="comment"/>
<valueString
value="Handling of missing data may address data missing at levels of single observations or groupings by encounter, participant, site, or subpopulation."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Joanne Dehnbostel, Brian S. Alper, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-08-12 vote 5-0 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00308"/>
<display value="Inappropriate handling of variables"/>
<definition
value="An analysis bias due to processing a variable in an incorrect role or with an incorrect datatype."/>
<property>
<code value="comment"/>
<valueString
value="Typical variable roles are population, exposure, confounder, and outcome.
A variable datatype may be numerical (continuous or discrete) or categorical (ordinal or nominal)."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Joanne Dehnbostel, Brian S. Alper, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-09-16 vote 5-0 by Mario Tristan, Janice Tufte, Eric Harvey, Yaowaluk Ngoenwiwatkul, nisha mathew"/>
</property>
<property>
<code value="change-for-vote"/>
<valueString
value="Consider types to include Inappropriate handling of confounders, and Inappropriate handling of measurement error"/>
</property>
</concept>
<concept>
<code value="SEVCO:00300"/>
<display value="Bias in adjustment for selection bias"/>
<definition
value="An analysis bias due to inappropriate application of adjustment techniques for correction of bias in the selection of participants for analysis."/>
<property>
<code value="comment"/>
<valueString
value="Bias in the selection of participants for analysis could occur due to Participant Selection Bias (SEVCO:00003) or participant-level Bias related to selection of the data for analysis (SEVCO:00376).
"It is in principle possible to correct for selection biases, for example by using inverse probability weights to create a pseudo-population in which the selection bias has been removed, or by modelling the distributions of the missing participants or follow up times and outcome events and including them using missing data methodology." (Sterne JA, Hernán MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016 Oct 12;355:i4919. doi: 10.1136/bmj.i4919. PMID: 27733354; PMCID: PMC5062054. Supplementary Table A.)"/>
</property>
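<!-- Illustrative sketch (Python), not normative content of this code system. A toy inverse probability
     weighting calculation with hypothetical numbers, in the spirit of the ROBINS-I passage quoted above.
share = {"A": 0.5, "B": 0.5}           # target population: half group A, half group B
mean = {"A": 10.0, "B": 20.0}          # true overall mean outcome is 15
p_sel = {"A": 0.8, "B": 0.2}           # unequal probabilities of being selected into the analysis set
sel_share = {g: share[g] * p_sel[g] for g in share}
naive = sum(sel_share[g] * mean[g] for g in share) / sum(sel_share.values())   # 12.0, distorted toward A
w = {g: sel_share[g] / p_sel[g] for g in share}                                # inverse probability weights
ipw = sum(w[g] * mean[g] for g in share) / sum(w.values())                     # 15.0, selection bias removed
-->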
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value=""It is in principle possible to correct for selection biases, for example by using
inverse probability weights to create a pseudo-population in which the
selection bias has been removed, or by modelling the distributions of the
missing participants or follow up times and outcome events and including
them using missing data methodology." (Sterne JA, Hernán MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016 Oct 12;355:i4919. doi: 10.1136/bmj.i4919. PMID: 27733354; PMCID: PMC5062054. Supplementary Table A.)"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-09-30 vote 5-0 by Jesus Lopez-Alcalde, Harold Lehmann, Janice Tufte, Eric Harve, Morufu Olalekan Raimi"/>
</property>
</concept>
<concept>
<code value="SEVCO:00309"/>
<display value="Data Transition Bias"/>
<definition
value="An analysis bias due to differences between recorded data and data used for analysis."/>
<property>
<code value="comment"/>
<valueString
value="Data Transition Bias may include distorted results due to errors in transcription, translation, erroneous mapping, or transposition between the recorded data (values, labels, and other metadata) and the data used for analysis. Data Transition Bias may occur due to any problem encountered during the Extraction, Transformation, and Loading (ETL) process in data exchange."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-11-04 vote 5-0 by Philippe Rocca-Serra, Jesus Lopez-Alcalde, Janice Tufte, Harold Lehmann, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00311"/>
<display value="Inappropriate handling of missing confounder data"/>
<definition
value="An analysis bias due to use of non-representative values in place of missing data for variables in the role of confounder."/>
<property>
<code value="comment"/>
<valueString
value="Handling of missing confounder data may address data missing at levels of single observations or groupings by encounter, participant, site, or subpopulation. Inappropriate handling of missing confounder data can result in misleading adjusted analyses."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-09-16 vote 5-0 by Mario Tristan, Janice Tufte, Eric Harvey, Yaowaluk Ngoenwiwatkul, nisha mathew"/>
</property>
</concept>
<concept>
<code value="SEVCO:00298"/>
<display value="Computational Implementation Bias"/>
<definition
value="An analysis bias due to miscalculations in the processing of the data."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Bug"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Statistical Programming Error"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Programming Error"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="This bias is intended to cover a broad range of errors in curating the data and performing the calculations specified or implied by the analytic plan, including but not limited to: memory allocation and other environmental specifications, data ingestion pipeline, statistical package choice and vetting, and syntax, semantics and logic of coding. this bias can be applied to both manual or computer based computation."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte, Harold Lehmann, Khalid Shahin, Muhammad Afzal, Neeraj Ojha"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-08-12 vote 4-1 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey
2022-08-19 vote 4-2 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey, Philippe Rocca-Serra"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-08-12 comment: Not sure about including data entry errors among errors in software code - the latter is a computational error, the former is not. Also, the definition does not specify computational processing.
2022-08-19 comment: the class label is ambiguous: is it "computation error caused bias" or is it 'contradictions caused bias'? The latter term does not add clarity.
Also, only data entry errors resulting from computational errors would fall under this type of bias, but not direct entry of values."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-09-09 vote 6-0 by Philippe Rocca-Serra, Harold Lehmann, Jesus Lopez-Alcalde, Khalid Shahin, Janice Tufte, Eric Harvey"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00324"/>
<display
value="Reported analysis not following pre-specified analysis plan"/>
<definition
value="An analysis bias in which the reported analysis does not match the pre-specified analysis plan."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-03-10 vote 5-0 by Joanne Dehnbostel, Mario Tristan, Harold Lehmann, Eric Harvey, Jesus Lopez-Alcalde"/>
</property>
</concept>
<concept>
<code value="SEVCO:00303"/>
<display value="Collider bias"/>
<definition
value="An analysis bias in which an estimation of association between two variables is distorted by controlling for a third variable affected by both variables of interest (or factors causing the variables of interest)."/>
<property>
<code value="external-definitions"/>
<valueString
value="A structural classification of bias distinguishes between biases resulting from conditioning on common effects (“selection bias”) --- A Structural Approach to Selection Bias, https://journals.lww.com/epidem/Fulltext/2004/09000/A_Structural_Approach_to_Selection_Bias.20.aspx
Collider bias occurs when an exposure and outcome (or factors causing these) each influence a common third variable and that variable or collider is controlled for by design or analysis. In contrast, confounding occurs when an exposure and outcome have a shared common cause that is not controlled for. -- JAMA 2022 Mar 14 https://jamanetwork.com/journals/jama/fullarticle/2790247
https://catalogofbias.org/biases/collider-bias/ Collider bias = A distortion that modifies an association between an exposure and outcome, caused by attempts to control for a common effect of the exposure and outcome"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Collider bias occurs when an exposure and outcome (or factors causing these) each influence a common third variable and that variable or collider is controlled for by design or analysis. In contrast, confounding occurs when an exposure and outcome have a shared common cause that is not controlled for. (JAMA 2022 Mar 14 https://jamanetwork.com/journals/jama/fullarticle/2790247)
The "third variable" affected by both variables of interest can also be a "third variable" affected by an "intermediary variable" which is affected by both variables of interest.
An analysis bias is defined as a bias related to the analytic process applied to the data.
A bias is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation])."/>
</property>
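<!-- Illustrative sketch (Python), not normative content of this code system. A minimal simulation of
     collider bias: two independent variables both influence a third variable; conditioning on that
     collider (here, by selecting on it) induces a spurious association between them.
import numpy as np
rng = np.random.default_rng(0)
x = rng.normal(size=100_000)           # "exposure"
y = rng.normal(size=100_000)           # "outcome", generated independently of x
c = x + y + rng.normal(size=100_000)   # collider, affected by both x and y
print(np.corrcoef(x, y)[0, 1])             # close to 0: no true association
sel = c > 1.0                              # controlling for the collider by selection/stratification
print(np.corrcoef(x[sel], y[sel])[0, 1])   # clearly negative: distorted estimate of association
-->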
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Joanne Dehnbostel, Paul Whaley"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-07-01 vote 3-2 by Harold Lehmann, Paul Whaley, Jesus Lopez-Alcalde, Eric Harvey, Philippe Rocca-Serra"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-07-01 comments: Is this the same as a confounding variable? If not, please differentiate.
the first comment seems a bit confusing: does collider bias occurs when the study design controls for a variable which is influenced by both the exposure and the outcome?
I'm not sure this is correct. My understanding is that collision comes into play when effect modifiers are treated as confounders (and possibly when confounders are treated as modifiers? I don't know if it is symmetric). This reads as though it is an analysis unadjusted for confounders, with the factor causing both the cause and effect variables.
Confounding: A < B > C and A > C
Modification: A > B > C and A > C
Collision: Conditioning on B under modification rather than confounding."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-07-08 vote 5-0 by Jesus Lopez-Alcalde, Eric Harvey, Paul Whaley, Janice Tufte, Harold Lehmann"/>
</property>
</concept>
<concept>
<code value="SEVCO:00314"/>
<display value="Preliminary analysis bias"/>
<definition
value="An analysis bias related to analysis of data before the complete dataset is available."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Interim analysis bias"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-12-23 vote 6-0 by Joanne Dehnbostel, Harold Lehmann, Yuan Gao, Jesus Lopez-Alcalde, Mario Tristan, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00295"/>
<display value="Data-dredging bias"/>
<definition
value="An analysis bias involving use of data analyses that are not pre-specified and fully disclosed, to select analyses with desirable results."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="p-hacking"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Fishing expedition"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Data snooping bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="p-hacking bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Fishing expedition bias"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="Types of data analysis that lead to data-dredging bias include but are not limited to repeated subgroup analyses, repeated adjusted analyses, repeated analyses with different analytic models, and repeated analyses across many outcomes for many variations of defining outcomes, any of which can be done to select ("cherry-pick") the analyses that provide a desired result. The desired result may be statistically significant findings or other specific results.
The terms "p-hacking" and "Fishing expedition" are commonly used terms to describe data-dredging practices that lead to bias and are often used to imply bias."/>
</property>
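<!-- Illustrative sketch (Python), not normative content of this code system. The arithmetic behind why
     repeated, undisclosed analyses inflate false positive findings (assuming independent tests of true
     null hypotheses at a nominal 0.05 threshold).
alpha, k = 0.05, 20
p_at_least_one_significant = 1 - (1 - alpha) ** k   # about 0.64 across 20 looks at the data
-->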
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Janice Tufte, Paul Whaley, Kenneth Wilkins"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="from Catalog of Bias (https://catalogofbias.org/biases/data-dredging-bias/): Data-dredging bias = A distortion that arises from presenting the results of unplanned statistical tests as if they were a fully prespecified course of analyses.
from BMJ Evidence-Based Medicine (https://ebm.bmj.com/content/27/4/209): Background: what is data dredging bias?
Data-dredging bias encompasses a number of more specific questionable practices (eg, fishing, p-hacking) all of which involve probing data using unplanned analyses and then reporting salient results without accurately describing the processes by which the results were generated.
from Wikipedia (https://en.wikipedia.org/wiki/Data_dredging): Data dredging (also known as data snooping or p-hacking) is the misuse of data analysis to find patterns in data that can be presented as statistically significant, thus dramatically increasing and understating the risk of false positives."/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-12-09 comment: Ioannidis, J. P. A. (2019) P values linked to null hypothesis significance testing (NHST) is the most widely (mis)used method of statistical inference. Empirical data suggest that across the biomedical literature (1990–2015), when abstracts use P values 96% of them have P values of 0.05 or less. The same percentage (96%) applies for full-text articles.
2022-12-16 comments: Delete comma in definition (before "that").
p-hacking and fishing expedition aren't synonyms but data processes leading to bias. "p-hacking induced bias" maybe"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-12-09 votes 4-0 by Yuan Gao, Mario Tristan, Eric Harvey, Harold Lehmann
2022-12-16 votes 6-1 by Philippe Rocca-Serra, Janice Tufte, Yuan Gao, Jesus Lopez-Alcalde, Mario Tristan, Eric Harvey, Harold Lehmann"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-12-23 vote 6-0 by Joanne Dehnbostel, Harold Lehmann, Yuan Gao, Jesus Lopez-Alcalde, Mario Tristan, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00322"/>
<display
value="final model not corresponding to multivariable analysis"/>
<definition
value="An analysis bias in which the predictors and coefficients in the final model do not match the predictors and coefficients reported in the multivariable analysis."/>
<property>
<code value="external-definitions"/>
<valueString
value="from PROBAST:
4.9 Do predictors and their assigned weights in the final model correspond to the results from the reported multivariable analysis? (Model development studies only)
Predictors and coefficients of the final developed model, including intercept or baseline components, should be fully reported to allow others to correctly apply the model to other individuals. Mismatch between the presented final model and the reported results from the multivariable analysis (such as the intercept and predictor coefficients) is frequent. A review of prediction models in cancer in 2010 found that only 13 of 38 final prediction model equations (34%) used the same predictors and coefficients as the final presented multivariable analyses, 8 used the same predictors but different coefficients, 11 used neither the same coefficients nor the same predictors, and 6 used an unclear method to derive the final prediction model from the presented results of the multivariable analysis (121).
Bias can arise when the presented final model and the results reported from the multivariable analysis do not match. One way this can occur is when nonsignificant predictors are dropped from a larger model to arrive at a final presented model but the predictor coefficients from the larger model are used to define the final model, which are no longer correct. When predictors are dropped from a larger model, it is important to reestimate all predictor coefficients of the smaller model because the latter has become the final model. These newly estimated predictor coefficients are likely different even if nonsignificant or irrelevant predictors from the larger model are dropped.
When a study reports a final model in which both predictors and regression coefficients correspond to the reported results of the multivariable regression analysis or model, this question should be answered as Y. If the final model is based only on a selection of predictors from the reported multivariable regression analysis without refitting the smaller model, it should be answered as N or PN. When no information is given on the multivariable modeling from which predictors and regression coefficients are derived, it should be answered as NI.
This signaling question is not about detecting improper methods of selecting predictors for the final model; such methods are addressed in signaling question 4.5."/>
</property>
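<!-- Illustrative sketch (Python), not normative content of this code system. A small least-squares example
     (hypothetical data) of the PROBAST point quoted above: after dropping a correlated predictor, the
     remaining coefficient must be re-estimated rather than copied from the larger model.
import numpy as np
rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(scale=0.5, size=n)        # predictors are correlated
y = x1 + x2 + rng.normal(size=n)
b_full, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), x1, x2]), y, rcond=None)
b_small, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), x1]), y, rcond=None)
# b_full[1] is near 1.0; b_small[1] is near 1.7. Presenting a "final model" with only x1 but the
# coefficient taken from the full model would misstate the model.
-->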
<property>
<code value="comment"/>
<valueString
value="This type of bias is applicable to model development studies and model selection within other study designs."/>
</property>
<property>
<code value="editors"/>
<valueString value="Kenneth Wilkins, Brian S. Alper"/>
</property>
<property>
<code value="open-for-voting"/>
<valueString value="2023-12-01"/>
</property>
</concept>
<concept>
<code value="SEVCO:00310"/>
<display value="Cognitive Interpretive Bias affecting analysis"/>
<definition
value="A bias related to the analytic process due to the subjective nature of human interpretation."/>
<property>
<code value="comment"/>
<valueString
value="The Cognitive Interpretive Bias affecting analysis can be mitigated by masking the analyst as to the assignments for the groups, and by specification of the analysis prior to data availability."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-11-18 vote 6-0 by Mahnoor Ahmed, Yuan Gao, Harold Lehmann, Jesus Lopez-Alcalde, Paul Whaley, Eric Harvey"/>
</property>
<concept>
<code value="SEVCO:00379"/>
<display
value="Cognitive Interpretive Bias affecting analysis selection"/>
<definition
value="A bias related to selection of the analysis due to the subjective nature of human interpretation."/>
<property>
<code value="comment"/>
<valueString
value="Bias related to selection of the analysis is defined as an analysis bias due to inappropriate choice of analysis methods before the analysis is applied.
The Cognitive Interpretive Bias affecting analysis selection can be mitigated by masking the analyst as to the assignments for the groups, and by specification of the analysis prior to data availability."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-07-29 vote 5-0 by Janice Tufte, Philippe Rocca-Serra, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey"/>
</property>
<concept>
<code value="SEVCO:00315"/>
<display value="Availability bias affecting analysis selection"/>
<definition
value="A Cognitive Interpretive Bias due to the use of information which is most readily available, rather than information which is most representative, affecting analysis selection."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Janice Tufte, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="Catalogue of Bias:
Availability bias
A distortion that arises from the use of information which is most readily available, rather than that which is necessarily most representative."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-08-12 vote 4-1 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey
2022-08-19 vote 5-1 by Paul Whaley, Mario Tristan, Harold Lehmann, Janice Tufte, Eric Harvey, Philippe Rocca-Serra"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-08-12 comment: Clarify as to whether this is exclusively about cognitive availability? Seems ambiguous in current phrasing. Would suggest comment for application to make clear specific circumstances in which this applies.
2022-08-19 comment: The definition is ambiguous about whether limits on access to the information is cognitive (e.g. familiarity) or otherwise. Also, the definition specifies "information" when the thing being selected is a technique for analysing information."/>
</property>
<property>
<code value="comment"/>
<valueString
value="Selection of inappropriate data or variables for analysis is an availability bias when the appropriate data or variables are not readily available to the analyst and therefore the appropriate analysis is not selected.
Selection of an inappropriate analysis due to familiarity with the analytic techniques is an availability bias when the appropriate technique is unfamiliar and therefore not selected.
The term "Availability bias affecting analysis selection" is about selection of the analysis and not about missing data."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-08-26 vote 7-0 by nisha mathew, Philippe Rocca-Serra, Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, Cauê Monaco, Eric Harvey"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00380"/>
<display
value="Cognitive interpretive bias affecting execution of the analysis"/>
<definition
value="A bias in processing of data due to the subjective nature of human interpretation."/>
<property>
<code value="comment"/>
<valueString
value="Bias in processing of data is defined as an analysis bias due to inappropriate decisions pertaining to preparation of data for analysis and/or conduct of the analysis.
This bias may be mitigated by the partial masking or blinding of the individuals conducting the analysis."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte, Paul Whaley, Yuan Gao, Harold Lehmann, Brian S. Alper"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-12-02 vote 6-0 by Mario Tristan, Yuan Gao, Mahnoor Ahmed, Muhammad Afzal, Janice Tufte, Eric Harvey"/>
</property>
<concept>
<code value="SEVCO:00296"/>
<display value="Lack of blinding of data analysts"/>
<definition
value="A cognitive interpretive bias affecting execution of the analysis due to the analyst's awareness of the participants' status with respect to the variables defining the comparison groups."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Lack of masking of data analysts"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Janice Tufte, Joanne Dehnbostel, Paul Whaley"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-12-02 vote 5-0 by Mario Tristan, Mahnoor Ahmed, Muhammad Afzal, Janice Tufte, Eric Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-12-02 comment: Should it be participants' statuses --- EWG discussion notes that "status" can be used for the plural"/>
</property>
</concept>
</concept>
</concept>
<concept>
<code value="SEVCO:00392"/>
<display value="Inappropriate weighting bias"/>
<definition
value="An analysis bias in which the weights used in model construction do not align with the target of estimation or estimand."/>
<property>
<code value="comment"/>
<valueString
value="This bias often occurs with the omission of sampling weights in a model or in the process of trying to mitigate misrepresentation of a population due to sampling.
One example is use of an unweighted model with National Health and Nutrition Examination Survey (NHANES) data.
This bias occurs when attempting to reweight imbalanced classes in a model to make them representative of the source population, when weights drive estimation away from the target."/>
</property>
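<!-- Illustrative sketch (Python), not normative content of this code system. A toy calculation with
     hypothetical numbers (not NHANES values) showing how ignoring sampling weights distorts a
     population estimate when one group is oversampled.
pop_share = {"A": 0.9, "B": 0.1}       # true population composition; true mean = 110
group_mean = {"A": 100.0, "B": 200.0}
sample_share = {"A": 0.5, "B": 0.5}    # survey design oversamples group B
unweighted = sum(sample_share[g] * group_mean[g] for g in group_mean)    # 150.0, misleading
w = {g: pop_share[g] / sample_share[g] for g in group_mean}              # sampling weights
weighted = sum(sample_share[g] * w[g] * group_mean[g] for g in group_mean) / \
           sum(sample_share[g] * w[g] for g in group_mean)               # 110.0, matches the target
-->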
<property>
<code value="approval"/>
<valueString
value="2023-10-13 vote 6-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Paul Whaley, Janice Tufte"/>
</property>
<property>
<code value="editors"/>
<valueString value="Brian S. Alper, Kenneth Wilkins"/>
</property>
</concept>
<concept>
<code value="SEVCO:00320"/>
<display
value="Inappropriate evaluation of predictive model performance measures"/>
<definition
value="An analysis bias in which the method for analysis of a performance measure (such as calibration or discrimination) is not adequate or suitable for the predictive model."/>
<property>
<code value="comment"/>
<valueString
value="According to PROBAST explanation, to fully gauge the predictive performance of a model, reviewers must assess both model calibration and discrimination (such as the c-index) addressing the entire range of the model-predicted probabilities. (https://www.acpjournals.org/doi/10.7326/M18-1377)"/>
</property>
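<!-- Illustrative sketch (Python), not normative content of this code system. A tiny worked example
     (hypothetical predictions) of the two performance aspects named above: discrimination via the
     c-index (concordance of event/non-event pairs) and a crude calibration check.
import numpy as np
y = np.array([0, 0, 1, 1, 1])                # observed events
p = np.array([0.2, 0.6, 0.4, 0.7, 0.9])      # model-predicted probabilities
pairs = [(i, j) for i in range(len(y)) for j in range(len(y)) if y[i] == 1 and y[j] == 0]
c_index = np.mean([1.0 if p[i] > p[j] else 0.5 if p[i] == p[j] else 0.0 for i, j in pairs])  # 5/6, about 0.83
calibration_in_the_large = (p.mean(), y.mean())   # 0.56 predicted vs 0.60 observed
-->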
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-06 vote 5-0 by Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Paul Whaley, Janice Tufte"/>
</property>
<concept>
<code value="SEVCO:00393"/>
<display
value="Inappropriate evaluation of calibration of predictive model"/>
<definition
value="An analysis bias in which the method for analysis of calibration is not adequate or suitable for the predictive model."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-06 vote 5-0 by Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Paul Whaley, Janice Tufte"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-10-06 comment: Is the bias because an analyst prefers one model over another when there might be a more appropriate one ( perhaps the analyst is not familiar with?)"/>
</property>
</concept>
<concept>
<code value="SEVCO:00394"/>
<display
value="Inappropriate evaluation of discrimination of predictive model"/>
<definition
value="An analysis bias in which the method for analysis of discrimination is not adequate or suitable for the predictive model."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-13 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Paul Whaley"/>
</property>
</concept>
<concept>
<code value="SEVCO:00321"/>
<display value="model overfitting"/>
<definition
value="An analysis bias, specific to predictive model development studies, in which strategies to mitigate overfitting are not adequately applied."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Predictive model performance measures (calibration and discrimination) may be misinterpreted if there are no strategies to mitigate overfitting. This applies to development studies without external validation studies. Strategies to mitigate overfitting may include penalization/regularization, k-fold cross validation, train-test/validation split, etc.
From the PROBAST explanation (https://www.acpjournals.org/doi/10.7326/M18-1377): "quantifying the predictive performance of a model on the same data from which the model was developed (apparent performance) tends to give optimistic estimates of performance due to overfitting—that is, the model is too much adapted to the development data set. This optimism is higher when any of the following are present: too few outcome events in total, too few outcome events relative to the number of candidate predictors (small EPV), dichotomization of continuous predictors, use of predictor selection strategies based on univariable analyses, or use of traditional stepwise predictor selection strategies (for example, forward or backward selection) in multivariable analysis in small data sets (small EPV)""/>
</property>
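<!-- Illustrative sketch (Python), not normative content of this code system. A toy train/test split
     (one of the mitigation strategies mentioned above) on simulated data, showing that apparent
     (development-data) performance of a flexible model is optimistic relative to held-out performance.
import numpy as np
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 30)
y = 2 * x + rng.normal(scale=0.3, size=30)     # the true relationship is linear
x_tr, y_tr, x_te, y_te = x[:20], y[:20], x[20:], y[20:]
for degree in (1, 9):                          # the degree-9 fit is too adapted to the development data
    coefs = np.polyfit(x_tr, y_tr, degree)
    apparent = np.mean((np.polyval(coefs, x_tr) - y_tr) ** 2)
    held_out = np.mean((np.polyval(coefs, x_te) - y_te) ** 2)
    print(degree, round(apparent, 3), round(held_out, 3))   # apparent error shrinks with flexibility; held-out error typically does not
-->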
<property>
<code value="negative-vote"/>
<valueString
value="2023-10-06 vote 3-1 by Jesus Lopez-Alcalde, Eric Harvey, Paul Whaley, Janice Tufte"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-10-06 comments: I am not sure about having a preferred term that actually consists of two terms - overfit and optimism. Is one a synonym of the other?
Optimism- being too over optimistic and fitting things into the model that really were not defined early on? (adding inappropriate data that can skew the outcomes?)"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-20 vote 5-0 by Muhammad Afzal, Eric Harvey, Harold Lehmann, Louis Leff, Joanne Dehnbostel"/>
</property>
</concept>
</concept>
</concept>
<concept>
<code value="SEVCO:00023"/>
<display value="Reporting Bias"/>
<definition
value="A bias due to distortions in the selection of or representation of information in study results or research findings."/>
<property>
<code value="external-definitions"/>
<valueString
value="CoB: Reporting biases = A systematic distortion that arises from the selective disclosure or withholding of information by parties involved in the design, conduct, analysis, or dissemination of a study or research findings (https://catalogofbias.org/biases/reporting-biases/) also notes: The Dictionary of Epidemiology defines reporting bias as the “selective revelation or suppression of information (e.g., about past medical history, smoking, sexual experiences) or of study results.”
The Cochrane Handbook states it arises “when the dissemination of research findings is influenced by the nature and direction of results.”
The James Lind Library states “biased reporting of research occurs when the direction or statistical significance of results influence whether and how research is reported.”
QUIPS: The Statistical Analysis and Reporting domain addresses the appropriateness of the study’s statistical analysis and completeness of reporting. It helps the assessor judge whether results are likely to be spurious or biased because of analysis or reporting. To make this judgment, the assessor considers the data presented to determine the adequacy of the analytic strategy and model-building process and investigates concerns about selective reporting. Selective reporting is an important issue in prognostic factor reviews because studies commonly report only factors positively associated with outcomes. A study would be considered to have low risk of bias if the statistical analysis is appropriate for the data, statistical assumptions are satisfied, and all primary outcomes are reported.
ROB2 = This domain addresses bias that arises because the reported result is selected (based on its direction, magnitude or statistical significance) from among multiple intervention effect estimates that were calculated by the trial investigators. We call this bias in selection of the reported result. Consideration of risk of bias requires distinction between:
• An outcome domain. This is a state or endpoint of interest, irrespective of how it is measured (e.g. severity of depression);
• An outcome measurement. This is a specific way in which an outcome domain is measured (e.g. measurement of depression using the Hamilton rating scale 6 weeks after starting intervention); and
• An outcome analysis. This is a specific result obtained by analysing one or more outcome measurements (e.g. the difference in mean change in Hamilton rating scale scores from baseline to 6 weeks between experimental and comparator groups). This domain does not address bias due to selective non-reporting (or incomplete reporting) of outcome domains that were measured and analysed by the trial investigators (115). For example, deaths of trial participants may be recorded by the trialists, but the reports of the trial might contain no mortality data, or state only that the intervention effect estimate for mortality was not statistically significant. Such bias puts the result of a synthesis at risk because results are omitted based on their direction, magnitude or statistical significance. It should therefore be addressed at the review level, as part of an integrated assessment of the risk of reporting bias (116).
ROBINS-I = Bias in selection of the reported result"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Yuan Gao"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Bias is defined as a systematic distortion in research results (estimation of effect, association, or inference). Distortions in research results means differences between the reported results (findings, conclusions, effect estimates) and the actuality (the truth, the estimand [the quantity targeted for estimation])."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-10-21 vote 7-0 by Philippe Rocca-Serra, Harold Lehmann, Joanne Dehnbostel, Mario Tristan, Brian Alper, Janice Tufte, Eric Harvey"/>
</property>
<concept>
<code value="SEVCO:00024"/>
<display value="Selective Reporting Bias"/>
<definition
value="A reporting bias due to inappropriate selection of the results or research findings that are reported."/>
<property>
<code value="external-definitions"/>
<valueString
value="MASTER-31. There was no discernible data dredging or selective reporting of the outcomes"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A reporting bias is a bias due to distortions in the selection of or representation of information in study results or research findings."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Paul Whaley, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Yuan Gao, Janice Tufte"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-01-06 vote 5-0 by Harold Lehmann, Yuan Gao, Janice Tufte, Eric Harvey, Mario Tristan"/>
</property>
<concept>
<code value="SEVCO:00330"/>
<display value="Selective Outcome Reporting"/>
<definition
value="A selective reporting bias due to inappropriate selection of which outcomes are reported within results or research findings."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Outcome reporting bias"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported.
A reporting bias is a bias due to distortions in the selection of or representation of information in study results or research findings."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Paul Whaley, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-01-13 vote 5-0 by Harold Lehmann, Joanne Dehnbostel, Paul Whaley, Janice Tufte, Eric Harvey"/>
</property>
<concept>
<code value="SEVCO:00336"/>
<display value="Selective outcome measure reporting"/>
<definition
value="A selective reporting bias due to inappropriate selection of which outcome measures are reported for an outcome."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Selective outcome measure reporting within outcome domain"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="Selective outcome measure reporting may be considered a type of selective outcome reporting in which the measurement method for determination of the outcome is interpreted as a distinct outcome.
A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported.
A reporting bias is a bias due to distortions in the selection of or representation of information in study results or research findings."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Paul Whaley"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-01-13 vote 5-0 by Harold Lehmann, Joanne Dehnbostel, Paul Whaley, Janice Tufte, Eric Harvey"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00331"/>
<display value="Selective subgroup reporting"/>
<definition
value="A selective reporting bias due to inappropriate selection of subsets of groups of participants for which results or research findings are reported."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Subgroup reporting bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Subgroup analysis reporting bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Selective subgroup analysis reporting"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported.
A reporting bias is a bias due to distortions in the selection of or representation of information in study results or research findings.
Selective subgroup reporting relates to choice of attributes of participants within cohorts, for example reporting limited to male patients."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Paul Whaley, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-01-27 vote 7-0 by Janice Tufte, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Yuan Gao, Paul Whaley, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00331a"/>
<display value="Selective comparison reporting"/>
<definition
value="A selective reporting bias due to inappropriate selection of comparison groups for which results or research findings are reported."/>
<property>
<code value="comment"/>
<valueString
value="A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported.
A reporting bias is a bias due to distortions in the selection of or representation of information in study results or research findings.
Selective comparison reporting relates to choice of cohort definitions, for example an intention-to-treat analysis (as-randomized analysis) vs. an as-treated analysis."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Paul Whaley, Joanne Dehnbostel"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-01-20 vote 2-1 by Yuan Gao, Paul Whaley, Eric Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-01-20 comment: I don't see enough of a connection between the term (selective comparison) and the definition, which does not seem to talk about comparisons."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-01-27 vote 6-0 by Janice Tufte, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Yuan Gao, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00333"/>
<display
value="Selective analysis reporting from repeated analyses at multiple times"/>
<definition
value="A selective reporting bias due to inappropriate selection of which analyses are reported for an outcome that was analyzed at multiple points in time in a longitudinal study."/>
<property>
<code value="comment"/>
<valueString
value="A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported.
A reporting bias is a bias due to distortions in the selection of or representation of information in study results or research findings."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Paul Whaley, Janice Tufte, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-01-27 vote 6-0 by Janice Tufte, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Yuan Gao, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00334"/>
<display
value="Selective analysis reporting from multiple analytic models"/>
<definition
value="A selective reporting bias due to inappropriate selection of which analyses are reported for an outcome that was analyzed in multiple ways."/>
<property>
<code value="comment"/>
<valueString
value="A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported.
A reporting bias is a bias due to distortions in the selection of or representation of information in study results or research findings.
Adjustment reporting bias, or selective reporting of adjusted estimates, is a type of selective analysis reporting from multiple analytic models."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-02-10 vote 6-0 by Cauê Monaco, Paul Whaley, Janice Tufte, Brian S. Alper, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00335"/>
<display value="Selective threshold reporting bias"/>
<definition
value="A selective reporting bias due to inappropriate selection of which thresholds (used for definitions of the variables) are reported."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Selective cutoff reporting bias"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="A selective reporting bias is a reporting bias due to inappropriate selection of the results or research findings that are reported.
A reporting bias is a bias due to distortions in the selection of or representation of information in study results or research findings."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Paul Whaley, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-02-10 vote 5-1 by Cauê Monaco, Paul Whaley, Janice Tufte, Brian S. Alper, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-02-10 comment: I'm not clear how the definition relates specifically to reporting bias."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-02-24 vote 7-0 by Harold Lehmann, Yasser Sami Amer, Mario Tristan, Paul Whaley, Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00025"/>
<display value="Cognitive interpretive bias in reporting"/>
<definition
value="A distortion in the representation of study results or research findings due to the subjective nature of human interpretation."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Spin bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Cognitive interpretive bias for reporting"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Selective Interpretation Reporting Bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Selective Representation Reporting Bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Cognitive interpretive reporting bias"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="CoB: Spin bias = The intentional or unintentional distorted interpretation of research results, unjustifiably suggesting favourable or unfavourable findings that can result in misleading conclusions (https://catalogofbias.org/biases/spin-bias/)"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Reporting bias is defined as a bias due to distortions in the selection of or representation of information in study results or research findings.
Cognitive interpretive bias in reporting is about interpretation of the results rather than the choice of which results are presented (which would be Selective Reporting Bias).
Cognitive interpretive biases in reporting include selective theory reporting, confirmation bias, bias of rhetoric, novelty bias, popularity bias, and positive results bias."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Paul Whaley, Harold Lehmann, Janice Tufte, Joanne Dehnbostel"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-02-10 vote 4-1 by Cauê Monaco, Paul Whaley, Jesus Lopez-Alcalde, Janice Tuft, Eric Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-02-10 comment: I think the definition is sound but the comment for application should be extended to make it clearer that this is about interpretation of the results rather than the choice of which results are presented."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-02-24 vote 7-0 by Harold Lehmann, Yasser Sami Amer, Mario Tristan, Paul Whaley, Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey"/>
</property>
<concept>
<code value="SEVCO:00338"/>
<display
value="Interpretation of results not addressing potential for bias"/>
<definition
value="A cognitive interpretive bias in reporting whereby the reported interpretation of results does not adequately address potential for bias."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Interpretation of results not addressing risk of bias"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="Reporting bias is defined as a bias due to distortions in the selection of or representation of information in study results or research findings.
Cognitive interpretive bias in reporting is defined as a distortion in the representation of study results or research findings due to the subjective nature of human interpretation.
Interpretation of results not addressing potential for bias occurs when there is an absence of risk of bias assessment or incomplete inclusion of a risk of bias assessment in the interpretation of findings."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Paul Whaley, Kenneth Wilkins, Harold Lehmann, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-03-03 vote 6-0 by A.G. Radhika, Cauê Monac, Janice Tufte, Harold Lehmann, Yasser Sami Amer, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00328"/>
<display
value="Results emphasized based on statistical significance"/>
<definition
value="A cognitive interpretive bias in reporting whereby results with statistical significance are given exaggerated attention."/>
<property>
<code value="comment"/>
<valueString
value="This bias may occur in several ways. Results may be interpreted as "positive" or "conclusive" if below the significance threshold and "negative" or "inconclusive" if above the significance threshold without proper interpretation of the meaning of the significance threshold. Results may be selectively emphasized in overall summarization of the results based on whether or not they are under the significance threshold. Results may be interpreted based on statistical significance instead of clinical significance, or results may misrepresent statistical significance and clinical significance as synonymous."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-04-07 vote 5-0 by Paul Whaley, Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey, Harold Lehmann"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-04-07 comments:
I support this term as written, although I would suggest that we consider adding that assessment of statistical significance without assessing clinical significance often leads to this bias.
I might suggest adding to Comment for application: "Another mis-interpretation is when statistical significance confused with clinical significance.""/>
</property>
</concept>
<concept>
<code value="SEVCO:00340"/>
<display value="Confirmation bias in reporting"/>
<definition
value="A cognitive interpretive bias in reporting due to the influence of an individual’s ideas, beliefs or hypotheses."/>
<property>
<code value="comment"/>
<valueString
value="Reporting bias is defined as a bias due to distortions in the selection of or representation of information in study results or research findings.
Cognitive interpretive bias in reporting is defined as a distortion in the representation of study results or research findings due to the subjective nature of human interpretation."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-03-10 vote 8-0 by Joanne Dehnbostel, Mario Tristan, Harold Lehmann, Jesus Lopez-Alcalde, A.G. Rradhika, Janice Tufte, Eric Harvey, Cauê Monaco"/>
</property>
</concept>
<concept>
<code value="SEVCO:00329"/>
<display value="External validity bias"/>
<definition
value="A cognitive interpretive bias in reporting due to a mismatch between what the observed data represent and the results that were reported."/>
<property>
<code value="external-definitions"/>
<valueString
value="derived from ROBIS https://www.bristol.ac.uk/media-library/sites/social-community-medicine/robis/ROBIS%201.2%20Clean.pdf"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Reporting bias is defined as a bias due to distortions in the selection of or representation of information in study results or research findings.
Cognitive interpretive bias in reporting is defined as a distortion in the representation of study results or research findings due to the subjective nature of human interpretation.
In the assessment of systematic reviews, this type of bias can be phrased as "Relevance of studies to research question not appropriately considered"."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Paul Whaley, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-4-14 by Janice Tufte, Eric Harvey, Harold Lehmann, Joanne Dehnbostel, Jesus Lopez-Alcalde"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00327"/>
<display value="Early dissemination bias"/>
<definition
value="A reporting bias due to publication or reporting of results or research findings that change in subsequent reports."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Preliminary reporting bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Early reporting bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Interim reporting bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Early publication bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Interim publication bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Premature publication bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Preliminary publication bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Premature reporting bias"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Premature dissemination bias"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="One form of Early dissemination bias is the reporting of results in preprints or early versions during the peer review and publication process not matching the subsequent reports.
Another form of Early dissemination bias is the reporting of interim results (even if fully peer reviewed) when a study is ongoing and more data will be analyzed for the final results.
This bias may result from failure to disclose that the results are preliminary or subject to change.
This definition is not meant to indicate that preprints are inherently biased."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal, Paul Whaley"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-04-07 vote 3-1 by Eric Harvey, Harold Lehmann, Paul Whaley, Jesus Lopez-Alcalde"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-04-14 comments:
Should we make clear in Comment for Application that preprints represent *potential* bias, because preprinting does not prima facie mean bias?
It seems to me that the bias falls where the results do not carefully convey that they are preliminary or early AND not to be read as final results - maybe could be wordsmithed. Do you mean someone is reporting without full disclosure
2023-04-07 comments:
I would suggest "One form of potential Premature...", since prima facie, premature reporting does not *have* to be biased.
I feel that "reporting bias" has the same issue of being semantically loaded as "publication bias" - the problem is premature dissemination of results, via reporting them, publishing them, putting them in a press release, etc. So maybe "premature dissemination bias" could be considered as the preferred term? And then we could even consider "early dissemination bias" as that feels more objective than "premature", now that it is phrased this way."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-04-14 by Janice Tufte, Eric Harvey, Harold Lehmann, Jesus Lopez-Alcalde, Joanne Dehnbostel"/>
</property>
</concept>
<concept>
<code value="SEVCO:00384"/>
<display value="Fabrication Bias"/>
<definition
value="A reporting bias resulting from intentional misrepresentation of any part of the study."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Fraud"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="Examples include plagiarism, unjustified authorship, data manipulation, and intentional misrepresentation of figures and charts. Applying this code is a serious allegation of wrongdoing."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Khalid Shahin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-04-21 vote 5-0 by Brian S. Alper, Janice Tufte, Harold Lehmann, Cauê Monaco, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00325"/>
<display value="Inadequate reporting of methods"/>
<definition
value="A reporting bias due to insufficient reporting of methods to determine the validity of the results."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Inadequate reporting to assess analytic strategy"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Inadequate methods reporting bias"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-03-17 vote on "Inadequate Reporting Bias" 2-1 by Eric Harvey, Jesus Lopez-Alcalde, Janice Tufte
2023-03-17 comment on "Inadequate Reporting Bias": Inadequate reporting of methods is covered by another term. Recommend changing this term to "inadequate reporting of results" or deleting this term if terms covering "reporting results biases" have already been established."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-03-31 vote 5-0 by Harold Lehmann, Eric Harvey, Janice Tufte, Paola Rosati, Jesus Lopez-Alcalde"/>
</property>
</concept>
<concept>
<code value="SEVCO:00326"/>
<display value="Inadequate explanation of participant withdrawals"/>
<definition
value="A reporting bias due to insufficient reporting of reasons for withdrawals of participants after study enrollment."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Inadequate explanation of participant attrition"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-03-31 vote 5-0 by Harold Lehmann, Eric Harvey, Janice Tufte, Paola Rosati, Jesus Lopez-Alcalde"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-03-31 comment: Somewhere in this entry should be a link to the "withdrawal" SEVCO term. Or terms."/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00026"/>
<display value="Synthesis Bias"/>
<definition
value="A bias in the conduct of a systematic review resulting from methods used to select, manipulate or interpret data for evidence synthesis."/>
<concept>
<code value="SEVCO:00345"/>
<display value="Synthesis missing eligible studies"/>
</concept>
<concept>
<code value="SEVCO:00346"/>
<display value="Study parameters not appropriate for synthesis"/>
</concept>
<concept>
<code value="SEVCO:00347"/>
<display value="Heterogeneity not addressed"/>
</concept>
<concept>
<code value="SEVCO:00348"/>
<display value="Inconsistency within sensitivity analyses"/>
</concept>
<concept>
<code value="SEVCO:00349"/>
<display value="Biases in studies influence synthesis"/>
</concept>
<concept>
<code value="SEVCO:00350"/>
<display value="Synthesis Information Bias"/>
<concept>
<code value="SEVCO:00351"/>
<display
value="Insufficient study characteristics available for proper results interpretation"/>
</concept>
<concept>
<code value="SEVCO:00352"/>
<display
value="Incomplete collection of relevant study results for synthesis"/>
</concept>
<concept>
<code value="SEVCO:00353"/>
<display value="Methodologic quality assessment inadequate"/>
<property>
<code value="comment"/>
<valueString value="bias in risk of bias assessment?"/>
</property>
</concept>
<concept>
<code value="SEVCO:00354"/>
<display value="Error in risk of bias assessment not minimized"/>
</concept>
<concept>
<code value="SEVCO:00369"/>
<display value="Error in data collection not minimized"/>
</concept>
</concept>
</concept>
<concept>
<code value="SEVCO:00028"/>
<display value="Qualitative Research Bias"/>
<definition
value="A bias specific to the design, conduct, analysis or reporting of qualitative research."/>
<property>
<code value="external-definitions"/>
<valueString
value="MMAT = “Qualitative research is an approach for exploring and understanding the meaning individuals or groups ascribe to a social or human problem” (Creswell, 2013b, p. 3)."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Caue Monaco, Li Wang"/>
</property>
<property>
<code value="open-for-voting"/>
<valueString value="2023-12-01"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Qualitative research is a research approach that studies subjective aspects of social phenomenon and human behavior. Qualitative research may encompass any non-quantitative method of analysis. Qualitative research often explores the meaning individuals or groups assign to concepts."/>
</property>
<concept>
<code value="SEVCO:00356"/>
<display value="Inappropriate qualitative approach"/>
</concept>
<concept>
<code value="SEVCO:00357"/>
<display value="Inadequate qualitative data collection methods"/>
</concept>
<concept>
<code value="SEVCO:00358"/>
<display value="Inappropriate qualitative analysis"/>
</concept>
<concept>
<code value="SEVCO:00359"/>
<display value="Unsubstantiated interpretation of results"/>
</concept>
<concept>
<code value="SEVCO:00360"/>
<display
value="Incoherence between data, analysis, and interpretation"/>
</concept>
</concept>
<concept>
<code value="SEVCO:00029"/>
<display value="Mixed Methods Research Bias"/>
<definition
value="A bias specific to the coordination of design, conduct, analysis or reporting of qualitative research and quantitative research."/>
<property>
<code value="external-definitions"/>
<valueString
value="MMAT: Mixed methods (MM) research involves combining qualitative (QUAL) and quantitative (QUAN) methods. In this tool, to be considered MM, studies have to meet the following criteria (Creswell and Plano Clark, 2017): (a) at least one QUAL method and one QUAN method are combined; (b) each method is used rigorously in accordance to the generally accepted criteria in the area (or tradition) of research invoked; and (c) the combination of the methods is carried out at the minimum through a MM design (defined a priori, or emerging) and the integration of the QUAL and QUAN phases, results, and data"/>
</property>
<concept>
<code value="SEVCO:00361"/>
<display value="Inadequate rationale for mixed methods design"/>
</concept>
<concept>
<code value="SEVCO:00362"/>
<display value="Ineffective integration of study components"/>
</concept>
<concept>
<code value="SEVCO:00363"/>
<display
value="Inappropriate interpretation of integration of qualitative and quantitative findings"/>
</concept>
<concept>
<code value="SEVCO:00364"/>
<display value="Inadequate handling of inconsistency"/>
</concept>
</concept>
<concept>
<code value="SEVCO:00030"/>
<display value="bias in validation assessment"/>
<definition
value="A bias in the design, conduct or reporting of studies or analyses intended to evaluate the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="validation bias"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="PROBAST = ROB, which was defined to occur when shortcomings in study design, conduct, or analysis lead to systematically distorted estimates of model predictive performance. PROBAST enables a focused and transparent approach to assessing the ROB and applicability of studies that develop, validate, or update prediction models for individualized predictions. Prediction models are sometimes described as risk prediction models, predictive models, prediction indices or rules, or risk scores."/>
</property>
<property>
<code value="comment"/>
<valueString
value="Bias in validation assessment is often used for predictive model research and diagnostic research where optimal research design includes derivation studies and external validation studies.
A 'validation study' has a validation goal where validation goal {SEVCO:01098} is defined as a study goal with the intent to determine the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose.
Procedures that may be assessed in validation studies include predictive algorithms, measurement instruments, and educational materials. Internal validation is tested in populations from the source used for derivation of the procedure. External validation is tested in populations that differ from the source used for derivation of the procedure."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-27 vote 5-0 by Brian S. Alper, Eric Harvey, Yasser Sami Amer, Janice Tufte, Harold Lehmann"/>
</property>
<concept>
<code value="SEVCO:00368"/>
<display value="bias in external validation assessment"/>
<definition
value="A bias in validation assessment using a sample source that differs from those used in the derivation of the procedure."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="inadequate external validation assessment"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="Validation assessment is often used for predictive model research and diagnostic research where optimal research design includes derivation studies and external validation studies.
A 'validation study' has a validation goal where validation goal {SEVCO:01098} is defined as a study goal with the intent to determine the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose.
Procedures that may be assessed in validation studies include predictive algorithms, measurement instruments, and educational materials. Internal validation is tested in populations from the source used for derivation of the procedure. External validation is tested in populations that differ from the source used for derivation of the procedure.
Bias in validation assessment is defined as a bias in the design, conduct or reporting of studies or analyses intended to evaluate the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose.
Bias in external validation assessment may be used for absence of any external validation assessment or inadequacy in external validation assessment."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-11-26 vote 5-0 by Harold Lehmann, Muhammad Afzal, Janice Tufte, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="SEVCO:00367"/>
<display value="bias in internal validation assessment"/>
<definition
value="A bias in validation assessment specific to a validation assessment that uses the same sample source that was used in the derivation of the procedure."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="inadequate internal validation assessment"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="Validation assessment is often used for predictive model research and diagnostic research where optimal research design includes derivation studies and external validation studies.
A 'validation study' has a validation goal where validation goal {SEVCO:01098} is defined as a study goal with the intent to determine the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose.
Procedures that may be assessed in validation studies include predictive algorithms, measurement instruments, and educational materials. Internal validation is tested in populations from the source used for derivation of the procedure.
Model derivation is often based on a portion of the data available from a sample source, and internal validation is performed using data from the same sample source but a different subset of the data.
Whereas external validation is tested in populations that differ from the source used for derivation of the procedure, internal validation is tested in the same population.
Bias in validation assessment is defined as a bias in the design, conduct or reporting of studies or analyses intended to evaluate the reliability and/or performance of a procedure for a specific predictive, classification, measurement, or communication purpose.
Bias in internal validation assessment may be used for absence of any internal validation assessment or inadequacy in internal validation assessment. A common cause of bias in internal validation assessment is validation using the same data that was used for derivation."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-12-01 vote 5-0 by Xing Song, Javier Bracchiglione, Harold Lehmann, Eric Harvey, Caue Monaco"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00370"/>
<display value="Early Study Termination Bias"/>
<definition
value="A bias due to the decision to end the study earlier than planned."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Bias due to early study termination"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="Child terms (types of Early Study Termination Bias) may be used to report the reasons for bias in the decision to end the study earlier than planned. Bias resulting from the early study termination may be described with other terms in the code system."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Muhammad Afzal, Mario Tristan, Khalid Shahin, Harold Lehmann, Joanne Dehnbostel"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-04-01 vote 4-1 by Jesus Lopez-Alcalde, Harold Lehmann, Cauê Monaco, Robin Ann Yurk, Mario Tristan"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-04-01 comment: Term Definition: Simplify so it reads.
A bias in the reported results due to early termination of a study resulting in incomplete data collection."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-04-08 vote 6-0 by nelle.stocquart, nisha mathew, Mario Tristan, Robin Ann Yurk, Harold Lehmann, Joanne Dehnbostel"/>
</property>
<concept>
<code value="SEVCO:00371"/>
<display
value="Early study termination bias due to competing interests"/>
<definition
value="An early study termination bias due to the decision to end the study being influenced by financial, commercial, legal, political, social, professional, or intellectual interests."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Muhammad Afzal, Mario Tristan, Khalid Shahin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-04-01 vote 6-0 by Brian S. Alper, Joanne Dehnbostel, Jesus Lopez-Alcalde, Harold Lehmann, Cauê Monaco, Mario Tristan"/>
</property>
</concept>
<concept>
<code value="SEVCO:00372"/>
<display
value="Early study termination bias due to unplanned use of interim analysis"/>
<definition
value="An early study termination bias due to awareness of study results without following a preplanned protocol for how interim results will influence the decision to terminate the study."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Muhammad Afzal, Mario Tristan, Khalid Shahin, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-04-01 vote 5-0 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Harold Lehmann, Cauê Monaco, Mario Tristan"/>
</property>
</concept>
<concept>
<code value="SEVCO:00373"/>
<display
value="Early study termination bias due to inappropriate statistical stopping rule"/>
<definition
value="An early study termination bias due to use of an inappropriate model or threshold in the analysis used for determination to end the study."/>
<property>
<code value="comment"/>
<valueString
value="An example of an inappropriate statistical stopping rule is one that does not account for multiple analyses (i.e. does not use a lower p value threshold) for a conclusion of benefit warranting early termination of the study."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Muhammad Afzal, Mario Tristan, Khalid Shahin, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-04-01 vote 6-0 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Harold Lehmann, Cauê Monaco, Mario Tristan, Robin Ann Yurk"/>
</property>
</concept>
<concept>
<code value="SEVCO:00374"/>
<display
value="Early study termination bias due to external factors"/>
<definition
value="An early study termination bias due to a decision to end the study based on factors other than the results of interim analysis."/>
<property>
<code value="comment"/>
<valueString
value="Examples of external factors may include cessation of funding, and safety or efficacy results reported by other studies."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Muhammad Afzal, Mario Tristan, Khalid Shahin, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-04-01 vote 6-0 by Joanne Dehnbostel, Jesus Lopez-Alcalde, Harold Lehmann, Cauê Monaco, Mario Tristan, Robin Ann Yurk"/>
</property>
</concept>
</concept>
</concept>
<concept>
<code value="SEVCO:00027"/>
<display value="conflict of interest"/>
<definition
value="A risk factor for bias in which persons influencing research design, conduct, analysis or reporting have motivations that could compromise their impartiality."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="conflicted interests"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="conflicts of interest"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="competing interest"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="competing interests"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="duality of interest"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="MASTER-28. Conflict of interests were declared and absent"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Javier Bracchiglione, Janice Tufte, Muhammad Afzal, Caue Monaco"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-11-10 vote 4-1 by Brian S. Alper, Harold Lehmann, Janice Tufte, Eric Harvey, Javier Bracchiglione"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-11-10 comment: I do not think the term should be limited to goals and motivations this seems judgmental and manipulative. COI can be based on intellectual property and or current research work along the same subject where a researcher or partner is too involved with a project or paper on the same subject"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Motivations may be explicit or implicit. Motivations may be unconscious or unrecognized. Conflict of interest is sometimes phrased "potential conflict of interest" or "perceived conflict of interest"."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-12-01 vote 6-0 by Cauê Monaco, Xing Song, Javier Bracchiglione, Harold Lehmann, Janice Tufte, Eric Harvey"/>
</property>
<concept>
<code value="SEVCO:00355"/>
<display value="financial conflict of interest"/>
<definition
value="A risk factor for bias in which persons influencing research design, conduct, analysis or reporting have financial motivations that could compromise their impartiality."/>
<property>
<code value="comment"/>
<valueString
value="Motivations may be explicit or implicit. Motivations may be unconscious or unrecognized. The financial motivations may be direct (e.g. salary or consulting fees) or indirect (e.g. stock interests or spousal financial interests). Conflict of interest is sometimes phrased "potential conflict of interest" or "perceived conflict of interest"."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Javier Bracchiglione, Janice Tufte, Muhammad Afzal, Caue Monaco"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-12-01 vote 6-0 by Cauê Monaco, Xing Song, Javier Bracchiglione, Harold Lehmann, Janice Tufte, Eric Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-12-01 comment: I agree with the definition of the term, but I think it will be better to further explicit what "financial" means in the comments for application (e.g. salary, stocks, paid assistance to congress)"/>
</property>
</concept>
<concept>
<code value="SEVCO:00252"/>
<display value="nonfinancial conflict of interest"/>
<definition
value="A risk factor for bias in which persons influencing research design, conduct, analysis or reporting have non-financial motivations that could compromise their impartiality."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="non-financial conflict of interest"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="Motivations may be explicit or implicit. Motivations may be unconscious or unrecognized. The non-financial motivations may be related to social, political, professional, ideological, or other factors. Conflict of interest is sometimes phrased "potential conflict of interest" or "perceived conflict of interest"."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Javier Bracchiglione, Janice Tufte, Muhammad Afzal, Caue Monaco"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-12-01 vote 6-0 by Cauê Monaco, Xing Song, Javier Bracchiglione, Harold Lehmann, Janice Tufte, Eric Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-12-01 comment: I agree with the definition of the term, but I think it will be better to further explicit what "non-financial" means in the comments for application (e.g. intellectual)"/>
</property>
</concept>
</concept>
<concept>
<code value="SEVCO:00007"/>
<display value="Rating of Bias Risk"/>
<concept>
<code value="SEVCO:00186"/>
<display value="Low risk of bias"/>
</concept>
<concept>
<code value="SEVCO:00187"/>
<display value="Moderate risk of bias"/>
</concept>
<concept>
<code value="SEVCO:00188"/>
<display value="High risk of bias"/>
</concept>
<concept>
<code value="SEVCO:00189"/>
<display value="Serious risk of bias"/>
</concept>
<concept>
<code value="SEVCO:00190"/>
<display value="Critical risk of bias"/>
</concept>
<concept>
<code value="SEVCO:00191"/>
<display value="Some risk of bias"/>
</concept>
<concept>
<code value="SEVCO:00192"/>
<display value="Unclear risk of bias"/>
</concept>
</concept>
<concept>
<code value="SEVCO:00193"/>
<display value="Rating of Factor Presence"/>
<concept>
<code value="SEVCO:00194"/>
<display value="Factor present"/>
</concept>
<concept>
<code value="SEVCO:00195"/>
<display value="Factor likely present"/>
</concept>
<concept>
<code value="SEVCO:00196"/>
<display value="Factor likely absent"/>
</concept>
<concept>
<code value="SEVCO:00197"/>
<display value="Factor absent"/>
</concept>
<concept>
<code value="SEVCO:00198"/>
<display value="No information on factor presence"/>
</concept>
<concept>
<code value="SEVCO:00199"/>
<display value="Factor presence or absence unclear"/>
</concept>
</concept>
<concept>
<code value="SEVCO:00200"/>
<display value="Rating of Bias Direction"/>
<concept>
<code value="SEVCO:00201"/>
<display value="Risk of bias favoring experimental"/>
</concept>
<concept>
<code value="SEVCO:00202"/>
<display value="Risk of bias favoring comparator"/>
</concept>
<concept>
<code value="SEVCO:00203"/>
<display value="Risk of bias towards null"/>
</concept>
<concept>
<code value="SEVCO:00204"/>
<display value="Risk of bias away from null"/>
</concept>
<concept>
<code value="SEVCO:00205"/>
<display value="Risk of bias direction unpredictable"/>
</concept>
</concept>
<concept>
<code value="SEVCO:00206"/>
<display value="Rating of Potential Influence"/>
<concept>
<code value="SEVCO:00207"/>
<display value="Factor has potential to impact results"/>
</concept>
<concept>
<code value="SEVCO:00208"/>
<display value="Factor likely has potential to impact results"/>
</concept>
<concept>
<code value="SEVCO:00209"/>
<display
value="Factor likely does not have potential to impact results"/>
</concept>
<concept>
<code value="SEVCO:00210"/>
<display value="Factor does not have potential to impact results"/>
</concept>
</concept>
<concept>
<code value="STATO:0000039"/>
<display value="Statistic"/>
<definition
value="An information content entity that is a formalization of relationships between variables and value specification."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="statistic type"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="The 'statistic' does not include the numerical value for which the statistic is used--that would be the statistic value, and the 'statistic' does not include the model characteristics."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Philippe Rocca-Serra, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Khalid Shahin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="revision 6/6 as of 10/4/2021: Alejandro Piscoya, Bhagvan Kommadi, Brian S. Alper, Janice Tufte, Louis Leff, Sorana D. Bolboacă; original approval 6/6 as of 9/27/2021: Harold Lehmann, Bhagvan Kommadi, Louis Leff, Janice Tufte, Joanne Dehnbostel, Mario Tristan"/>
</property>
<concept>
<code value="STATO:0000047"/>
<display value="Count"/>
<definition
value="A statistic that represents the number of instances or occurrences of something."/>
<property>
<code value="comment"/>
<valueString
value="A count can only be denoted by non-negative integer values."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="6/6 as of 9/27/2021: Harold Lehmann, Bhagvan Kommadi, Louis Leff, Janice Tufte, Joanne Dehnbostel, Mario Tristan"/>
</property>
</concept>
<concept>
<code value="TBD:0000001"/>
<display value="Sum"/>
<definition
value="A statistic that represents the result of adding all the values in a collection of values."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="total"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel,Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Khalid Shahin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="6/6 as of 10/4/2021: Alejandro Piscoya, Bhagvan Kommadi, Brian S. Alper, Janice Tufte, Louis Leff, Sorana D. Bolboacă"/>
</property>
</concept>
<concept>
<code value="TBD:0000002"/>
<display value="Maximum Observed Value"/>
<definition
value="A statistic that represents the largest non-null value in a collection of values that can be ordered by magnitude."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel,Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Khalid Shahin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="6/6 as of 10/4/2021: Alejandro Piscoya, Bhagvan Kommadi, Brian S. Alper, Janice Tufte, Louis Leff, Sorana D. Bolboacă"/>
</property>
</concept>
<concept>
<code value="TBD:0000003"/>
<display value="Minimum Observed Value"/>
<definition
value="A statistic that represents the smallest non-null value in a collection of values that can be ordered by magnitude."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel,Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Khalid Shahin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="6/6 as of 10/4/2021: Alejandro Piscoya, Bhagvan Kommadi, Brian S. Alper, Janice Tufte, Louis Leff, Sorana D. Bolboacă"/>
</property>
</concept>
<concept>
<code value="TBD:0000004"/>
<display value="Maximum Possible Value"/>
<definition
value="A statistic that represents the largest value that could occur."/>
<property>
<code value="comment"/>
<valueString
value="This term may be used to denote the upper limit of a scale or score."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel,Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Khalid Shahin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="6/6 as of 10/4/2021: Alejandro Piscoya, Bhagvan Kommadi, Brian S. Alper, Janice Tufte, Louis Leff, Sorana D. Bolboacă"/>
</property>
</concept>
<concept>
<code value="TBD:0000005"/>
<display value="Minimum Possible Value"/>
<definition
value="A statistic that represents the smallest value that could occur."/>
<property>
<code value="comment"/>
<valueString
value="This term may be used to denote the lower limit of a scale or score."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel,Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Khalid Shahin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="6/6 as of 10/4/2021: Alejandro Piscoya, Bhagvan Kommadi, Brian S. Alper, Janice Tufte, Louis Leff, Sorana D. Bolboacă"/>
</property>
</concept>
<concept>
<code value="TBD:0000006"/>
<display value="Threshold"/>
<definition
value="A statistic that represents the boundary at which something changes."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="cutoff"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="threshold value"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="cutoff value"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="cutoff threshold"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="limit"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="The thing that changes at the threshold value may be relevant for function, application, classification, or detection."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Kenneth Wilkins, Khalid Shahin"/>
</property>
<property>
<code value="approval"/>
<valueString
value="5/5 as of 10/11/2021: Janice Tufte, Joanne Dehnbostel, Louis Leff, Vignesh Subbian, Robin Ann Yurk"/>
</property>
</concept>
<concept>
<code value="STATO:0000069"/>
<display value="Degrees of Freedom"/>
<definition
value="A statistic that represents the number of independent values used to calculate a statistical estimate. The number of degrees of freedom ν is equal to the number of independent units of information given the model."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="number of degrees of freedom"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="The formula to calculate degrees of freedom will depend on the model. For example, the degrees of freedom for a sample standard deviation, given the sample mean, is N-1, because the Nth observation is no longer independent, given the N-1 other observations and the sample mean."/>
</property>
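<!-- Editorial note (illustrative only, not part of the code system content): a minimal worked example of the comment above, assuming a hypothetical sample. With N = 10 observations and a known sample mean, the sample standard deviation has N - 1 = 9 degrees of freedom, because the 10th value is determined once the mean and the other 9 values are fixed. -->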
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Bhagvan Kommadi, Kenneth Wilkins, Khalid Shahin, Harold Lehmann"/>
</property>
<property>
<code value="approval"/>
<valueString
value="6/6 as of 10/27/2021: Janice Tufte, Louis Leff, Vignesh Subbian, Robin Ann Yurk, Harold Lehmann, Muhammad Afzal, Pentti Nieminen"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="Include * in P = x1 * x2...to clarify this is a product."/>
</property>
</concept>
<concept>
<code value="STATO:0000029"/>
<display value="Measure of Central Tendency"/>
<definition
value="A statistic that represents a central value for a set of data."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Central tendency measure"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Centrality measure"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Kenneth Wilkins, Philippe Rocca-Serra"/>
</property>
<property>
<code value="approval"/>
<valueString
value="6/6 as of 10/27/2021: Janice Tufte, Louis Leff, Vignesh Subbian, Robin Ann Yurk, Harold Lehmann, Muhammad Afzal, Pentti Nieminen"/>
</property>
<concept>
<code value="STATO:0000573"/>
<display value="Mean"/>
<definition
value="A measure of central tendency calculated as the sum of a set of values divided by the number of values in the set."/>
<designation>
<language value="fi"/>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000003001"/>
<display value="Fully specified name"/>
</use>
<value value="Keskiarvo"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Arithmetic Mean"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="average"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="average value"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="statistical mean"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="A=sum[Ai] / n where i ranges from 1 to n and Ai represents the value of individual observations."/>
</property>
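<!-- Editorial note (illustrative only, not part of the code system content): a minimal worked example of the formula above, using hypothetical observations. For the values 2, 4, 6, and 8, sum[Ai] = 20 and n = 4, so A = 20 / 4 = 5. -->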
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Kenneth Wilkins, Philippe Rocca-Serra"/>
</property>
<property>
<code value="approval"/>
<valueString
value="6/6 as of 10/27/2021: Janice Tufte, Louis Leff, Vignesh Subbian, Robin Ann Yurk, Harold Lehmann, Muhammad Afzal, Pentti Nieminen"/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Central Tendency"/>
</property>
<concept>
<code value="TBD:0000177"/>
<display value="Mean of differences"/>
<definition
value="A mean of values in which each value is the subtraction of one quantity from another."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Mean of paired differences"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Average of differences"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Mean difference"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Average difference"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin, Janice Tufte, Muhammad Afzal, Kenneth Wilkins"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-12-01 vote 6-1 by Louis Leff, Paola Rosati, Robin Ann Yurk, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte, C P Ooi"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-12-01 comment: 'Difference in means' may be more appropriate. 'Mean value from one population subtract the mean value of another population' may be clearer reflecting the definition"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The primary use of this term is in analyzing within-individual differences."/>
</property>
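<!-- Editorial note (illustrative only, not part of the code system content): a minimal worked example of a mean of differences, using hypothetical paired measurements. Pre/post pairs (5, 3) and (9, 6) give within-individual differences 2 and 3, so the mean of differences = (2 + 3) / 2 = 2.5. -->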
<property>
<code value="approval"/>
<valueString
value="2021-12-15 vote 5-0 by Robin Ann Yurk, Harold Lehmann, Janice Tufte, Paola Rosati, Brian S. Alper"/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Central Tendency"/>
</property>
</concept>
<concept>
<code value="TBD:0000183"/>
<display value="Mean time-to-event"/>
<definition
value="A mean of values in which each value is the duration of time between the start of observation and the occurrence of an event."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Mean TTE"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Yuan Gao"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-10-19 vote 5-0 by Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Mario Tristan, Eric Harvey"/>
</property>
</concept>
</concept>
<concept>
<code value="STATO:0000396"/>
<display value="Geometric Mean"/>
<definition
value="A measure of central tendency calculated as the nth root of the product of all of the observations in a data set (n being the number of all observations)."/>
<property>
<code value="comment"/>
<valueString
value="For n observations with values x1, x2, … xn, the product of all the values P = x1 * x2 … xn [also expressed as P = (x1)(x2)...(xn)]. The nth root of the product = (P)^(1/n)."/>
</property>
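<!-- Editorial note (illustrative only, not part of the code system content): a minimal worked example of the formula above, using hypothetical observations. For the values 2, 4, and 8, P = 2 * 4 * 8 = 64 and n = 3, so the geometric mean = (64)^(1/3) = 4. -->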
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Kenneth Wilkins, Philippe Rocca-Serra"/>
</property>
<property>
<code value="approval"/>
<valueString
value="6/6 as of 10/27/2021: Janice Tufte, Louis Leff, Vignesh Subbian, Robin Ann Yurk, Harold Lehmann, Muhammad Afzal, Pentti Nieminen"/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Central Tendency"/>
</property>
</concept>
<concept>
<code value="STATO:0000574"/>
<display value="Median"/>
<definition
value="A measure of central tendency equal to the middle value (or mean of the two middle values) of a set of ordered data."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="center value"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="statistical median"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="middle value"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The median value is equal to the middle value of a set of ordered data with an odd number of values. The median value is calculated as the mean of the two middle values of a set of ordered data with an even number of values. The median is sometimes called the second quartile or fiftieth percentile."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="6-1 on 2021-11-01 by Louis Leff, Vignesh Subbian, Pentti Nieminen, Bhagvan Kommadi, Janice Tufte, Sorana D. Bolboacă, Robin Ann Yurk
2021-12-01 vote 5-1 by Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-11-01 comment: the definition is appropriate. Suggest use alternative terms: center value, statistical median or middle value. I don't recommend using fiftieth percentile or second quartile
2021-12-01 comment: I would change definition to: A measure of central tendency equal to the middle value of a set of ordered data with an odd number of values. It could be calculated also as the mean of the two middle values of a set of ordered data with an even number of values. ((Perhaps simpler as: A measure of central tendency equal to the middle value of a set of ordered data. In a set of ordered data with an even number of values, the middle value is calculated as the mean of the two middle values.))"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2021-12-15 vote 6-0 by Robin Ann Yurk, Muhammad Afzal, Harold Lehmann, Janice Tufte, Paola Rosati, Khalid Shahin"/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Central Tendency"/>
</property>
<concept>
<code value="TBD:0000184"/>
<display value="Median time-to-event"/>
<definition
value="A median of values in which each value is the duration of time between the start of observation and the occurrence of an event."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Median TTE"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Yuan Gao"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-10-19 vote 5-0 by Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Mario Tristan, Eric Harvey"/>
</property>
</concept>
</concept>
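<!--
Editorial sketch (not part of the CodeSystem content): a minimal Python illustration of the Median calculation described in the comment property above (middle value for an odd number of values, mean of the two middle values for an even number); names are illustrative.

def median(values):
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]                       # odd count: the middle value
    return (ordered[mid - 1] + ordered[mid]) / 2  # even count: mean of the two middle values

# Example: median([7, 1, 3]) returns 3; median([7, 1, 3, 5]) returns 4.0.
-->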
<concept>
<code value="STATO:0000033"/>
<display value="Mode"/>
<definition
value="A measure of central tendency that is the most frequently occurring value in a data set. If no value is repeated, there is no mode. If more than one value occurs with the same greatest frequency, each of these values is a mode."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="statistical mode"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="most common value"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="most frequent value"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte"/>
</property>
<property>
<code value="approval"/>
<valueString
value="7/7 on 2021-11-01 by Louis Leff, Vignesh Subbian, Pentti Nieminen, Bhagvan Kommadi, Janice Tufte, Sorana D. Bolboacă, Robin Ann Yurk"/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Central Tendency"/>
</property>
</concept>
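<!--
Editorial sketch (not part of the CodeSystem content): a minimal Python illustration of the Mode definition above, returning every value tied for the greatest frequency and an empty list when no value is repeated; names are illustrative.

from collections import Counter

def modes(values):
    counts = Counter(values)
    highest = max(counts.values())
    if highest == 1:
        return []  # no value repeated: there is no mode
    return [v for v, c in counts.items() if c == highest]

# Example: modes([2, 2, 3, 3, 5]) returns [2, 3]; modes([1, 2, 3]) returns [].
-->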
<concept>
<code value="STATO:0000397"/>
<display value="Harmonic Mean"/>
<definition
value="A measure of central tendency calculated by dividing the total number of observations by the sum of the reciprocals of each observed value."/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO:
The harmonic mean is a kind of mean which is calculated by dividing the total number of observations by the reciprocal of each number in a series. Harmonic Mean = N/(1/a1+1/a2+1/a3+1/a4+.......+1/aN) where a(i)= Individual score and N = Sample size (Number of scores)"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Harmonic Mean = N/(1/a1+1/a2+1/a3+1/a4+...+1/aN) where a(i)= Individual observed value and N = Sample size (Number of observations)"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-10-19 vote 6-0 by Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Mario Tristan, Eric Harvey, Yuan Gao"/>
</property>
</concept>
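<!--
Editorial sketch (not part of the CodeSystem content): a minimal Python illustration of the Harmonic Mean formula in the comment property above, N / (1/a1 + 1/a2 + ... + 1/aN); names are illustrative.

def harmonic_mean(values):
    # total number of observations divided by the sum of the reciprocals of each observed value
    return len(values) / sum(1.0 / v for v in values)

# Example: harmonic_mean([1, 2, 4]) returns 12/7, approximately 1.714.
-->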
</concept>
<concept>
<code value="TBD:0000007"/>
<display value="Difference"/>
<definition
value="A statistic that is a subtraction of one quantity from another."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Delta"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Brian S. Alper, Muhammad Afzal, Khalid Shahin, Philippe Rocca-Serra, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2021-12-01 vote 5-0 by Philippe Rocca-Serra, Paola Rosati, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte"/>
</property>
<concept>
<code value="TBD:0000174"/>
<display value="Absolute Difference"/>
<definition
value="A statistic that is a subtraction of one quantity from another, with no modification of the resulting value."/>
<property>
<code value="comment"/>
<valueString
value="As a type of statistic, "Absolute Difference" is the actual difference between two quantities and can be positive or negative depending on the order of subtraction. The term "Absolute Difference" should not be confused with the mathematical term 'absolute value' which is a numerical value without a negative sign."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin, Janice Tufte"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2021-12-01 vote 5-0 by Philippe Rocca-Serra, Paola Rosati, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte"/>
</property>
<concept>
<code value="TBD:0000008"/>
<display value="Count Difference"/>
<definition
value="A statistic that is a subtraction of one count from another."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Difference in counts"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Number difference"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Brian S. Alper, Muhammad Afzal, Khalid Shahin, Philippe Rocca-Serra, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-12-01 vote 5-1 by Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte
2021-12-08 vote 6-1 by Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte, Harold Lehmann"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-12-01 comment: Suggest include as an alternative term under difference and remove this term as unclear on distinction as a separate term.
2021-12-08 comment: Suggest removing this term and adding it as an alternative term to Difference (alternative term and Comment for application added in response)"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The term Count Difference is used to specify the Absolute Difference is with respect to a count or number of items (such as number of events, platelet counts, sample size e.g. number of people in the group) to distinguish from differences in other types of statistics (mean difference, median difference, risk difference, etc.)"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2021-12-15 vote 6-0 by Robin Ann Yurk, Janice Tufte, Paola Rosati, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
</concept>
<concept>
<code value="STATO:0000457"/>
<display value="Difference in means"/>
<definition
value="A statistic that is a subtraction of one mean from another."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Difference of means"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Brian S. Alper, Muhammad Afzal, Khalid Shahin, Philippe Rocca-Serra, Kenneth Wilkins, Joanne Dehnbostel, Janice Tufte"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2021-12-01 vote 6-0 by Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte
2021-12-01 Steering group added comment for application and decided not to send out for vote again."/>
</property>
<property>
<code value="comment"/>
<valueString
value="The primary use of this term is in analyzing between-group differences."/>
</property>
</concept>
<concept>
<code value="TBD:0000009"/>
<display value="Difference in medians"/>
<definition
value="A statistic that is a subtraction of one median from another."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Difference of medians"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin, Janice Tufte, Muhammad Afzal, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2021-12-01 vote 6-0 by Philippe Rocca-Serra, Paola Rosati, Robin Ann Yurk, Joanne Dehnbostel, Jesus Lopez-Alcalde, Janice Tufte"/>
</property>
</concept>
<concept>
<code value="STATO:0000424"/>
<display value="Risk Difference"/>
<definition
value="A measure of association that is the subtraction of the risk of an event in one group from the risk of the same event in another group."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="RD"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Absolute risk difference"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="ARD"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: risk difference = The risk difference is the difference between the observed risks (proportions of individuals with the outcome of interest) in the two groups.
The risk difference is straightforward to interpret: it describes the actual difference in the observed risk of events between experimental and control interventions."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel, Harold Lehmann, Janice Tufte"/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Association"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-08 vote 5-0 by Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati"/>
</property>
</concept>
<concept>
<code value="TBD:0000178"/>
<display value="Difference-in-differences"/>
<definition
value="A statistic that is a subtraction of one difference from another."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Difference in differences"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="The term 'Difference-in-differences' may be used to assess the incremental benefit or harm of an intervention or exposure, where the effect of the exposure is measured as a difference (for example, pre-post testing comparison of values before and after the exposure) in two groups being compared."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-08 vote 6-0 by Robin Ann Yurk, Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-06-08 comment: do you want to add to comment for application, pre-post testing or as an alternative term?"/>
</property>
</concept>
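<!--
Editorial sketch (not part of the CodeSystem content): a minimal Python illustration of the Difference-in-differences described in the comment above, where each group's effect is a pre-post difference; the group labels and example values are hypothetical.

def difference_in_differences(pre_exposed, post_exposed, pre_control, post_control):
    # subtract the control group's pre-post difference from the exposed group's pre-post difference
    return (post_exposed - pre_exposed) - (post_control - pre_control)

# Example: difference_in_differences(10, 18, 10, 13) returns 5.
-->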
</concept>
<concept>
<code value="TBD:0000180"/>
<display value="Relative Difference"/>
<definition
value="A statistic that is a difference between 1 and a ratio of the two quantities being compared."/>
<property>
<code value="comment"/>
<valueString
value="Relative Difference = 1 - ( a / b ). Because 1 - ( a / b ) is not equal to 1 - ( b / a ), Relative Difference may be expressed as "Relative Difference with respect to b" when referring to 1 - ( a / b ).
The relative difference can also be defined as a statistic that is a ratio of the absolute difference (of the two quantities being compared) to the reference value (one of the quantities being compared). Relative Difference = ( b - a ) / ( b ) where b is the reference value and this may also be called "Relative Difference with respect to b""/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Muhammad Afzal, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="Example of a relative difference (relative to placebo) that is not a relative mean difference or a relative risk difference:
Relative median difference (%) = [(active median - placebo median) / placebo median] x 100.
This can be transformed to:
Relative median difference = (active median / placebo median) - 1."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-06-15 vote 2-2 by Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte
2022-06-22 vote 4-2 by Mario Tristan, Eric M Harvey, Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-06-15 comments: Suggest combining this set of terms(relative difference, relative mean difference and relative risk difference and summarizing the comment for applications so it is one term.
To me this definition is unclear...sorry, what it means? Is it weird to a have a ratio of a difference to a reference value? Sorry, but I am unable to understand this definition.
relative and absolute difference seems confusing to me"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-29 vote 5-0 by Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Harold Lehmann, Eric Harvey"/>
</property>
<concept>
<code value="TBD:0000176"/>
<display value="Relative Mean Difference"/>
<definition
value="A statistic that is a difference between 1 and a ratio of the two mean values being compared."/>
<property>
<code value="comment"/>
<valueString
value="Relative Mean Difference = 1 - ( a / b ) where a and b are mean values.
The relative mean difference can also be defined as a statistic that is a ratio of the difference in means to the reference mean value. Relative Mean Difference = ( b - a ) / ( b ) where b is the reference mean value and a is another mean value."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Muhammad Afzal, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-06-15 vote 3-1 by Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte
2022-06-22 vote 5-1 by Mario Tristan, Eric M Harvey, Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-06-15 comment: Suggest combining this set of terms(relative difference, relative mean difference and relative risk difference and summarizing the comment for applications so it is one term.
2022-06-29 comment: Relative Mean Difference is_a kind of Relative difference where the quantities being compared are two means, one of which is or acts as reference mean value (additional comment: define 'reference mean value' if it refers to something more specific"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-29 vote 6-0 by Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Janice Tufte, Harold Lehmann, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="TBD:0000010"/>
<display value="Relative Risk Difference"/>
<definition
value="A statistic that is a difference between 1 and a ratio of the two risk values being compared."/>
<property>
<code value="comment"/>
<valueString
value="Relative Risk Difference = 1 - ( a / b ) where a and b are risk values.
The relative risk difference can also be defined as a statistic that is a ratio of the risk difference to the risk used as a reference. Relative Risk Difference = ( b - a ) / ( b ) where b is the reference risk value and a is another risk value."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Muhammad Afzal, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-06-15 vote 3-1 by Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte
2022-06-22 vote 5-1 by Mario Tristan, Eric M Harvey, Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-06-15 comment: Suggest combining this set of terms(relative difference, relative mean difference and relative risk difference and summarizing the comment for applications so it is one term."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-22 vote 6-0 by Mario Tristan, Eric M Harvey, Jesus Lopez-Alcalde, Robin Ann Yurk, Paola Rosati, Janice Tufte"/>
</property>
</concept>
</concept>
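<!--
Editorial sketch (not part of the CodeSystem content): a minimal Python illustration of the Relative Difference formulas in the comments above, 1 - (a / b) and the equivalent (b - a) / b with b as the reference value; the same form applies to Relative Mean Difference and Relative Risk Difference when a and b are means or risks. Names and values are illustrative.

def relative_difference(a, b):
    # relative difference with respect to the reference value b
    return 1 - (a / b)

# Example: relative_difference(0.06, 0.08) returns 0.25,
# the same result as (0.08 - 0.06) / 0.08.
-->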
<concept>
<code value="STATO:0000100"/>
<display value="Standardized Mean Difference"/>
<definition
value="A statistic that is a difference between two means, divided by a statistical measure of dispersion."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="SMD"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: standardized mean difference (Cohen's d statistic, SMD) = standardized mean difference is data item computed by forming the difference between two means, divided by an estimate of the within-group standard deviation.
It is used to provide an estimatation of the effect size between two treatments when the predictor (independent variable) is categorical and the response(dependent) variable is continuous"/>
</property>
<property>
<code value="comment"/>
<valueString
value="In English, "standardized" is often used to express relative comparison to any reference value. However, in SEVCO, "standardized" is used to express relative comparison to a statistical measure of dispersion.
In SEVCO, the term Standardized Mean Difference is a description of the concept without an explicit type of statistical measure of dispersion. If the statistical measure of dispersion is specified, then a type (child term) of Standardized Mean Difference is preferred.
For example, in Cohen's d statistic, the statistical measure of dispersion is specified as the square root of an average of the variances of the two groups being compared."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Muhammad Afzal, Joanne Dehnbostel, Khalid Shahin"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-07-06 vote 3-1 by Paola Rosati, Harold Lehmann, Robin Ann Yurk, Eric Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-07-06 comment: Consider listing Cohen's D statistic as an alternative term or selecting one of the two terms to be the term to evaluate as the definition are similar."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-07-20 vote 6-0 by Janice Tufte, Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey"/>
</property>
<concept>
<code value="TBD:0000011"/>
<display value="Cohen’s d statistic"/>
<definition
value="A standardized mean difference which is calculated as a difference between two means, divided by a square root of an average of the variances of the two groups."/>
<property>
<code value="comment"/>
<valueString
value="A standardized mean difference is a statistic that is a difference between two means, divided by a statistical measure of dispersion. In SEVCO, the term Standardized Mean Difference is a description of the concept without an explicit type of statistical measure of dispersion. If the statistical measure of dispersion is specified, then a type (child term) of Standardized Mean Difference is preferred.
In Cohen's d statistic, the statistical measure of dispersion is specified as the square root of an average of the variances of the two groups being compared. The variances of the two groups are based on within-group standard deviations.
For sample sizes < 50, a correction factor is used."/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: standardized mean difference (Cohen's d statistic, SMD) = standardized mean difference is data item computed by forming the difference between two means, divided by an estimate of the within-group standard deviation.
It is used to provide an estimatation of the effect size between two treatments when the predictor (independent variable) is categorical and the response(dependent) variable is continuous"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Brian S. Alper, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-07-06 vote 3-1 by Paola Rosati, Harold Lehmann, Robin Ann Yurk, Eric Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-07-06 comment: Consider listing SMD statistic as an alternative term or selecting one of the two terms to be the term to evaluate as the definition are similar and the other term to be the alternative term."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-07-20 vote 5-0 by Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="STATO:0000135"/>
<display value="Strictly standardized mean difference"/>
<definition
value="A standardized mean difference which is calculated as a difference between two means, divided by the standard error of the difference between the two means."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="SSMD"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: strictly standardized mean difference (SSMS) is a standardized mean difference which corresponds to the ratio of mean to the standard deviation of the difference between two groups. SSMD directly measures the magnitude of difference between two groups. SSMD is widely used in High Content Screen for hit selection and quality control. When the data is preprocessed using log-transformation as normally done in HTS experiments, SSMD is the mean of log fold change divided by the standard deviation of log fold change with respect to a negative reference. In other words, SSMD is the average fold change (on the log scale) penalized by the variability of fold change (on the log scale). For quality control, one index for the quality of an HTS assay is the magnitude of difference between a positive control and a negative reference in an assay plate. For hit selection, the size of effects of a compound (i.e., a small molecule or an siRNA) is represented by the magnitude of difference between the compound and a negative reference. SSMD directly measures the magnitude of difference between two groups. Therefore, SSMD can be used for both quality control and hit selection in HTS experiments."/>
</property>
<property>
<code value="comment"/>
<valueString
value="A standardized mean difference is a statistic that is a difference between two means, divided by a statistical measure of dispersion. In SEVCO, the term Standardized Mean Difference is a description of the concept without an explicit type of statistical measure of dispersion. If the statistical measure of dispersion is specified, then a type (child term) of Standardized Mean Difference is preferred.
In Strictly standardized mean difference, the statistical measure of dispersion is specified as the standard error of the difference between means [SEVCO TBD:0000063]."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-07-20 vote 6-0 by Janice Tufte, Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="STATO:0000319"/>
<display value="Hedges’s g"/>
<definition
value="A standardized mean difference which is calculated as a difference between two means, divided by the pooled standard deviation."/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: Hedges's g = Hedges's g is an estimator of effect size, which is similar to Cohen's d and is a measure based on a standardized difference. However, the denominator, corresponding to a pooled standard deviation, is computed differently from Cohen's d coefficient, by applying a correction factor (which involves a Gamma function)."/>
</property>
<property>
<code value="comment"/>
<valueString
value="A standardized mean difference is a statistic that is a difference between two means, divided by a statistical measure of dispersion. In SEVCO, the term Standardized Mean Difference is a description of the concept without an explicit type of statistical measure of dispersion. If the statistical measure of dispersion is specified, then a type (child term) of Standardized Mean Difference is preferred.
In Hedges’s g, the statistical measure of dispersion is specified as the pooled standard deviation. There is a correction factor for small sample sizes."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-07-20 vote 5-0 by Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="STATO:0000320"/>
<display value="Glass’s delta"/>
<definition
value="A standardized mean difference which is calculated as a difference between two means (of an experimental group and a control group), divided by the standard deviation of the control group."/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: Glass's delta is an estimator of effect size which is similar to Cohen's d but where the denominator corresponds only to the standard deviation of the control group (or second group). It is considered less biais than the Cohen's d for estimating effect sizes based on means and distances between means."/>
</property>
<property>
<code value="comment"/>
<valueString
value="A standardized mean difference is a statistic that is a difference between two means, divided by a statistical measure of dispersion. In SEVCO, the term Standardized Mean Difference is a description of the concept without an explicit type of statistical measure of dispersion. If the statistical measure of dispersion is specified, then a type (child term) of Standardized Mean Difference is preferred.
In Glass's delta, the statistical measure of dispersion is specified as the standard deviation of the control group. There is a correction factor for small sample sizes."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-07-20 vote 5-0 by Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey"/>
</property>
</concept>
</concept>
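<!--
Editorial sketch (not part of the CodeSystem content): minimal Python showing how the Standardized Mean Difference variants above differ only in the measure of dispersion used as the denominator. Shown here are Glass's delta (standard deviation of the control group) and a simple Cohen's d form (square root of the average of the two group variances, as described in the comments above); the small-sample correction factors mentioned in the comments are omitted, and names are illustrative.

import statistics

def glass_delta(experimental, control):
    return (statistics.mean(experimental) - statistics.mean(control)) / statistics.stdev(control)

def cohen_d(group1, group2):
    pooled = ((statistics.variance(group1) + statistics.variance(group2)) / 2) ** 0.5
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled
-->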
</concept>
<concept>
<code value="TBD:0000012"/>
<display value="Reciprocal of Difference"/>
<definition
value="A statistic that is a quotient of one and a difference."/>
<property>
<code value="comment"/>
<valueString
value="A difference is a statistic that is a subtraction of one quantity from another."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-07-20 vote 5-0 by Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey"/>
</property>
<concept>
<code value="TBD:0000013"/>
<display value="Number Needed to Treat"/>
<definition
value="A statistic that represents the number of units that needs to be treated to prevent one additional undesired outcome. The Number Needed to Treat is calculated as the reciprocal of a treatment effect estimate, where the effect estimate is expressed as a risk difference."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="NNT"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Number needed to treat to benefit"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="NNTB"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="The Number Needed to Treat (NNT) value is often rounded up to the next highest whole integer."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="Centre for Evidence-Based Medicine
Number Needed to Treat (NNT)
The Number Needed to Treat (NNT) is the number of patients you need to treat to prevent one additional bad outcome (death, stroke, etc.).
https://www.cebm.ox.ac.uk/resources/ebm-tools/number-needed-to-treat-nnt"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-07-20 vote 7-0 by Cauê Monaco, Janice Tufte, Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey"/>
</property>
</concept>
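<!--
Editorial sketch (not part of the CodeSystem content): minimal Python for the Number Needed to Treat as the reciprocal of a risk difference, rounded up to a whole number as noted in the comment above; the variable names and example risks are hypothetical.

import math

def number_needed_to_treat(risk_control, risk_treated):
    absolute_risk_reduction = risk_control - risk_treated
    return math.ceil(1 / absolute_risk_reduction)

# Example: risks of 0.20 (control) and 0.15 (treated) give a risk difference of 0.05 and an NNT of 20.
-->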
<concept>
<code value="TBD:0000014"/>
<display value="Number Needed to Screen to Detect"/>
<definition
value="A statistic that represents the number of units that needs to be tested to identify one additional case. The Number Needed to Screen to Detect is calculated as the reciprocal of a difference in rate of detected cases with and without screening."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="NNS"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="The Number Needed to Screen (NNS) value is often rounded up to the next highest whole integer. The Number Needed to Screen to Detect is distinct from the Number Needed to Screen to Prevent as the formulas to calculate are different, even though both may be abbreviated as NNS."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-08-03 vote 5-0 by Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey, Philippe Rocca-Serra, Harold Lehmann"/>
</property>
</concept>
<concept>
<code value="TBD:0000015"/>
<display value="Number Needed to Screen to Prevent"/>
<definition
value="A statistic that represents the number of units that needs to be tested to prevent one additional adverse outcome, assuming that positive testing will lead to preventive intervention. The Number Needed to Screen to Prevent is calculated as the Number Needed to Treat divided by the prevalence."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="NNS"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="The Number Needed to Screen (NNS) value is often rounded up to the next highest whole integer. The Number Needed to Screen to Detect is distinct from the Number Needed to Screen to Prevent as the formulas to calculate are different, even though both may be abbreviated as NNS.
The formula may be adjusted for test performance characteristics (e.g. dividing by the sensitivity) or assumptions regarding acceptance or adherence of interventions."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="BMJ 1998;317: 307 Number needed to screen: development of a statistic for disease screening
Number needed to screen is defined as the number of people that need to be screened for a given duration to prevent one death or adverse event.
Number needed to screen was then calculated by dividing the number needed to treat for treating risk factors by the prevalence of disease that was unrecognised or untreated.
https://www.bmj.com/content/317/7154/307.long"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-08-03 vote 5-0 by Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey, Philippe Rocca-Serra, Harold Lehmann"/>
</property>
</concept>
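<!--
Editorial sketch (not part of the CodeSystem content): minimal Python for the Number Needed to Screen to Prevent as the Number Needed to Treat divided by the prevalence, per the definition above, without the optional adjustments for test performance or adherence mentioned in the comment; names and values are hypothetical.

import math

def number_needed_to_screen_to_prevent(nnt, prevalence):
    return math.ceil(nnt / prevalence)

# Example: an NNT of 20 and a prevalence of 0.10 give a Number Needed to Screen to Prevent of 200.
-->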
<concept>
<code value="TBD:0000016"/>
<display value="Number Needed to Harm"/>
<definition
value="A statistic that represents the number of units that, if treated or exposed to the intervention, to lead to one additional undesired outcome. The Number Needed to Harm is calculated as the reciprocal of a treatment effect estimate, where the effect estimate is expressed as a risk difference."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="NNH"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Number needed to treat to harm"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="NNTH"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="The Number Needed to Harm (NNH) value is often rounded down to the next lowest whole integer."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="Centre for Evidence-Based Medicine
Number needed to treat (NNT): The number of patients who need to be treated to prevent one bad outcome. It is the inverse of the ARR: NNT=1/ARR. Numbers needed to harm (NNH)-the number of patients who, if they received the experimental treatment, would lead to one additional person being harmed compared with patients who receive the control treatment; calculated as 1/ARI.
https://www.cebm.ox.ac.uk/resources/ebm-tools/glossary"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-07-20 vote 7-0 by Cauê Monaco, Janice Tufte, Philippe Rocca-Serra, Mario Tristan, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey"/>
</property>
</concept>
</concept>
<concept>
<code value="STATO:0000184"/>
<display value="Ratio"/>
<definition value="A statistic that is a quotient of two quantities."/>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Kenneth Wilkins, Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: A ratio is a data item which is formed with two numbers r and s is written r/s, where r is the numerator and s is the denominator. The ratio of r to s is equivalent to the quotient r/s.
NCIt: The quotient of one quantity divided by another, with the same units of measurement.
UMLS: Quotient of quantities of the same kind for different components within the same system.
OECD: A ratio is a number that expresses the relative size of two other numbers.
OCRe: A ratio is a quotient of quantities of the same kind for different components within the same system.
SCO: A ratio is a relationship between two numbers of the same kind expressed arithmetically as a dimensionless quotient of the two which explicitly indicates how many times the first number contains the second.
Quotient of quantities of the same kind for different components within the same system."/>
</property>
<property>
<code value="comment"/>
<valueString
value="Although some definitions for Ratio include "with the same units of measurement" and some definitions for Ratio include "a dimensionless quotient", not all definitions have these concepts, and there are ratios with units of measurement that are different for numerator and denominator such as event rate, body mass index, and cost-effectiveness ratio."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-12-22 vote 3-1 by Robin Ann Yurk, Harold Lehmann, Janice Tufte, Jesus Lopez-Alcalde
2021-12-29 vote 3-1 by Robin Ann Yurk, Harold Lehmann, Janice Tufte, C P Ooi"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-12-22 comment: I suggest adding "with the same measurement units"2021-12-29 comment: I agree with the term definition. However, the comment could be improved and I would not include BMI as an example as an index may not necessarily be a ratio but a more complex statistic or calculation
2022-01-05 comment: Comment, I would remove body mass index from the comment section as an example as an index is a unique statistical defnition."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-05 vote 6-0 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, C P Ooi, Louis Leff, Jesus Lopez-Alcalde"/>
</property>
<concept>
<code value="TBD:0000017"/>
<display value="Percentage"/>
<definition
value="A ratio that is multiplied by 100, and has the same units of measurement in the numerator and the denominator."/>
<property>
<code value="comment"/>
<valueString
value="When a percentage is a fraction of hundred or proportion per hundred, then the percentage is the proportion multiplied by 100. However, a percentage can be greater than 100% so the definition is a ratio that is multiplied by 100.
Proportion is SEVCO code of TBD:0000018, Ratio is SEVCO code of STATO:0000184"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Janice Tufte, Muhammad Afzal, Khalid Shahin"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="NCIt-A fraction or ratio with 100 understood as the denominator. Alt definition One hundred times the quotient of one quantity divided by another, with the same units of measurement.
OECD-A percentage is a special type of proportion where the ratio is multiplied by a constant, 100, so that the ratio is expressed per 100.
SCO-A fraction or ratio with 100 understood as the denominator.
UMLS-A unit for expressing a number as a fraction of hundred (on the basis of a rate or proportion per hundred)-NCI"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-01-05 vote 5-1 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, C P Ooi, Louis Leff, Jesus Lopez-Alcalde"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-01-05 comment: Instead of "A ratio" I would propose "A proportion that is multiplied by 100, [...]""/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-07 vote 7-0 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, C P Ooi, Louis Leff, Jesus Lopez-Alcalde, Mario Tristan"/>
</property>
<concept>
<code value="TBD:MeasAccu"/>
<display value="Measurement Accuracy"/>
<definition
value="A percentage in which the numerator represents the absolute value of one minus the difference between the true value and the observed value, and the denominator represents the true value."/>
<property>
<code value="external-definitions"/>
<valueString
value="from https://www.sciencedirect.com/topics/engineering/measurement-accuracy
Measurement Accuracy
Measurement accuracy is defined as the closeness of agreement between a measured quantity value and a true quantity value of a measurand (i.e., the quantity intended to be measured) (ISO-JCGM 200, 2008), and is often limited by calibration errors."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-08-24 vote 5-0 by Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, Cauê Monaco, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="TBD:RelativeImportance"/>
<display value="Relative Importance"/>
</concept>
</concept>
<concept>
<code value="TBD:0000018"/>
<display value="Proportion"/>
<definition
value="A ratio in which the numerator represents a part, fraction or share of the amount represented by the denominator."/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: observed risk [as a data item STATO_0000423] = the proportion of individuals in a population with the outcome of interest
NCIt: A part, fraction, share, or number considered in relation to the whole amount or number.
OECD Definition:
A proportion is a special type of ratio in which the denominator includes the numerator.
An example is the proportion of deaths that occurred to males which would be deaths to males divided by deaths to males plus deaths to females (i.e. the total population).
OCRe: A proportion is a measure of the frequency of some phenomenon of interest within an average population"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The value of a proportion must be between 0 and 1 (inclusive). Proportions may represent the frequency of some phenomenon of interest within a population, or may represent a subset of a whole."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal, Khalid Shahin"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2021-12-29 vote 3-1 by Robin Ann Yurk, Harold Lehmann, Janice Tufte, C P Ooi"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2021-12-29 comment: I agree with the term definition. However, for the comment, I would edit to include the OCRe defintion: A proportion is a measure of the frequency of some phenomenon of interest within a population."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-07 vote 7-0 by Harold Lehmann, Robin Ann Yurk, Janice Tufte, C P Ooi, Louis Leff, Jesus Lopez-Alcalde, Mario Tristan"/>
</property>
<concept>
<code value="STATO:0000413"/>
<display value="Incidence"/>
<definition
value="A proportion in which the numerator represents new events."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Cumulative incidence"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Incidence proportion"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: Incidence is the ratio of the number of new cases of a disease divided by the number of persons at risk for the disease.
NCIt The relative frequency of occurrence of something.
OBCS A data item that refers to the number of new events that have occurred in a specific time interval divided by the population at risk at the beginning of the time interval. The result gives the likelihood of developing an event in that time interval.
UMLS The number of new cases of a given disease during a given period in a specified population. It also is used for the rate at which new events occur in a defined population. It is differentiated from PREVALENCE, which refers to all cases in the population at a given time. (MSH)
The relative frequency of occurrence of something. (NCI)
The number of new cases of a disease diagnosed each year. (NCI)
CDC: https://www.cdc.gov/csels/dsepd/ss1978/lesson3/section2.html
Incidence refers to the occurrence of new cases of disease or injury in a population over a specified period of time. Although some epidemiologists use incidence to mean the number of new cases in a community, others use incidence to mean the number of new cases per unit of population.
Two types of incidence are commonly used — incidence proportion and incidence rate.
Incidence proportion or risk
Synonyms for incidence proportion
Attack rate
Risk
Probability of developing disease
Cumulative incidence
Definition of incidence proportion
Incidence proportion is the proportion of an initially disease-free population that develops disease, becomes injured, or dies during a specified (usually limited) period of time. Synonyms include attack rate, risk, probability of getting disease, and cumulative incidence. Incidence proportion is a proportion because the persons in the numerator, those who develop disease, are all included in the denominator (the entire population)."/>
</property>
<property>
<code value="comment"/>
<valueString
value="Outside of the Scientific Evidence Code System (SEVCO), there is substantial inconsistency in the terms and definitions used for incidence and related concepts.
Within SEVCO, Incidence is a proportion in which the numerator represents new events. The denominator may represent the entire population or may represent that population at risk (i.e., those without prior events).
Disease incidence is the ratio of the number of new cases of a disease divided by the number of persons at risk for the disease. Incidence is a proportion because the persons in the numerator, those who develop disease, are all included in the denominator (the entire population).
When a time period or a duration of time is used to define the period of time in which the incidence is measured, the statistic type is Incidence. Examples include 1-year incidence, in-hospital incidence, and cumulative incidence.
When time is considered as a variable in the formalization of the statistic, such as incidence per unit of time, then the statistic type is Incidence Rate (SEVCO code of TBD:0000024)"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-01-05 vote 2-2 by Robin Ann Yurk, janice tufte, Jesus Lopez-Alcalde, Harold Lehmann"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-01-05 comments: I propose "The number of new occurrences of an event (for example, infection) in a population at risk over a particular period of time.
Doesn't denominator need to include a time component? Even if not, the time component should be referenced in the Comment. Also, do note that, in public health, the numerator *attempts* to be a subset of the denominator, but that relationship cannot be assured. (E.g., fertility incidence may be number of births (vital statistics) with denominator of number of women of child bearing age (census).
2022-06-15 Expert Working Group/Steering Committee removed 'Risk' as alternative term as we created a separate term for 'Risk' (TBD:0000185)"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel, Janice Tufte"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-12 vote 6-0 by Harold Lehman, Mario Tristan, janice tufte, Andrew Beck, Robin Ann Yurk, Paul Harris"/>
</property>
</concept>
<concept>
<code value="STATO:0000412"/>
<display value="Prevalence"/>
<definition
value="A proportion in which the numerator represents all events of interest (for example, both new and preexisting cases of a disease) in the population, which is represented by the denominator."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Period prevalence"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Point prevalence"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Lifetime prevalence"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: prevalence is a ratio formed by the number of subjects diagnosed with a disease divided by the total population size.
Period prevalence:
The ratio (for a given time period) of the number of occurrences of a disease or event to the number of units at risk in the population.
a prevalence rate that occurs at a specific period of time
Point prevalence:
NCIt The ratio (for a given time period) of the number of occurrences of a disease or event to the number of units at risk in the population.
OBCS a prevalence rate that occurs at a specific point of time
UMLS:
The total number of cases of a given disease in a specified population at a designated time. It is differentiated from INCIDENCE, which refers to the number of new cases in the population at a given time. (MSH)
The ratio (for a given time period) of the number of occurrences of a disease or event to the number of units at risk in the population. (NCI)
Proportion of the people having a certain disease or condition in a given population (CHV)
CDC https://www.cdc.gov/csels/dsepd/ss1978/lesson3/section2.html
Point prevalence = Number of current cases (new and preexisting) at a specified point in time / Population at the same specified point in time
Period prevalence = Number of current cases (new and preexisting) over a specified period of time / Average or mid-interval population
Definition of prevalence
Prevalence, sometimes referred to as prevalence rate, is the proportion of persons in a population who have a particular disease or attribute at a specified point in time or over a specified period of time. Prevalence differs from incidence in that prevalence includes all cases, both new and preexisting, in the population at the specified time, whereas incidence is limited to new cases only.
Point prevalence refers to the prevalence measured at a particular point in time. It is the proportion of persons with a particular disease or attribute on a particular date.
Period prevalence refers to prevalence measured over an interval of time. It is the proportion of persons with a particular disease or attribute at any time during the interval."/>
</property>
<property>
<code value="comment"/>
<valueString
value="Prevalence is the proportion of persons in a population who have a particular disease or attribute at a specified point in time or over a specified period of time.
Prevalence differs from incidence in that prevalence includes all cases, both new and preexisting, in the population at the specified time, whereas incidence is limited to new cases only.
In Bayesian calculations, the prevalence value is often used as the pre-test probability or prior probability value, but these probability values are not always based on or derived from the prevalence value."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel, Janice Tufte, Kenneth Wilkins, Harold Lehmann"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-01-12 vote 6-1 by Robin Ann Yurk, janice tufte, Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, Andrew Beck, Paul Harris
2022-01-19 vote 2-1 by Harold Lehmann, Robin Ann Yurk, Alejandro Piscoya
2022-01-26 vote 7-1 by Janice Tufte, Harold Lehmann, Philippe Rocca-Serra, Paola Rosati, Mario Tristan, Robin Ann Yurk, Brian S. Alper, Jesus Lopez-Alcalde"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-01-12 comments: I propose "A proportion in which the numerator represents all events (new and preexisting)." I think it is very important to detail "new and preexisting"2022-01-03 comment: The comment here is better than for incidence (why not copy this comment into "Incidence," and edit?). But it still feels like the denominator should be called out in the definition.
2022-01-19 comment: I would edit the term definition to include ...as part of a denominator of a broader population.
2022-01-26 comments: (1) suggestion: alter the definition to: A proportion in which the numerator represents all events of interest (e.g. both new and preexisting cases of a disease) in the population, which is represented by the denominator.
(2) I would delete this sentence from the comment for application. " Prevalence is a proportion because the persons in the numerator, those who develop or have disease, are all included in the denominator (the entire population)"
(3) Probability should be a type of Proportion but distinct from Prevalence. Probability relates to the likelihood of something, but in that sense incidence and prevalence are both probabilities. If Prevalence and Probability were considered synonyms then one would still not call it the same as "Pre-test" or "Prior" probability. The term pre-test probability could be a type of (child of) probability.
2022-02-02 comment: I would remove the statement ..In Bayesian calculations, as the pre-test probability is a formula with new variables."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-02 vote 5-0 by Paola Rosati, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper"/>
</property>
</concept>
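<!--
Editorial sketch (not part of the CodeSystem content): minimal Python contrasting Incidence and Prevalence as defined above, with Incidence counting only new cases over a period and Prevalence counting all cases (new and preexisting) at a specified time; the counts and population size are hypothetical.

def incidence_proportion(new_cases, population_at_risk):
    return new_cases / population_at_risk

def prevalence(all_cases, population):
    return all_cases / population

# Example: 5 new cases and 20 total cases (new and preexisting) in a population of 1000
# give an incidence of 0.005 and a prevalence of 0.02.
-->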
<concept>
<code value="STATO:0000233"/>
<display value="Sensitivity"/>
<definition
value="A proportion in which the numerator represents the detected items within the denominator that represents all items with the targeted attribute."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Recall"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="True Positive Rate"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: true positive rate (recall, sensitivity) = sensitivity is a measurement datum qualifying a binary classification test and is computed by substracting the false negative rate to the integral numeral 1
NCIt diagnostic sensitivity The probability that a test will produce a true positive result when used on effected subjects as compared to a reference or "gold standard". The sensitivity of a test can be determined by calculating: number of true positive results divided by the sum of true positive results plus number of false negative results.
OBCS-
a data item that measures the proportion of actual positives which are correctly identified as such (e.g. the percentage of sick people who are correctly identified as having the condition).
OCRe
An index of performance of a discriminant test calculated as the percentage of correct positives in all true positives
STATO
sensitivity is a measurement datum qualifying a binary classification test and is computed by subtracting the false negative rate to the integral numeral 1
NICE glossary-Sensitivity of a test-How well a test detects what it is testing for. It is the proportion of people with the disease or condition that are correctly identified by the study test. For example, a test with a sensitivity of 96% will, on average, correctly identify 96 people in every 100 who truly have the condition, but incorrectly identify as not having the condition 4 people in every 100 who truly have it. It is different from positive predictive value.
MeSH scope note-sensitivity and specificity-Scope Note
Binary classification measures to assess test results. Sensitivity or recall rate is the proportion of true positives. Specificity is the probability of correctly determining the absence of a condition. (From Last, Dictionary of Epidemiology, 2d ed)"/>
</property>
<property>
<code value="comment"/>
<valueString
value="In a population of people with and without a disease, and a test which is positive (suggesting the disease) or negative (suggesting not having the disease), the sensitivity is the proportion of true positives (all people with the disease who test positive) within all people with the disease (true positives plus false negatives). Sn = TP / (TP + FN).
In information retrieval, recall is the proportion of items correctly retrieved within all relevant items.
True positive rate (TPR) is listed as an alternative term because of common usage, but TPR is not a Rate as defined in SEVCO."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-01-19 vote 3-1 by Paul Harris, Harold Lehmann, Robin Ann Yurk, Alejandro Piscoya"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-01-19 comment: I would remove recall from alternative terms and comment for applications, as it is a specialized informatics measures and list it as a separate term. (EWG discussion: This comment is not persuasive. If the same statistic type (formula) has different names in different contexts we still want one common code for the concept. This consolidation of terms is the purpose of a standardized terminology or controlled vocabulary where we are controlling the code for the concept, not the name for common use.)"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-26 vote 10-0 by Paul Harris, Harold Lehmann, Robin Ann Yurk, Alejandro Piscoya, Janice Tufte, Philippe Rocca-Serra, Paola Rosati, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde"/>
</property>
</concept>
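<!-- Illustrative worked example, not part of the code system content; counts are hypothetical:
with TP = 90 true positives and FN = 10 false negatives among people with the disease,
Sensitivity Sn = TP / (TP + FN) = 90 / (90 + 10) = 0.90. -->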
<concept>
<code value="STATO:0000134"/>
<display value="Specificity"/>
<definition
value="A proportion in which the numerator represents the non-detected items within the denominator that represents all items without the targeted attribute."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="True Negative Rate"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: true negative rate (specificity) = specificity is a measurement datum qualifying a binary classification test and is computed by substracting the false positive rate to the integral numeral 1
NCIt The probability that a test will produce a true negative result when used on non-effected subjects as compared to a reference or "gold standard". The specificity of a test can be determined by calculating: number of true negative results divided by the sum of true negative results plus number of false positive results.
OBCS a data item that refers to the proportion of negatives in a binary classification test which are correctly identified
OCRe An index of performance of a discriminant test calculated as the percentage of negatives in all true negatives
NICE glossary-Specificity (of a test)
How well a test correctly identifies people who do not have what it is testing for. It is the proportion of people without the disease or condition that are correctly identified by the study test. For example, a test with a specificity of 96% will, on average, correctly identify 96 people in every 100 who truly do not have the condition, but incorrectly identify as having the condition 4 people in every 100 who truly do not have it. It is different from negative predictive value.
MeSH scope note-sensitivity and specificity-Scope Note
Binary classification measures to assess test results. Sensitivity or recall rate is the proportion of true positives. Specificity is the probability of correctly determining the absence of a condition. (From Last, Dictionary of Epidemiology, 2d ed)"/>
</property>
<property>
<code value="comment"/>
<valueString
value="In a population of people with and without a disease, and a test which is positive (suggesting the disease) or negative (suggesting not having the disease), the specificity is the proportion of true negatives (all people without the disease who test negative) within all people without the disease (true negatives plus false positives). Sp = TN / (TN + FP).
True Negative Rate (TNR) is listed as an alternative term because of common usage, but TNR is not a Rate as defined in SEVCO."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-19 vote 5-0 by Paul Harris, Harold Lehmann, Robin Ann Yurk, Alejandro Piscoya, Janice Tufte"/>
</property>
</concept>
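<!-- Illustrative worked example with hypothetical counts:
with TN = 80 true negatives and FP = 20 false positives among people without the disease,
Specificity Sp = TN / (TN + FP) = 80 / (80 + 20) = 0.80. -->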
<concept>
<code value="TBD:0000022"/>
<display value="Positive Predictive Value"/>
<definition
value="A proportion in which the numerator represents the correctly detected items within the denominator that represents all items detected."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="PPV"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Precision (as used in information retrieval)"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="NCIt The probability that an individual is affected with the condition when a positive test result is observed. Predictive values should only be calculated from cohort studies or studies that legitimately reflect the number of people in the population who have the condition of interest at that time since predictive values are inherently dependent upon the prevalence. PPVDT can be determined by calculating: number of true positive results divided by the sum of true positive results plus number of false positive results."/>
</property>
<property>
<code value="comment"/>
<valueString
value="In a population of people with and without a disease, and a test which is positive (suggesting the disease) or negative (suggesting not having the disease), the positive predictive value is the proportion of true positives (all people with the disease who test positive) within all the people with a positive test (true positives plus false positives). PPV = TP / (TP + FP).
In information retrieval, 'precision' is the proportion of items correctly retrieved within all retrieved items.
In Bayesian calculations, the 'Positive Predictive Value' is equivalent to the 'post-test probability' or 'posterior probability' following a positive test."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-01-26 vote 7-1 by Robin Ann Yurk, Janice Tufte, Harold Lehmann, Philippe Rocca-Serra, Paola Rosati, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-01-26 comments: (1) I would remove precision from alternative terms
(2) minor change=quote terms of interest: In information retrieval, `precision` is the proportion of items correctly retrieved within all retrieved items.
The terms `post-test probability` and `posterior probability` are used in Bayesian calculations.
(3) Post-test probability is not fully synonymous with positive predictive value. A negative predictive value is also the "post-test" probability of a true negative if the test has a negative result. And a test with a continuous rather than binary result could have a post-test probability that is neither positive nor negative predictive value. Post-test probability (and posterior probability) should become a child of probability.
2022-02-02 comment: I would remove the alternate term Precision and the comment for application for precision."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Kenneth Wilkins, Phillippe Rocca-Serra, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-02-02 vote 5-0 by Paola Rosati, Mario Tristan, Robin Ann Yurk, Janice Tufte, Brian S. Alper"/>
</property>
</concept>
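<!-- Illustrative worked example with hypothetical counts:
with TP = 90 true positives and FP = 20 false positives among all positive test results,
Positive Predictive Value PPV = TP / (TP + FP) = 90 / (90 + 20), approximately 0.82. -->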
<concept>
<code value="TBD:0000023"/>
<display value="Negative Predictive Value"/>
<definition
value="A proportion in which the numerator represents the correctly non-detected items within the denominator that represents all items not detected."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="NPV"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="NCIt The probability that an individual is not affected with the condition when a negative test result is observed. This measure of accuracy should only be used if the data on the prevalence of condition of interest in given population is available. NPVDT can be determined by calculating: number of true negative results divided by the sum of true negative results plus number of false negative results."/>
</property>
<property>
<code value="comment"/>
<valueString
value="In a population of people with and without a disease, and a test which is positive (suggesting the disease) or negative (suggesting not having the disease), the negative predictive value is the proportion of true negatives (all people without the disease who test negative) within all the people with a negative test (true negatives plus false negatives). NPV = TN / (TN + FN)."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Ken Wilkins, Phillippe Rocca-Serra, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-01-26 vote 8-0 by Robin Ann Yurk, Janice Tufte, Harold Lehmann, Philippe Rocca-Serra, Paola Rosati, Mario Tristan, Brian S. Alper, Jesus Lopez-Alcalde"/>
</property>
</concept>
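<!-- Illustrative worked example with hypothetical counts:
with TN = 80 true negatives and FN = 10 false negatives among all negative test results,
Negative Predictive Value NPV = TN / (TN + FN) = 80 / (80 + 10), approximately 0.89. -->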
<concept>
<code value="TBD:0000149"/>
<display value="Diagnostic Yield"/>
<definition
value="A proportion in which the numerator represents the correctly detected items within the denominator that represents all items tested."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value=""Diagnostic yield was defined as the number of participants with positive findings for advanced neoplasia relative to all participants" in https://pubs.rsna.org/doi/10.1148/radiol.12112486
Other 'definitions' found include synonymous use with sensitivity, and 'diagnostic yield' describing the statistic array of TP, FP, TN, and FN data.
https://medical-dictionary.thefreedictionary.com/diagnostic+yield
Diagnostic yield The likelihood that a test or procedure will provide the information needed to establish a diagnosis."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-08-10 vote 5-0 by Philippe Rocca-Serra, Mario Tristan, Harold Lehmann, Eric Harvey, Paola Rosati"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-08-10 comment: I would simply suggest to simplify the definition to:
A proportion obtained by dividing the number of correctly detected items (numerator) by the number of all items tested (denominator)"/>
</property>
</concept>
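<!-- Illustrative worked example with hypothetical counts:
if 200 people are tested and 30 have correctly detected findings (true positives),
Diagnostic Yield = 30 / 200 = 0.15 (15%). -->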
<concept>
<code value="TBD:0000185"/>
<display value="Risk"/>
<definition
value="A proportion in which the numerator represents the cases in which an event or characteristic occurs and the denominator represents all possible cases."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Probability"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="In the English language, 'risk' may be used synonymously with 'hazard', 'chance', 'likelihood', 'relative likelihood', 'probability' and many other terms. In SEVCO the term 'risk' is explicitly defined for how it is used in other terms such as 'Risk Ratio' and 'Relative Risk Difference' The statistical definition of 'risk' does not have a negative or undesirable connotation.
Risk may be conditioned on many factors. In such cases the statistic type is Risk and the statistic may be reported as a conditional risk (for example, predicted risk).
When a time period or a duration of time is used to define the period of time in which the risk is measured, the statistic type is Risk. Examples include 1-year risk, in-hospital risk, and cumulative risk.
In frequentist statistics, the risk is a ratio of the number of events to the number of possible cases. In subjective Bayesian statistics, the risk is a proportion as a whole that represents degree of belief, where 0 represents certainty that an event will not occur and 1 represents certainty that the event will occur."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Janice Tufte"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-06-22 vote 4-1 by Eric Harvey, Janice Tufte, Mario Tristan, Muhammad Afza, Eric M Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-06-22 comment: It is confusing to consider "conditional probability" as an alternative term for "risk."
Generally;
"Conditional probability" refers to a probability whose value is dependent upon the occurrence of some process/event. In contrast, "risk" refers to the probability that an event will occur.
Mathematically;
"Conditional probability" is a measure of the probability of an event occurring, given that another event has already occurred. Let us have two events, A and B, and we want to know P(A) given P(B); notationally, P(A|B). Here the word 'given' defines a subset of the population of events because it applies condition on B. For example, if we care about the incidence of COVID-19 in men only, we might want to know P(COVID-19 | male). This means that first, pick out all the males, and second, figure out the probability they will get COVID-19. More formally, what P(A|B) says is: pick out the events to which both P(A) and P(B) apply and consider them as part of the subset of events to which only P(B) applies: hence P(A/B) = P(A and B)/P(B).
In simple words, what we are doing with P(A|B) = P(A and B) | P(B) is selecting out the same subset of the event population in both the
the numerator and the denominator: in this case, only men.
While "risk" by definition involves no condition. Taking the same example, we can say, "what is the risk of COVID-19?" here, we refer to the whole population; however, we can apply can make a condition over it like "what is the risk of COVID-19 in males?" This risk may be taken as "conditional risk," and it could be taken as an alternative term to conditional probability.
Conclusion: Let us define two terms, "risk" and "conditional risk," as a subset of "risk." Then "conditional probability" shall be taken as an alternative term to "conditional risk."
One more important point about the current definition of "Risk," i.e.,
Risk = A proportion in which the numerator represents the probability that an event or characteristic occurs and the denominator represents the probability that the event or characteristic occurs or does not occur.
If we write symbolically, it will look like this; P(A)/P(A or B), where A indicates "positive," which is the occurrence of something, and B indicates "negative," which is the non-occurrence of the same. We can write it formally as P(A) / P(AUB). In set theory, when there is "OR," in other words, "Union" infer the True value when either of them is True. It means A is true, or B is true, or both are true; we will get the true result. Interestingly, occurrence and non-occurrence are mutually exclusive, so two situations arise.
I) when the event occurs: P(A) / P(AUB) --> P(A)/P(A) = 1
II) when the event does not occur: P(A) / P(AUB) --> P(A)/P(B) = Odds
Therefore the definition needs to be revised for the correct meaning of the denominator. I believe the denominator refers to the whole population where some people are at risk and some are not, while the numerator refers to only those at risk."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-29 vote 6-0 by Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Janice Tufte, Harold Lehmann, Eric Harvey"/>
</property>
</concept>
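<!-- Illustrative worked example with hypothetical counts:
if 25 of 500 people experience the event, Risk = 25 / 500 = 0.05 (a 5% risk). -->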
</concept>
<concept>
<code value="TBD:0000019"/>
<display value="Odds"/>
<definition
value="A ratio in which the numerator represents the probability that an event will occur and the denominator represents the probability that an event will not occur."/>
<property>
<code value="external-definitions"/>
<valueString
value="OCRe:
Odds is a quotient in which the relative likelihood that an event will occur is divided by the relative likelihood that it won't. In probability theory and statistics, where the variable "p" is the probability in favor of the event, and the probability against the event is 1-p, "the odds" of the event are the quotient of the two, or p / (1-p)"/>
</property>
<property>
<code value="comment"/>
<valueString
value="'Odds' and 'Odds ratio' are different terms. 'Odds' is a ratio of probabilities. 'Odds ratio' is a ratio of two different odds.
Odds are calculated as p / (1-p) where p is the probability of event occurrence. When p = 0, the odds = 0. When p = 1, the odds may be expressed as not calculable or as "odds against = 0".
Odds may be expressed as p:(1-p). Odds may be expressed as p:q where q = 1-p. Odds may be expressed as a:b where a and b are multiples of p and (1-p). Examples of different expressions of the same odds include 3:2, 3/2, 0.6:0.4, 0.6/0.4, and 1.5.
Odds may be expressed as "odds for" or "odds in favor" (e.g. 1:5 for a "3" on a 6-sided die) or "odds against" (e.g. 5:1 against a "3" on a 6-sided die).
The term "betting odds" used in gambling that involves financial amounts in the formulation is not an "Odds" in the definition of the Scientific Evidence Code System."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Khalid Shahin, Kenneth Wilkins, Muhammad Afzal"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-02-16 vote 4-2 by Janice Tufte, Paola Rosati, Eric Moyer, Harold Lehmann, Robin Ann Yurk, Jesus Lopez-Alcalde
2022-02-23 vote 5-2 by nisha mathew, Harold Lehmann, Paola Rosati, Sunu Alice Cherian, Robin Ann Yurk, Joanne Dehnbostel, Sumalatha A
2022-03-09 vote 3-1 by Robin Ann Yurk, Janice Tufte, Eric Moyer, nisha mathew
2022-03-16 vote 8-1 by Robin Ann Yurk, Janice Tufte, Eric Moyer, nisha mathew, Harold Lehmann, Philippe Rocca-Serra, Louis Leff, Paola Rosati, Mario Tristan"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-16 comments: The term definition and comment for application are clear and well written. It would help to have a discussion on the Parent and Child relationships for this term as right now you only have Statistic, Ratio, Odds. In statistics for the scientific code system is Statistic, Ratio, Odds Ratio a better sequence and put the Odds under comment for application.
This term needs two alternative terms: "Odds For" and "Odds in Favor." The definition needs to deal with the cases p=1 and p=0. (I can think of 3 questions regarding these cases. (1) Are they defined? (2) Is p=1 the same as ∞. (3) Does 3:0=1:0?)
We should mention that this term does not include gambling odds. (As I understand it, gambling odds are the ratio of stake to winnings with several representations and frequently have a "rounding" factor to ensure a profit for the bookmaker).
Another issue is whether to represent "Odds Against" in the vocabulary. It could come up when annotating an immutable pre-existing source that gives odds as odds against; for example, an NLP system that scans published works to output labels for sections of the text.
A term related to "Odds" missing from the parent branch, "Ratio," is "Log Odds."
(Not unique to this term, but I noticed it here) The children of "Statistic" should inherit the application comment from "Statistic" about distinguishing between the statistic and statistic value. That way, a reader will not need to read the whole tree to know that 1.5 is not "Odds"; it is "Odds statistic value." (However, I do not see a place for "Odds statistic value" in the tree.)
Finally, the repetition of "Odds may be expressed as" is awkward.
2022-02-23 comments: "Odds may be expressed as p:(1-p). Odds may be expressed as p:q where q = 1-p. " Sounds redundant.
alternative terms: Probability, likelihood, chance ---{{Group meeting decided that 'probability' and 'likelihood' are terms we may consider adding to the SEVCO but they are not alternative terms for odds; 'chance' is considered a lay term and not a specific statistical term for the code system}}
Odds is a computational function such as addition, subtraction, multiplication. Odds Ratio may be a better term for the term definition. This comment is based on your term definition and comment for application.
2022-03-09 comment: Edit the term definition: A ratio of probabilities in which the numerator represents the probability of the number of times an event will occur and the denominator represents the probability of the number of times an event will not occur. (Steering group 2022-03-09 considers the suggested change does not add clarification or improved understanding.)
2022-03-16 comment: I would delete likelihood from the term definition as in statistics it introduces a different formula such as likelihood ratio.
My suggestion is to simplify to a ratio in which the numerator represents the number of times an event will occur and the denominator represents the number of times an event will not occur. (Steering group 2022-03-16 again considers the suggested change to include "number of times" not persuasive, but changed "relative likelihood" to "probability" in the definition to avoid the potential confusion with likelihood ratio.)"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-03-22 vote 5-0 by Muhammad Afzal, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Robin Ann Yurk"/>
</property>
</concept>
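<!-- Illustrative worked example with a hypothetical probability:
with event probability p = 0.6, Odds = p / (1 - p) = 0.6 / 0.4 = 1.5, which may also be expressed as 3:2. -->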
<concept>
<code value="TBD:0000020"/>
<display value="Rate"/>
<definition
value="A ratio in which the numerator represents any quantity and the denominator represents an interval of time."/>
<property>
<code value="comment"/>
<valueString
value="When the numerator represents a count, the rate is an Event Rate."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="NCIt
Rate = A measurement of degree, speed, or frequency relative to time.
OBCS
rate= A quality of a single process inhering in a bearer by virtue of the bearer's occurrence per unit time.
OCRe
Rate = A rate is a quantity per unit of time."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-03-30 vote 4-2 by Cauê Monaco, Muhammad Afzal, Jesus Lopez-Alcalde, Philippe Rocca-Serra, Mario Tristan, Robin Ann Yurk
2022-04-06 vote 3-2 by Mario Tristan, Robin Ann Yurk, Paola Rosati, Jesus Lopez-Alcalde, nisha mathew"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-03-30 comments: A rate does not necessarily represent time. "In math, a rate is a special ratio in which the two terms are in different units"Edit term definition: A proportion represented by a rate of an event count for another quantified measure.
My comments are more focused on in the term definition and comment you have incomplete definitions or comments as you only describe in the term definition the denominator and in the comment you only describe the numerator.
Improvement Suggestion: By definition, a rate would have both a numerator and denominator so it is important for your to include in a definition both numerator and denominator.
The term definition should read: A ratio in which the numerator represents an event count and the denominator represents the total sum of the events considered as a count and non count.
The underlying concept for Rate is that the Denominator is a measure of time. So we need a definition where the numerator is X and the denominator is a measure time.
Our approach to definitions has been:
Ratio = A statistic that is a quotient of two quantities. [[By definition any statistic that is a ratio has a numerator and a denominator. Any statistic that has a numerator and a denominator is a Ratio, and may be given a more specific term when it is a type of Ratio.]] The Ratio definition inherits the Statistic definition so we do not re-define statistic.
Rate = A ratio in which the denominator represents a duration of time. This means that when we constrain the definition of ratio to limit to statistics where the denominator represents a duration of time, then the type of Ratio is a Rate. There is a logic to this approach to setting a definition, but your comment shows that it feels lacking because it does not mention the numerator. There is no constraint or modification being applied to the numerator.
Perhaps we can try “Rate = A ratio in which the numerator represents any quantity and the denominator represents a duration of time.”
Would that help clarify this item?
2022-04-06 comments: I would insert in the term definition, the numerator represents a quantity defined as a unit which is a smaller part of the denominator divided by the total sum of units in the denominator.
the concepts, "frequency of events" and "over a specified period of time" are not reflected in this definition
2022-04-27 comment: Edit term definition: A proportion represented by a rate of an event count or another quantified measure divided by the total sum of units. {{Discussion by Expert Working Group: The proposed definition describes a Proportion, but a Rate is NOT a Proportion.}}"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-12 vote 9-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew, Janice Tufte"/>
</property>
<concept>
<code value="TBD:0000024"/>
<display value="Incidence Rate"/>
<definition
value="A rate in which the number of new events per total at risk is divided by an interval of time."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Incidence density"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Average hazard rate"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="Incidence is defined as a proportion in which the numerator represents new events and the denominator represents the total at risk for events. Rate is defined as a ratio in which the numerator represents any quantity and the denominator represents an interval of time. The interval of time used for the denominator may be data-dependent when the duration of observation varies across the observations.
In the method for calculating incidence rate (described at https://www.cdc.gov/csels/dsepd/ss1978/lesson3/section2.html), the numerator is the "Number of new cases of disease or injury during the specified period" and the denominator is the "Time each person was observed, totaled for all persons""/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="NCIt
Incidence Rate = The frequency of new occurrences of an event during a specified time period.
CDC: https://www.cdc.gov/csels/dsepd/ss1978/lesson3/section2.html
Incidence refers to the occurrence of new cases of disease or injury in a population over a specified period of time. Although some epidemiologists use incidence to mean the number of new cases in a community, others use incidence to mean the number of new cases per unit of population.
Two types of incidence are commonly used — incidence proportion and incidence rate.
Synonyms for incidence rate
Person-time rate
Definition of incidence rate
Incidence rate or person-time rate is a measure of incidence that incorporates time directly into the denominator. A person-time rate is generally calculated from a long-term cohort follow-up study, wherein enrollees are followed over time and the occurrence of new cases of disease is documented. Typically, each person is observed from an established starting time until one of four “end points” is reached: onset of disease, death, migration out of the study (“lost to follow-up”), or the end of the study. Similar to the incidence proportion, the numerator of the incidence rate is the number of new cases identified during the period of observation. However, the denominator differs. The denominator is the sum of the time each person was observed, totaled for all persons. This denominator represents the total time the population was at risk of and being watched for disease. Thus, the incidence rate is the ratio of the number of cases to the total time the population is at risk of disease.
alternative terms for incidence rate (incidence density, average hazard) noted at https://www.sjsu.edu/faculty/gerstman/eks/formula_sheet.pdf"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-05-11 vote 7-1 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Janice Tufte, Harold Lehmann, Paola Rosati, Robin Ann Yurk"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-05-11 comment: Suggest improving current term definition with the definition in comment for application. The alternative terms I am not sure fit here--you may want to add more detail for the alternate terms to the comment for application.
2022-05-25 comment: The definition defines the ideal ("at risk"); very often, however, incidence rates are calculated more grossly. While they are semantically wrong, they are quantitatively correct. Classic: birth incidence. The proper denominator would be fertile women, but *could* be calculated "per woman" or even "per capita"."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-25 vote 6-0 by Jesus Lopez-Alcalde, Brian S. Alper, Joanne Dehnbostel, Eric M Harvey, Mario Tristan, Harold Lehmann"/>
</property>
</concept>
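<!-- Illustrative worked example with hypothetical data:
if 12 new cases occur during 400 person-years of observation,
Incidence Rate = 12 / 400 person-years = 0.03 per person-year (3 per 100 person-years). -->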
<concept>
<code value="TBD:0000025"/>
<display value="Hazard Rate"/>
<definition
value="A conditional instantaneous rate in which the numerator represents an incidence conditioned on survival to a specified time, and the denominator represents a time interval with a duration approaching zero."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Hazard"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Hazard Function"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Instantaneous hazard rate"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="In the definition of Hazard Rate, the term "survival" is not literally about life and death but is used to represent existence without experiencing the event. "Hazard" as a statistical term is not specific to "bad" or "dangerous" events.
A hazard rate is expressed as a unitless numerator per unit of time, occurring at a specified time, and conditioned on survival to that time.
A hazard rate is mathematically the negative derivative of the log of the survival function. The survival function is the probability of surviving past a specified point in time, expressed as Pr{ T >= t }.
A hazard rate is also mathematically defined as lim(dt -> 0) [ Pr{ ( t <= T < t + dt ) | ( T >= t ) } / dt ]."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Brian S. Alper"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="A Dictionary of Epidemiology (5 ed.) by Miquel Porta
Hazard rate = A theoretical measure of the probability of occurrence of an event per unit time at risk; e.g., death or new disease, at a point in time, t, defined mathematically as the limit, as ∆t approaches zero, of the probability that an individual well at time t will experience the event by t + ∆t, divided by ∆t.
formula expressed at https://data.princeton.edu/wws509/notes/c7s1"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-04-06 vote 4-3 by Mario Tristan, Robin Ann Yurk, Cauê Monaco, Harold Lehmann, Paola Rosati, Jesus Lopez-Alcalde, nisha mathew"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-04-06 comments: An instantaneous rate in which the numerator represents an incidence and the denominator represents a time interval conditioned on survival to a specified time with a duration approaching zero
A hazard is any danger or peril. It does not necessarily represent a survival/death relationship.
I would add a vote choice: No Comment-Specialized Term or Not Applicable or some other choice as this is specialized formula."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-12 vote 8-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew"/>
</property>
</concept>
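<!-- Illustrative approximation with hypothetical data:
if 100 people remain event-free at time t and 2 experience the event during the next 0.1 year,
the hazard rate near t is approximately (2/100) / 0.1 = 0.2 per year;
the exact hazard rate is the limit of this quantity as the interval length approaches zero. -->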
<concept>
<code value="TBD:0000168"/>
<display value="Event Rate"/>
<definition value="The number of occurrences per unit of time."/>
<property>
<code value="comment"/>
<valueString
value="An event rate is a ratio in which the numerator represents a count and the denominator represents an interval of time.
When the numerator represents a count:
--If the denominator includes an interval of time, the type of ratio is an Event Rate.
--If the denominator includes a count without an interval of time, the type of ratio is an Event Frequency.
--If the denominator includes a count and an interval of time, the type of ratio is an Event Frequency Rate.
--If the denominator includes an interval of space, the type of ratio is a Number Density"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-04-27 comment: The term definition and comment for application are comprehensive. However, for your comment for application I would only use the following...
An event rate is a ratio in which the numerator represents a count and the denominator represents an interval of time.
When the numerator represents a count:
--If the denominator includes an interval of time, the type of ratio is an Event Rate.
{{Expert Working Group discussion: the comment providing instructions for choosing among 4 related and confusing terms is considered useful for guidance, and purposefully mentions other terms that may be more appropriate.}}"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-12 vote 8-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew"/>
</property>
</concept>
<concept>
<code value="TBD:0000171"/>
<display value="Event Frequency Rate"/>
<definition
value="A ratio in which the numerator represents an event frequency and the denominator represents an interval of time."/>
<property>
<code value="comment"/>
<valueString
value="When the numerator represents a count:
--If the denominator includes an interval of time, the type of ratio is an Event Rate.
--If the denominator includes a count without an interval of time, the type of ratio is an Event Frequency.
--If the denominator includes a count and an interval of time, the type of ratio is an Event Frequency Rate.
--If the denominator includes an interval of space, the type of ratio is a Number Density"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-12 vote 9-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew, Janice Tufte"/>
</property>
</concept>
</concept>
<concept>
<code value="TBD:0000170"/>
<display value="Event Frequency"/>
<definition
value="A ratio in which the numerator represents a count and the denominator represents a count (without involving an interval of time)."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Frequentist Probability"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="When the numerator represents a count:
--If the denominator includes an interval of time, the type of ratio is an Event Rate.
--If the denominator includes a count without an interval of time, the type of ratio is an Event Frequency.
--If the denominator includes a count and an interval of time, the type of ratio is an Event Frequency Rate.
--If the denominator includes an interval of space, the type of ratio is a Number Density"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-12 vote 9-0 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew, Janice Tufte"/>
</property>
</concept>
<concept>
<code value="TBD:0000172"/>
<display value="Density"/>
<definition
value="A ratio in which the numerator represents any quantity and the denominator represents an interval of space (distance, area, or volume)."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-08 vote 7-0 by Mario Tristan, Janice Tufte, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew, Eric M Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-05-08 comment: Examples would be nice, since "linear density" is not a traditional measure"/>
</property>
<concept>
<code value="TBD:0000173"/>
<display value="Number Density"/>
<definition
value="A ratio in which the numerator represents a count and the denominator represents an interval of space (distance, area, or volume)."/>
<property>
<code value="comment"/>
<valueString
value="When the numerator represents a count:
--If the denominator includes an interval of time, the type of ratio is an Event Rate.
--If the denominator includes a count without an interval of time, the type of ratio is an Event Frequency.
--If the denominator includes a count and an interval of time, the type of ratio is an Event Frequency Rate.
--If the denominator includes an interval of space, the type of ratio is a Number Density"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-05-12 vote 8-1 by Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew, Janice Tufte"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-05-12 comment: I wonder if we could define better "Number Density" as Density ratio {{2022-05-18 discussion found 2 instances of the term 'Number density' matching our definition, and the term 'density ratio' defines a density divided by a density which does not match this concept.}}"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="Wikipedia https://en.wikipedia.org/wiki/Number_density
The number density (symbol: n or ρN) is an intensive quantity used to describe the degree of concentration of countable objects (particles, molecules, phonons, cells, galaxies, etc.) in physical space: three-dimensional volumetric number density, two-dimensional areal number density, or one-dimensional linear number density. Population density is an example of areal number density.
IUPAC Gold Book https://goldbook.iupac.org/terms/view/N04262
number density, n
Number of particles divided by the volume they occupy."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-27 vote 10-0 by Khalid Shahin, Joanne Dehnbostel, Eric Harvey, raradhikaag@gmail.com, Jesus Lopez-Alcalde, Mario Tristan, Harold Lehmann, Muhammad Afzal, nisha mathew, Janice Tufte"/>
</property>
</concept>
</concept>
<concept>
<code value="TBD:0000175"/>
<display value="Concentration"/>
<definition
value="A ratio in which the numerator is a measure of the solute and the denominator is a measure of the solvent."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins, Harold Lehmann"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-05-08 vote 6-0 by Mario Tristan, Harold Lehmann, Paola Rosati, Muhammad Afzal, nisha mathew, Eric M Harvey"/>
</property>
</concept>
</concept>
<concept>
<code value="TBD:0000040"/>
<display value="Measure of Association"/>
<definition
value="A statistic that quantitatively represents a relationship between two or more variables."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal, Neeraj Ojha"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-24 comment: There are measures of association between more than two variables, for example, an estimator of interaction information. So, this should be "two or more variables" (or just "variables").
Also, I don't like the term "represents", I'd prefer to say "A statistic that quantifies a relationship between variables.""/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-03-16 vote 7-0 by Mario Tristan, Paola Rosati, Louis Leff, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte"/>
</property>
<concept>
<code value="TBD:0000021"/>
<display value="Ratio-based Measure of Association"/>
<definition value="A measure of association expressed as a ratio."/>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Kenneth Wilkins, Brian S. Alper, Joanne Dehnbostel, Janice Tufte, Muhammad Afzal"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-02-24 comment: I think we should replace "represents" with "quantifies" and remove the restriction to two variables.
"A statistic that is a ratio and quantifies a relationship between variables."
Second, I think you want a more restrictive definition than a statistic that is a ratio. For example, the uncertainty coefficient, I(X;Y)/H(Y), is a ratio and a measure of association, but I don't think you'd consider it a ratio-based measure of association (maybe you would, in which case this is OK).
You should also consider whether monotonic transformations of ratios count as ratio-based measures. It is common for people to take logarithms of ratios.
I'm not sure what the utility is of this category. When does someone need it? Could we just put all its children under Measure of Association?"/>
</property>
<property>
<code value="comment"/>
<valueString
value="This categorical (parent) term can be used for a statistic that is a ratio, quantifies a relationship between two variables, and is not found in the child terms."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-12-28 vote 7-0 by Janice Tufte, Mario Tristan, Joanne Dehnbostel, Harold Lehman, Yuan Gao, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
<concept>
<code value="TBD:0000026"/>
<display value="Hazard Ratio"/>
<definition
value="A measure of association that is the ratio of the hazard rate of an event in one group to the hazard rate of the same event in another group."/>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Association"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="NCIt:
Hazard ratio = A measure of how often a particular event happens in one group compared to how often it happens in another group, over time. In cancer research, hazard ratios are often used in clinical trials to measure survival at any point in time in a group of patients who have been given a specific treatment compared to a control group given another treatment or a placebo. A hazard ratio of one means that there is no difference in survival between the two groups. A hazard ratio of greater than one or less than one means that survival was better in one of the groups.
https://www.statisticshowto.com/hazard-ratio/
The hazard ratio is a comparison between the probability of events in a treatment group, compared to the probability of events in a control group.
Hazard Ratio in Clinical Trials (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC478551/)
The hazard ratio is an estimate of the ratio of the hazard rate in the treated versus the control group. The hazard rate is the probability that if the event in question has not already occurred, it will occur in the next time interval, divided by the length of that interval. The time interval is made very short, so that in effect the hazard rate represents an instantaneous rate.
The Hazards of Hazard Ratios (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3653612/)
The hazard ratio (HR) is the main, and often the only, effect measure reported in many epidemiologic studies. For dichotomous, non–time-varying exposures, the HR is defined as the hazard in the exposed groups divided by the hazard in the unexposed groups. For all practical purposes, hazards can be thought of as incidence rates and thus the HR can be roughly interpreted as the incidence rate ratio. The HR is commonly and conveniently estimated via a Cox proportional hazards model, which can include potential confounders as covariates."/>
</property>
<property>
<code value="comment"/>
<valueString
value="Hazard rate (SEVCO TBD:0000025) is defined as: A conditional instantaneous rate in which the numerator represents an incidence conditioned on survival to a specified time, and the denominator represents a time interval with a duration approaching zero.
The groups being compared are often the exposed group versus the unexposed group, but hazard ratio can also be applied to comparisons of one exposure relative to another exposure.
A hazard ratio of one means there is no difference between two groups in terms of their hazard rates, based on whether or not they were exposed to a certain substance or factor, or how they responded to two interventions being compared. A hazard ratio of greater than one implies an association of greater risk, and a hazard ratio of less than one implies an association of lower risk.
The hazard ratio can be calculated from studies in which the proportion of exposed participants who had the event is known, the proportion of unexposed participants who had the event is known, and the timing of events for each participant is known or estimable, such as a cohort study or clinical trial."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-08 vote 5-0 by Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati"/>
</property>
</concept>
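<!-- Illustrative worked example with hypothetical hazard rates:
if the hazard rate is 0.10 per year in the exposed group and 0.20 per year in the comparison group,
Hazard Ratio HR = 0.10 / 0.20 = 0.5. -->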
<concept>
<code value="TBD:0000027"/>
<display value="Incidence Rate Ratio"/>
<definition
value="A measure of association that is the ratio of two incidence rates."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="IRR"/>
</designation>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Association"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Incidence Rate (SEVCO TBD:0000024) is defined as: A rate in which the number of new events per total at risk is divided by an interval of time.
The incidence rates may refer to the same event comparing two different groups, or the same group comparing two different events."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-08 vote 5-0 by Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati"/>
</property>
<concept>
<code value="TBD:0000032"/>
<display value="Standardized Incidence Ratio"/>
<definition
value="An incidence rate ratio in which the numerator is the incidence rate in a group and the denominator is the incidence rate for a reference population."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="SIR"/>
</designation>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Association"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The incidence rate used for the denominator may be an expected incidence rate for a reference population. The reference population may refer to a general population of the geographic area from which the cohort was selected."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-08 vote 5-0 by Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-06-08 comment: ... and the denominator is the incidence rate or expected incidence rate for a reference population.
Comment for application: The reference population may refer to a general population of the geographic area from which the cohort was selected."/>
</property>
</concept>
</concept>
<concept>
<code value="STATO:0000182"/>
<display value="Odds Ratio"/>
<definition
value="A measure of association that is the ratio of two odds."/>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Kenneth Wilkins, Brian S. Alper, Joanne Dehnbostel, Janice Tufte"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: odds ratio (OR) = Odds ratio is a ratio that measures effect size, that is the strength of association between 2 dichotomous variables, one describing an exposure and one describing an outcome.
It represents the odds that an outcome will occur given a particular exposure, compared to the odds of the outcome occurring in the absence of that exposure ( the probability of the event occuring divided by the probability of an event not occurring). The odds ratio is a ratio of describing the strength of association or non-independence between two binary data values by forming the ratio of the odds for the first group and the odds for the second group. Odds ratio are used when one wants to compare the odds of something occurring to two different groups.
UMLS: The ratio of two odds. The exposure-odds ratio for case control data is the ratio of the odds in favor of exposure among cases to the odds in favor of exposure among noncases. The disease-odds ratio for a cohort or cross section is the ratio of the odds in favor of disease among the exposed to the odds in favor of disease among the unexposed. The prevalence-odds ratio refers to an odds ratio derived cross-sectionally from studies of prevalent cases. (MSH)
A measure of the odds of an event happening in one group compared to the odds of the same event happening in another group. In cancer research, odds ratios are most often used in case-control (backward looking) studies to find out if being exposed to a certain substance or other factor increases the risk of cancer. For example, researchers may study a group of individuals with cancer (cases) and another group without cancer (controls) to see how many people in each group were exposed to a certain substance or factor. They calculate the odds of exposure in both groups and then compare the odds. An odds ratio of one means that both groups had the same odds of exposure and, therefore, the exposure probably does not increase the risk of cancer. An odds ratio of greater than one means that the exposure may increase the risk of cancer, and an odds ratio of less than one means that the exposure may reduce the risk of cancer. (NCI)
The ratio of the odds of an event occurring in one group to the odds of it occurring in another group, or to a sample-based estimate of that ratio. (NCI)
NICE: Compares the odds (probability) of something happening in 1 group with the odds of it happening in another. An odds ratio of 1 shows that the odds of the event happening (for example, a person developing a disease or a treatment working) is the same for both groups. An odds ratio of greater than 1 means that the event is more likely in the first group than the second. An odds ratio of less than 1 means that the event is less likely in the first group than in the second group."/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Association"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-03-16 vote 8-0 by Mario Tristan, Paola Rosati, Louis Leff, nisha mathew, Philippe Rocca-Serra, Harold Lehmann, Janice Tufte, Eric Moyer"/>
</property>
</concept>
<concept>
<code value="TBD:0000028"/>
<display value="Prevalence Ratio"/>
<definition
value="A measure of association that is the ratio of two prevalences."/>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Association"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="https://www.ctspedia.org/do/view/CTSpedia/PrevalenceRatio#:~:text=Reference-,Definition%20of%20Prevalence%20Ratio,the%20proportion%20with%20the%20exposure.
The ratio of the proportion of the persons with disease over the proportion with the exposure.
Calculation is described here:
https://sphweb.bumc.bu.edu/otlt/MPH-Modules/PH717-QuantCore/PH717-Module3-Frequency-Association/PH717-Module3-Frequency-Association12.html"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Prevalence (SEVCO STATO:0000412) is defined as: A proportion in which the numerator represents all events of interest (for example, both new and preexisting cases of a disease) in the population, which is represented by the denominator.
The Prevalence Ratio indicates the magnitude of the prevalence of an event/outcome in one group of subjects/individuals (with characteristics/attribute) relative to another group (with different characteristics/attributes), such as the prevalence of the disease among the exposed persons to the prevalence of the disease among the unexposed persons."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal, Janice Tufte"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-08 vote 5-0 by Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-06-08 comment: Comment for application: The prevalence Ratio indicates how large is the prevalence of an event/outcome in one group of subjects/individuals (with characteristics/attribute) relative to another group (without the characteristics/attributes), such as the prevalence of the disease among the exposed persons to the prevalence of the disease among the unexposed persons."/>
</property>
</concept>
<concept>
<code value="STATO:0000245"/>
<display value="Risk Ratio"/>
<definition
value="A measure of association that is the ratio of the risk of an event in one group to the risk of the same event in another group."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Relative Risk"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: relative risk (risk ratio) = Relative risk is a measurement datum which denotes the risk of an 'event' relative to an 'exposure'. Relative risk is calculated by forming the ratio of the probability of the event occurring in the exposed group versus the probability of this event occurring in the non-exposed group.
NCIt Relative Risk A measure of the risk of a certain event happening in one group compared to the risk of the same event happening in another group. In cancer research, risk ratios are used in prospective (forward looking) studies, such as cohort studies and clinical trials. A risk ratio of one means there is no difference between two groups in terms of their risk of cancer, based on whether or not they were exposed to a certain substance or factor, or how they responded to two treatments being compared. A risk ratio of greater than one or of less than one usually means that being exposed to a certain substance or factor either increases (risk ratio greater than one) or decreases (risk ratio less than one) the risk of cancer, or that the treatments being compared do not have the same effects
OBCS relative risk A data item that equals the incidence in exposed individuals divided by the incidence in unexposed individuals. The relative risk can be calculated from studies in which the proportion of patients exposed and unexposed to a risk is known, such as a cohort study.
CDC https://www.cdc.gov/csels/dsepd/ss1978/lesson3/section5.html:
A risk ratio (RR), also called relative risk, compares the risk of a health event (disease, injury, risk factor, or death) among one group with the risk among another group. It does so by dividing the risk (incidence proportion, attack rate) in group 1 by the risk (incidence proportion, attack rate) in group 2. The two groups are typically differentiated by such demographic factors as sex (e.g., males versus females) or by exposure to a suspected risk factor (e.g., did or did not eat potato salad). Often, the group of primary interest is labeled the exposed group, and the comparison group is labeled the unexposed group."/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Association"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The groups being compared are often the exposed group versus the unexposed group, but risk ratio can also be applied to comparisons of one exposure relative to another exposure.
A risk ratio of one means there is no difference between two groups in terms of their risk, based on whether or not they were exposed to a certain substance or factor, or how they responded to two interventions being compared. A risk ratio of greater than one implies an association of greater risk, and a risk ratio of less than one implies an association of lower risk.
The risk ratio can be calculated from studies in which the proportion of exposed participants who had the event is known and the proportion of unexposed participants who had the event is known, such as a cohort study or clinical trial."/>
</property>
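<!--
Illustrative note, not part of the code system: a minimal Python sketch of the risk ratio calculation described above, using hypothetical counts of events and participants in an exposed and an unexposed group.
events_exposed, n_exposed = 30, 100
events_unexposed, n_unexposed = 15, 100
risk_exposed = events_exposed / n_exposed        # incidence proportion in group 1
risk_unexposed = events_unexposed / n_unexposed  # incidence proportion in group 2
risk_ratio = risk_exposed / risk_unexposed
print(round(risk_ratio, 2))  # 2.0: the event is twice as likely in the exposed group
-->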
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-06-08 vote 6-0 by Robin Ann Yurk, Harold Lehmann, Mario Tristan, Eric M Harvey, Muhammad Afzal, Paola Rosati"/>
</property>
</concept>
<concept>
<code value="STATO:0000411"/>
<display value="Likelihood Ratio Positive"/>
<definition
value="A measure of association that is the ratio of the probability of the test giving a positive result when testing an affected subject and the probability of the test giving a positive result when a subject is not affected."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Positive Likelihood Ratio"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="LR+"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="+LR"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: positive likelihood ratio (likelihood ratio for positive results) = the likelihood ratio of positive results is a ratio which is form by dividing the sensitivity value of a test by the difference between 1 and specificity of the test. This can be expressed also as dividing the probability of the test giving a positive result when testing an affected subject versus the probability of the test giving a positive result when a subject is not affected.
AHRQ https://effectivehealthcare.ahrq.gov/products/test-performance-metrics/appendixes:
The positive and negative likelihood ratios (LR+ and LR-, respectively) quantify the change in the certainty of the “diagnosis” conferred by test results. More specifically, the likelihood ratios transform the pretest odds to the posttest odds of a given (positive or negative) diagnosis:
posttest odds = pretest odds x LR
For a positive result with the medical test, the positive likelihood ratio would be used in the above relationship; for a negative result with the medical test portable monitor, the negative likelihood ratio would be used.
If a given medical test has very good ability to predict the “true disease status,” its positive likelihood ratio will be high (i.e., will greatly increase the odds of a positive diagnosis) and its negative likelihood ratio will be low (i.e., will diminish substantially the likelihood of the positive diagnosis). A completely non-informative portable monitor would have likelihood ratios equal to 1 (i.e., does not transform the pre-test odds substantially in the equation above). Typically, a positive likelihood ratio of 10 or more and a negative likelihood ratio of 0.1 or less are considered to represent informative tests.3 We note that other, more lenient boundaries for LR+ and LR- can be used3 and that the choice of the boundaries is a subjective decision. It is interesting to note that studies with high LR+ and low LR- can be readily identified in the square sensitivity/100 percent-specificity plot, as shown in the Appendix Figure above."/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Association"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The probability of the test giving a positive result when testing an affected subject is also called the sensitivity [SEVCO term STATO:0000233] or true positive rate. The probability of the test giving a positive result when a subject is not affected is called the false positive rate and is calculated as 1 minus the specificity [SEVCO term STATO:0000134]. The Likelihood Ratio Positive (LR+) is calculated as Sensitivity / (1 - Specificity).
The Likelihood Ratio Positive may also be calculated as the posterior probability (positive predictive value) divided by the prior probability (prevalence).
When the test result is a specific value on a continuous scale, the Likelihood Ratio Positive is the ratio of the likelihood of the test giving the specific value when testing an affected subject and the likelihood of the test giving the specific value when a subject is not affected.
In the context of a probability distribution function, e.g. normal distribution, the x axis is the value and y axis is the likelihood."/>
</property>
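<!--
Illustrative note, not part of the code system: a minimal Python sketch of the Likelihood Ratio Positive calculation described above, Sensitivity / (1 - Specificity), using hypothetical sensitivity and specificity values.
sensitivity = 0.90
specificity = 0.80
lr_positive = sensitivity / (1 - specificity)
print(round(lr_positive, 2))  # 4.5: a positive result multiplies the pretest odds by 4.5
-->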
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-08-10 vote 5-0 by Robin Ann Yurk, Mario Tristan, Harold Lehmann, Eric Harvey, Paola Rosati"/>
</property>
</concept>
<concept>
<code value="STATO:0000410"/>
<display value="Likelihood Ratio Negative"/>
<definition
value="A measure of association that is the ratio of the probability of the test giving a negative result when testing an affected subject and the probability of the test giving a negative result when a subject is not affected."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Negative Likelihood Ratio"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="LR-"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="-LR"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: negative likelihood ratio (likelihood ratio for negative results) = the likelihood ratio of negative results is a ratio which is formed by dividing the difference between 1 and sensitivity of the test by the specificity value of a test.. This can be expressed also as dividing the probability of a person who has the disease testing negative by the probability of a person who does not have the disease testing negative.
AHRQ https://effectivehealthcare.ahrq.gov/products/test-performance-metrics/appendixes:
The positive and negative likelihood ratios (LR+ and LR-, respectively) quantify the change in the certainty of the “diagnosis” conferred by test results. More specifically, the likelihood ratios transform the pretest odds to the posttest odds of a given (positive or negative) diagnosis:
posttest odds = pretest odds x LR
For a positive result with the medical test, the positive likelihood ratio would be used in the above relationship; for a negative result with the medical test portable monitor, the negative likelihood ratio would be used.
If a given medical test has very good ability to predict the “true disease status,” its positive likelihood ratio will be high (i.e., will greatly increase the odds of a positive diagnosis) and its negative likelihood ratio will be low (i.e., will diminish substantially the likelihood of the positive diagnosis). A completely non-informative portable monitor would have likelihood ratios equal to 1 (i.e., does not transform the pre-test odds substantially in the equation above). Typically, a positive likelihood ratio of 10 or more and a negative likelihood ratio of 0.1 or less are considered to represent informative tests.3 We note that other, more lenient boundaries for LR+ and LR- can be used3 and that the choice of the boundaries is a subjective decision. It is interesting to note that studies with high LR+ and low LR- can be readily identified in the square sensitivity/100 percent-specificity plot, as shown in the Appendix Figure above."/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Association"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The probability of the test giving a negative result when testing an affected subject is also called the false negative rate and is calculated as 1 minus the sensitivity [SEVCO term STATO:0000233]. The probability of the test giving a negative result when a subject is not affected is called the specificity [SEVCO term STATO:0000134] or true negative rate. The Likelihood Ratio Negative (LR-) is calculated as (1 - Sensitivity ) / Specificity.
The Likelihood Ratio Negative may also be calculated as the posterior probability (1 - negative predictive value) divided by the prior probability (prevalence)."/>
</property>
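<!--
Illustrative note, not part of the code system: a minimal Python sketch of the Likelihood Ratio Negative calculation described above, (1 - Sensitivity) / Specificity, using the same hypothetical sensitivity and specificity values as the LR+ sketch.
sensitivity = 0.90
specificity = 0.80
lr_negative = (1 - sensitivity) / specificity
print(round(lr_negative, 3))  # 0.125: a negative result multiplies the pretest odds by 0.125
-->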
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-08-10 vote 5-0 by Robin Ann Yurk, Mario Tristan, Harold Lehmann, Eric Harvey, Paola Rosati"/>
</property>
</concept>
<concept>
<code value="TBD:0000029"/>
<display value="Positive Clinical Utility Index"/>
<definition value="DEFERRED"/>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Association"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="Mitchell AJ 2011 https://www.psycho-oncology.info/686.pdf https://link.springer.com/article/10.1007/s10654-011-9561-x
positive clinical utility index = sensitivity x PPV
Asberg 2019 A new index of clinical utility for diagnostic tests at https://www.tandfonline.com/doi/full/10.1080/00365513.2019.1677938
We propose a new clinical utility index (CUI), which is the expected gain in utility (EGU) of the test divided by the EGU of an ideal test, both adjusted for EGU of the optimal clinical action without testing. The index expresses the relative benefit of using the test compared to using an optimal test when making a clinical decision.
Expected gain in utility (EGU) of a clinical option, at a certain probability of disease (p), is the difference between its expected utility and the expected utility of another option, for instance doing nothing [4]. The EGU of the option W at probability p is EGU_p(W) = p × B_W − (1 − p) × C_W. ... CUI is then a complicated equation."/>
</property>
<property>
<code value="change-for-vote"/>
<valueString
value="2022-08-10 discussion: Considering 2 source definitions that are incompatible and limited usage overall, decision made to defer this term to future consideration for SEVCO."/>
</property>
</concept>
<concept>
<code value="TBD:0000030"/>
<display value="Negative Clinical Utility Index"/>
<definition value="DEFERRED"/>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Association"/>
</property>
<property>
<code value="external-definitions"/>
<valueString value="see Positive Clinical Utility Index"/>
</property>
<property>
<code value="change-for-vote"/>
<valueString
value="2022-08-10 discussion: Considering 2 source definitions that are incompatible and limited usage overall, decision made to defer this term to future consideration for SEVCO."/>
</property>
</concept>
<concept>
<code value="STATO:0000415"/>
<display value="Diagnostic Accuracy"/>
<definition
value="A measure of association that is the ratio of the number of correct results to the total number tested."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Categorical Accuracy"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Classification Accuracy"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: "accuracy (Rand accuracy, Rand index) = in the context of binary classification, accuracy is defined as the proportion of true results (both true positives and true negatives) to the total number of cases examined (the sum of true positive, true negative, false positive and false negative).
It can be understood as a measure of the proximity of measurement results to the true value.""/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Association"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Where results are reported as positive or negative, correct results are reported as true, and incorrect results are reported as false, the diagnostic accuracy is calculated as ( True Positives + True Negatives ) / ( True Positives + True Negatives + False Positives + False Negatives ).
For continuous values, Measurement Accuracy (SEVCO term: TBD:MeasAccu) would be used instead of Diagnostic Accuracy."/>
</property>
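<!--
Illustrative note, not part of the code system: a minimal Python sketch of the diagnostic accuracy calculation described above, using hypothetical true positive, true negative, false positive, and false negative counts.
tp, tn, fp, fn = 45, 40, 10, 5
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy)  # 0.85: 85 of 100 tested subjects were classified correctly
-->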
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-08-24 vote 6-0 by Jesus Lopez-Alcalde, Harold Lehmann, Mario Tristan, Cauê Monaco, Eric Harvey, Janice Tufte"/>
</property>
</concept>
<concept>
<code value="TBD:0000033"/>
<display value="Diagnostic Odds Ratio"/>
<definition
value="A measure of association that is the ratio of the odds of a positive test in those with disease relative to the odds of a positive test in those without disease."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="DOR"/>
</designation>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Association"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="AHRQ https://effectivehealthcare.ahrq.gov/products/test-performance-metrics/appendixes:
The diagnostic odds ratio (DOR) describes the odds of a positive test in those with disease relative to the odds of a positive test in those without disease.4 It can be computed in terms of sensitivity and specificity as well as in terms of positive and negative likelihood ratios (DOR = LR+/LR-). Thus this single measure includes information about both sensitivity and specificity and tends to be reasonably constant despite diagnostic threshold. However, it is impossible to use diagnostic odds ratios to weigh sensitivity and specificity separately, and to distinguish between tests with high sensitivity and low specificity and tests with low sensitivity and high specificity.
Another disadvantage is that it is difficult for clinicians to understand and apply, limiting its clinical value. This is partly because they are not often exposed to diagnostic odds ratios. A diagnostic odds ratio is similar to an odds ratio that measures strength of association in an observational study or effect size in a trial. However, contrary to the typical effect size magnitudes of such odds ratios (often between 0.5 and 2), diagnostic odds ratios can attain much larger values (often greater than 100)."/>
</property>
<property>
<code value="comment"/>
<valueString
value="The Diagnostic Odds Ratio may be calculated as the Likelihood Ratio Positive divided by the Likelihood Ratio Negative. The Diagnostic Odds Ratio is an overall measure of the discriminatory power of a test and does not distinguish between the power to detect (rule in) or exclude (rule out)."/>
</property>
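<!--
Illustrative note, not part of the code system: a minimal Python sketch of the Diagnostic Odds Ratio computed as LR+ divided by LR-, as described above, using hypothetical sensitivity and specificity values.
sensitivity, specificity = 0.90, 0.80
lr_positive = sensitivity / (1 - specificity)
lr_negative = (1 - sensitivity) / specificity
diagnostic_odds_ratio = lr_positive / lr_negative
print(round(diagnostic_odds_ratio, 1))  # 36.0
-->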
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-08-31 vote 6-0 by Janice Tufte, nisha mathew, Muhammad Afza, Harold Lehmann, Philippe Rocca-Serra, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="STATO:0000524"/>
<display value="Phi coefficient"/>
<definition
value="A measure of association, ranging from -1 to 1, that measures the strength and direction of the linear relationship between two binary variables."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Matthews Correlation Coefficient"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Yule phi"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Mean Square Contingency Coefficient"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: Matthews correlation coefficient (MCC) = Matthews Correlation Coefficient (or MCC) is a correlation coefficient which is a measure of the quality of binary (two-class) classifications, introduced by biochemist Brian W. Matthews in 1975."/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Correlation"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal"/>
</property>
<property>
<code value="comment"/>
<valueString
value="For a 2×2 contingency table where A, B, C, and D represent the observation frequencies (the cell count), the formula for the phi coefficient ($\Phi$) is:
$$
\Phi = \frac{AD - BC}{\sqrt{(A+B)(C+D)(A+C)(B+D)}}
$$"/>
</property>
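<!--
Illustrative note, not part of the code system: a minimal Python sketch of the phi coefficient formula shown above, using hypothetical 2x2 cell counts A, B, C, D.
from math import sqrt
A, B, C, D = 40, 10, 20, 30
phi = (A * D - B * C) / sqrt((A + B) * (C + D) * (A + C) * (B + D))
print(round(phi, 3))  # 0.408
-->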
<property>
<code value="approval"/>
<valueString
value="2023-01-25 vote 6-0 by Mario Tristan, Jesus Lopez-Alcalde, Joanne Dehnbostel, Harold Lehmann, Yuan Gao, Eric Harvey"/>
</property>
</concept>
</concept>
<concept>
<code value="TBD:0000031"/>
<display value="Measure of Agreement"/>
<definition
value="A measure of association of two variables representing measurements of the same attribute of an entity."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The term 'Measure of Agreement' is primarily used as a class for types of measure of agreement listed in the hierarchy but may be used as the code for a measure of agreement that is not listed."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-12-21 vote 5-0 by Mario Tristan, Philippe Rocca-Serra, Eric Harvey, Janice Tuft, Harold Lehmann"/>
</property>
<concept>
<code value="TBD:0000034"/>
<display value="Kappa"/>
<definition
value="A measure of agreement among categorical assessments, corrected for chance agreement."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Kappa statistic"/>
</designation>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Agreement"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="OBCS kappa statistic = a generic term for several similar measures of agreement used with categorical data; typically used in assessing the degree to which two or more raters, examining the same data, agree on assigning data to categories"/>
</property>
<property>
<code value="comment"/>
<valueString
value="In the literature, the same eponymic term (e.g., 'Cohen's kappa') is used with different formulas. In SEVCO, we define each term with a single formula, and recommend annotators to choose the SEVCO term based on the formula.
This is a widely used term to measure inter-rater reliability. Refer to measures of association to see other terms: for example, intra-class correlation coefficient (ICC)."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-09-14 (After deleting one "yes" vote
at the request of the voter) vote 6-0 by Nisha Mathew, Philippe Rocca-Serra, Harold Lehmann, Eric Harvey, Jesus Lopez-Alcalde, Khalid Shahin"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-09-14 Comment "I recommend adding ....is a measure of interrater reliability or is this an Interrater reliability testing an alternate term.""/>
</property>
<concept>
<code value="TBD:0000037"/>
<display value="Simple chance-corrected agreement coefficient"/>
<definition
value="A Kappa statistic in which the expected agreement by chance is based on an assumption that all possible categories for assignment are equally likely."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Bennett’s Kappa"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Brennan-Prediger agreement coefficient"/>
</designation>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Agreement"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A Kappa statistic is a measure of agreement among categorical assessments, corrected for chance agreement.
In the simple chance-corrected agreement coefficient, the expected chance agreement is modeled as the inverse of the number of categories (1/q) where q is the number of possible categories for assignment.
The simple chance-corrected agreement coefficient is calculated as ( p[a] − 1/q ) / ( 1 − 1/q ) where p[a] is the observed percent agreement and q is the number of possible categories for assignment."/>
</property>
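<!--
Illustrative note, not part of the code system: a minimal Python sketch of the simple chance-corrected agreement coefficient formula described above, using a hypothetical observed percent agreement and number of categories.
observed_agreement = 0.70   # p[a], observed percent agreement
q = 4                       # number of possible categories for assignment
chance_agreement = 1 / q
kappa_bp = (observed_agreement - chance_agreement) / (1 - chance_agreement)
print(round(kappa_bp, 2))  # 0.6
-->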
<property>
<code value="external-definitions"/>
<valueString
value="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5965565
Gwet KL. Testing the Difference of Correlated Agreement Coefficients for Statistical Significance. Educ Psychol Meas. 2016 Aug;76(4):609-637. doi: 10.1177/0013164415596420. Epub 2015 Jul 28. PMID: 29795880; PMCID: PMC5965565.
Brennan and Prediger (1981) proposed a simple chance-corrected agreement coefficient, which generalizes to multiple raters and multiple categories, the G-index previously proposed by Holley and Guilford (1964) for two raters and two categories. What is known as the Holley–Guilford G-index was previously proposed independently by various authors under different names. Among them are Guttman (1945), Bennett, Alpert, and Goldstein (1954), and Maxwell (1977). For an interrater reliability experiment involving r raters who classify n subjects into one of q possible categories, the Brennan-Prediger coefficient is given by
κ[BP] = ( p[a] − 1/q ) / ( 1 − 1/q ),
where the percent agreement p[a] is defined by Equation (3 -- see https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5965565/#disp-formula3-0013164415596420), and the percent chance agreement is a constant representing the inverse of the number of categories."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-09-14 vote 6-0 by Nisha Mathew, Philippe Rocca-Serra, Harold Lehmann, Eric Harvey, Jesus Lopez-Alcalde, Khalid Shahin"/>
</property>
</concept>
<concept>
<code value="TBD:0000038"/>
<display value="Cohen’s Kappa"/>
<definition
value="A Kappa statistic in which the expected agreement by chance is based on an assumption that the likelihood of each category for assignment is based on the proportion observed, and the number of raters is 2."/>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Agreement"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="OBCS cohen's kappa measurement (inter-rater agreement, inter-annotator agreement; inter-rater agreement, inter-annotator agreement) = a statistical measure of agreement for categorical data; a measure of inter-rater agreement or inter-annotator agreement"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A Kappa statistic is a measure of agreement among categorical assessments, corrected for chance agreement.
In Cohen's kappa, the expected chance agreement is modeled as the summation of the differences, between the square of the expected probability of the category and the quotient of its variance divided by 2 (the number of raters), for each category.
Cohen's kappa is calculated as ( p[a] − p[e] ) / ( 1 − p[e] ) where p[a] is the observed percent agreement and p[e] is the expected chance agreement."/>
</property>
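<!--
Illustrative note, not part of the code system: a minimal Python sketch of the kappa formula (p[a] - p[e]) / (1 - p[e]) described above, using hypothetical category proportions for two raters and the product-of-marginals estimate of chance agreement (equivalent to the two-rater formulation above).
rater1_proportions = [0.6, 0.4]   # proportion of items rater 1 assigned to each category
rater2_proportions = [0.5, 0.5]   # proportion of items rater 2 assigned to each category
observed_agreement = 0.70         # p[a], observed in the hypothetical data
chance_agreement = sum(p1 * p2 for p1, p2 in zip(rater1_proportions, rater2_proportions))
cohens_kappa = (observed_agreement - chance_agreement) / (1 - chance_agreement)
print(round(cohens_kappa, 2))  # 0.4
-->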
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-09-14 vote 6-0 by Nisha Mathew, Philippe Rocca-Serra, Harold Lehmann, Eric Harvey, Jesus Lopez-Alcalde, Khalid Shahin"/>
</property>
</concept>
<concept>
<code value="TBD:0000155"/>
<display value="Modified Cohen’s Kappa for more than 2 raters"/>
<definition
value="A Kappa statistic in which the expected agreement by chance is based on an assumption that the likelihood of each category for assignment is based on the proportion observed, and the number of raters is more than 2."/>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Agreement"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="OBCS cohen's kappa measurement (inter-rater agreement, inter-annotator agreement; inter-rater agreement, inter-annotator agreement) = a statistical measure of agreement for categorical data; a measure of inter-rater agreement or inter-annotator agreement"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A Kappa statistic is a measure of agreement among categorical assessments, corrected for chance agreement.
In the modified Cohen's kappa for more than 2 raters, the expected chance agreement is modeled as the summation of the differences, between the square of the expected probability of the category and the quotient of its variance divided by the number of raters, for each category.
The modified Cohen's kappa for more than 2 raters is calculated as ( p[a] − p[e] ) / ( 1 − p[e] ) where p[a] is the observed percent agreement and p[e] is the expected chance agreement."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-09-14 vote 5-0 by Nisha Mathew, Philippe Rocca-Serra, Harold Lehmann, Eric Harvey, Jesus Lopez-Alcalde"/>
</property>
</concept>
<concept>
<code value="TBD:0000039"/>
<display value="Scott’s pi"/>
<definition
value="A Kappa statistic where the expected agreement between two raters is expressed in terms of the square of arithmetic means of marginal proportions of each assessment category."/>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Agreement"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Scott's pi is a kappa statistic for two raters that assumes the likelihood of each category for assignment is based on the same distribution of rater responses, leading to the use of squared arithmetic means of the marginal proportion of each assessment category as its estimate of "chance agreement."
Pr(expected) is calculated using squared "joint proportions" which are squared arithmetic means of the marginal proportions of each assessment category, in contrast to Cohen's Kappa which uses squared geometric means.
Scott's pi = ( p[a] − p[e] ) / ( 1 − p[e] ) where p[a] is the observed percent agreement and p[e] is the expected chance agreement expressed as the squared joint proportions of the marginal sums."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal, Kenneth Wilkins"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-09-21 comment: I think there should be a formula included in the comment for application as with all the other Kappa terms
2022-09-28 adjustment: Steering Group changed the first sentence of Comment for application to better represent the assumption."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-10-19 vote 6-0 by Joanne Dehnbostel, Muhammad Afzal, Jesus Lopez-Alcalde, Mario Tristan, Eric Harvey, Harold Lehmann"/>
</property>
</concept>
</concept>
<concept>
<code value="TBD:0000035"/>
<display value="Misclassification Rate"/>
<definition
value="A ratio of the number of incorrect results to the total number tested."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Classification Error"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Misclassification Error"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="Where results are reported as positive or negative, incorrect results are reported as false, and correct results are reported as true, the misclassification rate is calculated as ( False Positives + False Negatives ) / ( True Positives + True Negatives + False Positives + False Negatives )."/>
</property>
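<!--
Illustrative note, not part of the code system: a minimal Python sketch of the misclassification rate calculation described above, using the same hypothetical counts as the diagnostic accuracy sketch.
tp, tn, fp, fn = 45, 40, 10, 5
misclassification_rate = (fp + fn) / (tp + tn + fp + fn)
print(misclassification_rate)  # 0.15, i.e. 1 minus the diagnostic accuracy of 0.85
-->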
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-10-19 vote 5-0 by Joanne Dehnbostel, Muhammad Afzal, Janice Tufte, Mario Tristan, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="TBD:0000036"/>
<display value="F1-score"/>
<definition
value="A ratio representing the harmonic mean of recall and precision."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="F1 score"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="F1 measure"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="F measure"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="F score"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="F-score"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="F-measure"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="F1-measure"/>
</designation>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Agreement"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="OCRe F measure [not used due to inaccuracy in the definition]"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The F1-score is used as a measure of quality for classification algorithms and information retrieval strategies, where 1 represents the best precision and recall and 0 represents the worst precision and recall.
A harmonic mean of a set of quantities is the reciprocal of the arithmetic mean of the reciprocals of each quantity. The F score is thus calculated as 1 / (the arithmetic mean of the reciprocals), or:
F = 1 / ( ( (1/recall) + (1/precision) ) / 2 )
F = 2*( (precision*recall) / (precision+recall) )
Recall is sensitivity STATO:0000233
Precision (PPV) is SEVCO TBD:0000022
[[F-beta will be defined elsewhere in the code system.]]"/>
</property>
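<!--
Illustrative note, not part of the code system: a minimal Python sketch of the F1-score as the harmonic mean of precision and recall, as described above, using hypothetical precision and recall values.
precision = 0.80   # positive predictive value
recall = 0.60      # sensitivity
f1 = 2 * (precision * recall) / (precision + recall)
print(round(f1, 3))  # 0.686, slightly lower than the arithmetic mean of 0.70
-->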
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel, Brian S. Alper"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-10-19 vote 6-0 by Joanne Dehnbostel, Muhammad Afzal, Jesus Lopez-Alcalde, Mario Tristan, Eric Harvey, Harold Lehmann"/>
</property>
</concept>
</concept>
<concept>
<code value="TBD:0000042"/>
<display value="Measure of Correlation"/>
<definition
value="A measure of association between ordinal or continuous variables."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Correlation"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Yuan Gao, Kenneth Wilkins, Harold Lehmann"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2022-10-26 vote 6-1 by Yuan Gao, Philippe Rocca-Serra, Eric Harvey, Paola Rosati, Harold Lehmann, Joanne Dehnbostel, Janice Tufte"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2022-10-26 comments: there are measures of correlation which characterise non-linear relation between 2 variables . so I was wondering if there was a need to specify "measure of linear correlation" , where a subclass would be 'correlation coefficient). The type 'measure of correlation' becoming a parent class for the 'measure of non-linear correlation'
Should we say, "A value of 0 means no linear association, a value of +1 mean perfect positive linear (a positive slope) association, and a value of -1 means perfect negative association (a negative slope).""/>
</property>
<property>
<code value="comment"/>
<valueString
value="A value of 0 means no association. A positive value means a positive association (as one variable increases, the other variable increases). A negative value means a negative association (as one variable increases, the other variable decreases).
For correlation coefficients, the possible values range from +1 (perfect positive association) to -1 (perfect negative association)."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-11-16 vote 5-0 by Brian S. Alper, Philippe Rocca-Serra, Harold Lehman, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
<concept>
<code value="STATO:0000301"/>
<display value="Covariance"/>
<definition
value="A measure of correlation that is not normalized by the variances of the variables."/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: "covariance = The covariance is a measurement data item about the strength of correlation between a set (2 or more) of random variables.
The covariance is obtained by forming:
cov(X,Y)=E([X-E(X)][Y-E(Y)]) where E(X), E(Y) is the expected value (mean) of variable X and Y respectively.
covariance is symmetric so cov(X,Y)=cov(Y,X).
The covariance is useful when looking at the variance of the sum of the 2 random variables since:
var(X+Y) = var(X) +var(Y) +2cov(X,Y)
The covariance cov(x,y) is used to obtain the coefficient of correlation cor(x,y) by normalizing (dividing) cov(x,y) by the product of the standard deviations of x and y.""/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Correlation"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A measure of correlation is a measure of association between ordinal or continuous variables.
Covariance is used in the calculation of other measures of correlation. Covariance can only be calculated for interval or continuous variables.
Because the covariance is not normalized by the variances of the variables, the magnitude of the covariance is not informative without consideration of the magnitude of the respective variances. Covariance is informative regarding whether both variables vary in the same direction (positive covariance) or in the opposite direction (negative covariance).
Covariance for a sample is calculated as the mean of the products of deviations from the sample mean for the variables.
Cov(X,Y) = Σ (($x_i - \overline{x}$) ($y_i - \overline{y}$)) / (n-1) where $x_i$ is one of the observed values of X, $\overline{x}$ is the sample mean of X, $y_i$ is one of the observed values of Y, and $\overline{y}$ is the sample mean of Y.
Covariance as the population-level quantity is given by the expected value of the product of deviations from the mean for the variables.
Cov(X, Y) = E [ (X - μ) (Y - ν) ] where μ = E(X) and ν = E(Y)
Covariance is a continuous value with a range of negative infinity to positive infinity."/>
</property>
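<!--
Illustrative note, not part of the code system: a minimal Python sketch of the sample covariance formula described above (products of deviations from the sample means, with an n-1 denominator), using two small hypothetical samples.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 5.0, 9.0]
n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n
covariance = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / (n - 1)
print(round(covariance, 3))  # 3.667: positive, so the variables vary in the same direction
-->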
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Kenneth Wilkins, Yuan Gao, Khalid Shahin, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-11-23 vote 5-0 by Mario Tristan, Yuan Gao, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann"/>
</property>
</concept>
<concept>
<code value="STATO:0000280"/>
<display value="Pearson Correlation Coefficient"/>
<definition
value="A measure of correlation, ranging from -1 to 1, that measures the strength and direction of the linear relationship between values of two continuous variables."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="product moment"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="product moment correlation"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: "Pearson's correlation coefficient (Pearson product-moment correlation coefficient; Pearson's r; r statistics) = The Pearson's correlation coefficient is a correlation coefficient which evaluates two continuous variables for association strength in a data sample. It assumes that both variables are normally distributed and linearity exists.
The coefficient is calculated by dividing their covariance with the product of their individual standard deviations. It is a normalized measurement of how the two are linearly related.""/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Correlation"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A measure of correlation is a measure of association between ordinal or continuous variables. Pearson correlation coefficient is designed to be used between continuous variables.
Pearson correlation coefficient for a sample ($r$) is calculated as $r = \dfrac{\widehat{cov}(x,y)}{s_x*s_y}$ where $ \widehat{cov}(x,y)$ is the estimated covariance, and $s_x$ and $s_y$ are the sample standard deviations.
Pearson correlation coefficient for a population ($\rho$) is defined as $\rho= \dfrac{cov(X,Y)}{\sigma_X*\sigma_Y}$ where cov(X,Y) is covariance of X and Y and $\sigma_X$ and $\sigma_Y$ are the population standard deviations.
Assumptions for computing Pearson's correlation coefficient include a linear relationship between 2 continuous variables and an approximately normal distribution for each variable.
Covariance is [defined in SEVCO](https://fevir.net/resources/CodeSystem/27270#STATO:0000301)."/>
</property>
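<!--
Illustrative note, not part of the code system: a minimal Python sketch of the Pearson correlation coefficient as covariance divided by the product of the sample standard deviations, as described above, using the same hypothetical samples as the covariance sketch.
from statistics import stdev
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 5.0, 9.0]
n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
cov_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / (n - 1)
r = cov_xy / (stdev(x) * stdev(y))
print(round(r, 3))  # 0.965
-->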
<property>
<code value="editors"/>
<valueString
value="Kenneth Wilkins, Muhammad Afzal, Yuan Gao, Khalid Shahin, Joanne Dehnbostel, Brian S. Alper, Harold Lehmann"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-12-07 vote 5-0 by Muhammad Afzal, Mario Tristan, Eric Harvey, Yuan Gao, Mahnoor Ahmed"/>
</property>
</concept>
<concept>
<code value="STATO:0000201"/>
<display value="Spearman Rank-Order Correlation Coefficient"/>
<definition
value="A measure of correlation, ranging from -1 to 1, that measures the strength and direction of the relationship between ranks by value of two ordinal or continuous variables, and is calculated as the Pearson correlation coefficient between the rank values."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Spearman's rank correlation coefficient"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Spearman's rho"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: "Spearman's rank correlation coefficient (Spearman's rho) = Spearman's rank correlation coefficient is a correlation coefficient which is a nonparametric measure of statistical dependence between two ranked variables. It assesses how well the relationship between two variables can be described using a monotonic function. If there are no repeated data values, a perfect Spearman correlation of +1 or −1 occurs when each of the variables is a perfect monotone function of the other.
Spearman's coefficient may be used when the conditions for computing Pearson's correlation are not met (e.g linearity, normality of the 2 continuous variables) but may require a ranking transformation of the variables""/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Correlation"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Kenneth Wilkins, Muhammad Afzal, Yuan Gao, Joanne Dehnbostel, Brian S. Alper, Harold Lehmann, Noor Ahmed"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A measure of correlation is a measure of association between ordinal or continuous variables. Spearman rank-order correlation coefficient is designed to be used between ordinal and/or continuous variables.
The Spearman rank-order correlation coefficient can identify monotonic (i.e. consistently non-increasing or consistently non-decreasing) relationships, whether the relationships are linear or non-linear.
The Spearman rank-order correlation coefficient between two variables is equal to the [Pearson correlation coefficient](https://fevir.net/resources/CodeSystem/27270#STATO:0000280) between the rank values of those two variables.
The Spearman rank-order correlation coefficient is the nonparametric counterpart to the Pearson correlation coefficient and may be used when the assumptions for computing Pearson's correlation coefficient (a linear relationship between 2 continuous variables and an approximately normal distribution for each variable) are not met. The Spearman rank-order correlation coefficient is appropriate when either variable has outliers, is ordinal, or is not normally distributed; when the variances of the two variables are unequal; or when the apparent relationship between the variables is non-linear. The assumptions for computing the Spearman rank-order correlation coefficient include a monotonic relationship between 2 continuous or ordinal variables."/>
</property>
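<!--
Illustrative note, not part of the code system: a minimal Python sketch of the Spearman rank-order correlation coefficient as the Pearson correlation of rank values, as described above, using small hypothetical samples without tied values.
def ranks(values):
    # rank 1 for the smallest value; this sketch assumes no tied values
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

x = [10, 20, 30, 40, 50]
y = [1, 4, 9, 16, 30]
rx, ry = ranks(x), ranks(y)
n = len(rx)
mean_rx, mean_ry = sum(rx) / n, sum(ry) / n
cov = sum((a - mean_rx) * (b - mean_ry) for a, b in zip(rx, ry)) / (n - 1)
sd_rx = (sum((a - mean_rx) ** 2 for a in rx) / (n - 1)) ** 0.5
sd_ry = (sum((b - mean_ry) ** 2 for b in ry) / (n - 1)) ** 0.5
print(round(cov / (sd_rx * sd_ry), 3))  # 1.0: a perfectly monotonic (though non-linear) relationship
-->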
<property>
<code value="expert-comments"/>
<valueString
value="2022-12-07 comment: The fundamental difference between the two correlation coefficients is that the Pearson coefficient works with a linear relationship between the two variables whereas the Spearman Coefficient works with monotonic relationships as well."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2022-12-14 vote 5-0 by Jesus Lopez-Alcalde, Yuan Gao, Mario Tristan, Eric Harvey, Harold Lehmann"/>
</property>
</concept>
<concept>
<code value="STATO:0000240"/>
<display value="Kendall Correlation Coefficient"/>
<definition
value="A measure of correlation, ranging from -1 to 1, that measures the strength and direction of the relationship between ranks by value of two ordinal or continuous variables, and is calculated based on the difference in the number of concordant and discordant pairs of rankings divided by the number of all possible pairs of rankings."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Kendall's tau coefficient"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Kendall's tau"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Kendall's τ"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Kendall rank correlation coefficient"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Kendall's tau-b"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: Kendall's correlation coefficient (Kendall's tau (τ) coefficient; Kendall rank correlation coefficient) = Kendall's correlation coefficient is a correlation coefficient between 2 ordinal variables (natively or following a ranking procedure) and may be used when the conditions for computing Pearson's correlation are not met (e.g linearity, normality of the 2 continuous variables)"/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Correlation"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Harold Lehmann, Joanne Dehnbostel"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A measure of correlation is a measure of association between ordinal or continuous variables. Kendall's correlation coefficient is designed to be used between ordinal variables (or continuous variables converted to ordinal variables).
The Kendall's correlation coefficient can identify monotonic (i.e. consistently non-increasing or consistently non-decreasing) relationships, whether the relationships are linear or non-linear.
The Kendall's correlation coefficient between two variables is calculated by determining the concordance or discordance of each pair of ranked values (whether or not two raters are concordant in one value being ranked equal or higher to the other value), and then dividing the difference between the number of concordant values ($n_c$) and the number of discordant values ($n_d$) by the number of pairs of ranked values ($\frac{1}{2}n(n-1)$).
$$
\tau = \dfrac{n_c - n_d}{\frac{1}{2}n(n-1)}
$$
The Kendall's correlation coefficient is a nonparametric statistic and may be used when the assumptions for computing Pearson's correlation coefficient (a linear relationship between 2 continuous variables and an approximately normal distribution for each variable) are not met. The Kendall's correlation coefficient is appropriate when either variable has outliers, is ordinal, or is not normally distributed; when the variances of the two variables are unequal; or when the apparent relationship between the variables is non-linear. The assumptions for computing Kendall's correlation coefficient include a monotonic relationship between 2 ordinal variables."/>
</property>
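<!--
Illustrative note, not part of the code system: a minimal Python sketch of the Kendall tau formula shown above (concordant minus discordant pairs divided by all possible pairs), using small hypothetical samples without tied ranks.
from itertools import combinations
x = [1, 2, 3, 4, 5]
y = [1, 3, 2, 5, 4]
n = len(x)
concordant = discordant = 0
for (i, j) in combinations(range(n), 2):
    s = (x[i] - x[j]) * (y[i] - y[j])
    if s > 0:
        concordant += 1
    elif s < 0:
        discordant += 1
tau = (concordant - discordant) / (n * (n - 1) / 2)
print(tau)  # 0.6
-->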
<property>
<code value="negative-vote"/>
<valueString
value="2022-12-21 vote 5-0 by Joanne Dehnbostel, Mario Trista, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann
2023-01-04 definition change by Steering Committee"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-01-25 vote 6-0 by Mario Tristan, Jesus Lopez-Alcalde, Joanne Dehnbostel, Harold Lehmann, Yuan Gao, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="TBD:0000043"/>
<display value="Goodman and Kruskal’s Gamma"/>
<definition
value="A measure of correlation, ranging from -1 to 1, that measures the strength and direction of the relationship between ranks by value of two ordinal or continuous variables, and is calculated based on the difference in the number of concordant and discordant pairs of rankings divided by the total number of pairs of rankings, where ties are not counted among the pairs of rankings."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Goodman-Kruskal Gamma"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Gamma Statistic"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Gamma Coefficient"/>
</designation>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Correlation"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="https://stats.stackexchange.com/questions/18112/how-do-the-goodman-kruskal-gamma-and-the-kendall-tau-or-spearman-rho-correlation"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-01-25 vote 5-0 by Mario Tristan, Jesus Lopez-Alcalde, Joanne Dehnbostel, Harold Lehmann, Eric Harvey"/>
</property>
</concept>
</concept>
<concept>
<code value="STATO:0000565"/>
<display value="Regression Coefficient"/>
<definition
value="A measure of association that is used as the coefficient of an independent variable in a regression model, of the dependent variable, which is linear in its parameters."/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: regression coefficient = a regression coefficient is a data item generated by a type of data transformation called a regression, which aims to model a response variable by expression the predictor variables as part of a function where variable terms are modified by a number. A regression coefficient is one such number."/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Association"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A value of zero means no association. The sign (positive or negative) reflects the direction of association."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Brian S. Alper, Kenneth Wilkins, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-02-07 vote 5-0 by Cauê Monaco, Harold Lehmann, Mario Tristan, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="TBD:0000044"/>
<display value="Measure of Calibration"/>
<definition
value="A measure of association between a variable representing known or true values and a variable representing measured or predicted values."/>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Correlation"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Calibration is often used for measurement devices. The known or true values may be called the reference standard.
Calibration is also used for predictive models."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Harold Lehmann, Kenneth Wilkins"/>
</property>
<concept>
<code value="TBD:0000045"/>
<display value="Mean calibration"/>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Correlation"/>
</property>
</concept>
<concept>
<code value="TBD:0000046"/>
<display value="Calibration-in-the-large"/>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Correlation"/>
</property>
</concept>
<concept>
<code value="TBD:0000047"/>
<display value="Calibration intercept"/>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Correlation"/>
</property>
</concept>
<concept>
<code value="TBD:0000048"/>
<display value="Calibration slope"/>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Correlation"/>
</property>
</concept>
</concept>
</concept>
<concept>
<code value="STATO:0000028"/>
<display value="Measure of Dispersion"/>
<definition
value="A statistic that represents the variation or spread among data values in a dataset or data distribution."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Measure of variation"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Dispersion"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Statistical dispersion"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: measure of variation (measure of dispersion) = measure of variation or statistical dispersion is a data item which describes how much a theoritical distribution or dataset is spread.
NCIt: "Statistical dispersion-
The variation between data values in a sample."
UMLS: "Dispersion (C0332624)
Definition: The variation between data values in a sample.
Semantic Types: Spatial Concept""/>
</property>
<property>
<code value="comment"/>
<valueString
value="This categorical (parent) term can be used for a statistic that is a measure of dispersion and is not found in the child terms."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Yuan Gao, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-02-07 vote 5-0 by Cauê Monaco, Harold Lehmann, Janice Tufte, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
<concept>
<code value="STATO:0000035"/>
<display value="Range"/>
<definition
value="A measure of dispersion calculated as the difference between the maximum observed value and the minimum observed value."/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: range = the range is a measure of variation which describes the difference between the lowest score and the highest score in a set of numbers (a data set)"/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Dispersion"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A measure of dispersion is a statistic that represents the variation or spread among data values in a dataset or data distribution.
The maximum observed value is a statistic that represents the largest non-null value in a collection of values that can be ordered by magnitude.
The minimum observed value is a statistic that represents the smallest non-null value in a collection of values that can be ordered by magnitude.
A range (as a statistic) is represented as a single value (the difference between maximum and minimum observed values) while, in common language, the term range is often expressed with two values (from the minimum to maximum values, or from the lower limit to the higher limit)."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-02-20 vote 5-0 by Philippe Rocca-Serra, Janice Tufte, Harold Lehmann, Mario Tristan, Eric Harvey"/>
</property>
</concept>
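<!-- Illustrative formula (not part of the CodeSystem content): the definition above in symbols, where x_max and x_min (editorial notation) are the maximum and minimum observed values:
$$ \text{range} = x_{max} - x_{min} $$ -->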
<concept>
<code value="STATO:00000164"/>
<display value="Interquartile range"/>
<definition
value="A measure of dispersion calculated as the difference between the 75th percentile and the 25th percentile."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="IQR"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Middle range"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Median IQR"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Midspread"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: "inter quartile range = The interquartile range is a data item which corresponds to the difference between the upper quartile (3rd quartile) and lower quartile (1st quartile).
The interquartile range contains the second quartile or median.
The interquartile range is a data item providing a measure of data dispersion""/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Dispersion"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A measure of dispersion is a statistic that represents the variation or spread among data values in a dataset or data distribution.
The 75th percentile is the median of the portion of the dataset or distribution with values greater than the median value.
The 25th percentile is the median of the portion of the dataset or distribution with values less than the median value.
An interquartile range (as a statistic) is represented as a single value (the difference between 75th and 25th percentiles) while, in common language, the term interquartile range is often expressed with two values (the 25th percentile and the 75th percentile)."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-02-20 vote 5-0 by Philippe Rocca-Serra, Janice Tufte, Harold Lehmann, Mario Tristan, Eric Harvey"/>
</property>
</concept>
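<!-- Illustrative formula (not part of the CodeSystem content): the definition above in symbols, where Q_3 and Q_1 (editorial notation) are the 75th and 25th percentiles:
$$ IQR = Q_3 - Q_1 $$ -->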
<concept>
<code value="STATO:0000237"/>
<display value="Standard deviation"/>
<definition
value="A measure of dispersion that represents the average of the distances from the mean of the dataset to each data point in the dataset."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="SD"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: standard deviation (σ) = The standard deviation of a random variable, statistical population, data set, or probability distribution is a measure of variation which correspond to the average distance from the mean of the data set to any given point of that dataset. It also corresponds to the square root of its variance."/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Dispersion"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin, Kenneth Wilkins"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Standard deviation for sample is a standard deviation in which the dataset is a sample. Standard deviation for population, when used as a statistical model parameter, is not a standard deviation as a type of statistic."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-05-15 vote 6-0 by Muhammad Afzal, Brian S. Alper, Jesus Lopez-Alcalde, Janice Tufte, Eric Harvey, Harold Lehmann"/>
</property>
<concept>
<code value="TBD:0000052"/>
<display value="Standard deviation for sample"/>
<definition
value="A standard deviation that is the square root of the quotient of the summation across data points of the square of the distance from each data point to the sample mean, and the degrees of freedom (where the degrees of freedom is sample size minus one)."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="S"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="s"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Sample standard deviation"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Sample SD"/>
</designation>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Dispersion"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Standard deviation is defined as a measure of dispersion that represents the average of the distances from the mean of the dataset to each data point in the dataset.
Standard deviation for sample is a standard deviation in which the dataset is a sample.
The formula for the standard deviation for sample ($s$) is:
$$
s = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \overline{x})^2}{n - 1}}
$$
where $n$ is the sample size (the number of independent observations, indexed by $i$), $x$ is observed value, and $\overline{x}$ is the sample mean.
The formula to calculate degrees of freedom depends on the model. For the degrees of freedom for a sample standard deviation, given the sample mean, it is n-1, because the nth observation is no longer independent, given the n-1 other observations and the sample mean."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Muhammad Afzal, Khalid Shahin, Kenneth Wilkins"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-05-15 vote 5-0 by Muhammad Afzal, Brian S. Alper, Jesus Lopez-Alcalde, Eric Harvey, Harold Lehmann"/>
</property>
</concept>
</concept>
<concept>
<code value="STATO:0000113"/>
<display value="Variance"/>
<definition
value="A measure of dispersion that represents the square of the standard deviation."/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: variance (σ2) = variance is a data item about a random variable or probability distribution. it is equivalent to the square of the standard deviation. It is one of several descriptors of a probability distribution, describing how far the numbers lie from the mean (expected value).The variance is the second moment of a distribution."/>
</property>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Dispersion"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Standard deviation is defined as a measure of dispersion that represents the average of the distances from the mean of the dataset to each data point in the dataset.
Variance for sample is a variance in which the dataset is a sample. Variance for population, when used as a probability distribution parameter, is not a variance as a type of statistic."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-11-20 vote 5-0 by Brian S. Alper, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Eric Harvey"/>
</property>
<concept>
<code value="TBD:0000055"/>
<display value="variance for sample"/>
<definition
value="A variance that is the quotient of the summation across data points of the square of the distance from each data point to the sample mean, and the degrees of freedom (where the degrees of freedom is sample size minus one)."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="V"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="sample variance"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Var"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="s^2"/>
</designation>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Dispersion"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Variance is defined as a measure of dispersion that represents the square of the standard deviation. Standard deviation is defined as a measure of dispersion that represents the average of the distances from the mean of the dataset to each data point in the dataset.
Variance for sample is a variance in which the dataset is a sample.
The formula for the variance for sample ($s^2$) is:
$$
s^2 = \frac{\sum_{i=1}^{n} (x_i - \overline{x})^2}{n - 1}
$$
where $n$ is the sample size (the number of independent observations, indexed by $i$), $x$ is observed value, and $\overline{x}$ is the sample mean.
The formula to calculate degrees of freedom depends on the model. For the degrees of freedom for a sample variance, given the sample mean, it is n-1, because the nth observation is no longer independent, given the n-1 other observations and the sample mean."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Kenneth Wilkins, Brian S. Alper, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-11-20 vote 5-0 by Brian S. Alper, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Eric Harvey"/>
</property>
</concept>
</concept>
<concept>
<code value="TBD:0000049"/>
<display value="Gini index"/>
<definition
value="A measure of dispersion that is half the relative mean absolute difference between all pairs of observed values."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Gini coefficient"/>
</designation>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Dispersion"/>
</property>
<property>
<code value="comment"/>
<valueString
value="The Gini index is typically used as a measure of inequality for income, wealth, or resource distribution."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Kenneth Wilkins"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-11-20 vote 5-0 by Brian S. Alper, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Eric Harvey BUT comment of "between all pairs of observed values?" led to recognition of incorrect definition"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-12-04 vote 5-0 by Yasser Sami Amer, Xing Song, Eric Harvey, Harold Lehmann, Brian S. Alper"/>
</property>
</concept>
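<!-- Illustrative formula (not part of the CodeSystem content): one common expression of "half the relative mean absolute difference", where x_1 through x_n (editorial notation) are the observed values and x-bar is their mean:
$$ G = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} |x_i - x_j|}{2 n^2 \overline{x}} $$ -->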
<concept>
<code value="TBD:0000057"/>
<display value="Standard error"/>
<definition
value="A measure of dispersion applied to estimates across hypothetical repeated random samples."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="SE"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: It is a measure of how precise is an estimate of the statistical parameter is. Standard error is the estimated standard deviation of an estimate. It measures the uncertainty associated with the estimate. Compared with the standard deviations of the underlying distribution, which are usually unknown, standard errors can be calculated from observed data."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Kenneth Wilkins, Brian S. Alper, Xing Song, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="open-for-voting"/>
<valueString value="2023-11-27T20:00:00"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A standard error is used to quantify the uncertainty around a statistical estimate due to random sampling error."/>
</property>
<concept>
<code value="STATO:0000037"/>
<display value="standard error of the mean"/>
<definition
value="A measure of dispersion applied to means across hypothetical repeated random samples."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="SEM"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: The standard error of the mean (SEM) is data item denoting the standard deviation of the sample-mean's estimate of a population mean. It is calculated by dividing the sample standard deviation (i.e., the sample-based estimate of the standard deviation of the population) by the square root of n , the size (number of observations) of the sample."/>
</property>
<property>
<code value="comment"/>
<valueString
value="A standard error is used to quantify the uncertainty around a statistical estimate due to random sampling error.
The standard error of the mean is calculated by dividing the sample standard deviation (STATO:0000237) by the square root of n, the size (number of observations) of the sample."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Muhammad Afzal, Xing Song, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="open-for-voting"/>
<valueString value="2023-11-27"/>
</property>
</concept>
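<!-- Illustrative formula (not part of the CodeSystem content): the calculation described in the comment above, where s is the sample standard deviation and n is the sample size:
$$ SEM = \frac{s}{\sqrt{n}} $$ -->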
<concept>
<code value="TBD:0000062"/>
<display value="standard error of the proportion"/>
<definition
value="A measure of dispersion applied to proportions across hypothetical repeated random samples."/>
<property>
<code value="comment"/>
<valueString
value="A standard error is used to quantify the uncertainty around a statistical estimate due to random sampling error.
The formula for the standard error of the sample proportion ($SE(\hat{p})$) is:
$$
SE(\hat{p}) = \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}
$$
where $\hat{p}$ is the sample proportion and $n$ is the size (number of observations) of the sample."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Xing Song, Harold Lehmann, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="open-for-voting"/>
<valueString value="2023-11-27"/>
</property>
</concept>
<concept>
<code value="TBD:0000063"/>
<display
value="standard error of the difference between independent means"/>
<definition
value="A measure of dispersion applied to differences between means of independent groups across hypothetical repeated random samples."/>
<property>
<code value="comment"/>
<valueString
value="A standard error is used to quantify the uncertainty around a statistical estimate due to random sampling error.
In cases where the samples are assumed to have unequal population variances for X, the formula for the standard error of the sample difference between means ($SE_{unequal}(\overline{x}_{1} - \overline{x}_{2})$) is:
$$
SE_{unequal}(\overline{x}_{1} - \overline{x}_{2}) = \sqrt{\frac{s^2_1}{n_1}+\frac{s^2_2}{n_2}}
$$
where $\overline{x}_{1}$ and $\overline{x}_{2}$ are the sample means, $s^2_1$ and $s^2_2$ are the sample variances, and $n_1$ and $n_2$ are the sizes (number of observations) of the samples.
In cases where the samples are assumed to have the same (equal) population variance for X, the formula for the standard error of the sample difference between means ($SE_{equal}(\overline{x}_{1} - \overline{x}_{2})$) is:
$$
SE_{equal}(\overline{x}_{1} - \overline{x}_{2}) = \sqrt{\frac{(n_1 - 1) s^2_1 + (n_2 - 1) s^2_2}{n_1 + n_2 - 2}\left(\frac{1}{n_1} + \frac{1}{n_2}\right)}
$$
where $\overline{x}_{1}$ and $\overline{x}_{2}$ are the sample means, $s^2_1$ and $s^2_2$ are the sample variances, and $n_1$ and $n_2$ are the sizes (number of observations) of the samples."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Kenneth Wilkins, Brian S. Alper"/>
</property>
<property>
<code value="open-for-voting"/>
<valueString value="2023-12-04"/>
</property>
</concept>
<concept>
<code value="TBD:0000064"/>
<display
value="standard error of the difference between independent proportions"/>
<definition
value="A measure of dispersion applied to differences between proportions arising from independent groups across hypothetical repeated random samples."/>
<property>
<code value="comment"/>
<valueString
value="A standard error is used to quantify the uncertainty around a statistical estimate due to random sampling error.
The formula for the standard error of the sample difference between proportions ($SE(\hat{p}_1 - \hat{p}_2)$) is:
$$
SE(\hat{p}_1 - \hat{p}_2) = \sqrt {\frac{\hat{p}_1(1-\hat{p}_1)} {n_1} + \frac{\hat{p}_2(1-\hat{p}_2)} {n_2}}
$$
where $\hat{p}_1$ and $\hat{p}_2$ are the sample proportions and $n_1$ and $n_2$ are the sizes (number of observations) of the samples."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Kenneth Wilkins, Brian S. Alper"/>
</property>
<property>
<code value="open-for-voting"/>
<valueString value="2023-12-04"/>
</property>
</concept>
</concept>
<concept>
<code value="STATO:0000455"/>
<display value="credible interval"/>
<definition
value="The range in which the value of the parameter of interest is likely to reside, typically within a posterior probability distribution."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="region of highest posterior density"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: In Bayesian statistics context, a credible interval is an interval of a posterior distribution which is such that the density at any point inside the interval is greater than the density at any point outside and that the area under the curve for that interval is equal to a prespecified probability level. For any probability level there is generally only one such interval, which is also often known as the highest posterior density region. Unlike the usual confidence interval associated with frequentist inference, here the intervals specify the range within which parameters lie with a certain probability. The Bayesian counterparts of the confidence interval used in Frequentists Statistics.
UMLS: "Interval (C1272706)
Definition: The period of time or the distance separating two instances, events, or occurrences.
Semantic Types: Temporal Concept"
OBCS: A quantitative confidence value that is used in Bayesian analysis to describe the range in which a posterior probability estimate is likely to reside.
OECD: calculated interval-The interval containing possible values for a suppressed cell in a table, given the table structure and the values published.
SCO: interval-An interval is a set of real numbers that includes all numbers between any two numbers in the set."/>
</property>
<property>
<code value="comment"/>
<valueString
value="The credible interval is used in Bayesian analysis and plays an analogous role to the confidence interval in frequentist statistics."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-11-27 vote 5-0 by Xing Song, Yasser Sami Amer, Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
</concept>
<concept>
<code value="STATO:0000196"/>
<display value="confidence interval"/>
<definition
value="The estimated range of values that encompasses the point estimate and quantifies the uncertainty about that estimate in terms of a prespecified level of coverage, expected to include the true value between upper and lower bounds, across hypothetically repeated random samples, with all assumptions regarding the sampling distribution across random samples having been fully met."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Ken Wilkins, Joanne Dehnbostel, Robin Ann Yurk, Janice Tufte"/>
</property>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: A confidence interval is a data item which defines an range of values in which a measurement or trial falls corresponding to a given probability. also confidence interval calculation is a data transformation which determines a confidence interval for a given statistical parameter
NCIt: A range of values for a parameter that may contain the parameter and the degree of confidence that it is in fact there. A measure of the precision of an estimated value. The interval represents the range of values, consistent with the data, that is believed to encompass the "true" value with high probability (usually 95%). The confidence interval is expressed in the same units as the estimate. Wider intervals indicate lower precision; narrow intervals, greater precision. [CONSORT Statement]
OBCS: A quantitative confidence value that refers to an interval giving values within which there is a high probability (95 percent by convention) that the true population value can be found. The calculation of a confidence interval considers the standard deviation of the data and the number of observations. Thus, a confidence interval narrows as the number of observations increases, or its variance (dispersion) decreases.
CDISC Glossary: A measure of the precision of an estimated value. The interval represents the range of values, consistent with the data, that is believed to encompass the "true" value with high probability (usually 95%). The confidence interval is expressed in the same units as the estimate. Wider intervals indicate lower precision; narrow intervals, greater precision. [CONSORT Statement]
NICE: "Confidence interval
A way of expressing how certain we are about the findings from a study, using statistics. It gives a range of results that is likely to include the 'true' value for the population. A wide confidence interval (CI) indicates a lack of certainty about the true effect of the test or treatment - often because a small group of patients has been studied. A narrow CI indicates a more precise estimate (for example, if a large number of patients have been studied).
The CI is usually stated as '95% CI', which means that the range of values has a 95 in a 100 chance of including the 'true' value. For example, a study may state that 'based on our sample findings, we are 95% certain that the 'true' population blood pressure is not higher than 150 and not lower than 110'. In such a case the 95% CI would be 110 to 150."
OECD: A confidence interval is an interval which has a known and controlled probability (generally 95% or 99%) to contain the true value.
"Rothman textbook: confidence interval, which provides a range of values for the association, under the hypothesis that only random variation has created discrepancies between the true value of the association under study and the value observed in the data (Altman et al., 2000; see Chapters 13 through 16) Altman DG, Machin D, Bryant TN, Gardner MJ, eds. Statistics with confidence, 2nd ed. London: BMJ Books, 2000
""/>
</property>
<property>
<code value="comment"/>
<valueString
value="The prespecified level of coverage is commonly 0.95 or 95%.
Confidence cannot be directly interpreted as a probability. This is in contrast to credibility for credible intervals. Confidence only conveys uncertainty indirectly by reflecting a long term relative frequency across hypothetically repeated sample estimates.
Width of a confidence interval can convey precision. This precision can be increased by increasing the sample size in most cases assuming variability in sample is only due to random sample-to-sample variation."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-11-27 vote 5-0 by Xing Song, Yasser Sami Amer, Harold Lehmann, Jesus Lopez-Alcalde, Eric Harvey"/>
</property>
</concept>
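<!-- Illustrative formula (not part of the CodeSystem content): one common construction, not mandated by the definition above, is the normal-approximation 95% interval around a point estimate with standard error SE (editorial notation):
$$ \hat{\theta} \pm 1.96 \times SE(\hat{\theta}) $$ -->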
</concept>
<concept>
<code value="TBD:0000065"/>
<display value="Measure of Discrimination"/>
<concept>
<code value="TBD:0000066"/>
<display value="Area Under the Curve"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="AUC"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: area under curve is a measurement datum which corresponds to the surface define by the x-axis and bound by the line graph represented in a 2 dimensional plot resulting from an integration or integrative calculus. The interpretation of this measurement datum depends on the variables plotted in the graph"/>
</property>
<concept>
<code value="TBD:0000067"/>
<display value="C-statistic"/>
</concept>
</concept>
</concept>
<concept>
<code value="TBD:0000068"/>
<display value="Measure of Heterogeneity"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: a measure of heterogeneity in meta-analysis is a data item which aims to describe the variation in study outcomes between studies."/>
</property>
<concept>
<code value="TBD:0000069"/>
<display value="Chi square for homogeneity"/>
</concept>
<concept>
<code value="TBD:0000070"/>
<display value="Cochran’s Q statistic"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Chi squared for heterogeneity"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: Cochran's Q test is a statistical test used for unreplicated randomized block design experiments with a binary response variable and paired data. In the analysis of two-way randomized block designs where the response variable can take only two possible outcomes (coded as 0 and 1), Cochran's Q test is a non-parametric statistical test to verify whether k treatments have identical effects."/>
</property>
</concept>
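<!-- Illustrative formula (not part of the CodeSystem content): in the meta-analysis heterogeneity context suggested by the synonym above, Cochran's Q is commonly computed as follows, where (editorial notation) the theta_i are the k study effect estimates, the w_i their weights, and theta-hat the pooled estimate:
$$ Q = \sum_{i=1}^{k} w_i (\hat{\theta}_i - \hat{\theta})^2 $$ -->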
<concept>
<code value="TBD:0000071"/>
<display value="I-squared"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: I-squared = The quantity called I2, describes the percentage of total variation across studies that is due to heterogeneity rather than chance. I2 can be readily calculated from basic results obtained from a typical meta-analysis as I2 = 100%×(Q - df)/Q, where Q is Cochran's heterogeneity statistic and df the degrees of freedom. Negative values of I2 are put equal to zero so that I2 lies between 0% and 100%. A value of 0% indicates no observed heterogeneity, and larger values show increasing heterogeneity. Unlike Cochran's Q, it does not inherently depend upon the number of studies considered. A confidence interval for I² is constructed using either i) the iterative non-central chi-squared distribution method of Hedges and Piggott (2001); or ii) the test-based method of Higgins and Thompson (2002). The non-central chi-square method is currently the method of choice (Higgins, personal communication, 2006) – it is computed if the 'exact' option is selected. (STATO:0000420)"/>
</property>
</concept>
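<!-- Illustrative formula (not part of the CodeSystem content): the calculation quoted above, written as a formula, where Q is Cochran's heterogeneity statistic and df its degrees of freedom (negative values set to zero):
$$ I^2 = 100\% \times \frac{Q - df}{Q} $$ -->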
<concept>
<code value="TBD:0000072"/>
<display value="Tau squared"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: Tau-squared is an estimate of the between-study variance in a random-effects meta-analysis. The square root of this number (i.e. tau) is the estimated standard deviation of underlying effects across studies. (STATO:0000421)"/>
</property>
</concept>
</concept>
<concept>
<code value="TBD:0000073"/>
<display value="Hypothesis Testing Measure"/>
<concept>
<code value="TBD:0000074"/>
<display value="Chi square for independence"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: Chi-squared statistic is a statistic computed from observations and used to produce a p-value in statistical test when compared to a Chi-Squared distribution. (STATO:0000030)"/>
</property>
</concept>
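<!-- Illustrative formula (not part of the CodeSystem content): one common form of the Pearson chi-squared statistic for independence, where O_i and E_i (editorial notation) are the observed and expected cell counts:
$$ \chi^2 = \sum_{i} \frac{(O_i - E_i)^2}{E_i} $$ -->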
<concept>
<code value="TBD:0000075"/>
<display value="Chi square for trend"/>
</concept>
<concept>
<code value="TBD:0000076"/>
<display value="p-value"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: A quantitative confidence value that represents the probability of obtaining a result at least as extreme as that actually obtained, assuming that the actual value was the result of chance alone. (OBI:0000175)"/>
</property>
<concept>
<code value="TBD:p-value-one-sided"/>
<display value="p value for one-sided test"/>
<definition value="p value for one-sided test"/>
</concept>
<concept>
<code value="TBD:p-value-two-sided"/>
<display value="p value for two-sided test"/>
<definition value="p value for two-sided test"/>
</concept>
</concept>
<concept>
<code value="TBD:0000077"/>
<display value="Z-score"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: A z-score (also known as z-value, standard score, or normal score) is a measure of the divergence of an individual experimental result from the most probable result, the mean. Z is expressed in terms of the number of standard deviations from the mean value. (STATO:0000104) Z-statistic is a statistic computed from observations and used to produce a p-value when compared to a Standard Normal Distribution in a statistical test called the Z-test. (STATO:0000376)"/>
</property>
</concept>
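<!-- Illustrative formula (not part of the CodeSystem content): the "number of standard deviations from the mean" described above, in symbols, where x is the observed value, mu the mean, and sigma the standard deviation (editorial notation):
$$ z = \frac{x - \mu}{\sigma} $$ -->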
<concept>
<code value="TBD:0000078"/>
<display value="T-score"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: t-statistic is a statistic computed from observations and used to produce a p-value in statistical test when compared to a Student's t distribution. (STATO:0000176)"/>
</property>
</concept>
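<!-- Illustrative formula (not part of the CodeSystem content): one common form of the t-statistic, shown here for the one-sample case only, where x-bar is the sample mean, mu_0 the hypothesized mean, s the sample standard deviation, and n the sample size (editorial notation):
$$ t = \frac{\overline{x} - \mu_0}{s / \sqrt{n}} $$ -->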
</concept>
<concept>
<code value="TBD:0000079"/>
<display value="Predicted Data Item"/>
<definition
value="Modifier Code: add to any other Statistic Type code to note predicted form (not observed form) of that statistic type"/>
</concept>
<concept>
<code value="TBD:0000179"/>
<display value="Absolute Value"/>
</concept>
<concept>
<code value="STATO:0000291"/>
<display value="Quantile"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO-a quantile is a data item which corresponds to specific elements x in the range of a variate X. the k-th n-tile P_k is that value of x, say x_k, which corresponds to a cumulative frequency of Nk/n (Kenney and Keeping 1962). If n=4, the quantity is called a quartile, and if n=100, it is called a percentile."/>
</property>
<concept>
<code value="STATO:0000293"/>
<display value="Percentile"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO-a percentile is a quantile which splits data into sections accrued of 1% of data, so the first percentile delineates 1% of the data, the second quartile delineates 2% of the data and the 99th percentile, 99 % of the data"/>
</property>
</concept>
<concept>
<code value="STATO:0000292"/>
<display value="Decile"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO-a decile is a quantile where n=10 and which splits data into sections accrued of 10% of data, so the first decile delineates 10% of the data, the second decile delineates 20% of the data and the nineth decile, 90 % of the data"/>
</property>
</concept>
<concept>
<code value="STATO:0000152"/>
<display value="Quartile"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO-a quartile is a quantile which splits data into sections accrued of 25% of data, so the first quartile delineates 25% of the data, the second quartile delineates 50% of the data and the third quartile, 75 % of the data"/>
</property>
</concept>
</concept>
<concept>
<code value="TBDTBD"/>
<display value="Measurement Value"/>
<concept>
<code value="TBDTBD2"/>
<display value="Duration"/>
<concept>
<code value="TBDTBD3"/>
<display value="Time-to-Event"/>
<definition value="Event-free duration"/>
</concept>
</concept>
</concept>
</concept>
<concept>
<code value="TBD:0000080"/>
<display value="Hypothesis test attribute"/>
<concept>
<code value="TBD:beta"/>
<display value="Beta"/>
</concept>
<concept>
<code value="TBD:power"/>
<display value="Power"/>
</concept>
<concept>
<code value="TBD:0000081"/>
<display value="alpha setting"/>
<concept>
<code value="TBD:0000084"/>
<display value="alpha setting with subtype unspecified"/>
</concept>
<concept>
<code value="TBD:0000085"/>
<display
value="individual test alpha without multiple testing adjustment"/>
</concept>
<concept>
<code value="TBD:0000086"/>
<display value="overall alpha with multiple testing"/>
</concept>
<concept>
<code value="TBD:0000087"/>
<display
value="individual test alpha with multiple testing adjustment"/>
</concept>
</concept>
<concept>
<code value="STATO:0000286"/>
<display value="one-tailed test"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="one threshold for hypothesis testing"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: one tailed test (one sided test) = a one-tailed test is a statistical test which, assuming an unskewed probability distribution, allocates all of the significance level to evaluate only one hypothesis to explain a difference.
The one-tailed test provides more power to detect an effect in one direction by not testing the effect in the other direction.
A one-tailed test should be preceded by a two-tailed test in order to avoid missing out on detecting an alternate effect explaining an observed difference."/>
</property>
</concept>
<concept>
<code value="STATO:0000287"/>
<display value="two-tailed test"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="two thresholds for hypothesis testing"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: two tailed test (two sided test) = a two tailed test is a statistical test which assess the null hypothesis of absence of difference assuming a symmetric (not skewed) underlying probability distribution by allocating half of the significance level selected to each of the direction of change which could explain a difference (for example, a difference can be an excess or a loss)."/>
</property>
</concept>
<concept>
<code value="TBD:checkIfInSTATOtesting-margin"/>
<display value="Hypothesis testing margin"/>
</concept>
<concept>
<code value="TBD:null-hypothesis"/>
<display value="null hypothesis"/>
</concept>
<concept>
<code value="TBD:alternative-hypothesis"/>
<display value="alternative hypothesis"/>
</concept>
</concept>
<concept>
<code value="STATO:0000107"/>
<display value="Statistical Model"/>
<definition
value="A set of mathematical relationships that express assumptions related to the generation of the observed data and that sets constraints for the analysis of the data."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Generative model"/>
</designation>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal, Harold Lehmann"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A statistical model describes how one or more random variables are related to one or more other variables. A statistical model often relates to the generation of sample data from a larger population. "Generative model" is a term used by the machine learning community."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-05-22 vote 3-1 by Jesus Lopez-Alcalde, Sunu Alice Cherian, Janice Tufte, Harold Lehmann
2023-06-05 vote 5-1 by Cauê Monaco, Eric Harvey, Paul Whaley, Jesus Lopez-Alcalde, Sunu Alice Cherian, Harold Lehmann"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-05-22 comments:
Definition: A mathematical model that reflects a set of statistical assumptions with regards to the process governing the generation of sample data from a larger population.
Since we now have Statistical Model Characteristics as a separate hierarchy, might we want to refer to that hierarchy in the Comment for Application. ("There are many potential components to a statistical model. Those components are represented by the SEVCO hierarchy beginning with...")
2023-06-05 comment: The comment for application needs to be improved - it is difficult to read and the sentences are not grammatically correct."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-06-12 vote 5-0 by Brian S. Alper, Sunu Alice Cherian, Harold Lehmann, Paola Rosati, Eric Harvey"/>
</property>
<concept>
<code value="TBD:0000090"/>
<display value="Fixed-effect model"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="common-effect model"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="one true effect size"/>
</designation>
</concept>
<concept>
<code value="TBD:0000091"/>
<display value="Random-effects model"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="random effects"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="true effect sizes are distributed"/>
</designation>
</concept>
<concept>
<code value="STATO:0000464"/>
<display value="Generalized Linear Mixed Model"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="GLMM"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: linear mixed model (LMM) = "A lnear mixed model is a mixed model containing both fixed effects and random effects and in which factors and covariates are assumed to have a linear relationship to the dependent variable. These models are useful in a wide variety of disciplines in the physical, biological and social sciences. They are particularly useful in settings where repeated measurements are made on the same statistical units (longitudinal study), or where measurements are made on clusters of related statistical units. Because of their advantage in dealing with missing values, mixed effects models are often preferred over more traditional approaches such as repeated measures ANOVA.
Fixed-effects factors are generally considered to be the variables whose values of interest are all represented in the data file.
Random-effects factors are variables whose values correspond to unwanted variation. They are useful when trying to understand variability in the dependent variable which was not anticipated and exceeds what was expected.
Linear mixed models also allow to specify specific interactions between factors, and allow the evaluation of the various linear effect that a particular combination of factor levels may have on a response variable.
Finally, linear mixed models allow to specify variance components in order to describe the relation between various random effects levels.""/>
</property>
<concept>
<code value="TBD:0000093"/>
<display value="GLMM with probit link"/>
</concept>
<concept>
<code value="TBD:0000094"/>
<display value="GLMM with logit link"/>
</concept>
<concept>
<code value="TBD:0000095"/>
<display value="GLMM with identity link"/>
</concept>
<concept>
<code value="TBD:0000096"/>
<display value="GLMM with log link"/>
</concept>
<concept>
<code value="TBD:0000097"/>
<display value="GLMM with generalized logit link"/>
</concept>
<concept>
<code value="TBD:0000098"/>
<display value="GLMM with subtype unspecified"/>
</concept>
</concept>
<concept>
<code value="TBD:0000099"/>
<display value="GLM"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Generalized Linear Model"/>
</designation>
<concept>
<code value="TBD:0000100"/>
<display value="GLM with probit link"/>
</concept>
<concept>
<code value="TBD:0000101"/>
<display value="GLM with logit link"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Logistic Regression"/>
</designation>
<property>
<code value="multiple-parents"/>
<valueString value="TBD:0000099 and TBD:0000106"/>
</property>
</concept>
<concept>
<code value="TBD:0000102"/>
<display value="GLM with identity link"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Linear Regression"/>
</designation>
<property>
<code value="multiple-parents"/>
<valueString value="TBD:0000099 and TBD:0000106"/>
</property>
</concept>
<concept>
<code value="TBD:0000103"/>
<display value="GLM with log link"/>
</concept>
<concept>
<code value="TBD:0000104"/>
<display value="GLM with generalized logit link"/>
</concept>
<concept>
<code value="TBD:0000105"/>
<display value="GLM with subtype unspecified"/>
</concept>
</concept>
</concept>
<concept>
<code value="TBD:0000121"/>
<display value="Data Transformation"/>
<concept>
<code value="TBD:0000122"/>
<display value="Data imputation"/>
<concept>
<code value="TBD:0000125"/>
<display value="Zero-cell adjustment with constant"/>
</concept>
<concept>
<code value="TBD:0000126"/>
<display value="Zero-cell adjustment with continuity correction"/>
</concept>
</concept>
<concept>
<code value="TBD:0000123"/>
<display value="Meta-analysis"/>
<concept>
<code value="TBD:0000127"/>
<display value="Meta-analysis with fixed-effect model"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: STATO_0000082: fixed effect model = a fixed effect model is a statistical model which represents the observed quantities in terms of explanatory variables that are treated as if the quantities were non-random."/>
</property>
<concept>
<code value="TBD:0000129"/>
<display value="Meta-analysis using inverse variance method"/>
</concept>
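<!-- Illustrative formula (not part of the CodeSystem content): the inverse variance method commonly pools study estimates theta_i with weights w_i equal to the reciprocal of each study's squared standard error (editorial notation):
$$ \hat{\theta} = \frac{\sum_{i} w_i \hat{\theta}_i}{\sum_{i} w_i}, \quad w_i = \frac{1}{SE_i^2} $$ -->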
<concept>
<code value="TBD:0000130"/>
<display value="Meta-analysis using Mantel-Haenszel method"/>
</concept>
<concept>
<code value="TBD:0000131"/>
<display value="Meta-analysis using Peto method"/>
</concept>
</concept>
<concept>
<code value="TBD:0000128"/>
<display value="Meta-analysis with random-effects model"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: STATO_0000099: random effect model (variance components model) = a random effect(s) model, also called a variance components model, is a kind of hierarchical linear model. It assumes that the dataset being analysed consists of a hierarchy of different populations whose differences relate to that hierarchy."/>
</property>
<concept>
<code value="TBD:0000132"/>
<display value="Meta-analysis using Dersimonian-Laird method"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="meta analysis by DerSimonian and Laird method"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: STATO_0000429: DerSimonian-Laird estimator is a data item computed to estimate heterogeneity parameter (estimate of between-study variance) in a random effect model for meta analysis. The estimator is used in simple noniterative procedure for characterizing the distribution of treatment effects in a series of studies"/>
</property>
</concept>
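<!-- Illustrative formula (not part of the CodeSystem content): one common form of the DerSimonian-Laird estimator of between-study variance, where (editorial notation) Q is Cochran's Q, k the number of studies, and the w_i the fixed-effect inverse-variance weights:
$$ \hat{\tau}^2 = \max\left(0, \frac{Q - (k - 1)}{\sum_i w_i - \sum_i w_i^2 / \sum_i w_i}\right) $$ -->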
<concept>
<code value="TBD:0000133"/>
<display value="Meta-analysis using Paule-Mandel method"/>
</concept>
<concept>
<code value="TBD:0000134"/>
<display
value="Meta-analysis using Restricted Maximum Likelihood method"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: STATO_0000427: restricted maximum likelihood estimation (REML) = restricted maximum likelihood estimation is a kind of maximum likelihood estimation data transformation which estimates the variance components of random-effects in univariate and multivariate meta-analysis. in contrast to 'maximum likelihood estimation', reml can produce unbiased estimates of variance and covariance parameters."/>
</property>
</concept>
<concept>
<code value="TBD:0000135"/>
<display value="Meta-analysis using Maximum Likelihood method"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: STATO_0000428: maximum likelihood estimation = "maximum likelihood estimation (MLE) is a method of estimating the parameters of a statistical model, given observations. MLE attempts to find the parameter values that maximize the likelihood function, given the observations.
The method of maximum likelihood is based on the likelihood function, {displaystyle {\mathcal {L}}(\theta \,;x)} {\displaystyle {\mathcal {L}}(\theta \,;x)}. We are given a statistical model, i.e. a family of distributions {\displaystyle \{f(\cdot \,;\theta )\mid \theta \in \Theta \}} {\displaystyle \{f(\cdot \,;\theta )\mid \theta \in \Theta \}}, where {\displaystyle \theta } \theta denotes the (possibly multi-dimensional) parameter for the model. The method of maximum likelihood finds the values of the model parameter, {\displaystyle \theta } \theta , that maximize the likelihood function, {\displaystyle {\mathcal {L}}(\theta \,;x)} {\displaystyle {\mathcal {L}}(\theta \,;x)}. I""/>
</property>
</concept>
<concept>
<code value="TBD:0000136"/>
<display value="Meta-analysis using Empirical Bayes method"/>
</concept>
<concept>
<code value="TBD:0000137"/>
<display value="Meta-analysis using Hunter-Schmidt method"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="meta analysis by Hunter-Schmidt method"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: STATO_0000426: Hunter-Schmidt estimator = Hunter-Schmidt estimator is a data item computed to estimate heterogeneity parameter (estimate of between-study variance) in a random effect model for meta analysis."/>
</property>
</concept>
<concept>
<code value="STATO:0000430"/>
<display
value="Meta-analysis using Hartung-Knapp-Sidik-Jonkman method"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="may need to add synonyms of HKSJ method, Hartung-Knapp method, Sidik-Jonkman method"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: a random effect meta analysis procedure defined by Hartung and Knapp and by Sidik and Jonkman which performs better than DerSimonian and Laird approach, especially when there is heterogeneity and the number of studies in the meta-analysis is small.
also STATO_0000425 Sidik-Jonkman estimator = Sidik-Jonkman estimator is a data item computed to estimate heterogeneity parameter (estimate of between-study variance) in a random effect model for meta analysis."/>
</property>
</concept>
<concept>
<code value="TBD:0000139"/>
<display value="Meta-analysis using modified Knapp-Hartung method"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="may have synonym of mKH method, modified Hartung-Knapp method"/>
</designation>
</concept>
<concept>
<code value="TBD:0000140"/>
<display value="Meta-analysis using Hedges method"/>
</concept>
</concept>
</concept>
<concept>
<code value="TBD:0000124"/>
<display value="Statistical hypothesis test"/>
<concept>
<code value="TBD:0000141"/>
<display value="Between group comparison statistical test"/>
<concept>
<code value="TBD:0000146"/>
<display value="ANOVA"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: uses OBI_0200201: ANOVA or analysis of variance is a data transformation in which a statistical test of whether the means of several groups are all equal."/>
</property>
<concept>
<code value="TBD:0000150"/>
<display value="Multivariate ANOVA"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="MANOVA"/>
</designation>
</concept>
<concept>
<code value="STATO:0000048"/>
<display value="Multiway ANOVA"/>
<definition value="child term ?? 3-way ANOVA"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: Multi-way anova is an analysis of variance where the difference groups being compared are associated to the factor levels of more than 2 independent variables. The null hypothesis is an absence of difference between the means calculated for each of the groups. The test assumes normality and equivariance of the data."/>
</property>
</concept>
<concept>
<code value="STATO:0000044"/>
<display value="One-way ANOVA"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: one-way ANOVA (one factor ANOVA) = one-way anova is an analysis of variance where the different groups being compared are associated with the factor levels of only one independent variable. The null hypothesis is an absence of difference between the means calculated for each of the groups. The test assumes normality and equivariance of the data."/>
</property>
</concept>
<concept>
<code value="TBD:0000153"/>
<display value="Repeated measure ANOVA"/>
</concept>
<concept>
<code value="STATO:0000045"/>
<display value="Two-way ANOVA"/>
<definition
value="child terms ?? 2-way ANOVA without replication ?? 2-way ANOVA with replication"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: two-way ANOVA (two factor ANOVA) = two-way anova is an analysis of variance where the different groups being compared are associated the factor levels of exatly 2 independent variables. The null hypothesis is an absence of difference between the means calculated for each of the groups. The test assumes normality and equivariance of the data."/>
</property>
</concept>
</concept>
<concept>
<code value="TBD:0000147"/>
<display value="Non-parametric test"/>
<concept>
<code value="STATO:0000094"/>
<display value="Kruskal Wallis test"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: Kruskal Wallis test (rank-sum test for the comparison of multiple (more than 2) samples.; H test) = "The Kruskal–Wallis test is a null hypothesis statistical testing objective which allows multiple (n>=2) groups (or conditions or treatments) to be compared, without making the assumption that values are normally distributed. The Kruskal–Wallis test is the non-parametric equivalent of the independent samples ANOVA.
The Kruskal–Wallis test is most commonly used when there is one nominal variable and one measurement variable, and the measurement variable does not meet the normality assumption of an anova.""/>
</property>
</concept>
<concept>
<code value="TBD:0000156"/>
<display value="Log rank test"/>
</concept>
<concept>
<code value="STATO:0000076"/>
<display value="Mann-Whitney U-test"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Wilcoxon Rank-Sum test"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="U test"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Wilcoxon rank-sum test"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="rank-sum test for the comparison of two samples"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: "The Mann-Whitney U-test is a null hypothesis statistical testing procedure which allows two groups (or conditions or treatments) to be compared without making the assumption that values are normally distributed.
The Mann-Whitney test is the non-parametric equivalent of the t-test for independent samples""/>
</property>
</concept>
<concept>
<code value="STATO:0000433"/>
<display value="McNemar test"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: McNemar test (McNemar's Chi-squared Test for Count Data; test of the marginal homogeneity of a contingency table; within-subjects chi-squared test) = "McNemar's test is a statistical test used on paired nominal data. It is applied to 2 × 2 contingency tables with a dichotomous trait, with matched pairs of subjects, to determine whether the row and column marginal frequencies are equal (that is, whether there is ""marginal homogeneity""). It is named after Quinn McNemar, who introduced it in 1947.
An application of the test in genetics is the transmission disequilibrium test for detecting linkage disequilibrium""/>
</property>
</concept>
<concept>
<code value="TBD:0000159"/>
<display value="Sign test"/>
</concept>
<concept>
<code value="TBD:0000160"/>
<display value="Friedman test"/>
</concept>
</concept>
<concept>
<code value="TBD:0000148"/>
<display value="Two sample t-test"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="2-sample t-test, independent"/>
</designation>
<concept>
<code value="STATO:0000303"/>
<display value="Two sample t-test with equal variance"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: two sample t-test with equal variance (t-test for independent means assuming equal variance; two sample t-test) = two sample t-test is a null hypothesis statistical test which is used to reject or accept the hypothesis of absence of difference between the means over 2 randomly sampled populations.
It uses a t-distribution for the test and assumes that the variables in the population are normally distributed and with equal variances."/>
</property>
</concept>
<concept>
<code value="STATO:0000304"/>
<display value="Two sample t-test with unequal variance"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: two sample t-test with unequal variance (t-test for independent means assuming unequal variance; Welsh t-test) = Welch t-test is a two sample t-test used when the variances of the 2 populations/samples are thought to be unequal (homoskedasticity hypothesis not verified). In this version of the two-sample t-test, the denominator used to form the t-statistics, does not rely on a 'pooled variance' estimate."/>
</property>
</concept>
</concept>
<concept>
<code value="STATO:0000052"/>
<display value="Z test for between group comparison"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: Z-test is a statistical test which evaluate the null hypothesis that the means of 2 populations are equal and returns a p-value."/>
</property>
</concept>
<concept>
<code value="TBD:ANCOVA"/>
<display value="ANCOVA"/>
<definition value="analysis of covariance (ANCOVA)"/>
</concept>
</concept>
<concept>
<code value="TBD:0000142"/>
<display value="Chi square test"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: from OBI_0200200: The chi-square test is a data transformation with the objective of statistical hypothesis testing, in which the sampling distribution of the test statistic is a chi-square distribution when the null hypothesis is true, or any in which this is asymptotically true, meaning that the sampling distribution (if the null hypothesis is true) can be made to approximate a chi-square distribution as closely as desired by making the sample size large enough."/>
</property>
<concept>
<code value="TBD:0000163"/>
<display value="Chi square test for homogeneity"/>
</concept>
<concept>
<code value="STATO:0000074"/>
<display value="Mantel-Haenszel method"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value
value="Cochran-Mantel-Haenszel Chi-Squared Test for Count Data"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: Cochran-Mantel-Haenzel test for repeated tests of independence (CHM test; Mantel–Haenszel test) = "Cochran-Mantel-Haenzel test for repeated tests of independence is a statitiscal test which allows the comparison of two groups on a dichotomous/categorical response. It is used when the effect of the explanatory variable on the response variable is influenced by covariates that can be controlled. It is often used in observational studies where random assignment of subjects to different treatments cannot be controlled, but influencing covariates can.
The null hypothesis is that the two nominal variables that are tested within each repetition are independent of each other. So there are 3 variables to consider: two categorical variables to be tested for independence of each other, and the third variable identifies the repeats.""/>
</property>
</concept>
<concept>
<code value="TBD:0000165"/>
<display value="Pearson’s Chi square test of goodness of fit"/>
</concept>
<concept>
<code value="TBD:0000166"/>
<display
value="Pearson’s Chi square test of goodness of independence between categorical variables"/>
<concept>
<code value="TBD:0000167"/>
<display value="Yate’s corrected Chi-Squared test"/>
</concept>
</concept>
</concept>
<concept>
<code value="TBD:0000143"/>
<display value="Single-sample reference comparison statistical test"/>
<concept>
<code value="STATO:0000302"/>
<display value="One sample t-test"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="1-sample t-test"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: "one sample t-test is a kind of Student's t-test which evaluates if a given sample can be reasonably assumed to be taken from the population.
The test compares the sample statistic (m) to the population parameter (M).
The one sample t-test is the small sample analog of the z test, which is suitable for large samples.""/>
</property>
</concept>
<concept>
<code value="TBD:0000169"/>
<display value="Z test for single-sample"/>
</concept>
</concept>
<concept>
<code value="TBD:0000144"/>
<display value="Test of association between categorical variables"/>
<concept>
<code value="STATO:0000148"/>
<display value="Cochran-Armitage test for trend"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: "The Cochran-Armitage test (CATT) s a statistical test used in categorical data analysis when the aim is to assess for the presence of an association between a dichotomous variable (variable with two categories) and a polychotomous variable (a variable with k categories).
The two-level variable represents the response, and the other represents an explanatory variable with ordered levels. The null hypothesis is the hypothesis of no trend, which means that the binomial proportion is the same for all levels of the explanatory variable
For example, doses of a treatment can be ordered as 'low', 'medium', and 'high', and we may suspect that the treatment benefit cannot become smaller as the dose increases. The trend test is often used as a genotype-based test for case-control genetic association studies.""/>
</property>
</concept>
<concept>
<code value="STATO:0000073"/>
<display value="Fisher’s exact test"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: Fisher's exact test is a statistical test used to determine if there are nonrandom associations between two categorical variables."/>
</property>
</concept>
</concept>
<concept>
<code value="TBD:0000145"/>
<display value="Within subject comparison statistical test"/>
<concept>
<code value="STATO:0000095"/>
<display value="Paired t-test"/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="2-sample t-test, dependent"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="matched pair t-test"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: paired t-test (t-test for dependent means) = paired t-test is a statistical test which is specifically designed to analysis differences between paired observations in the case of studies realizing repeated measures design with only 2 repeated measurements per subject (before and after treatment for example)"/>
</property>
</concept>
<concept>
<code value="STATO:0000092"/>
<display value="Wilcoxon signed rank test"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: "The Wilcoxon signed rank test is a statistical test which tests the null hypothesis that the median difference between pairs of observations is zero. This is the non-parametric analogue to the paired t-test, and should be used if the distribution of differences between pairs may be non-normally distributed.
The procedure involves a ranking, hence the name. The absolute value of the differences between observations are ranked from smallest to largest, with the smallest difference getting a rank of 1, then next larger difference getting a rank of 2, etc. Ties are given average ranks. The ranks of all differences in one direction are summed, and the ranks of all differences in the other direction are summed. The smaller of these two sums is the test statistic, W (sometimes symbolized Ts). Unlike most test statistics, smaller values of W are less likely under the null hypothesis.""/>
</property>
</concept>
</concept>
</concept>
<concept>
<code value="TBD:0000Log"/>
<display value="Logarithm"/>
</concept>
</concept>
<concept>
<code value="TBD:model-characteristics"/>
<display value="Statistical Model Characteristic"/>
<definition
value="An aspect, attribute, or feature of a statistical model."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Statistical Model Attribute"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Generative Model Characteristic"/>
</designation>
<property>
<code value="comment"/>
<valueString
value="A statistical model is defined as a set of mathematical relationships that express assumptions related to the generation of the observed data and that sets constraints for the analysis of the data."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-06-05 vote 6-0 by Cauê Monaco, Eric Harvey, Paul Whaley, Jesus Lopez-Alcalde, Sunu Alice Cherian, Harold Lehmann"/>
</property>
<concept>
<code value="TBD:0000118"/>
<display value="Statistical model goal"/>
<concept>
<code value="TBD:0000119"/>
<display value="Adjustment for clustering"/>
</concept>
<concept>
<code value="TBD:0000120"/>
<display value="Adjustment for covariates"/>
</concept>
<concept>
<code value="TBD:ITTA"/>
<display value="Intention-to-treat analysis"/>
</concept>
<concept>
<code value="TBD:PPA"/>
<display value="Per-protocol analysis"/>
</concept>
<concept>
<code value="TBD:sample-size"/>
<display value="Sample size estimation"/>
</concept>
<concept>
<code value="TBD:net-effect-analysis"/>
<display value="Net effect analysis"/>
<concept>
<code value="TBD:OutcomeSetNetEffect"/>
<display value="Set of outcomes (for a net effect analysis)"/>
</concept>
<concept>
<code value="TBD:net-effect-contribution-analysis"/>
<display value="Net effect contribution analysis"/>
</concept>
</concept>
</concept>
<concept>
<code value="TBD:statistical-model-assumption"/>
<display value="Statistical model assumption"/>
<concept>
<code value="TBD:assumption001"/>
<display value="Data distribution assumption of normal distribution"/>
<definition
value="Assumption that the observed data in each comparison group follows a normal distribution."/>
</concept>
<concept>
<code value="TBD:assumption002"/>
<display
value="Data distribution assumption of equal standard deviations"/>
<definition
value="Assumption that the observed data across comparison groups have the same standard deviation."/>
</concept>
<concept>
<code value="TBD:assumption003"/>
<display
value="Data distribution assumption of asymptotic approximation"/>
<definition
value="Assumption that there is sufficient data across the distribution to permit using an approximation that is [asymptotic]."/>
</concept>
</concept>
<concept>
<code value="TBD:statistical-model-assumption-assessment"/>
<display value="Statistical model assumption assessment"/>
<concept>
<code value="TBD:assumption-assessment-001"/>
<display
value="all the expected counts in the cells of the contingency table meet or exceed a threshold"/>
</concept>
</concept>
<concept>
<code value="TBD:unclassified-terms"/>
<display value="Statistical model characteristic not yet classified"/>
<concept>
<code value="TBD:participant-inclusion-criteria-for-analysis"/>
<display value="participant inclusion criteria for analysis"/>
</concept>
<concept>
<code value="TBD:data-inclusion-criteria-for-analysis"/>
<display value="data inclusion criteria for analysis"/>
</concept>
<concept>
<code value="TBD:handling-of-missing-endpoint-data"/>
<display value="handling of missing endpoint data"/>
</concept>
<concept>
<code value="TBD:single-imputation-by-LOCF"/>
<display
value="single imputation by last-observation-carried-forward (LOCF)"/>
</concept>
<concept>
<code
value="TBD:participant-inclusion-criteria-for-secondary-analysis"/>
<display
value="participant inclusion criteria for secondary analysis"/>
</concept>
<concept>
<code value="TBD:data-inclusion-criteria-for-secondary-analysis"/>
<display value="data inclusion criteria for secondary analysis"/>
</concept>
<concept>
<code value="TBD:statistical-software-package"/>
<display value="statistical software package"/>
</concept>
<concept>
<code value="TBD:sample-size-per-group"/>
<display value="sample size per group"/>
</concept>
<concept>
<code value="TBD:prospective-sample-permutation-testing"/>
<display value="prospective sample permutation testing"/>
</concept>
<concept>
<code value="TBD:number-of-permutations-sampled"/>
<display value="number of permutations sampled"/>
</concept>
<concept>
<code value="TBD:primary-analytic-method"/>
<display value="primary analytic method"/>
</concept>
<concept>
<code value="TBD:identify-source-of-interaction"/>
<display value="identify source(s) of significant interaction"/>
</concept>
<concept>
<code value="TBD:rank-based-analytic-method"/>
<display value="rank-based analytic method"/>
</concept>
</concept>
</concept>
<concept>
<code value="TBD:model-component"/>
<display value="Statistical Model Component"/>
<definition value="A part of a statistical model."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel, Harold Lehmann"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A statistical model is defined as a set of mathematical relationships that express assumptions related to the generation of the observed data and that sets constraints for the analysis of the data.
Statistical model components include graphical structures (e.g. directed acyclic graph), equations (e.g. regression model form), components of equations (e.g. covariate term), and distributional assumptions (e.g. regression error distribution)."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-06-05 vote 6-0 by Cauê Monaco, Eric Harvey, Paul Whaley, Jesus Lopez-Alcalde, Sunu Alice Cherian, Harold Lehmann"/>
</property>
<concept>
<code value="TBD:0000088"/>
<display value="Covariate term"/>
</concept>
<concept>
<code value="STATO:0000469"/>
<display value="Interaction term"/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: model interaction effect term = a model interaction effect term is a model term which accounts for variation explained by the combined effects of the factor levels of more than one (usually 2) independent variables."/>
</property>
</concept>
<concept>
<code value="TBD:0000106"/>
<display value="Regression Model Form"/>
<concept>
<code value="TBD:0000107"/>
<display value="Log Linear Regression"/>
</concept>
<concept>
<code value="TBD:0000108"/>
<display value="Polynomial Regression"/>
</concept>
<concept>
<code value="TBD:0000109"/>
<display value="Cox Proportional Hazards"/>
</concept>
</concept>
</concept>
<concept>
<code value="TBD:PDA"/>
<display value="probability distribution attribute"/>
<definition
value="An aspect, characteristic, or feature of a probability distribution."/>
<property>
<code value="comment"/>
<valueString
value="A probability distribution is represented by a combination of probability distribution attributes."/>
</property>
<property>
<code value="editors"/>
<valueString value="Brian S. Alper, Harold Lehmann, Muhammad Afzal"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann,: Eric Harvey, Mario Tristan"/>
</property>
<concept>
<code value="TBD:0000110"/>
<display value="probability distribution class"/>
<definition
value="A probability distribution attribute that communicates how the likelihood of a specified outcome is calculated."/>
<property>
<code value="comment"/>
<valueString
value="The probability distribution class defines the assumed model. Parametric probability distribution classes are determined by parameters."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Kenneth Wilkins, Brian S. Alper, Muhammad Afzal, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan"/>
</property>
<concept>
<code value="TBD:0000111"/>
<display value="normal distribution"/>
<definition
value="A probability distribution class in which instances are unimodal, symmetric, and defined by two parameters, mean and standard deviation."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Gaussian distribution"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="Term IRI: http://purl.obolibrary.org/obo/STATO_0000227
Definition: A normal distribution is a continuous probability distribution described by a probability distribution function described here: http://mathworld.wolfram.com/NormalDistribution.html"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Normal distribution is commonly used to approximate the sampling distribution of quantities estimated from samples.
Variance is the square of standard deviation. Variance is sometimes used instead of standard deviation as a parameter for defining a normal distribution.
Standard normal distribution is a special case of normal distribution with a mean = 0, variance = 1, and kurtosis = 3.
All normal distributions have skewness = 0."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Philippe Rocca-Serra, Ken Wilkins, Joanne Dehnbostel, Khalid Shahin, Brian S. Alper, Harold Lehmann"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-08-07 vote 5-0 by Joanne Dehnbostel, Mario Tristan, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann"/>
</property>
</concept>
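<!-- Editorial illustration only (not a concept or property of this code system; example values chosen arbitrarily): a normal distribution with mean 100 and standard deviation 15 has variance 15^2 = 225, and the standard normal distribution is the special case with mean 0 and variance 1, per the comment above. -->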
<concept>
<code value="STATO:0000438"/>
<display value="log normal distribution"/>
<definition
value="A probability distribution class in which the logarithm transformed values of a variable follow a normal distribution. Instances of the log normal distribution class are unimodal and skewed. Variables can only be non-negative real values."/>
<property>
<code value="comment"/>
<valueString
value="Log normal distribution is commonly used to approximate the distribution of times and costs.
The mean of a log normal distribution is the geometric mean of the log transformed values.
Log transformed means the natural log of values replace those values.
Normal distribution is defined as a probability distribution class in which instances are unimodal, symmetric, and defined by two parameters, mean and standard deviation."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Brian S. Alper, Khalid Shahin"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-08-07 vote 4-1 by Joanne Dehnbostel, Mario Tristan, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-08-07 comment: (tweak to the definition):
A probability distribution class in which the logarithm transformed values of a variable follow a normal distribution. Instances of the log normal distribution class are unimodal and skewed.
STATO_0000438"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan"/>
</property>
</concept>
<concept>
<code value="STATO:0000160"/>
<display value="exponential distribution"/>
<definition
value="A probability distribution class defined by a single parameter, rate. Instances of the exponential distribution class are unimodal and skewed. Variables can only be non-negative real values."/>
<property>
<code value="comment"/>
<valueString
value="Exponential distribution is commonly used to represent the distribution of independent events occurring at the same rate over time.
The mean and standard deviation of an exponential distribution are each the reciprocal of the rate."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Brian S. Alper, Khalid Shahin"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-08-07 vote 4-1 by Joanne Dehnbostel, Mario Tristan, Philippe Rocca-Serra, Eric Harvey, Harold Lehmann"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-08-07 comment: (tweak to the definition):
A probability distribution class defined by a single parameter, rate and commonly used to represent the distribution of independent events occurring at the same rate over time. Instances of the exponential distribution class are unimodal, skewed,
STATO_0000160"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan"/>
</property>
</concept>
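<!-- Editorial illustration only (not a concept or property of this code system; example values chosen arbitrarily): for an exponential distribution with rate 0.5 events per day, the mean and the standard deviation are each the reciprocal of the rate, 1/0.5 = 2 days, per the comment above. -->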
<concept>
<code value="STATO:0000149"/>
<display value="binomial distribution"/>
<definition
value="A probability distribution class defined by two parameters: the number of independent trials, n, and the probability of success, p. Variables can only be dichotomous values."/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: binomial logistic regression for analysis of dichotomous dependent variable = binomial logistic regression model is a model which attempts to explain data distribution associated with *dichotomous* response/dependent variable in terms of values assumed by the independent variable uses a function of predictor/independent variable(s): the function used in this instance of regression modeling is logistic function.
also STATO_0000276: binomial distribution = The binomial distribution is a discrete probability distribution which describes the probability of k successes in n draws with replacement from a finite population of size N.
The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N.
The binomial distribution gives the discrete probability distribution of obtaining exactly n successes out of N Bernoulli trials (where the result of each Bernoulli trial is true with probability p and false with probability q=1-p )
notation: B(n,p)
The mean is N*p
The variance is N*p*q"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Binomial distribution is commonly used to approximate the probability of a dichotomous state (presence/absence, success/failure, true/false).
The mean of a binomial distribution is the number of independent trials, n, multiplied by the probability of success, p. n * p
The variance of a binomial distribution is the number of independent trials, n, multiplied by the probability of success, p, multiplied by the probability of failure, 1-p. n * p * q where q = 1 - p"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Joanne Dehnbostel, Kenneth Wilkins, Muhammad Afzal, Brian S. Alper"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan"/>
</property>
</concept>
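<!-- Editorial illustration only (not a concept or property of this code system; example values chosen arbitrarily): for a binomial distribution with n = 20 independent trials and success probability p = 0.3, the mean is n*p = 6 and the variance is n*p*(1-p) = 20*0.3*0.7 = 4.2, per the comment above. -->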
<concept>
<code value="STATO:0000109"/>
<display value="multinomial distribution"/>
<definition
value="A probability distribution class defined by multiple parameters: the number of independent trials, n, the number of categories, k, and k-1 probabilities of success. Variables can only be polychotomous values."/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: multinomial logistic regression for analysis of dichotomous dependent variable = multinomial logistic regression model is a model which attempts to explain data distribution associated with *polychotomous* response/dependent variable in terms of values assumed by the independent variable uses a function of predictor/independent variable(s): the function used in this instance of regression modeling is logistic function.
also multinomial distribution (STATO_0000103) = the multinomial distribution is a probability distribution which gives the probability of any particular combination of numbers of successes for various categories defined in the context of n independent trials each of which leads to a success for exactly one of k categories, with each category having a given fixed success probability."/>
</property>
<property>
<code value="comment"/>
<valueString
value="Multinomial distribution is commonly used to approximate the probability of a categorical outcome across a discrete number of mutually exclusive possible categories. A classic example is rolling a six-sided die.
For *n* independent trials, the expected (mean) number of times category *i* will appear is *n* multiplied by the probability of success, *p_i*. *n* * *p_i*
The variance of that expectation is *n* multiplied by *p_i* multiplied by the probability of failure, 1-*p_i*"/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Joanne Dehnbostel, Brian S. Alper"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan"/>
</property>
</concept>
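<!-- Editorial illustration only (not a concept or property of this code system; example values chosen arbitrarily): for n = 60 rolls of a fair six-sided die (k = 6 categories, each with p_i = 1/6), the expected count for any one face is n*p_i = 10 and the variance of that count is n*p_i*(1-p_i) = 60*(1/6)*(5/6) ≈ 8.3, per the comment above. -->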
<concept>
<code value="STATO:0000051"/>
<display value="Poisson distribution"/>
<definition
value="A probability distribution class defined by one parameter: a non-negative real number, λ. Random variables following a Poisson distribution can only have non-negative integer values."/>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: STATO_0000051 is Poisson distribution = "Poisson distribution is a probability distribution used to model the number of events occurring within a given time interval. It is defined by a real number (λ) and an integer k representing the number of events and a function.
The expected value of a Poisson-distributed random variable is equal to λ and so is its variance.""/>
</property>
<property>
<code value="comment"/>
<valueString
value="Poisson distribution is commonly used to approximate the number (count) of events occurring within a given time interval or given spatial region.
The expected value of a Poisson-distributed random variable is equal to λ and so is its variance."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Joanne Dehnbostel, Brian S. Alper, Muhammad Afzal"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-10-02 comment: The other definitions include something about what is called the "support" (binary, polychotomous). Here, we should say, to be consistent, "Variables can take on only non-negative integral values.""/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-11-20 vote 5-0 by Brian S. Alper, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Eric Harvey"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan BUT definition changed based on comment"/>
</property>
</concept>
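<!-- Editorial illustration only (not a concept or property of this code system; example values chosen arbitrarily): for a Poisson distribution with λ = 4 events per hour, the expected count in one hour is 4 and the variance is also 4 (standard deviation 2), since both equal λ. -->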
<concept>
<code value="STATO:0000283"/>
<display value="negative binomial distribution"/>
<definition
value="A probability distribution class for discrete data of the number of successes in a sequence of Bernoulli trials before a specified number (denoted r) of failures occur."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="Pascal distribution"/>
</designation>
<property>
<code value="external-definitions"/>
<valueString
value="STATO: STATO_0000283: negative binomial distribution (Pascal distribution; Pólya distribution) = negative binomial probability distribution is a discrete probability distribution of the number of successes in a sequence of Bernoulli trials before a specified (non-random) number of failures (denoted r) occur. The negative binomial distribution, also known as the Pascal distribution or Pólya distribution, gives the probability of r-1 successes and x failures in x+r-1 trials, and success on the (x+r)th trial."/>
</property>
<property>
<code value="comment"/>
<valueString
value="The negative binomial distribution, also known as the Pascal distribution, gives the probability of r-1 successes and x failures in x+r-1 trials, and success on the (x+r)th trial. Pólya distribution is a variation of negative binomial distribution used for all real numbers, not just non-negative integers."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Kenneth Wilkins, Muhammad Afzal, Joanne Dehnbostel, Brian S. Alper"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan"/>
</property>
</concept>
</concept>
<concept>
<code value="TBD:mu"/>
<display value="distribution mean"/>
<definition
value="A probability distribution attribute that represents the expected value of a variable that has that distribution."/>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel, Muhammad Afzal"/>
</property>
<property>
<code value="comment"/>
<valueString
value="For a normal distribution, the distribution parameter mean (also called μ or mu) coincides with the mean of the distribution."/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-06-12 comments: mu represents population mean. It is a measure of central tendency that represents the average value of a variable within an entire population.
To avoid any ambiguity, rather than mu I would use the alternative term μ.
As reported by Wikipedia:
In Ancient Greek, the name of the letter was written μῦ and pronounced [mŷː], but in Modern Greek, the letter is spelled μι and pronounced [mi]. In polytonic orthography, it is written with an acute accent: μί."/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-06-12 vote 2-2 by Sunu Alice Cherian, Harold Lehmann, Paola Rosati, Eric Harvey"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-09 vote 5-0 by Louis Leff, Jesus Lopez-Alcalde, Harold Lehmann, Eric Harvey, Mario Tristan"/>
</property>
</concept>
<concept>
<code value="TBD:0000051"/>
<display value="distribution standard deviation"/>
<definition
value="A probability distribution attribute that is the square root of the distribution variance."/>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Dispersion"/>
</property>
<property>
<code value="comment"/>
<valueString
value="A distribution variance is defined as a probability distribution attribute that is the expected value of the square of the difference of the value of a variable that has that distribution from its expected value.
For a normal distribution, the distribution parameter standard deviation (also called σ or sigma) coincides with the standard deviation of the distribution. Standard deviation is defined as a measure of dispersion that represents the average of the distances from the mean of the dataset to each data point in the dataset."/>
</property>
<property>
<code value="editors"/>
<valueString value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-06-12 vote 3-1 by Sunu Alice Cherian, Harold Lehmann, Paola Rosati, Eric Harvey"/>
</property>
<property>
<code value="expert-comments"/>
<valueString
value="2023-06-12 comment: sigma represents population standard deviation, It is a measure of the dispersion or spread of data points within an entire population."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-16 vote 5-0 by Harold Lehmann, Eric Harvey,: Muhammad Afzal, Louis Leff, Jesus Lopez-Alcalde"/>
</property>
</concept>
<concept>
<code value="TBD:model-parameter"/>
<display value="probability distribution parameter"/>
<definition
value="A member of a set of quantities that unambiguously defines a probability distribution function."/>
<property>
<code value="comment"/>
<valueString
value="Parameters serve different roles in defining distributions. Location parameters define the position along the range of possible values. Shape and scale parameters define the dispersion around the expected value.
When the probability distribution parameters have values, the set of values defines a particular probability distribution function.
When a statistic applies to a specific set of data, the specific set of data is called a sample and the statistic is called the sample statistic. Likewise, when a probability distribution parameter applies to the group from which a sample may be derived, the group is called a population and the probability distribution parameter is called a population parameter."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins, Joanne Dehnbostel"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-11-20 vote 5-0 by Brian S. Alper, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Eric Harvey"/>
</property>
<property>
<code value="negative-vote"/>
<valueString
value="2023-06-12 vote 5-0 by Brian S. Alper, Sunu Alice Cherian, Harold Lehmann, Paola Rosati, Eric Harvey BUT the term then changed in committee to grapple with sub-terms"/>
</property>
<concept>
<code value="TBD:mean-normal"/>
<display value="Mean as normal-distribution parameter"/>
<definition
value="A probability distribution parameter for a normal distribution that provides the location of the distribution."/>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Brian S. Alper, Kenneth Wilkins"/>
</property>
<property>
<code value="comment"/>
<valueString
value="This parameter is generally denoted as μ or mu."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-16 vote 5-0 by Harold Lehmann, Eric Harvey,: Muhammad Afzal, Louis Leff, Jesus Lopez-Alcalde"/>
</property>
</concept>
<concept>
<code value="TBD:variance-normal"/>
<display value="Variance as normal-distribution parameter"/>
<definition
value="A probability distribution parameter for a normal distribution that provides the dispersion of the distribution."/>
<property>
<code value="editors"/>
<valueString
value="Harold Lehmann, Brian S. Alper, Kenneth Wilkins"/>
</property>
<property>
<code value="comment"/>
<valueString
value="This parameter is generally denoted as σ^2 or sigma-squared."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-16 vote 5-0 by Harold Lehmann, Eric Harvey,: Muhammad Afzal, Louis Leff, Jesus Lopez-Alcalde"/>
</property>
</concept>
</concept>
<concept>
<code value="TBD:0000053"/>
<display value="distribution variance"/>
<definition
value="A probability distribution attribute that is the expected value of the square of the difference of the value of a variable that has that distribution from its expected value."/>
<property>
<code value="editors"/>
<valueString value="Brian S. Alper, Harold Lehmann, Kenneth Wilkins"/>
</property>
<property>
<code value="comment"/>
<valueString
value="For a normal distribution, the distribution parameter variance (also called σ^2 or sigma-squared) coincides with the variance of the distribution."/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-10-16 vote 5-0 by Harold Lehmann, Eric Harvey,: Muhammad Afzal, Louis Leff, Jesus Lopez-Alcalde"/>
</property>
<concept>
<code value="TBD:0000056"/>
<display value="variance of the sampling distribution"/>
<definition
value="A distribution variance in which the distribution is a sampling distribution of a given statistic."/>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="sampling variance"/>
</designation>
<designation>
<use>
<system value="http://snomed.info/sct"/>
<code value="900000000000013009"/>
<display value="Synonym (core metadata concept)"/>
</use>
<value value="variance of the empirical distribution"/>
</designation>
<property>
<code value="statistical-purpose"/>
<valueString value="Measure of Dispersion"/>
</property>
<property>
<code value="comment"/>
<valueString
value="Distribution variance is defined as a probability distribution attribute that is the expected value of the square of the difference of the value of a variable that has that distribution from its expected value.
A sampling distribution is a distribution of values for the given statistic derived from a set of random independent samples from the same population. The samples may be theoretical or actual."/>
</property>
<property>
<code value="editors"/>
<valueString
value="Brian S. Alper, Kenneth Wilkins, Harold Lehmann"/>
</property>
<property>
<code value="approval"/>
<valueString
value="2023-11-20 vote 5-0 by Brian S. Alper, Jesus Lopez-Alcalde, Harold Lehmann, Muhammad Afzal, Eric Harvey"/>
</property>
</concept>
</concept>
</concept>
</CodeSystem>
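The fragment below is an editorial usage sketch, not part of the CodeSystem resource above: it illustrates how a concept defined here (STATO:0000073, Fisher’s exact test) might be carried as a coding in a FHIR CodeableConcept. The enclosing statisticType element is assumed for illustration (for example within Evidence.statistic), the system URL is assumed to be this CodeSystem's canonical URL, and no binding is asserted here.
<statisticType>
  <coding>
    <system value="https://fevir.net/resources/CodeSystem/181513"/>
    <code value="STATO:0000073"/>
    <display value="Fisher’s exact test"/>
  </coding>
</statisticType>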
IG © 2022+ HL7 International / Clinical Decision Support. Package hl7.fhir.uv.ebm#1.0.0-ballot based on FHIR 5.0.0. Generated 2023-12-17