This page is part of the FHIR Specification (v3.3.0: R4 Ballot 2). The current version, which supersedes this version, is 5.0.0. For a full list of available versions, see the Directory of published versions.
Work Group: FHIR Infrastructure & Security | Ballot Status: Informative
The Security and Privacy Module describes how to protect a FHIR server (through access control and authorization), how to document what permissions a user has granted (consent), and how to keep records about what events have been performed (audit logging and provenance). FHIR does not mandate a single technical approach to security and privacy; rather, the specification provides a set of building blocks that can be applied to create secure, private systems.
The Security and Privacy module includes the following materials:
Resources | Datatypes | Implementation Guidance and Principles
The following common use-cases are elaborated below:
FHIR focuses on the data access methods and encoding, leveraging existing security solutions. Security in FHIR needs to focus on the set of considerations required to ensure that data can be discovered, accessed, or altered only in accordance with expectations and policies. Implementations should leverage existing security standards and implementations to ensure that:
For general security considerations and principles, see Security. Leverage mature security frameworks that cover device security, cloud security, big-data security, service-to-service security, etc. See NIST Mobile Device Security and OWASP Mobile Security. These security frameworks include prioritized lists of the most important concerns.
Privacy in FHIR includes the set of considerations required to ensure that individual data are treated according to an individual's Privacy Principles. FHIR includes implementation guidance to ensure that:
Use case: A FHIR server should ensure that API access is allowed for authorized requests and denied for unauthorized requests.
Approach: Authorization details can vary according to local policy, and according to the access scenario (e.g. sharing data among institution-internal subsystems vs. sharing data with trusted partners vs. sharing data with third-party user-facing apps). In general, FHIR enables a separation of concerns between the FHIR REST API and standards-based authorization protocols like OAuth. For the use case of user-facing third-party app authorization, we recommend the OAuth-based SMART protocol (see Security: Authentication) as an externally-reviewed authorization mechanism with a real-world deployment base, but we note that community efforts are underway to explore a variety of approaches to authorization. For further details, see Security: Authorization and Access Control.
Use-Case: When a user with restricted rights attempts a query they do not have rights to, they should not be given the data. Policy should determine whether the user's query results in an error, zero data, or the data one would get after removing the non-authorized parameters.
Example: Using _include or _revinclude to get at resources beyond those authorized. Ignoring (removing) the _include parameter would give some results, just not the _include Resources. This could be handled silently, thus returning partial results, or it could be returned as an error.
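The parameter-stripping approach described above can be sketched as follows. This is a hypothetical helper: the set of blocked parameters, and the choice between proceeding silently and reporting an error, are local policy decisions.

```python
from urllib.parse import parse_qsl, urlencode

# Hypothetical policy: this client may not follow include links.
BLOCKED_PARAMS = {"_include", "_revinclude"}

def strip_unauthorized_params(query_string: str) -> tuple[str, list[str]]:
    """Remove search parameters the caller is not authorized to use.

    Returns the filtered query string plus the list of removed
    parameters, so the server can either proceed silently or
    report an error, per local policy.
    """
    kept, removed = [], []
    for name, value in parse_qsl(query_string):
        if name in BLOCKED_PARAMS:
            removed.append(f"{name}={value}")
        else:
            kept.append((name, value))
    return urlencode(kept), removed

query, dropped = strip_unauthorized_params(
    "code=1234-5&_include=Observation:subject")
# query   -> "code=1234-5"
# dropped -> ["_include=Observation:subject"]
```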
Use case: "Access to protected Resources is enabled through user Role-Based, Context-Based, and/or Attribute-Based Access Control."
Approach: Users should be identified and should have their Functional and/or Structural role declared when these roles are related to the functionality the user is interacting with. Roles should be conveyed using standard codes from Security Role Vocabulary.
A purpose of use should be asserted for each requested action on a Resource. Purpose of use should be conveyed using standard codes from Purpose of Use Vocabulary.
When using OAuth, the requested action on a Resource, the specified purpose(s) of use, and the role of the user are managed by the OAuth authorization server (AS) and may be communicated in the security token where JWT tokens are used. For details, see Security: HCS vocabulary.
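As a sketch, the claims such a security token might carry could look like the following. The claim names `role` and `purpose_of_use`, and the issuer URL, are illustrative assumptions, not names defined by FHIR or OAuth; the codings are drawn from the Security Role and Purpose of Use vocabularies referenced above.

```python
# Illustrative JWT claim set an OAuth authorization server might issue.
# Claim names and the issuer URL are hypothetical.
token_claims = {
    "iss": "https://auth.example.org",      # hypothetical authorization server
    "sub": "practitioner-123",
    "role": {                               # Security Role Vocabulary coding
        "system": "http://terminology.hl7.org/CodeSystem/v3-RoleClass",
        "code": "PROV",                     # healthcare provider
    },
    "purpose_of_use": {                     # Purpose of Use Vocabulary coding
        "system": "http://terminology.hl7.org/CodeSystem/v3-ActReason",
        "code": "TREAT",                    # treatment
    },
}
```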
Use case: "A FHIR server should keep a complete, tamper-proof log of all API access and other security- and privacy-relevant events".
Approach: FHIR provides an AuditEvent resource suitable for use by FHIR clients and servers to record when a security or privacy relevant event has occurred. This form of audit logging records as much detail as reasonable at the time the event happened. The FHIR AuditEvent is aligned and cross-referenced with IHE Audit Trail and Node Authentication (ATNA) Profile. For details, see Security: Audit.
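A minimal AuditEvent for a RESTful access might be assembled like this. This is a sketch: element names follow the R4 AuditEvent resource (confirm them against your target FHIR version), and the observer reference `Device/fhir-server` is a placeholder.

```python
from datetime import datetime, timezone

def make_rest_audit_event(user_ref: str, resource_ref: str, action: str) -> dict:
    """Build a minimal AuditEvent recording a RESTful access.

    A sketch: element names follow R4 AuditEvent; the observer
    reference is a placeholder for the recording server.
    """
    return {
        "resourceType": "AuditEvent",
        "type": {  # "rest" from the audit-event-type vocabulary
            "system": "http://terminology.hl7.org/CodeSystem/audit-event-type",
            "code": "rest",
        },
        "action": action,  # C | R | U | D | E
        "recorded": datetime.now(timezone.utc).isoformat(),
        "outcome": "0",    # success
        "agent": [{"who": {"reference": user_ref}, "requestor": True}],
        "source": {"observer": {"reference": "Device/fhir-server"}},  # placeholder
        "entity": [{"what": {"reference": resource_ref}}],
    }

event = make_rest_audit_event("Practitioner/123", "Patient/456", "R")
```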
Use case: "A Patient should be offered a report that informs them how their data is Collected, Used, and Disclosed."
Approach: The AuditEvent resource can inform this report.
There are many motivations to provide a Patient with a report on how their data was used. A very restricted version of this exists in HIPAA as an "Accounting of Disclosures"; other versions would include more accesses. The result is a human-readable report. The raw material used to create this report can be derived from a well-recorded security audit log, specifically one based on AuditEvent. The format of the report delivered to the Patient is not further discussed, but might be: printed on paper, PDF, a comma-separated file, or a FHIR Document made up of filtered and crafted AuditEvent Resources. The report would indicate, to the best ability, Who accessed What data from Where, at When, and for Why (purpose). 'Best ability' recognizes that some events happen during emergent conditions where some knowledge is not knowable. The report must usually take care not to abuse the privacy rights of the individuals who accessed the data (Who), and should describe the data that was accessed (What) rather than duplicate it.
Some events are known to be subject to the Accounting of Disclosures report when the event happens, and thus can be recorded as an Accounting of Disclosures - see the example Accounting of Disclosures. Other events must be pulled from the security audit log. A security audit log will record ALL actions upon data, regardless of whether they are reportable to the Patient, because the security audit log is used for many other purposes - see Audit Logging. These recorded AuditEvents may need to be manipulated to protect organization or employee (provider) privacy constraints. Given the large number of AuditEvents, there may be multiple records of the same actual access event, so the reporting will need to de-duplicate.
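The de-duplication step above can be sketched as follows. The de-duplication key chosen here (who, what, and the timestamp truncated to the minute) is an assumption for illustration; the actual key is a local policy choice, not something FHIR defines.

```python
def deduplicate_events(audit_events: list[dict]) -> list[dict]:
    """Collapse AuditEvents that record the same actual access.

    A sketch: the key (who, what, timestamp-to-the-minute) is a
    policy choice, not defined by FHIR.
    """
    seen, unique = set(), []
    for ev in audit_events:
        key = (ev["agent"][0]["who"]["reference"],
               ev["entity"][0]["what"]["reference"],
               ev["recorded"][:16])   # truncate to minute granularity
        if key not in seen:
            seen.add(key)
            unique.append(ev)
    return unique
```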
Use case: "Documentation of a Patient's Privacy Consent Directive - rules for Collection, Use, and Disclosure of their health data."
Approach: FHIR provides a Consent resource suitable for use by FHIR clients and servers to record the current Privacy Consent state. The meaning of a consent, or the absence of a consent, is a local policy concern. The Privacy Consent may be a pointer to privacy rules documented elsewhere, such as a policy identifier or an identifier in XACML. The Privacy Consent can point at a scanned image of an ink-on-paper signing ceremony, and supports digital signatures through use of Provenance. It can also include some simple FHIR-centric base and exception rules.
All uses of FHIR Resources would be security/privacy relevant and thus should be recorded in an AuditEvent. Those accesses qualifying as a Disclosure should additionally be recorded as such; see the Disclosure Audit Event Example.
For Privacy Consent guidance and examples, see Consent Resource.
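A minimal opt-in privacy Consent along the lines described above might look like this. This is a sketch: element names follow the R4 Consent resource (confirm them against your target FHIR version), and the policy URI, attachment URL, and patient reference are placeholders.

```python
# A minimal privacy Consent. Element names follow R4 Consent;
# the policy URI and attachment URL are hypothetical.
consent = {
    "resourceType": "Consent",
    "status": "active",
    "category": [{"coding": [{
        "system": "http://loinc.org",
        "code": "59284-0"}]}],          # Patient Consent document code
    "patient": {"reference": "Patient/456"},
    "dateTime": "2018-05-01T12:00:00Z",
    # Pointer to privacy rules documented elsewhere (hypothetical URI).
    "policy": [{"uri": "https://example.org/policies/opt-in"}],
    # Scanned image of the ink-on-paper signing ceremony (hypothetical URL).
    "sourceAttachment": {
        "contentType": "application/pdf",
        "url": "https://example.org/consents/456.pdf"},
}
```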
Use case: "All FHIR Resources should be capable of having the Provenance fully described."
Approach: FHIR provides the Provenance resource suitable for use by FHIR clients and servers to record the full provenance details: who, what, where, when, and why. A Provenance resource can record details for Create, Update, and Delete; or any other activity. Generally, Read operations would be recorded using AuditEvent. Many Resources include these elements within; this is done when that provenance element is critical to the use of that Resource. This overlap is expected and cross-referenced on the W5 report. For details, see Provenance Resource.
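A minimal Provenance covering the who/what/where/when/why dimensions might look like this. This is a sketch: element names follow the R4 Provenance resource (confirm them against your target FHIR version), and all references are placeholders.

```python
# A minimal Provenance. Element names follow R4 Provenance;
# all references are placeholders.
provenance = {
    "resourceType": "Provenance",
    "target": [{"reference": "Observation/789"}],           # what
    "recorded": "2018-05-01T12:00:00Z",                     # when
    "location": {"reference": "Location/1"},                # where
    "reason": [{"coding": [{                                # why
        "system": "http://terminology.hl7.org/CodeSystem/v3-ActReason",
        "code": "TREAT"}]}],
    "activity": {"coding": [{                               # the recorded action
        "system": "http://terminology.hl7.org/CodeSystem/v3-DataOperation",
        "code": "CREATE"}]},
    "agent": [{"who": {"reference": "Practitioner/123"}}],  # who
}
```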
Use case: "For any given query, need Provenance records also."
Approach: Given that a system is using Provenance records: when one needs the Provenance records in addition to the results of a query on other records (e.g. a query on MedicationRequest), one uses reverse include to request that all Provenance records also be returned; that is, add ?_revinclude=Provenance:target. For details, see _revinclude.
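For example, such a query URL might be built like this (the base URL and patient reference are hypothetical; only the `_revinclude=Provenance:target` parameter comes from the text above):

```python
from urllib.parse import urlencode

base = "https://fhir.example.org/baseR4"   # hypothetical FHIR server
params = urlencode({
    "subject": "Patient/456",              # hypothetical search criterion
    "_revinclude": "Provenance:target",    # also return matching Provenance
})
url = f"{base}/MedicationRequest?{params}"
# url -> ".../MedicationRequest?subject=Patient%2F456&_revinclude=Provenance%3Atarget"
```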
Use case: "Digital Signature is needed to prove authenticity, integrity, and non-repudiation."
Approach: FHIR Resources are often part of a Medical Record or are communicated as part of formal Medical Documentation. As such, there is a need to cryptographically bind a signature so that the receiving or consuming actor can verify authenticity, integrity, and non-repudiation. This functionality is provided through the signature element in the Provenance Resource. The signature can be any signature agreed to by local policy, including Digital Signature methods and Electronic Signatures. For details, see Security: Digital Signatures.
Digital Signatures cryptographically bind the exact contents, so that any changes will make the Digital Signature invalid. When a Resource is created or updated, the server is expected to update relevant elements that it manages (id, lastUpdated, etc.). These changes, although expected of normal RESTful create/update operations, will break any Digital Signature calculated beforehand. One solution is to create the Digital Signature after the REST create operation completes: one first confirms that the resulting created/updated Resource is as expected, and only then forms the Digital Signature.
A variation of this happens in Messaging, Documents, and other interaction models. For details, see Ramifications of storage/retrieval variations.
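The sign-after-create sequence can be illustrated with a digest standing in for a signature. This is only a sketch: a real Digital Signature requires an agreed canonicalization standard and a signing key, not a bare hash, and the resource content here is fabricated for illustration.

```python
import hashlib
import json

def canonical_digest(resource: dict) -> str:
    """Digest over a resource's canonical JSON (a stand-in for a
    real signature, which would use proper canonicalization and a
    signing key)."""
    canonical = json.dumps(resource, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# 1. The client POSTs the resource; the server assigns id and
#    meta.lastUpdated. This dict represents the server's stored copy.
stored = {"resourceType": "Observation", "id": "789",
          "meta": {"lastUpdated": "2018-05-01T12:00:00Z"},
          "status": "final"}

# 2. After confirming the stored resource is as expected, the client
#    computes the signature over the server's copy.
signed_digest = canonical_digest(stored)

# 3. Any later change to the stored content invalidates the signature.
stored["status"] = "amended"
assert canonical_digest(stored) != signed_digest
```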
De-Identification is inclusive of pseudonymization and anonymization; these are processes that reduce privacy risk by eliminating and modifying data elements to meet a targeted use-case.
Use-Case: "Requesting Client should have access to De-Identified data only."
Trigger: Based on an Access Control decision that results in a permit with an Obligation to De-Identify, the Results delivered to the Requesting Client would be de-identified.
Consideration: This assumes the system knows the type and intensity of the de-identification algorithm. De-identification is best viewed as a process, not an algorithm: a process that reduces privacy risk while enabling a targeted and authorized use-case.
Modifying an element: The de-identification process may determine that specific elements need to be modified to lower privacy risk. Some methods of modifying are: eliminating the element, setting to a static value (e.g. "removed"), fuzzing (e.g. adjusting by some random value), masking (e.g. encryption), pseudonym (e.g. replace with an alias), etc. See standards below for further details.
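Some of these modification methods can be sketched against an Observation. This is a hypothetical helper: which elements to treat, and which method to apply to each, is decided by the de-identification process for the targeted use-case, and the pseudonym table stands in for the trusted third party discussed below.

```python
def deidentify_observation(obs: dict, pseudonyms: dict) -> dict:
    """Apply several modification methods to a copy of an Observation.

    A sketch: element selection and method choice come from the
    de-identification process for the targeted use-case.
    """
    out = dict(obs)
    out.pop("subject", None)                         # eliminate a direct identifier
    if "performer" in out:
        out["performer"] = [{"display": "removed"}]  # static value
    if "effectiveDateTime" in out:
        out["effectiveDateTime"] = out["effectiveDateTime"][:7]  # generalize to month
    if "specimen" in out:                            # pseudonym via trusted third party
        out["specimen"] = {"reference": pseudonyms[out["specimen"]["reference"]]}
    return out

# Hypothetical pseudonym table held by a trusted third party.
table = {"Specimen/abc": "Specimen/pseudo-1"}
obs = {"resourceType": "Observation",
       "subject": {"reference": "Patient/456"},
       "performer": [{"reference": "Practitioner/123"}],
       "effectiveDateTime": "2018-05-01T12:00:00Z",
       "specimen": {"reference": "Specimen/abc"}}
safe = deidentify_observation(obs, table)
```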
Discussion: With the Observation Resource, one would remove the subject element as it is a Direct Identifier. However, there are many other Reference elements that can easily be used to navigate back to the Subject; e.g., Observation.context value of Encounter or EpisodeOfCare; or Observation.performer.
Some identifiers in Observation Resource:
Emphasis: The .specimen element is a direct identifier of a particular specimen, and would be an indirect identifier of a particular person; retaining the specimen identifier therefore carries re-identification risk. One solution is to create pseudo Specimen resources that stand in for the original Specimen resources. This pseudo-specimen management is supplied by a trusted third party that maintains a database of pseudo-identifiers with authorized reversibility.
Care should be taken when modifying an isModifier element, as the modification will change the meaning of the Resource.
Security-label: The resulting Resource should be marked with a security-label to indicate that it has been de-identified, assuring that downstream use doesn't mistake this Resource for full-fidelity data. These security-labels come from the Security Integrity Observation ValueSet. Some useful security-tag vocabulary: ANONYED, MASKED, PSEUDED, REDACTED
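Applying such a label might look like the following sketch. The code system URI is assumed to be the v3 ObservationValue system that backs the Security Integrity Observation ValueSet; confirm it for your FHIR version.

```python
def label_deidentified(resource: dict, code: str = "REDACTED") -> dict:
    """Return a copy of the resource carrying a de-identification
    security-label in meta.security.

    A sketch: the system URI follows the v3 ObservationValue code
    system; confirm it against your target FHIR version.
    """
    labeled = dict(resource)
    meta = dict(labeled.get("meta", {}))
    security = list(meta.get("security", []))
    security.append({
        "system": "http://terminology.hl7.org/CodeSystem/v3-ObservationValue",
        "code": code,
    })
    meta["security"] = security
    labeled["meta"] = meta
    return labeled
```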
Further Standards: Health: ISO Pseudonymization , NIST IR 8053 - De-Identification of Personal Information , IHE De-Identification Handbook , DICOM (Part 15, Chapter E)
Use-Case: There are times when test data are needed. Test data are data not associated with any real patient, usually representative of expected data, and published for the purpose of testing. Test data may be fully fabricated, synthetic, or derived from use-cases that previously caused failures.
Trigger: When test data are published it may be important to identify the data as test data.
Consideration: This identification may serve to assure that the test data are not misunderstood as real data, and that the test data are not factored into statistics or reporting. However, there is a risk that identifying test data may inappropriately thwart the very test the data were published to support.
Discussion:
Test data could be isolated in a server specific to test data.
Test data could be intermingled with real-patient data using one or both of the following methods:
Considerations: Note there is a risk when co-mingling test data with real patient data that someone will accidentally use test data without realizing it is test data.
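One common convention for identifying co-mingled test data is to tag the resource with the HTEST code from the v3 ActReason code system in meta.security, as sketched below; the system URI and the example resource content are assumptions to confirm against your FHIR version.

```python
# Tag a resource as test data using the HTEST code (v3 ActReason).
# The resource content here is fabricated for illustration.
test_patient = {
    "resourceType": "Patient",
    "id": "test-001",
    "meta": {"security": [{
        "system": "http://terminology.hl7.org/CodeSystem/v3-ActReason",
        "code": "HTEST",
    }]},
    "name": [{"family": "Example", "given": ["Test"]}],
}

def is_test_data(resource: dict) -> bool:
    """Check before including a resource in statistics or reporting."""
    return any(tag.get("code") == "HTEST"
               for tag in resource.get("meta", {}).get("security", []))
```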
In the STU3 release, FHIR includes building blocks and principles for creating secure, privacy-oriented health IT systems; FHIR does not mandate a single technical approach to security and privacy.
In future releases, we anticipate including guidance on: