GenAI risk assessment and management
How do I assess GenAI use cases for risk? What are the most important considerations to keep in mind when researching GenAI tools? How should I begin planning to mitigate the risk of GenAI tools?
This section helps you understand and interpret the risk level criteria defined in the GenAI Risk Assessment in a consistent way. It clarifies best practices and offers plain language suggestions on how to identify GenAI risks. A guided workflow demonstrates how to self-assess GenAI risk and engage with the California Department of Technology in consultations on moderate/high risk use cases. It explains the principles that will guide GenAI risk assessment in California.
Introduction
Your department has identified a GenAI use case and determined that a GenAI tool could potentially benefit your program. Next, you must account for and mitigate risks by conducting a risk assessment that documents the potential risks to consider before procuring a GenAI tool.
The California Department of Technology (CDT) has developed the Generative Artificial Intelligence Risk Assessment (SIMM 5305-F) to guide state entities through a thorough risk assessment of a proposed GenAI tool before it moves toward procurement. This assessment was developed based on the National Institute of Standards and Technology’s AI Risk Management Framework, the State Administrative Manual (SAM), and the Statewide Information Management Manual (SIMM).
The GenAI Risk Assessment classifies a GenAI tool as “Low,” “Moderate,” or “High” risk based on the GenAI solution, the data underlying it, and how the department wants to use the tool. A high-level risk assessment workflow is shown below. As you can see, GenAI tools assessed as low risk can be largely managed in-house at each department. GenAI tools assessed as moderate or high risk require additional CDT support and oversight so that your department can safely and responsibly deploy the tool.
Department CIOs must conduct a risk assessment for all GenAI purchases, whether intentional or incidental.
Generative Artificial Intelligence Risk Assessment (SIMM 5305-F) is a requirement for state entities to complete when pursuing GenAI projects.
Generative Artificial Intelligence Risk Assessment (SIMM 5305-F) workflow at a glance
- Identify the problem statement of the use case (for example, see Identifying GenAI use cases) and why your department feels GenAI is helpful over other solutions (such as conventional AI or traditional data analytics).
- Use the Generative Artificial Intelligence Risk Assessment (SIMM 5305-F) to document risk considerations for the GenAI use case.
- Not sure of a GenAI tool’s risk level? State entities can use the decision flow chart to help with self-assessment of the risk level. This decision flow chart will ask a series of questions regarding the data type, the intended use of the GenAI, and magnitude of risks to evaluate an initial risk level. State entities are responsible for conducting a thorough risk assessment to explain the risk classification level they assign.
- If your department classifies the GenAI use case as “Low” risk:
- The CIO/AIO/Designee submits a Case via the New Technology Consultation and Assessment request in the CDT IT Service Portal before procurement is awarded.
- When the request has been processed, a CDT Customer Engagement Services (CES) Account Lead will be in contact with instructions to provide:
- Required Generative Artificial Intelligence Risk Assessment (SIMM 5305-F)
- Required Generative Artificial Intelligence (GenAI) Reporting and Factsheet (STD 1000), once received from the Bidder/Offeror/Contractor
- If completed, Privacy Threshold Assessment and Privacy Impact Assessments (SIMM 5310-C).
- CDT reserves the right to audit and consult on "Low" GenAI Risk Levels with potential higher risk concerns.
- If your department classifies the GenAI use case as “Moderate” or “High” risk:
- Your department must consult with CDT to understand your risk assessment and design your risk management plan for the GenAI tool under consideration.
- Using California’s GenAI Risk Management Principles as a guide, your department must develop a risk mitigation strategy to show how the moderate or high risk of the GenAI tool will be adequately mitigated, monitored, and managed prior to procurement and on an ongoing basis.
- Your department’s CIO, AIO, or your designee will submit a Case via the New Technology Consultation and Assessment request in the CDT IT Service Portal.
- When the request has been processed, a CDT Customer Engagement Services (CES) Account Lead will be in contact with instructions to provide:
- Required Generative Artificial Intelligence Risk Assessment (SIMM 5305-F), Parts 1 and 2
- Part 2 includes privacy, security, transparency, and DEIA considerations for this GenAI use case.
- Required Generative Artificial Intelligence (GenAI) Reporting and Factsheet (STD 1000), once received from the Bidder/Offeror/Contractor
- Required mitigation plan
- If completed, Privacy Threshold Assessment and Privacy Impact Assessments (SIMM 5310-C)
- The consultation process will consist of a conversation to address the business needs and risks of using GenAI. Solicitations that contain a “Moderate” or “High” risk GenAI solution may not be released, or a contract executed, without a CDT consultation.
- Assign a GenAI subject matter expert from your department to help facilitate the CDT consultation and contract management. A GenAI subject matter expert is someone who has completed the required technical and business training offered through CalHR to appropriately identify and evaluate the benefits and risks of GenAI technology.
5305-F Risk assessment workflow
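For teams that want to prototype an internal triage checklist before filling out the form, the self-assessment questions described above (data type, intended use, and magnitude of risk) can be sketched as a simple decision function. This is an illustrative sketch only: the criteria names and thresholds below are hypothetical assumptions, and the authoritative risk classification comes from completing the SIMM 5305-F assessment and, where required, consulting with CDT.

```python
# Illustrative sketch of an initial GenAI risk triage decision flow.
# The criteria below are hypothetical examples, not the official
# SIMM 5305-F criteria; the authoritative risk level comes from
# completing the full assessment.

def triage_risk_level(uses_sensitive_data: bool,
                      affects_public_services: bool,
                      output_reviewed_by_human: bool) -> str:
    """Return an initial risk level of 'Low', 'Moderate', or 'High'."""
    if uses_sensitive_data and affects_public_services:
        return "High"
    if uses_sensitive_data or affects_public_services:
        # Sensitive data or public-facing impact alone still merits
        # CDT consultation under this sketch.
        return "Moderate"
    if not output_reviewed_by_human:
        # Unreviewed GenAI output raises the concern even for
        # otherwise low-stakes internal uses.
        return "Moderate"
    return "Low"

# Example: an internal drafting aid with human review and no sensitive data.
print(triage_risk_level(uses_sensitive_data=False,
                        affects_public_services=False,
                        output_reviewed_by_human=True))  # Low
```

A real triage checklist would draw its questions directly from the decision flow chart in SIMM 5305-F; the point of the sketch is only that each answer narrows the initial classification, which the department must then justify in the full assessment.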
California GenAI Risk Management Principles
California has identified key focus areas for identifying and managing risks of using GenAI. You should use these principles as a foundation in developing a risk mitigation strategy to address potential safety, privacy, and security concerns before deploying GenAI. California’s risk management principles emphasize key elements of the NIST Artificial Intelligence Risk Management Framework and are intended to ensure NIST standards are considered, referenced, and applied appropriately for California’s needs.
Risk Management is a key consideration for GenAI and is essential for you to successfully plan, implement, and operate GenAI technologies. The GenAI risk management principles are intended to ensure you responsibly deliver critical services to all Californians in a safe, secure, and equitable manner.
Key Risk Management Principles
- Statewide Administrative Management – Statewide Administrative Management refers to policies contained in the State Administrative Manual, State Contracting Manual, Statewide Information Management Manual, State Telecommunications Management Manual (STMM), and the CalHR Human Resources Manual. This principle emphasizes the need for administrative and operational policies, standards, and guidelines to ensure that the use of GenAI is fit for the proposed purpose and is legal, ethical, equitable, safe, and secure.
- State Entity Governance – This principle highlights the importance of the departmental policies, processes, risk documentation, and procedures necessary to ensure acceptable use and effective operational oversight of GenAI technology as it relates to your business and mission objectives.
- Quality, Safety, and Security Controls – This principle focuses on the need for adequate quality, safety, and security controls required for you to train, validate, assess, secure, and take corrective actions to ensure that GenAI models and vendors perform as expected.
- Public-Private Partnerships and Community Engagement – This principle emphasizes the need for your department to create, grow, and sustain meaningful partnerships with those who build GenAI solutions as well as those Californians who may be most directly affected by them.