AI Risk

AI Risk refers to the potential for negative consequences arising from the development and deployment of AI systems, including bias, discrimination, cybersecurity threats, and unintended outcomes. According to NIST’s AI Risk Management Framework (RMF), risk is a measure of an event’s probability and the magnitude of its consequences, where impacts can be positive, negative, or both (consistent with ISO 31000:2018).
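To make the probability-and-impact framing concrete, the sketch below scores a few hypothetical AI risks using the common likelihood × impact simplification. This is an illustrative assumption, not NIST’s formal definition or Enkrypt AI’s methodology, and the risk names, scales, and values are invented for the example.

```python
# Illustrative sketch only: NIST's AI RMF describes risk as a composite
# measure of an event's probability and the magnitude of its consequences.
# A common simplification (assumed here, not prescribed by NIST) scores
# risk as likelihood x impact; the items and scales below are hypothetical.

from dataclasses import dataclass


@dataclass
class RiskItem:
    name: str
    likelihood: float  # probability of the event occurring, 0.0-1.0
    impact: float      # severity of consequences on an assumed 1-5 scale

    @property
    def score(self) -> float:
        # Simple multiplicative risk score; real frameworks may instead use
        # risk matrices, weighted criteria, or qualitative ratings.
        return self.likelihood * self.impact


risks = [
    RiskItem("Biased model outputs", likelihood=0.4, impact=4),
    RiskItem("Prompt-injection attack", likelihood=0.2, impact=5),
    RiskItem("Unintended downstream use", likelihood=0.1, impact=3),
]

# Rank risks so mitigation effort can be prioritized.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.name}: score {r.score:.1f}")
```

Ranking by a score like this is only a prioritization aid; qualitative judgment and context still determine which risks warrant mitigation first.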

Enkrypt AI defines risk as the potential for loss, harm, or uncertainty in decision-making. Effective AI risk management identifies and mitigates these risks while ensuring AI’s benefits are realized, fostering trust, transparency, and responsible innovation.