Program Details: Securing Machine Learning (ML) to Achieve Trustworthy, Explainable, Safe, and Ethical AI
Session: Class of July to December 2023
The explosive adoption of AI and Machine Learning (ML) across defense, communications, health care, intelligence, finance, transportation, and other markets makes adversarial action a significant and growing threat. Successful attacks can have disastrous consequences, from vehicular crashes, cyber breaches, and stolen identities to missed diagnoses and failures in financial risk management and ISR (intelligence, surveillance, and reconnaissance).
AI Assurance provides solutions and a foundational understanding of the methods that can be applied to test AI systems and establish confidence in them. Anyone developing software systems with intelligence, building learning algorithms, or deploying AI to a domain-specific problem (such as combating cyber breaches, analyzing causation at a smart farm, reducing readmissions at a hospital, ensuring soldiers’ safety on the battlefield, or predicting exports of one country to another) will benefit from this program.
What is AI Assurance?
According to the Center for Data Ethics and Innovation (CDEI), “AI assurance is about building confidence in AI systems by measuring, evaluating and communicating whether an AI system meets relevant criteria,” adding that these criteria could include regulations, industry standards, or ethical guidelines.
AI assurance offers several approaches for actors across the AI supply chain to reliably assess, verify, and communicate the trustworthiness of AI systems. At its core, AI assurance refers to a range of services for checking and verifying AI systems and the processes used to develop them, and for demonstrating their reliability.
AI Assurance for the Public – Trust but Verify, Continuously
AI systems increasingly appear in public-facing applications such as self-driving land vehicles, autonomous aircraft, and medical and financial systems. AI systems should equal or surpass human performance, but given the consequences of failure or of erroneous or unfair decisions, how do we assure the public that these systems work as intended and will not cause harm? For example, how do we assure that an autonomous vehicle does not crash, or that an intelligent credit-scoring system is not biased, even after it has passed substantial acceptance testing prior to release? In this program we discuss AI trust and assurance and related concepts such as assured autonomy, particularly for critical systems. We then discuss how to establish trust through AI assurance activities throughout the system development life cycle. Finally, we introduce a “trust but verify continuously” approach to AI assurance, which describes assured-autonomy activities in a model-based systems development context and includes post-delivery activities for continuous assurance.
AI is causing massive changes in our lives at both the individual and societal level, with the global AI market expected to reach around 126 billion U.S. dollars by 2025. As more and more decision-making is handled by AI systems, rapid AI adoption is introducing new and unique risks.
AI governance and cybersecurity are new fields for many professionals because of the (seeming) complexity surrounding them. According to Gartner's Market Guide for AI Trust, Risk and Security Management, “AI poses new trust, risk and security management requirements that conventional controls do not address.” This groundbreaking course is designed to cover this gap so that data engineers and administrators, risk-management professionals, and cybersecurity experts can understand the unique nature of AI risks and how to address them.
- Are you interested in learning about the new risks that AI and ML introduce?
- Do you want to know how to create a governance and cybersecurity framework for AI, called SECURE AI?
If you answered YES, then this five-month program is for you! This course is specifically designed to teach you about AI risks with no prior knowledge assumed; no technical knowledge of AI systems is required.
What You’ll Learn
In this program, we will focus on the threats, vulnerabilities, and impacts of adversarial AI/ML. We call this focus SECURE AI, and the program includes material on ML assurance and snapshots of some of our innovative work in adversarial AI/ML.
Both the vulnerabilities of AI/ML systems and the severe consequences of successful attacks are increasingly well recognized. The National Institute of Standards and Technology (NIST) has issued a draft report on adversarial ML (NISTIR 8269) aimed at informing future standards and best practices for ML security. The Department of Health and Human Services’ Trustworthy AI Playbook focuses on risk mitigation to ensure AI systems are ethical, effective, and secure, and includes requirements to “develop defenses against adversarial attacks and scan for vulnerabilities.” Also see the Artificial Intelligence Risk Management Framework (AI RMF 1.0) at https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.
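To make the idea of an adversarial attack concrete, the sketch below shows a minimal Fast Gradient Sign Method (FGSM) perturbation in PyTorch. It is an illustrative example only, not part of the program materials or the NIST guidance; the model, labels, and epsilon value are assumptions.

```python
# Minimal FGSM sketch (illustrative only): nudge an input so a trained
# classifier is more likely to misclassify it, while staying within an
# epsilon-ball of the original image. Model, data, and epsilon are assumed.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarially perturbed copy of input batch x with true labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # loss the attacker wants to increase
    loss.backward()
    # Step in the direction of the sign of the loss gradient, bounded by epsilon
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()     # keep pixel values in a valid range
```

Defenses of the kind referenced above (for example, adversarial training or input sanitization) are typically evaluated against exactly this style of attack.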
| | Saturday Session | Mon & Wed Session |
|---|---|---|
| START DATE | August 2023 | August 2023 |
| DURATION | 22 weeks total | 22 weeks total |
| DELIVERY | In-person and/or Virtual (Remote) | In-person and/or Virtual (Remote) |
| TIME | 10:00–12:15 & 1:00–3:15 PM | 7:00–9:15 PM |
Prospective students are advised to schedule a consultation with one of our qualified advisors. At that time, the advisor will explain the program in detail and, if the student is available to visit, provide a tour of the school’s facilities.
Registration and Enrollment
Registration is on a first-come, first-served basis, and early registration is strongly recommended. Chaveran Institute of Technology (CIT), Inc. enrolls only 14 students per class.