April 4, 2024

M-24-10 Took The Words Right Out of Our Mouth

The recent memorandum for the heads of government agencies, M-24-10 – Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, was greeted with excitement here at Vero AI. Simply put, M-24-10 is speaking our language.

The memorandum requires agencies to implement a number of new practices by December 1 of this year, and these practices are quite aligned with our own thinking here at Vero AI. We’ll walk through each of the minimum practices from M-24-10 below, along with how the Vero AI platform can help meet those requirements.

  • Complete an AI impact assessment, including the intended purpose, potential risks, and the quality and appropriateness of the relevant data. Vero AI’s VIOLET Impact Analysis meets all of the recommendations for an AI impact assessment. We also test for a number of attributes above and beyond the memorandum’s requirements, including visibility for affected individuals, internal transparency for agency users, and algorithm optimization.
  • Test the AI for performance in a real-world context. It can be enormously challenging to test the performance of advanced AI models in the real world, but this is fundamental to the Vero AI platform. In fact, we don’t stop at point-in-time audits; we enable continuous monitoring of system effectiveness.  
  • Independently evaluate the AI. While the memorandum calls for this review to be completed by an agency AI oversight board or other appropriate agency office, Vero AI can help prepare documentation for this review. A true independent evaluation by a third party like Vero AI will ensure any issues are identified prior to oversight board review.
  • Conduct ongoing monitoring. Keeping an eye on how systems are functioning over time is our forte. Instead of adding regular monitoring to agencies’ to-do lists, outsourcing this testing to Vero AI allows agencies to focus on more central responsibilities.
  • Regularly evaluate risks from the use of AI. M-24-10 requires periodic human reviews to identify potential changes in risks and benefits of AI systems. At Vero AI, we use our Iris engine to support and expedite human review – and our humans in the loop are AI experts who understand the risks.  
  • Mitigate emerging risks to rights and safety. The Vero AI approach involves comprehensively evaluating AI systems and producing actionable results. Users can easily drill down in our results scoreboards to see where the problem lies.  
  • Ensure adequate human training and assessment. Vero AI also provides state-of-the-art AI training and assessment within our platform. Agencies can monitor the ongoing performance of AI tools and keep an eye on training completions, all in one place.
  • Provide additional human oversight, intervention, and accountability as part of decisions or actions that could result in a significant impact on rights or safety. In addition to helping provide additional oversight, Vero AI can help identify those decisions that are likely to be rights- or safety-impacting. We can help agencies design a strategy – then help with implementation.  
  • Provide public notice and plain-language documentation. This is often easier said than done! Translating complicated AI and algorithmic solutions and their impact into plain language is as difficult as translating other human languages. However, since we’re fluent in both data science and normal human-ese, we’ve got you covered here too.

We’re thrilled by the additional attention to AI governance, innovation, and risk management documented in M-24-10, and we stand ready and waiting to help agencies address these new requirements. For some government agencies, this may be new territory – but for us here at Vero AI, this feels like home.

Jensen Mecca, PhD
Chief Client Officer

The VIOLET Impact Model™

Vero AI’s work centers on the VIOLET Impact Model™, a holistic framework that provides a comprehensive and objective view of the impact of algorithms and complex systems.


Visibility

The degree to which affected individuals are aware of how algorithms are being used.


Integrity

Whether the algorithm does good and whether it is fair to all classes.


Optimization

How well the algorithm was built.

Legislative Preparedness

How well the algorithm and surrounding systems are prepared to meet the requirements of current and upcoming legislation.


Effectiveness

How well the algorithm works.


Transparency

How clearly understood the algorithm and its uses are internally.

Contact us for a free consultation

