As the EU Artificial Intelligence Act takes shape, it's crucial for U.S. companies to understand the roadmap for compliance and take action. The Act's phased implementation includes certain prohibitions effective just six months post-enactment, with additional obligations rolling out over the following 36 months.
The AI Act's risk-based categorization of AI systems, each tier carrying its own compliance requirements, is at the crux of the regulation. To comply, U.S. enterprises must scrutinize their AI offerings and classify them within the unacceptable, high, limited, or minimal risk categories.
For U.S. companies operating in the EU or handling EU data, the time to act is now. A thorough familiarization with the Act's final version upon its publication is the initial step, followed by an introspective evaluation of AI systems against the stipulated risk levels. This classification will chart the path to compliance.
Conducting a gap analysis is equally essential, pinpointing discrepancies between current AI operations and the AI Act's mandates. This preparatory phase is not merely about compliance but embracing the ethos of Responsible AI—incorporating ethical, transparent, and secure AI practices as a fundamental business principle.
U.S. companies can showcase their commitment to ethical AI conduct by engaging in the European Commission's AI Pact, a voluntary initiative that serves as a testament to a company's dedication to embodying the Act's stipulations ahead of their mandatory application.
In essence, U.S. companies must leverage this period to fortify their AI systems against the impending EU regulations. As an advocate of Responsible AI, the counsel is clear: prioritize transparency, fairness, and diligence in AI applications to ensure not only adherence to the EU AI Act but also a strong footing in the global AI domain.
Kevin Neary is a Responsible AI advocate, CEO of Orcawise, and a keynote speaker on the future of business and humanity.