The recently published AI Act carries high hopes. Expectations among stakeholders, including industry, society and regulators, are correspondingly high.
The AI Act provides for several options to facilitate compliance with its requirements, including Codes of Practice, Standards, Codes of Conduct and Alternative Adequate Means.
Most pressingly, the AI Office is required to facilitate the drafting of a Code of Practice. The deadline is nine months after entry into force, i.e., 2 May 2025. Nonetheless, time pressure should not jeopardize quality. The potential impact of the AI Act is significant: it may positively guardrail future AI, empowering societal benefits and amplifying innovation while ensuring that societal interests are duly respected; equally, it may (unintendedly) hinder European developments in the field of AI, stifling innovation, increasing dependencies, and frustrating beneficial societal interests.
Media reports indicate that the AI Office will engage an external consulting agency to accelerate the translation of its appointed tasks into operations. It should be clarified that the consulting agency is understood to support the AI Office procedurally and editorially; according to these reports, the AI Office will retain oversight and content leadership and will not hand over such powers to a single party. This outsourcing must not circumvent, or release the AI Office from, the procedural expectations laid down in Art. 56 AI Act.
In other words: the process should remain a multi-stakeholder process, implemented early and efficiently in line with applicable competition rules, as expressly stated in Art. 4(1) of the Commission Decision establishing the AI Office. Future calls for participation should therefore be fair, transparent and public. Inclusive processes significantly strengthen industry and societal acceptance and, subsequently, the adoption of any such self- and co-regulatory tools.
In this respect, the AI Office's efforts in establishing an expert group to support the drawing-up of Codes of Practice are acknowledged and noted. It is explicitly noted that the European AI Office invites, besides providers, other industry organisations, other stakeholder organisations such as civil society or rightsholder organisations, as well as academia and other independent experts.
In order to ensure high-quality, ready-for-practice results while meeting the extremely short timelines, the process may require suitable governance rules as well as suitable technical operations and platforms. For example, while use cases, concerns and the like may be communicated by any stakeholder, the drafting of requirements may call for distinct expertise, arguing in favour of corresponding editorial privileges. Likewise, the process may foresee the structured gathering of feedback and input, as well as (public) consultations. In the case of the latter, any suitable means of providing feedback, including clarity on whether such feedback will be published, should be easily accessible and transparently communicated.
Against this background, it is acknowledged and duly noted that the AI Office has launched a multi-stakeholder consultation on trustworthy general-purpose AI models in the context of the AI Act, accepting responses until Tuesday, 10 September 2024, 18:00 CET.
For the time being, it is imperative that the AI Office's processes and governance be as transparent and accessible as possible. Given the short implementation deadlines, industry is still in the midst of evaluating the requirements of the AI Act. The possibility of different compliance mechanisms, including self- and co-regulatory measures, is generally welcomed. Explicit interest in individual Codes of Practice or other means will emerge along the way.
Selbstregulierung Informationswirtschaft e.V. (SRIW) stands ready to support the process as an independent expert (Art. 56(3) AI Act). SRIW brings unique expertise in drafting and implementing self- and co-regulatory measures under several legal frameworks, generally focused on the challenges of the digital industry. SRIW offers its contribution to establishing a methodology that fosters innovation and, at the same time, promotes corporate responsibility with regard to the requirements of the AI Act. This includes identifying potential challenges in the AI Act addressable by self-regulatory means, establishing good governance models that balance different interests, and creating a suitable structure for a European code. SRIW can provide its expertise and allow stakeholders to build upon existing work, ensuring codes' compatibility with legal frameworks such as the GDPR and advocating for interoperable regulatory frameworks.