Frontline AI

A guide for startups without a legal department

The introduction of the EU’s AI Act and the ISO/IEC 42001 standard presents startups with new challenges in managing artificial intelligence (AI) systems. For many young companies, especially those without a dedicated legal department, meeting these regulatory requirements can seem daunting. This guide provides a step-by-step overview of how to develop an AI policy that complies with EU requirements while minimizing bureaucratic burden.

  1. Understand the requirements of the AI Act and ISO/IEC 42001

The AI Act is a European Union regulation aimed at ensuring the safe and ethical development and use of AI systems. It classifies AI systems by risk level and imposes corresponding obligations on providers and deployers of those systems. ISO/IEC 42001:2023 is an international standard that specifies requirements for an AI management system (AIMS). It helps organizations establish, implement, maintain and continually improve AI governance in a responsible and compliant manner.

  2. Assessment of current status and identification of gaps

Before developing an EU-compliant AI policy, it is crucial to conduct a gap analysis comparing your current practices against the requirements set forth in the AI Act and ISO/IEC 42001.

For startups without a dedicated legal department, the process can be simplified by using a structured approach.

a) Defining the scope of the analysis

Start by identifying which systems and processes in your organization use or plan to use artificial intelligence. Identify all departments and teams involved in the development, implementation and maintenance of AI systems. Determine whether your solutions may be classified as high-risk under the AI Act, such as systems used in recruitment, credit assessment or critical infrastructure management.

b) Comparison with the requirements of ISO/IEC 42001

Use a checklist based on the structure of ISO/IEC 42001, which includes the following questions, among others:

  • Have internal and external factors influencing AI management and stakeholder expectations been identified?
  • Does top management demonstrate commitment to AI management by establishing policies and assigning responsibilities?
  • Has an AI risk assessment been conducted and AI management objectives defined?
  • Have adequate resources, competencies and awareness of AI been provided?
  • Are there procedures in place for the life cycle of AI systems, including data management, testing and monitoring?
  • Are AI systems performance and compliance monitored and measured?
  • Are there mechanisms for continuous improvement of the AI management system?
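A checklist like this can be kept in a simple script so the results are repeatable. The sketch below is illustrative only: the area names and questions paraphrase the bullets above and are not quoted from the standard itself.

```python
# Minimal sketch of an ISO/IEC 42001-style self-assessment checklist.
# Area names and questions are illustrative, not taken from the standard.

CHECKLIST = {
    "context": "Have internal/external factors and stakeholder expectations been identified?",
    "leadership": "Does top management set AI policy and assign responsibilities?",
    "planning": "Has an AI risk assessment been conducted and objectives defined?",
    "support": "Are adequate resources, competencies and AI awareness provided?",
    "operation": "Are AI life-cycle procedures (data, testing, monitoring) in place?",
    "evaluation": "Are performance and compliance monitored and measured?",
    "improvement": "Are there mechanisms for continual improvement?",
}

def find_gaps(answers: dict[str, bool]) -> list[str]:
    """Return the checklist areas answered 'no', i.e. the compliance gaps."""
    return [area for area in CHECKLIST if not answers.get(area, False)]

# Example self-assessment: two areas are not yet covered.
answers = {"context": True, "leadership": True, "planning": False,
           "support": True, "operation": False, "evaluation": True,
           "improvement": True}
print(find_gaps(answers))  # ['planning', 'operation']
```

Unanswered areas default to "no", which errs on the side of flagging a gap rather than silently assuming compliance.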

c) Identification of gaps and prioritization of activities

For each area where non-compliance or incomplete compliance has been identified, describe the gaps that exist. Then, assign priorities to corrective actions, taking into account the risk associated with the gap and its impact on the organization. For example, the lack of risk assessment procedures for AI systems may represent a high risk and should be prioritized.
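One simple way to make this prioritization explicit is a risk-times-impact score. The gaps and the 1-3 scoring scale below are hypothetical examples, not prescribed by the AI Act or ISO/IEC 42001.

```python
# Illustrative gap prioritization: score = risk x impact, each rated 1-3.
# The gap descriptions and ratings are made-up examples.
gaps = [
    {"gap": "No AI risk assessment procedure", "risk": 3, "impact": 3},
    {"gap": "Incomplete life-cycle documentation", "risk": 2, "impact": 2},
    {"gap": "No AI awareness training", "risk": 2, "impact": 1},
]

def prioritize(gaps: list[dict]) -> list[dict]:
    """Sort gaps by descending priority score (risk x impact)."""
    return sorted(gaps, key=lambda g: g["risk"] * g["impact"], reverse=True)

for g in prioritize(gaps):
    print(f'{g["risk"] * g["impact"]}: {g["gap"]}')
# 9: No AI risk assessment procedure
# 4: Incomplete life-cycle documentation
# 2: No AI awareness training
```

The highest-scoring gap (here, the missing risk assessment procedure) is addressed first, matching the example in the paragraph above.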

(d) Development of an action plan

Based on the identified gaps, create an action plan with specific steps, responsibilities, and deadlines for implementation. Example actions may include:

  • Develop and implement an AI policy in accordance with ISO/IEC 42001.
  • Conduct training for employees on AI ethics and security.
  • Establish procedures for monitoring and evaluating the performance of AI systems.
  • Regularly review and update the action plan to ensure continuous improvement and adaptation to changing regulatory requirements.
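An action plan with owners and deadlines can also be tracked in a few lines of code. The actions, owners and dates below are hypothetical placeholders.

```python
from datetime import date

# Hypothetical action-plan tracker: each action has an owner and a deadline.
actions = [
    {"action": "Draft AI policy per ISO/IEC 42001", "owner": "CTO",
     "deadline": date(2025, 3, 31)},
    {"action": "Run AI ethics and security training", "owner": "HR",
     "deadline": date(2025, 5, 15)},
]

def overdue(actions: list[dict], today: date) -> list[str]:
    """Return the actions whose deadline has already passed."""
    return [a["action"] for a in actions if a["deadline"] < today]

print(overdue(actions, date(2025, 4, 10)))
# ['Draft AI policy per ISO/IEC 42001']
```

Running such a check as part of a regular review meeting supports the "regularly review and update" step above.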

  3. Developing an AI policy step by step

a) Defining the scope of the policy

Define what systems and processes will be covered by the AI policy. Include all stages of the AI system life cycle – from design to end-of-life.

b) Establish ethical and compliance principles

Identify basic principles, such as:

  • Transparency: users should be informed when they are interacting with an AI system.
  • Accountability: responsibility for decisions made by AI systems is clearly assigned.
  • Security: AI systems must not pose a threat to users.

c) Risk management

Develop procedures to identify, assess and minimize risks associated with AI systems. Consider potential risks such as:

  • Algorithmic bias
  • Invasion of privacy
  • Decision-making errors
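These risks are typically recorded in a risk register that pairs each risk with a likelihood, a severity and a mitigation. The register structure and the ratings below are assumptions for illustration only.

```python
# Illustrative AI risk register; fields and ratings are assumed examples.
risk_register = [
    {"risk": "Algorithmic bias", "likelihood": "medium", "severity": "high",
     "mitigation": "Bias testing on representative datasets before release"},
    {"risk": "Invasion of privacy", "likelihood": "low", "severity": "high",
     "mitigation": "Data minimization and anonymization in training pipelines"},
    {"risk": "Decision-making errors", "likelihood": "medium", "severity": "medium",
     "mitigation": "Human review of high-stakes automated decisions"},
]

# Surface the risks that need the most attention.
high_severity = [r["risk"] for r in risk_register if r["severity"] == "high"]
print(high_severity)  # ['Algorithmic bias', 'Invasion of privacy']
```

Each entry links a risk directly to its planned mitigation, which makes the "identify, assess and minimize" cycle auditable.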

d) Training and awareness

Ensure adequate training for employees involved in the development and implementation of AI systems. Raise awareness of potential risks and responsibilities under the AI Act and ISO/IEC 42001.

e) Monitoring and review

Establish regular reviews of AI policies to ensure they are up-to-date and effective. Put in place mechanisms to monitor the performance of AI systems and respond to incidents.

  4. Documentation and communication

Draft AI policy documentation in a way that all stakeholders can understand. Make sure the policy is easily accessible and communicated both internally and externally as needed.

  5. External support

If your organization does not have adequate internal resources, consider using consultants who specialize in AI governance and compliance. They can help with:

  • Conducting a compliance audit
  • Developing or reviewing AI policies
  • Training staff

Comparing current practices with the requirements of ISO/IEC 42001 is a key element in developing an AI policy that complies with EU guidelines. However, conducting such an analysis is not limited to knowledge of the standard itself. It also requires multidisciplinary knowledge, including technical, legal, ethical and risk management aspects.

Frontline’s AI team has experience working with a wide variety of organizations, which allows them to effectively identify gaps and develop action plans tailored to each company’s specifics. As a result, startups can not only meet regulatory requirements, but also build trust among investors and customers, which is crucial in the rapidly evolving AI environment. Working with Frontline AI experts provides comprehensive support in the process of complying with ISO/IEC 42001 and EU regulations, minimizing risk and speeding up the process of implementing compliant AI systems.

Sources:
  • ISO/IEC 42001:2023 – AI management systems
  • How ISO 42001 helps with EU AI Act compliance – Vanta
  • Understanding ISO 42001 for Startups – ISMS.online
