AI & Ethics
Artificial Intelligence
Responsible AI
AI should not be an end in itself, but a tool that serves individuals and society. The goal is to ensure that AI systems are used in a way that is transparent, fair, safe and effective for individuals and society. This includes:
- Protecting data privacy and security
- Preventing bias in data and algorithms
- Ensuring the transparency of decisions made by the AI system
- Combating socio-economic inequalities
- Combating discrimination and violations of fundamental rights
- Managing other risks
It is not only a matter of abiding by applicable regulations but also of respecting ethical standards for the design, development and use of AI.
Our services
By organization

AI & ethics awareness
- Presentation of the European Union’s AI Act
- Presentation of the concept of “responsible AI”

AI & Ethics Maturity Assessment
- Assessment of the organization’s level of maturity regarding the design, development and use of AI systems with respect to applicable regulations and ethical standards

Ethics by design
- Definition of a corporate strategy and roadmap for AI and ethics
- Resources: ethics charter, guidelines, risk analysis tools, and training materials
By project
- Data Protection Impact Assessment (DPIA) on AI systems that process personal data
- Risk analysis on AI systems
FAQ
What is artificial intelligence?
There are many definitions. One is: “AI refers to systems that display intelligent behavior by analysing their environment and taking action – with some degree of autonomy – to achieve specific goals.”
(Source: European Commission’s 2018 definition of AI)
AI-based systems can be purely software, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems), or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications).
What is an "AI system"?
A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments (Article 3(1) AI Act).
The OECD defines it as a machine-based system that can influence the environment by producing an outcome (predictions, recommendations or decisions) for a given set of objectives. It uses machine- and/or human-based data and inputs and has varying degrees of autonomy.
What does Foundation Model mean?
A large-scale pre-trained model for AI capabilities, such as language (LLM), vision, robotics, reasoning, search, or human interaction, which can serve as the foundation for other applications. The model is trained on large and diverse datasets.
What is the relationship between Large Language Models (LLM) and Foundation Models?
LLMs are a class of “foundation models.” LLMs are neural networks that can process massive amounts of unstructured text and learn the relationships between words or parts of words, known as tokens. This enables LLMs to generate natural language text and perform tasks such as summarization or knowledge extraction. For example, LaMDA is the LLM behind Bard.
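The token relationships described above can be illustrated with a deliberately tiny sketch. This is not how a production LLM works: a toy whitespace tokenizer and bigram counts stand in for the subword tokenization (e.g. byte-pair encoding) and billions of learned parameters of a real model, and all names here are illustrative:

```python
from collections import Counter, defaultdict

def tokenize(text):
    # Toy tokenizer: lowercase and split on whitespace.
    # Real LLMs use subword schemes such as byte-pair encoding (BPE).
    return text.lower().split()

def bigram_counts(tokens):
    # Count how often each token follows another -- a crude stand-in
    # for the relationships between tokens that an LLM learns at scale.
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

corpus = "the model reads the text and the model predicts the next token"
tokens = tokenize(corpus)
counts = bigram_counts(tokens)
# Most frequent successor of "the" in this tiny corpus:
most_common = counts["the"].most_common(1)[0][0]
```

Scaled up from bigram counts to deep neural networks trained on massive corpora, this kind of next-token statistics is what lets an LLM generate fluent text, summarize, or extract knowledge.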
What legislation governs the design and use of AI systems?
The European regulation on artificial intelligence (the “AI Act”) sets out the rules applicable to AI systems placed on the EU market, based on their classification by level of risk (minimal, limited, high, unacceptable). The regulation provides for penalties of up to €35 million or 7% of the organization’s total worldwide annual turnover, whichever is higher.
Around the world, there is a growing body of legislation aiming at regulating the responsible and ethical use of AI systems.
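The “whichever is higher” ceiling on AI Act fines can be sketched as a simple calculation (an illustration only; the function name and the assumption that turnover is expressed in euros are ours, not the regulation’s):

```python
def max_fine_eur(worldwide_annual_turnover_eur: int) -> float:
    # For the most serious violations, the AI Act caps fines at
    # EUR 35 million or 7% of total worldwide annual turnover,
    # whichever is higher.
    # Computed as turnover * 7 / 100 to avoid float rounding on 0.07.
    return max(35_000_000, worldwide_annual_turnover_eur * 7 / 100)

# A company with EUR 1 billion turnover: 7% is EUR 70 million,
# so the percentage-based ceiling applies.
# A company with EUR 100 million turnover: 7% is only EUR 7 million,
# so the EUR 35 million floor applies instead.
```

The point of the two-pronged ceiling is that the fixed amount binds small organizations while the percentage binds large ones.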
What is AI Governance?
It involves defining a strategy and determining the processes, policies, and tools necessary to design, deploy, control, and maintain the management of AI systems.
AI Governance must ensure, among other things, the inventory of AI systems, the identification and dynamic management of risks, the compliance of AI systems, as well as documentation and transparency.
What does the Artificial Intelligence Act say when an AI system presents an unacceptable risk?
All AI systems considered a clear threat to people’s safety, livelihoods and rights will be banned, from social scoring by governments to voice-assisted toys that encourage dangerous behavior.
What are the areas where the use of AI systems is identified as high risk under the AI Act?
- Critical infrastructure (e.g. transportation)
- Education or vocational training (e.g. exam marking)
- Product safety components (e.g. the application of AI in robot-assisted surgery)
- Employment, worker management and access to self-employment (e.g. CV sorting software for recruitment procedures)
- Essential private and public services (e.g. credit scoring denying citizens the ability to obtain a loan)
- Management of migration, asylum applications and border controls (e.g. verification of the authenticity of travel documents)
- Remote biometric identification and categorization of individuals
High-risk AI systems will have to meet strict requirements before they can be placed on the EU market.
Under what conditions will it be possible to use high-risk AI systems?
High-risk AI systems will be subject to strict obligations before they can be put on the market:
- risk assessment and risk mitigation measures
- high-quality datasets feeding the system, to minimize risks and discriminatory results
- logging of activity to ensure the traceability of results
- detailed documentation providing all the information necessary for authorities to assess the system’s compliance
- clear and adequate information to users
- appropriate human oversight measures to minimize risks
- a high level of robustness, safety and accuracy
Will the use of AI systems also be subject to special rules?
In addition to compliance with data protection legislation, the AI Act requires greater transparency in the marketing of solutions such as chatbots (Article 50 AI Act).
What does "trustworthy AI" mean?
There is no single common framework for trustworthy AI. In 2019, the OECD was the first to define principles for trustworthy AI at an intergovernmental level. In Europe, the High-Level Expert Group on AI proposed that, based on fundamental rights and ethical principles, AI systems should meet certain key requirements to be trustworthy (non-exhaustive list):
- Human agency and oversight, including respect for fundamental rights
- Technical robustness and safety, including accuracy, reliability and reproducibility
- Privacy and data governance
- Transparency, including traceability, explainability and communication
- Diversity, non-discrimination and fairness
- Societal and environmental well-being, including environmental sustainability, social impact, society and democracy
- Accountability, including auditability and the minimization and reporting of negative impacts
Are there standards for assessing the risks associated with AI systems?
Yes, there are several. One is ISO/IEC 23894, which provides guidance on managing the risks associated with the development and use of AI and on how organizations can integrate risk management into their AI-related activities; it maps risk management processes across the AI system life cycle. NIST has also released the AI Risk Management Framework (AI RMF), designed to equip organizations and individuals – the AI stakeholders – with approaches to increase the trustworthiness of AI systems and to support the responsible design, development, deployment, and use of AI systems over time.
What are the actors mentioned in the European Artificial Intelligence (AI) Act?
The actors mentioned in the European Artificial Intelligence (AI) Act include the provider, the deployer, the importer, the representative, and the distributor.