AI & Ethics – Blockchain

 

#Artificial Intelligence 

Responsible AI 

AI should not be an end in itself, but a tool serving individuals and society. The goal is to ensure that AI systems are used in a transparent, fair, safe and effective manner for individuals and society. This includes: 

  • Protecting data privacy and security 
  • Preventing bias in data and algorithms 
  • Ensuring the transparency of decisions made by the AI system  
  • Fighting socio-economic inequalities 
  • Fighting discrimination and violations of fundamental rights 
  • Managing other risks 

It is not only a matter of abiding by applicable regulations but also of respecting ethical standards for the design, development and use of AI.  

Our services

By organization

AI & ethics awareness 

  • Presentation of the European Union’s AI Act
  • Presentation of the concept of “responsible AI” 
AI & Ethics Maturity Assessment 

  • Assessment of the organization’s level of maturity regarding the design, development and use of AI systems with respect to applicable regulations and ethical standards  
Ethics by design

  • Definition of a corporate strategy and roadmap for AI and ethics 
  • Resources: ethics charter, guidelines, risk analysis tools, and training materials 

By project

 

  • Data Protection Impact Assessment (DPIA) on AI systems that process personal data 
  • Risk analysis on AI systems 

FAQ

What is artificial intelligence?

There are many definitions. One is: “AI refers to systems that display intelligent behavior by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals.” 

(Source: European Commission’s 2018 definition of AI) 

“AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications).” 

What is an "AI system"?

Software developed using machine learning, logic- and knowledge-based approaches, or statistical approaches, which can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with. (Source: Proposal for an EU Artificial Intelligence Act.) 

The OECD defines it as a machine-based system that can influence the environment by producing an outcome (predictions, recommendations or decisions) for a given set of objectives. It uses machine- and/or human-based data and inputs and has varying degrees of autonomy. 

What legislation governs the design and use of AI systems?

Data protection laws provide a framework for the processing of personal data using AI systems. In the EU, the General Data Protection Regulation (“GDPR”) provides a framework for automated processing of personal data, including profiling (Article 22). 

The draft European regulation on artificial intelligence (the “AI Act”) sets out the rules applicable to AI systems placed on the EU market, classified according to their level of risk (minimal, limited, high, unacceptable). The regulation provides for penalties of up to €30 million or 6% of the organization’s global annual turnover, whichever is higher. 
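The four risk tiers can be thought of as a simple classification, with each tier carrying a different set of obligations. A minimal sketch of that idea, assuming simplified labels – the tier names come from the draft AI Act, but the obligation descriptions and example use cases below are illustrative shorthand, not the legal text:

```python
# Illustrative mapping of the draft AI Act's four risk tiers to
# simplified obligation labels (not the legal requirements themselves).
RISK_TIERS = {
    "unacceptable": "prohibited (e.g. social scoring by governments)",
    "high": "strict requirements before market entry (e.g. CV-sorting software)",
    "limited": "transparency obligations (e.g. chatbots)",
    "minimal": "no additional obligations (e.g. spam filters)",
}

def obligations_for(tier: str) -> str:
    """Return the simplified obligation label for a given risk tier."""
    try:
        return RISK_TIERS[tier.lower()]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier!r}")
```

In practice, classifying a concrete AI system into a tier is the hard legal step; the mapping above only illustrates that the obligations attach to the tier, not to the technology itself.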

Around the world, there is a growing body of legislation aiming at regulating the responsible and ethical use of AI systems. 

What does the Artificial Intelligence Act say when an AI system presents an unacceptable risk?

All AI systems considered a clear threat to people’s safety, livelihoods and rights will be banned, from social scoring by governments to toys using voice assistance that encourage dangerous behavior. 

What are the areas where the use of AI systems is identified as high risk under the AI Act?

  • Critical infrastructure (e.g. transportation) 
  • Education or vocational training (e.g. exam marking) 
  • Product safety components (e.g. the application of AI in robot-assisted surgery) 
  • Employment, worker management and access to self-employment (e.g. CV sorting software for recruitment procedures) 
  • Essential private and public services (e.g. credit rating denying citizens the ability to obtain a loan) 
  • Management of migration, asylum applications and border controls (e.g. verification of the authenticity of travel documents) 
  • Remote biometric identification and categorization of individuals 

High-risk AI systems will have to respect strict requirements before they can be marketed in the EU.  

Under what conditions will it be possible to use high-risk AI systems?

High-risk AI systems will be subject to strict obligations before they can be put on the market: 

  • Risk assessment and risk mitigation measures 
  • High-quality datasets feeding the system, to minimize risks and discriminatory results 
  • Logging of activity to ensure traceability of results 
  • Detailed documentation providing all the information necessary for authorities to assess the system’s compliance and purpose 
  • Clear and adequate information for users 
  • Appropriate human oversight measures to minimize risks 
  • A high level of robustness, security and accuracy 

Will the use of AI systems also be subject to special rules?

In addition to compliance with data protection legislation (see the question on applicable legislation above), the AI Act imposes additional transparency obligations on the marketing of solutions such as chatbots. 

What does "trustworthy AI" mean?

There is no single common framework for trustworthy AI. In 2019, the OECD was the first to define principles for trustworthy AI at an intergovernmental level. In Europe, the High-Level Expert Group on AI proposed that, based on fundamental rights and ethical principles, AI systems should meet certain key requirements to be considered trustworthy (non-exhaustive list): 

  • Human agency and oversight, including respect for fundamental rights 
  • Technical robustness and safety, including accuracy, reliability and reproducibility 
  • Privacy and data governance 
  • Transparency, including traceability, explainability and communication 
  • Diversity, non-discrimination and fairness 
  • Societal and environmental well-being, including environmental sustainability, social impact, and effects on society and democracy 
  • Accountability, including auditability and the minimization and reporting of negative impacts 

Are there standards for assessing the risks associated with AI systems?

Yes, there are several. One is ISO/IEC 23894, which provides guidance on managing the risks associated with the development and use of AI and on how organizations can integrate risk management into their AI-related activities; the standard maps risk management processes across the AI system life cycle. NIST has also released the AI RMF (AI Risk Management Framework), designed to equip organizations and individuals – the AI stakeholders – with approaches to increase the trustworthiness of AI systems and to support their responsible design, development, deployment, and use over time. 

 

#Blockchain

The challenges of blockchain

Blockchain technology has the potential to reshape the way economic agents interact across all sectors. Blockchain brings to the internet the layer of trust it previously lacked. 

The integration of blockchain technology is not without risks, especially regarding compliance with data protection regulations. 

  • The application of some of the principles of the GDPR is problematic when using blockchains, especially for permissionless blockchains. 
  • The regulatory environment may not provide clear answers to data protection challenges. 
  • The regulatory framework is not fully defined and is still evolving. 
  • There is no harmonization of regulations in the world. 
  • The cost of developing or integrating a blockchain solution is significant. The immutable nature and consensus governance systems of the blockchain make it difficult or impossible to correct errors. 
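One widely discussed way to reconcile on-chain immutability with data-protection rights such as erasure is to keep personal data off-chain and record only a salted hash (a commitment) on-chain: deleting the off-chain record and its salt makes the immutable on-chain value practically unlinkable to the person. A minimal sketch of that pattern, with all names illustrative and a plain dictionary standing in for the off-chain store:

```python
import hashlib
import secrets

# Off-chain store: holds the actual personal data and a per-record salt.
# On-chain, only the salted hash returned by commit() would be written.
off_chain_store = {}

def commit(record_id: str, personal_data: str) -> str:
    """Store data off-chain; return a salted hash suitable for on-chain storage."""
    salt = secrets.token_hex(16)
    off_chain_store[record_id] = {"data": personal_data, "salt": salt}
    return hashlib.sha256((salt + personal_data).encode()).hexdigest()

def verify(record_id: str, on_chain_hash: str) -> bool:
    """Check that the off-chain record still matches the on-chain commitment."""
    entry = off_chain_store.get(record_id)
    if entry is None:
        return False  # data erased: the on-chain hash is now unlinkable
    digest = hashlib.sha256((entry["salt"] + entry["data"]).encode()).hexdigest()
    return digest == on_chain_hash

def erase(record_id: str) -> None:
    """Honour an erasure request by deleting the off-chain data and its salt."""
    off_chain_store.pop(record_id, None)
```

Whether such a scheme satisfies the GDPR in a given case remains a legal question, not a purely technical one, which is precisely why a risk assessment is needed before the design is fixed.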

Our services

Strategy

  • Define a data protection strategy for your company’s storage of personal data on the blockchain 
  • Establish a governance framework to ensure compliance with regulatory requirements 
Training

  • Educate your team on the topics of blockchain, web3, metaverse, and data privacy regulation issues 
  • Train your management team on the issues related to the professional use of these technologies 
Project support

  • Data protection risk assessment for your project 
  • Support in the implementation of the “Privacy by Design” concept 
Audit

Conduct an audit of your blockchain project 

  • Compliance audit 
  • Design audit 
  • Technical audit (Architecture, Wearable, etc.) 
Innovation

  • Technical support: Incorporate anonymization techniques to ensure confidentiality 
  • Platform Sandbox: An environment to facilitate compliance checks prior to solution deployment 
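As an illustration of the kind of technique involved, the sketch below replaces an identifier with a keyed hash (HMAC). Note that under the GDPR, keyed hashing is pseudonymization rather than full anonymization, since whoever holds the key can still link records; the function name and key here are purely illustrative:

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace an identifier with a keyed hash (HMAC-SHA256).

    Deterministic: the same identifier always maps to the same
    pseudonym, which preserves joins across datasets while keeping
    the raw identifier out of the stored records."""
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()

# Illustrative key; in practice it would live in a key-management system,
# never in source code.
key = b"example-key"
```

Stronger anonymization (e.g. aggregation or noise-based techniques) removes the link entirely, at the cost of losing the ability to join records.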

Your contacts

Florence BONNET, Partner
Rim FERHAH, Associate Director
Youcef DAMMANE, Director

An idea, a need?
Tell us about your project