Fujitsu releases toolkit with guidance on the ethical impact of AI
Fujitsu has announced the development of a resource toolkit that offers developers guidance for evaluating AI systems' ethical impact and risks.
The company will offer these resources, based on international AI ethics guidelines, free of charge from February 21, 2022, in an effort to promote the safe and secure deployment of AI systems.
The toolkit consists of various case studies and reference materials, including a newly developed method for clarifying the ethical requirements in AI ethics guidelines written in natural language and applying them to actual AI systems.
Fujitsu says that with the guidance, it aims to prevent misunderstandings and potential risks caused by differences in the interpretation of descriptions in guidelines, and to offer AI system developers and operators new tools for thoroughly identifying and preventing possible ethical issues early in the development process, in keeping with international best practices.
"In Europe, there is a growing debate about AI regulations, and one of the key issues is how to close the gap between principles and practices, or 'what' and 'how.'," says leading authority in the research of responsible AI and business ethics from the Technical University of Munich, Dr Christoph Lutge.
"I believe that the results of this research are very significant in that they enable us to practice based on principles. I would also like to express my deep appreciation for the decision to open up the research results and stimulate discussion worldwide."
Fujitsu says that, going forward, it will actively partner with government agencies, private companies, and leading researchers to further refine and promote its newly developed methodology, and that it aims to release an expanded version of the resource toolkit in fiscal 2022.
In April 2021, the European Commission issued a draft regulatory framework calling for a comprehensive ethical response from AI system developers, users, and stakeholders, in response to growing concerns surrounding algorithmic bias and discriminatory decision-making in AI and machine learning applications.
Fujitsu says that to commit fully to the responsible use of technology and earn society's trust in AI systems and the companies and organisations involved in this space, it formulated its own AI Commitment in 2019 and established an AI Ethics and Governance Office to develop and enforce robust AI ethics policies and to promote organisational AI ethics governance that ensures their effectiveness.
The company says it will move from principle to practice by steadily implementing best practices in the real world, ensuring the realisation of ethical, safe, and transparent AI and machine learning technologies.
"It is common practice in AI system development to identify possible ethical risks in AI systems based on AI ethics guidelines issued by government authorities and companies," says Fujitsu.
"These guidelines are written in natural language, however, contributing to possible differences in interpretation and misunderstandings amongst designers and developers that can lead to inappropriate or insufficient measures."
The company adds that under this approach, it is also difficult to judge whether the contents of the guidelines have been thoroughly and appropriately reviewed.
"Many challenges remain, however, and possible misinterpretation of guidelines in the design phase of new technologies can potentially lead to insufficient or inappropriate measures to counter risk."
Fujitsu will be offering the following resource toolkit, consisting of a variety of resources and guidance for developers to refer to in their work:
- Whitepaper: A general overview of the methodology
- AI ethical impact assessment procedure manual: Covers the AI system diagram, the procedure for preparing the AI ethical model, and how to address identified problems.
- AI ethical model: An AI ethical model based on AI ethical guidelines published by the European Commission (created by Fujitsu).
- AI ethics analysis case studies: Results of analysing major AI ethics issues from the Partnership on AI's AI Incident Database (six cases as of February 21, with more to be added sequentially).