Ethical Artificial Intelligence (AI)

From GM-RKB

An Ethical Artificial Intelligence (AI) is an artificial intelligence system that is designed and implemented in a manner that adheres to established ethical principles and moral values.



References

2024a

  • (ChatGPT-OpenAI, 2024) ⇒ Prompt: Summarize the concept Ethical Artificial Intelligence (AI) in 1-2 paragraphs. ChatGPT (model:gpt-4-turbo-preview). Retrieved: 2024-02-18.
    • QUOTE: Ethical Artificial Intelligence (AI) refers to the design, development, and application of AI technologies in a manner that aligns with widely accepted moral principles, values, and standards of conduct. At its core, ethical AI seeks to ensure that AI systems operate in a transparent, fair, and accountable way, minimizing harm while promoting the wellbeing and dignity of all individuals. This involves addressing challenges related to privacy, security, bias, discrimination, and the potential for job displacement, among other concerns. The goal is to create AI systems that enhance human capabilities without infringing upon human rights or exacerbating social inequalities.

      To implement ethical AI, developers, policymakers, and other stakeholders must engage in ongoing dialogue and collaboration, considering the broad societal, economic, and environmental implications of AI technologies. This includes adopting frameworks and guidelines that incorporate ethical considerations at every stage of the AI lifecycle, from design to deployment and beyond. By doing so, ethical AI aims to harness the transformative potential of AI to address pressing global challenges, improve quality of life, and foster a more equitable and sustainable future, while mitigating the risks that these powerful technologies pose.


2019

  • (Jobin et al., 2019) ⇒ Anna Jobin, Marcello Ienca, and Effy Vayena (2019). "The Global Landscape of AI Ethics Guidelines". In: Nature Machine Intelligence, vol. 1, no. 9, pp. 389-399.
    • QUOTE: In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analysed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.