Responsible Artificial Intelligence (AI)


A Responsible Artificial Intelligence (AI) is an artificial intelligence (AI) system that is developed and deployed with a commitment to ethical, legal, and socially beneficial practices.

  • Context:
  • Example(s):
    • AI systems in autonomous vehicles designed with safety and accountability features.
    • AI applications in finance that provide transparent decision-making processes.
    • Microsoft has created its own responsible AI governance framework with the help of its AI, Ethics, and Effects in Engineering and Research (Aether) Committee and its Office of Responsible AI.
    • FICO has developed responsible AI governance policies that include building, executing, and monitoring explainable models for AI, and using blockchain as a governance tool (a minimal bias-monitoring sketch appears after the See list below).
    • IBM employs an internal AI Ethics Board to support the creation of ethical and responsible AI across the organization.
    • ...
  • Counter-Example(s):
    • AI systems that operate with minimal human oversight, leading to ethical dilemmas.
    • AI used in surveillance without adequate privacy protections.
    • AI systems that operate without transparency or accountability.
    • AI deployments that fail to address privacy and security concerns adequately.
    • Unregulated AI Systems.
    • Black-Box Algorithms.
    • Bias-Promoting AI.
  • See: AI Ethics, AI Governance, Sustainable AI, AI Bias, Transparency in AI, AI Security, Ethical Computing.
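
To make one of the monitoring practices above concrete, the following is a minimal sketch of an AI Bias audit check: it computes the demographic parity difference, i.e. the gap in positive-prediction rates between two groups defined by a sensitive attribute. The function name, the example data, and the 0.1 review threshold are illustrative assumptions, not taken from any of the organizational frameworks mentioned above.

    import numpy as np

    def demographic_parity_difference(y_pred, sensitive):
        """Absolute gap in positive-prediction rates between two groups.

        A value near 0 means the model flags both groups at similar rates
        under this one simplified fairness criterion; larger values
        warrant closer review.
        """
        y_pred = np.asarray(y_pred)
        sensitive = np.asarray(sensitive)
        rate_a = y_pred[sensitive == 0].mean()
        rate_b = y_pred[sensitive == 1].mean()
        return abs(rate_a - rate_b)

    # Hypothetical model outputs and group labels, for illustration only.
    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    gap = demographic_parity_difference(y_pred, group)
    print(f"Demographic parity difference: {gap:.2f}")

    # An assumed review threshold of 0.1; a real governance process would
    # set and justify such limits as part of its responsible AI policy.
    if gap > 0.1:
        print("Potential disparity detected; route model for human review.")

Demographic parity is only one of several competing fairness criteria; a responsible AI governance process would typically track it alongside other metrics and document why the chosen thresholds are appropriate for the application.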

