Carl Shulman
A Carl Shulman is a person who conducts research on effective altruism, rationality, and AI safety.
- Context:
- It can typically conduct Research Work through philosophical analysis.
- It can typically publish Academic Papers through research institutions.
- It can typically contribute to Effective Altruism Movement through conceptual clarifications.
- It can typically analyze AI Safety Concerns through risk assessment frameworks.
- It can typically advise Philanthropic Organizations through impact evaluations.
- ...
- It can often collaborate with Research Partners through joint publications.
- It can often present at Academic Conferences through research presentations.
- It can often engage in Public Discourse through blog posts and podcast appearances.
- It can often develop Philosophical Arguments through analytical approaches.
- ...
- It can range from being a Junior Research Associate to being a Senior Research Associate, depending on its career stage.
- It can range from being a Specialized Researcher to being a Broad Researcher, depending on its research scope.
- ...
- It can have Academic Affiliations with research institutions.
- It can contribute to Research Fields including effective altruism, rationality, and AI safety.
- It can influence Policy Development through evidence-based recommendations.
- ...
- Examples:
- Carl Shulman Career Stages, such as:
- Carl Shulman Contribution Areas, such as:
- Carl Shulman Publications, such as:
- Carl Shulman Online Presences, such as:
- Carl Shulman Collaborations, such as:
- ...
- Counter-Examples:
- Nick Bostrom Publications, which are authored by Nick Bostrom rather than Carl Shulman despite addressing similar existential risk topics.
- William MacAskill Publications, which approach effective altruism principles from a different philosophical perspective than Carl Shulman Publications.
- Toby Ord Publications, which emphasize existential risk prioritization rather than the AI safety concerns often found in Carl Shulman Publications.
- Stuart Armstrong Publications, which typically apply mathematical modeling approaches to AI safety problems rather than philosophical frameworks.
- Academic Philosophers, who typically work within traditional philosophical disciplines rather than applied ethical movements.
- See: Nick Bostrom, William MacAskill, Toby Ord, Effective Altruism, AI Safety, Research Associate.
References
2021
- (Shulman & Bostrom, 2021) ⇒ Carl Shulman, and Nick Bostrom. (2021). “Sharing the world with digital minds.” In: Rethinking Moral Status, pp. 306-326. Oxford University Press.
- NOTES:
- Explores the ethical implications of coexisting with digital minds and their potential moral status.
- Examines how human biological nature imposes practical limits on welfare promotion compared to potential digital mind capabilities.
- Cited by 52 related articles as of 2024.
2016
- (Armstrong, Bostrom & Shulman, 2016) ⇒ Stuart Armstrong, Nick Bostrom, and Carl Shulman. (2016). “Racing to the precipice: a model of artificial intelligence development.” In: AI & Society, vol. 31, no. 2, pp. 201-206.
- NOTES:
- Presents a model of an AI arms race where multiple development teams compete to build the first AI.
- Examines risk factors and safety implications of competitive AI development.
- Explores assumptions about how the first AI might impact global stability.
- Cited by 213 related articles as of 2024.
2012
- (Shulman & Bostrom, 2012) ⇒ Carl Shulman, and Nick Bostrom. (2012). “How hard is artificial intelligence? Evolutionary arguments and selection effects.” In: Journal of Consciousness Studies, vol. 19, no. 7-8, pp. 103-130.
- NOTES:
- Analyzes evolutionary arguments about the feasibility of artificial intelligence.
- Evaluates the claim that, because evolution produced human intelligence on Earth, human engineers should likewise be able to create artificial intelligence.
- Examines selection effects in comparing natural intelligence evolution to artificial intelligence development.