2023 EmergentAutonomousScientificRes

From GM-RKB

Subject Headings:

Notes

Cited By

Quotes

Author Keywords

Abstract

Transformer-based large language models are rapidly advancing in the field of machine learning research, with applications spanning natural language, biology, chemistry, and computer programming. Extreme scaling and reinforcement learning from human feedback have significantly improved the quality of generated text, enabling these models to perform various tasks and reason about their choices. In this paper, we present an Intelligent Agent system that combines multiple large language models for autonomous design, planning, and execution of scientific experiments. We showcase the Agent’s scientific research capabilities with three distinct examples, with the most complex being the successful performance of catalyzed cross-coupling reactions. Finally, we discuss the safety implications of such systems and propose measures to prevent their misuse.

References

Daniil A. Boiko, Robert MacKnight, and Gabe Gomes. (2023). "Emergent Autonomous Scientific Research Capabilities of Large Language Models." doi:10.48550/arXiv.2304.05332