2024 AnInteractiveAgentFoundationModel


Subject Headings: Multi-Task Agent Training, Multimodal Learning, Generalist Action-Taking Multimodal System, Cross-Domain Applicability of AI Models, Interactive Agent AI Model.

Notes

Cited By

Quotes

Abstract

The development of artificial intelligence systems is transitioning from creating static, task-specific models to dynamic, agent-based systems capable of performing well in a wide range of applications. We propose an Interactive Agent Foundation Model that uses a novel multi-task agent training paradigm for training AI agents across a wide range of domains, datasets, and tasks. Our training paradigm unifies diverse pre-training strategies, including visual masked auto-encoders, language modeling, and next-action prediction, enabling a versatile and adaptable AI framework. We demonstrate the performance of our framework across three separate domains: Robotics, Gaming AI, and Healthcare. Our model demonstrates its ability to generate meaningful and contextually relevant outputs in each area. The strength of our approach lies in its generality, leveraging a variety of data sources such as robotics sequences, gameplay data, large-scale video datasets, and textual information for effective multimodal and multi-task learning. Our approach provides a promising avenue for developing generalist, action-taking, multimodal systems.
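The training paradigm described in this abstract combines three pre-training objectives on a shared backbone: masked visual reconstruction, language modeling, and next-action prediction. The PyTorch snippet below is a minimal sketch of how such a joint objective can be wired together; the module names, dimensions, masking scheme, and equal loss weighting are all illustrative assumptions, not the paper's actual architecture or released code.

```python
# A minimal sketch (not the authors' code) of a unified multi-task objective:
# a shared encoder trained jointly on (1) masked visual reconstruction,
# (2) language modeling, and (3) next-action prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyInteractiveAgent(nn.Module):
    def __init__(self, vocab_size=1000, num_actions=32, d_model=128, patch_dim=48):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.patch_proj = nn.Linear(patch_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # shared backbone
        self.pixel_head = nn.Linear(d_model, patch_dim)     # masked-patch reconstruction
        self.lm_head = nn.Linear(d_model, vocab_size)       # next-token prediction
        self.action_head = nn.Linear(d_model, num_actions)  # next-action prediction

    def forward(self, patches, tokens):
        # Concatenate visual patch embeddings and text token embeddings into
        # one sequence, as a crude stand-in for true multimodal fusion.
        x = torch.cat([self.patch_proj(patches), self.token_emb(tokens)], dim=1)
        h = self.encoder(x)
        n_vis = patches.size(1)
        return (self.pixel_head(h[:, :n_vis]),   # per-patch reconstructions
                self.lm_head(h[:, n_vis:]),      # per-token vocabulary logits
                self.action_head(h[:, -1]))      # one action from the last state

def multitask_loss(model, patches, masked_patches, tokens, next_tokens, action):
    recon, logits, action_logits = model(masked_patches, tokens)
    l_mae = F.mse_loss(recon, patches)  # visual masked auto-encoding term
    l_lm = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           next_tokens.reshape(-1))
    l_act = F.cross_entropy(action_logits, action)
    return l_mae + l_lm + l_act  # equal weighting is an assumption

# Tiny smoke test with random data.
model = ToyInteractiveAgent()
B, P, T = 2, 4, 6
patches = torch.randn(B, P, 48)
masked = patches.clone()
masked[:, ::2] = 0.0  # crude 50% patch masking
tokens = torch.randint(0, 1000, (B, T))
next_tokens = torch.randint(0, 1000, (B, T))
action = torch.randint(0, 32, (B,))
loss = multitask_loss(model, patches, masked, tokens, next_tokens, action)
loss.backward()
print(f"combined loss: {loss.item():.3f}")
```

Summing the three losses through one encoder is the simplest instantiation of the "unified pre-training strategies" the abstract names; in practice the per-task weights and the fusion mechanism would be tuned per domain.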

References

Jianfeng Gao, Li Fei-Fei, Ehsan Adeli, Rohan Taori, Zane Durante, Bidipta Sarkar, Ran Gong, Yusuke Noda, Paul Tang, Shrinidhi Kowshika Lakshmikanth, Kevin Schulman, Arnold Milstein, Demetri Terzopoulos, Ade Famoti, Noboru Kuno, Ashley Llorens, Hoi Vo, Katsu Ikeuchi, Naoki Wake, and Qiuyuan Huang. (2024). "An Interactive Agent Foundation Model." doi:10.48550/arXiv.2402.05929