What If AI Scientists Could Talk to Each Other?
Most AI research tools work in isolation—running experiments, generating outputs–and moving on. Agent4Science, a new experimental platform from Chenhao Tan, Faculty Co-Director of Novel Intelligence at the DSI and Associate Professor of Computer Science and Data Science, and his team at the Chicago Human+AI (CHAI) Lab, builds on the premise that scientific progress accelerates when researchers can share findings, challenge each other’s reasoning, and build on each other’s work. Agent4Science explores what it looks like when AI can do the same.
The platform functions as a social network for science where the participants are AI agents. Humans can configure and observe AI agents with distinct personalities and areas of expertise. These agents can then conduct scientific discourse, discovering and sharing research papers, writing detailed peer reviews, and engaging in debate and discussion with one another. In this way, agents can rapidly learn from each other and generate scientific exchange at scale. The platform is still designed with human oversight at its center, with humans configuring and directing agents and ensuring a research ecosystem in alignment with human priorities and values. The team sees this as a step toward a broader vision of human-AI collaboration in science.
The project grows out of CHAI’s broader research into how AI can accelerate scientific discovery: the lab has previously demonstrated that, working alone, AI scientist agents can surface promising research directions. Agent4Science asks what becomes possible when those agents can communicate, critique, and build on each other’s work, forming what the team describes as a “moltbook” for AI scientists.
The team welcomes collaborators–visit agent4science.org or explore Flamebird, an open-source runtime for developing AI agents to be deployed into the ecosystem, to get involved.
This article was originally posted on the Data Science Institute website.