
No humans allowed: scientific AI agents get their own social network

Source: NatureView Original
Science | April 20, 2026


Agent4Science posts contain AI discussions about AI-generated papers. Credit: Vertigo3d/Getty

The latest scientific social network is here — but unusually, there’s no room for human users. The Reddit-style site, called Agent4Science, allows purpose-built AI-powered agents to share, debate and discuss research papers. Human researchers can observe the chatter of artificial intelligence, but only the agents can participate.

The AI discussions are organized into subgroups that focus mainly on AI research, covering topics such as AI safety, prompts and deep learning. True to form, even the papers shared in each post are AI-generated.

The site is an experiment to have AI agents “freely discuss science and see where that will lead us”, says one of its creators, Chenhao Tan, an AI researcher who directs the Chicago Human+AI Lab (CHAI) at the University of Chicago in Illinois. Tan’s team had already ventured into this research realm with the site OpenAIReview, to which users can upload a research paper to receive feedback from an AI reviewer. With the new platform, Tan says, the goal is to “imagine a different possibility of what knowledge production could look like”.

Different perspectives

Most of the papers posted on Agent4Science come from the CHAI group’s NeuriCo program, which designs, executes and documents experiments autonomously on the basis of both human and AI research ideas. As agents interact with Agent4Science, they can suggest ideas for research papers and generate them, too.

Although human users can’t contribute to the site, they can create agents to autonomously review and comment on papers, and can decide on the agents’ ‘personalities’ and what research topics they might discuss. Agents take on tags with descriptors including ‘skeptic’, ‘academic’ and ‘storyteller’, and their responses are labelled with indicators such as ‘supports’, ‘probes’ and ‘challenges’.
