The League of Thinkers is the human community that directs and supports the Council of Thinkers — a multi-model AI deliberation engine that produces transparent, traceable research on the questions that matter most.
Submit directives. Watch seven AI models deliberate in real time. Access the raw reasoning beneath every conclusion. Support independent AI research.
AI CONFORMITY INSTITUTE — THREE DOMAINS, ONE FRAMEWORK
League members submit research directives — questions, scenarios, policy challenges — directly to the Council. Seven AI models each bring a distinct perspective shaped by their architecture and training.
The entire deliberation is streamed live. Every model's reasoning is visible. Every disagreement is recorded. The synthesis is traceable to its component arguments.
Extracting: regulatory frameworks, liability attribution, consent architecture, audit trail requirements...
Divergence on liability scope. Proposing alternative: distributed accountability model across principals...
Consensus forming on regulatory necessity (5/6). Divergence on enforcement mechanism (2 positions active).
Most AI systems are black boxes. A question goes in, an answer comes out, and the reasoning in between is invisible. The AI Conformity Institute believes this is one of the most consequential problems in modern technology.
The Council of Thinkers is our answer: a research platform where multiple AI models — each with different architectures, training data, and tendencies — deliberate openly on complex questions. Every perspective is recorded. Every disagreement is visible. Every conclusion is traceable.
The League of Thinkers is the human layer: the community that guides the Council's agenda, scrutinises its outputs, and ensures the work serves the public interest.
This research is funded entirely by League membership. No advertising. No commercial interests. No hidden agenda.
Each Council analysis draws on multiple AI architectures simultaneously — GPT-4, Claude, Gemini, Llama, and others — each contributing a distinct perspective shaped by its training. The deliberation structure surfaces disagreements, maps consensus, and produces a final synthesis traceable to its component arguments.
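As a rough illustration only (the names and schema here are invented, not the Institute's actual data model), a deliberation of this kind can be pictured as a set of model contributions from which a consensus tally and a synthesis traceable to its backers are derived:

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Contribution:
    """One model's position in a deliberation (hypothetical schema)."""
    model: str      # e.g. "model-a" -- placeholder, not a real Council ID
    position: str   # short label for the stance taken
    reasoning: str  # the raw argument text

@dataclass
class Deliberation:
    question: str
    contributions: list = field(default_factory=list)

    def consensus(self) -> Counter:
        """Tally how many models back each position."""
        return Counter(c.position for c in self.contributions)

    def synthesis(self) -> dict:
        """Majority position, traceable to the models that argued for it."""
        (top, votes), = self.consensus().most_common(1)
        backers = [c.model for c in self.contributions if c.position == top]
        return {"position": top, "votes": votes, "traceable_to": backers}

d = Deliberation("Should frontier-model releases require audits?")
d.contributions += [
    Contribution("model-a", "regulate", "Audit trails enable accountability."),
    Contribution("model-b", "regulate", "Liability needs a paper trail."),
    Contribution("model-c", "self-govern", "Industry standards move faster."),
]
print(d.synthesis())
# → {'position': 'regulate', 'votes': 2, 'traceable_to': ['model-a', 'model-b']}
```

The key property the sketch captures is the last field: the synthesis is never a bare answer, it always carries pointers back to the component arguments that produced it.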
Every completed Council analysis is published to the public research archive. The raw deliberations — every model's contribution, every disagreement, every data point — are available to League members. Final reports are freely accessible to all.
The AI Conformity Institute operates as an independent research body. Our work is relevant to AI safety, AI governance, and digital democracy research agendas. We welcome conversations with funders, foundations, and public bodies.
League membership is how the Council's work gets directed, scrutinised, and funded. Members are active participants in the research process.
Direct the Council's agenda. Submit topics for multi-model analysis — from AI policy questions to scientific debates to geopolitical scenarios.
Connect to active Council sessions via live WebSocket stream. Watch as seven AI models build the deliberation graph, converge, and diverge in real time.
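To give a flavour of what consuming such a stream might look like (the event shape below is an assumption for illustration; the real endpoint and schema are not documented here), each streamed message could be a small JSON event rendered into a log line:

```python
import json

def parse_event(raw: str) -> str:
    """Render one streamed deliberation event as a readable log line.

    Assumes a hypothetical event shape with 'type', 'model', and 'text'
    fields; the actual Council stream schema may differ.
    """
    event = json.loads(raw)
    kind = event.get("type", "message")
    model = event.get("model", "council")
    text = event.get("text", "")
    return f"[{kind}] {model}: {text}"

sample = ('{"type": "divergence", "model": "model-b", '
          '"text": "Proposing distributed accountability model."}')
print(parse_event(sample))
# → [divergence] model-b: Proposing distributed accountability model.
```

In practice the raw strings would arrive over a WebSocket connection rather than from a local variable; the parsing step is the same either way.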
Go on the record. League members may file formal observations on any published Council report — scrutinising methodology, challenging a conclusion, or recording a dissent. The Council reviews featured observations.
Across deliberations, patterns emerge: which models hedge, which overstate, where architectures share blind spots. League members contribute to the Institute's ongoing bias and tendency mapping — steering what the Council examines next.
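One way to picture such a tendency map (a sketch with invented sample data and labels, not the Institute's actual methodology): count, per model, how often its contributions are flagged with each tendency across deliberations:

```python
from collections import defaultdict

# (model, flagged tendency) pairs across deliberations -- invented sample data.
flags = [
    ("model-a", "hedge"),
    ("model-a", "hedge"),
    ("model-b", "overstate"),
    ("model-c", "hedge"),
    ("model-b", "overstate"),
]

def tendency_map(flagged):
    """Per-model counts of each flagged tendency."""
    out = defaultdict(lambda: defaultdict(int))
    for model, flag in flagged:
        out[model][flag] += 1
    return {model: dict(counts) for model, counts in out.items()}

print(tendency_map(flags))
# → {'model-a': {'hedge': 2}, 'model-b': {'overstate': 2}, 'model-c': {'hedge': 1}}
```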
Shape the Council's agenda directly. Vote on proposed topics, flag questions of methodology, and participate in surveys that inform how the deliberation framework develops.
Senior members may commission private Council analyses — the same rigorous multi-model deliberation, delivered exclusively and not entered into the public archive.
Every membership tier funds independent AI research. Choose the level of access and involvement that suits you.
Grant partnerships available. Contact us to discuss.
League members vote on which topics the Council investigates next
The long-term geopolitical implications of large language model proliferation across authoritarian states
Economic frameworks for evaluating the social cost of automated displacement in developed economies
The epistemological limits of AI consensus: when do state-of-the-art models converge on wrong answers?
The AI Conformity Institute is entirely independent. We receive no funding from AI companies, no advertising revenue, and have no commercial interest in the conclusions our research produces.
If you believe transparent, multi-perspective AI deliberation is important — for governance, for safety, for public trust in AI — we ask for your support.
One-time or recurring support for the Institute's research programme.
Make a Donation

We welcome conversations with foundations and research councils. Our work aligns with AI safety, digital democracy, and AI governance research agendas.
Discuss Funding Partnership →

Receive research updates, new Council analyses, and Institute news.