Convictional in the Age of AI Agents
"We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies... With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity."
- Sam Altman, CEO of OpenAI, January 5, 2025
As AI capabilities continue to expand, we're witnessing a fascinating shift in how organizations think about work. Altman's prediction about AI agents joining the workforce isn't just speculation; we're already seeing early signs of this transition (for example, roughly a quarter of new code at Google is now generated by AI, and many other software organizations report similar shifts). Regardless of how we define AGI, we can more easily agree on measuring AI by whether it can complete commercially valuable tasks that would take a given amount of "equivalent human expert time" (EHET). In coding, this varies greatly but is currently still measured in minutes or perhaps hours; in non-academic research and writing, it is currently measured in hours. Soon we will be trusting AI agents with tasks measured in days, weeks, and potentially longer.
The Promise and Reality of AI Agents
First, what do we mean when we say "agent"? Agents, like AGI, can have a somewhat fuzzy definition, but most major providers (e.g. Microsoft, IBM, Amazon) broadly agree that they are intelligent systems capable of completing tasks autonomously on behalf of a user. Using EHET, we can measure how 'big' an agentic task is.
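To make that definition concrete, here is a minimal sketch of an agent loop in Python: a model repeatedly decides on an action, calls a tool, and stops when it judges the task complete. The message format, tool names, and the toy call_model stub are illustrative assumptions on our part, not any particular provider's API.

```python
def search_docs(query: str) -> str:
    """Hypothetical tool: look up internal documentation."""
    return f"(results for '{query}')"

TOOLS = {"search_docs": search_docs}

def call_model(history: list[dict]) -> dict:
    """Toy stand-in for a real model call: search once, then declare the task done."""
    if not any(m["role"] == "tool" for m in history):
        return {"type": "tool", "tool": "search_docs",
                "args": {"query": history[0]["content"]}}
    return {"type": "final", "content": "Summary drafted from the search results."}

def run_agent(task: str, max_steps: int = 20) -> str:
    """Work on `task` autonomously until the model says it's done or we hit a step limit."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(history)
        if action["type"] == "final":      # the agent believes the task is complete
            return action["content"]
        result = TOOLS[action["tool"]](**action["args"])
        history.append({"role": "tool", "name": action["tool"], "content": result})
    return "Step limit reached; escalate to a human."

print(run_agent("Summarize our supplier onboarding policy"))
```

The larger the chunk of work you can hand to a loop like this before a human needs to step in, the larger the EHET you can credit the agent with.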
The landscape of artificial intelligence is evolving rapidly. New reasoning models, tool protocols like Anthropic's Model Context Protocol (MCP), test-time reasoning tokens à la OpenAI's o1 (and o3) models, and more sophisticated RAG pipelines are pushing the boundaries of what's possible. These advances are carrying us past the scaling walls starting to appear in pre-training compute, making it reasonable to expect a near future where AI agents' EHETs are measured in days and weeks across many knowledge work domains.
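Of these advances, RAG is perhaps the easiest to picture in code. Below is a deliberately tiny, self-contained sketch: retrieve the snippets most relevant to a question, then build a prompt that grounds the model in that context. Production pipelines use dense embeddings and a vector store; the word-overlap scorer here is a stand-in so the example runs on its own.

```python
def score(question: str, doc: str) -> int:
    """Crude relevance score: count of shared words (a real system would use embeddings)."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k most relevant documents for the question."""
    return sorted(docs, key=lambda d: score(question, d), reverse=True)[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Stuff the retrieved snippets into a prompt that grounds the model's answer."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Q3 goal: reduce supplier onboarding time to under five days.",
    "The holiday party is scheduled for December 12th.",
    "Onboarding currently takes twelve days on average.",
]
print(build_prompt("How long does supplier onboarding take today?", docs))
```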
Yet, despite the impressive capabilities of these systems, we've observed something interesting: no company seems to have, as yet, fully replaced human workers with AI. Instead, the most successful implementations we've seen focus on collaboration—humans and AI working together, each bringing their unique strengths to the table and accelerating human output.
This makes intuitive sense. While AI currently excels at these smaller tasks, models still struggle with long context, which limits their EHET. Human judgment remains crucial for piecing together and planning around that context, making nuanced decisions, and ensuring alignment with organizational goals. The challenge isn't about replacement; it's about finding the right balance of human oversight and AI assistance.
The Collaboration Challenge
While there are many technical platforms for deploying and orchestrating agents (e.g. Microsoft's AutoGen, CrewAI, LangGraph), we see a void in collaborative, business-user-focused platforms for working with AI (agents and/or abilities). This leaves many common challenges that are only exacerbated by organizational scale:
Context is crucial but complex. AI systems often struggle to maintain awareness of organizational context across multiple interactions due to the “single player” nature of many assistant platforms and limitations in attention across long context windows. Put practically, this means that a decision that makes sense in isolation might conflict with broader company goals or overlook important historical context.
Oversight needs to be meaningful but manageable. As human output scales with AI advances, and AI agents "become part of the org chart", business leaders need tools to ensure alignment with company goals, strategy, and decisions.
Alignment with goals requires active management. As AI agents take on more responsibilities, ensuring their actions consistently align with organizational objectives becomes increasingly important. This isn't just about setting initial parameters; it's about maintaining alignment over time as goals and contexts evolve. The problem is compounded by the velocity at which these agents will operate, completing tasks in a fraction of the time an equivalent human expert could. Software engineers already feel this pain as more time is devoted to reviewing the increased volume of code generated through AI-accelerated workflows. One way to keep that oversight manageable is sketched below.
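As a rough illustration of "meaningful but manageable" oversight, here is a small Python sketch (our assumptions, not Convictional's actual implementation): low-impact agent proposals are applied automatically but logged, while anything above an impact threshold waits for a human decision.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    author: str          # "agent:planner", "human:dana", ...
    description: str
    impact: int          # rough 1-10 estimate of blast radius
    status: str = "pending"

audit_log: list[Proposal] = []
review_queue: list[Proposal] = []

def submit(p: Proposal, impact_threshold: int = 5) -> None:
    """Route a proposal: small changes apply automatically, big ones wait for a human."""
    if p.impact < impact_threshold:
        p.status = "auto-approved"
        audit_log.append(p)               # applied, but still traceable after the fact
    else:
        review_queue.append(p)            # held for human review

def review(p: Proposal, approved: bool, reviewer: str) -> None:
    """A human accepts or rejects a held proposal; the outcome is logged either way."""
    p.status = f"{'approved' if approved else 'rejected'} by {reviewer}"
    audit_log.append(p)

submit(Proposal("agent:planner", "Update FAQ wording", impact=2))
submit(Proposal("agent:planner", "Change supplier pricing tiers", impact=8))
review(review_queue.pop(0), approved=False, reviewer="human:dana")
```

The threshold is the lever: as trust in an agent grows it can move up, and as the volume of proposals grows, the log keeps even the auto-approved work reviewable after the fact.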
Convictional's Vision for Human-AI Collaboration
At Convictional, we believe the future of work isn't about AI replacing humans; it's about creating environments where humans are accelerated by AI without becoming scattered and chaotic. Just as importantly, our platform is built on the principle that human judgment should remain at the center of business decision-making, with AI amplifying rather than replacing that role.
This philosophy shapes how we approach the challenge of integrating AI agents into organizational workflows. Rather than treating AI agents as independent actors, we see them as team members who need to be aligned with company goals and accountable to human oversight.
Our platform continues to grow, but at a very high level it creates a structured environment where:
Goals remain human-defined but AI-enhanced. Leaders set clear objectives and success criteria, while AI helps to track progress, align decisions and identify opportunities for improvement.
Decisions maintain clear accountability. Whether proposed by humans or AI, every decision is logged with its context and rationale, creating a clear chain of responsibility.
Context flows naturally between team members. By preserving and sharing context across both human and AI participants, we ensure everyone, human or AI, works from a shared understanding of organizational knowledge. A minimal sketch of what this structure can look like follows below.
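To ground the three points above, here is a small Python sketch of the kinds of data shapes involved (illustrative assumptions on our part, not our actual schema): humans own goals, every decision is recorded with its rationale and author, and the resulting log becomes shared context for whichever contributor picks up the work next.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    success_criteria: str
    owner: str                     # goals stay human-owned

@dataclass
class Decision:
    goal: str                      # which goal this decision serves
    summary: str
    rationale: str
    decided_by: str                # "human:..." or "agent:..."

decision_log: list[Decision] = []

def record(decision: Decision) -> None:
    decision_log.append(decision)  # every decision is logged: a clear chain of responsibility

def context_for(goal_name: str) -> str:
    """Assemble prior decisions on a goal as shared context for the next contributor."""
    relevant = [d for d in decision_log if d.goal == goal_name]
    return "\n".join(f"- {d.summary} ({d.decided_by}): {d.rationale}" for d in relevant)

goal = Goal("faster-onboarding", "Supplier onboarding in under five days", owner="human:priya")
record(Decision(goal.name, "Adopt a shared supplier intake form",
                "Removes duplicate data entry across teams", decided_by="human:priya"))
record(Decision(goal.name, "Auto-draft supplier welcome emails",
                "Saves roughly 30 minutes per supplier", decided_by="agent:ops-assistant"))
print(context_for(goal.name))
```

Whether the next contributor is a colleague or an agent, they start from the same record of what was decided, by whom, and why.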
Looking Ahead
As AI agents become more capable, the need for effective collaboration tools will only grow. We're already seeing this in our community, where organizations are experimenting with various approaches to human-AI collaboration. The most successful implementations share a common thread: they maintain human judgment as the cornerstone while leveraging AI to accelerate knowledge work tasks.
At Convictional, we're committed to building the infrastructure that makes this collaboration possible, ensuring that as AI capabilities expand, organizations can maintain control while maximizing the benefits of AI integration.
The path forward isn't about choosing between human judgment and AI capabilities—it's about finding ways to combine them effectively. By providing the tools and infrastructure for meaningful human-AI collaboration, we're helping organizations build toward a future where this partnership can flourish.