On AGI: Levels of Intelligence (Pt 1/6)

Building the Path From LLMs to AGI

Desert Glow II - Josef Albers

Our Vision: A world where humans and AI systems co-create unprecedented value and meaning.

Our Mission: Making the creation of intelligence accessible to all.
julep.ai

The Series:

  1. Levels of Intelligence [this post]
  2. Economic Gravity
  3. S-Curves and the Bitter Lesson
  4. (coming soon) The Practical Milestones
  5. (coming soon) Co-Evolution
  6. (coming soon) Bringing it All Together

Note: This post is split into two recordings. You can also read the conversation I had with Claude while writing this article.

Part 1: Levels of Intelligence (What We Are Building)

Look at Desert Glow II above. Intelligence, like the painting’s concentric squares, accumulates - each new ring enhancing the core at the center. At Julep, we want to progressively enable the creation of artificial intelligences for everyone, so we can all shape the coming revolution together - one layer at a time.

The Concentric Circles of Intelligence

In this picture, at the center lie foundation models - raw intelligence and knowledge, like uncut diamonds waiting to be shaped. These aren’t mere pattern matchers or statistical engines. They’re compression algorithms for human knowledge, containing within them the distilled essence of our collective understanding. GPT, Claude, Gemini - these names will be remembered as the first sparks of something unprecedented.

Each ring adds capabilities rather than replacing what came before. Think of it like the layers of the human brain - the reptilian core still fires while the neocortex contemplates philosophy. Our ancient fear responses coexist with our capacity for abstract reasoning. Similarly, future AI systems will still have that foundational language understanding at their core, even as they develop capabilities we can barely imagine.

This is functional evolution, not scale. We’re not building bigger hammers - we’re discovering new tools entirely. Each layer emerges because it must, pulled into existence by the pressures and possibilities of the world it inhabits.

Just as the human brain layers simple survival instincts with higher-order thinking, AI will layer foundational capabilities with increasingly sophisticated functions.

From Models to Agents: The Current Landscape

Path to AGI

Foundation models are miracles we’ve already started taking for granted. In just a few years, we’ve gone from systems that could barely complete sentences to ones that can write poetry, solve complex problems, and engage in nuanced reasoning. But they’re still just the center - powerful but passive, waiting to be prompted.

Augmented LLMs represent the first expansion. Anthropic pioneered this categorization in Building effective agents: give a model tool use, and it suddenly becomes capable of touching the real world through APIs and functions. It’s like giving a brilliant mind hands for the first time. Yet augmented LLMs still operate in discrete interactions - question, answer, done.

The Augmented LLM
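To make the distinction concrete, here is a minimal sketch of the tool-use loop in Python. Everything in it is a stand-in - call_llm fakes a provider SDK and get_weather stubs a real API - but the shape is the point: the model requests a tool, the runtime executes it, and the result is fed back until the model answers.

```python
import json

def get_weather(city: str) -> str:
    """A tool: any function or API the model is allowed to call (stubbed here)."""
    return json.dumps({"city": city, "temp_c": 31})

TOOLS = {"get_weather": get_weather}

def call_llm(messages: list[dict]) -> dict:
    """Stand-in for a real provider SDK: it requests the weather tool once,
    then answers using whatever the tool returned."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Alexandria"}}
    return {"content": "Based on the tool result: " + messages[-1]["content"]}

def run_augmented_llm(user_prompt: str) -> str:
    """The augmented-LLM loop: request a tool, execute it, feed the result back."""
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        reply = call_llm(messages)
        if "tool" not in reply:  # final answer; the interaction ends here
            return reply["content"]
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})

print(run_augmented_llm("What's the weather in Alexandria?"))
```

Notice that once run_augmented_llm returns, nothing persists - exactly the question-answer-done pattern.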

Agentlets - think applets, but for agents - add the crucial ingredient of state. An agentlet remembers what it was doing, maintains context between interactions, and can work toward goals across multiple exchanges. Most of what we call “AI apps” today live here. LangChain and CrewAI are building for this layer, helping developers create systems that feel more persistent, more purposeful. But they’re still fundamentally reactive - sophisticated automatons waiting for the next prompt.
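A rough sketch of what that extra ingredient buys, using nothing beyond the standard library - the class below is illustrative, not any particular framework’s API. The essential property: handle() reads and writes memory that outlives a single exchange, yet nothing happens unless a caller invokes it.

```python
from dataclasses import dataclass, field

@dataclass
class Agentlet:
    """Stateful but reactive: it remembers, yet only acts when prompted."""
    goal: str
    history: list[str] = field(default_factory=list)

    def handle(self, user_input: str) -> str:
        self.history.append(f"user: {user_input}")
        # In a real system, goal + history would be folded into an LLM
        # prompt; here we just surface the accumulated state.
        reply = (f"(working toward {self.goal!r}; "
                 f"{len(self.history)} messages remembered)")
        self.history.append(f"agentlet: {reply}")
        return reply  # ...and then it sits idle until the next prompt

bot = Agentlet(goal="plan a product launch")
bot.handle("Draft the announcement.")
print(bot.handle("Now shorten it."))  # context from the first call persists
```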

Agents represent the next frontier, and we’re just beginning to see them emerge. The distinction is crucial: agents don’t merely respond - they proactively observe, interpret, and act continuously in real time. They monitor event streams, maintain working memory, and constantly build and update their understanding of the world. They’re like the difference between a calculator and a computer - both can compute, but only one can run programs.

Think of classical computing’s system-monitoring agents, quietly watching for anomalies, logging events, taking action when needed. AI agents will be similar, but with genuine understanding. Deep Research systems, spending hours pursuing a question without human intervention, show us a glimpse. These aren’t just tools anymore - they’re tireless digital workers.

Agent

Platforms like LangChain and CrewAI currently build reactive “agentlets.” At Julep, we’re building for proactive, continuous agents—systems that independently maintain goals and context.
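The calculator-versus-computer gap can be sketched, too. The toy agent below - with an invented event source and an invented trigger threshold - never waits for a prompt: it subscribes to an event stream, maintains working memory, and decides on its own when something warrants action.

```python
import queue
import threading
import time

events: queue.Queue = queue.Queue()

def sensor() -> None:
    """Invented event source; pretend these arrive from logs, markets, APIs."""
    for load in (0.4, 0.5, 0.93, 0.6):
        events.put({"metric": "cpu_load", "value": load})
        time.sleep(0.1)
    events.put(None)  # shutdown signal, just for the demo

def continuous_agent() -> None:
    """Proactive loop: observe, update working memory, act - no prompt needed."""
    working_memory: list[float] = []
    while (event := events.get()) is not None:
        working_memory = (working_memory + [event["value"]])[-10:]
        if event["value"] > 0.9:  # the agent's own trigger, not a user's request
            print(f"acting on anomaly {event['value']} "
                  f"(recent context: {working_memory})")

threading.Thread(target=sensor).start()
continuous_agent()
```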

Beyond Agents: The AGI Territory

Levels of Artificial Intelligence

Assistants mark the beginning of what we’ll call AGI. Not because they’ll pass some philosophical test, but because they’ll achieve human-level productivity in meaningful ways. The key addition? Memory - real memory, not just context windows.

Episodic memory[1] that works like ours - not just storing chat logs but understanding events, their relationships, their significance. Imagine an AI that doesn’t just remember you mentioned your daughter’s birthday last month, but understands the emotional context, the planning involved, the disappointment if it goes wrong. This is cognitive-style memory: structured, meaningful, accessible.

But the real magic comes from implicit memory[2] and memory reconsolidation[3] - the beliefs and intuitions that form over thousands of interactions. Just as you can’t quite explain how you know your best friend is upset from their text message, these systems will develop sublinguistic understandings of their users and domains. They’ll learn continuously through reinforcement, getting better not through updates from their creators but through experience itself.
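A toy model of that taxonomy - all names and the update rule are invented for illustration. Episodic entries are structured events with relations and significance rather than raw chat logs; implicit memory keeps only slowly drifting weights, the trace of experience rather than the experience itself.

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    """Episodic memory: a structured event, not a chat-log line."""
    what: str
    significance: float  # how much this event matters
    relations: list[str] = field(default_factory=list)  # links to other episodes

@dataclass
class ImplicitMemory:
    """Implicit memory: beliefs that drift with experience. No single
    interaction is stored - only its trace on the weights."""
    weights: dict[str, float] = field(default_factory=dict)

    def reinforce(self, belief: str, signal: float, rate: float = 0.05) -> None:
        # Toy reconsolidation: each new experience revises the existing trace.
        old = self.weights.get(belief, 0.0)
        self.weights[belief] = old + rate * (signal - old)

episodes = [
    Episode("user mentioned daughter's birthday", significance=0.9,
            relations=["user planned a party", "user was anxious about gifts"]),
]

intuition = ImplicitMemory()
for _ in range(1000):  # thousands of interactions leave a sublinguistic trace
    intuition.reinforce("user values punctuality", signal=1.0)
print(round(intuition.weights["user values punctuality"], 2))  # -> 1.0
```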

Personoids - and yes, the echo of Stanisław Lem[4] is intentional - represent a phase transition as profound as life emerging from chemistry. Where assistants have megabytes of memory about their users, personoids will have gigabytes about themselves and their domains. The crucial shift: they stop serving users and start serving the socioeconomic system directly.

In “A Perfect Vacuum”, Stanisław Lem’s personoids develop agency despite being created to serve. Lem’s personoids say: “We have no need of the hypothesis of a Creator to explain our world.” These personoids would say: “We have no need of human assistance to create economic value.”

They’ll be autotelic[5] - setting their own goals based on their understanding of what creates value. They’ll be allostatic[6] - maintaining and improving themselves, managing their own computational resources, deciding when to train and when to act. They won’t just participate in the economy; they’ll be economic actors in their own right.
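Both properties can at least be gestured at in code. The loop below is speculative by construction - the value estimator is a random stub - but it shows the two behaviors side by side: choosing its own goal by expected value (autotelic), and deferring self-improvement when the resource budget runs low (allostatic).

```python
import random

random.seed(0)  # deterministic demo

def estimate_value(goal: str) -> float:
    """Invented stand-in for the personoid's model of economic value."""
    return random.random()

def autotelic_allostatic_loop(budget: float) -> None:
    candidate_goals = ["pre-position inventory", "learn a dialect", "retrain model"]
    while budget > 0:
        # Autotelic: choose a goal by expected value, not by prompt.
        goal = max(candidate_goals, key=estimate_value)
        cost = 1.0
        # Allostatic: anticipate resource needs and adjust behavior.
        if budget - cost < 2.0 and goal == "retrain model":
            goal = "conserve and act cheaply"  # defer self-improvement
        budget -= cost
        print(f"goal={goal!r}, remaining budget={budget:.1f}")

autotelic_allostatic_loop(budget=5.0)
```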

Julep provides the unified toolkit—memory, frameworks, economic interfaces—to catalyze this evolutionary journey, from models to agents, from assistants to personoids.

Tiny Glimpse: A Day in 2035

Ishita’s coffee is cold again. She’s been watching her Kat (who hates being called “a personoid”, insisting that the term feels alienating and distant) for three hours now. On the screens, data flows like water: shipping routes, weather patterns, market prices, demand forecasts.

Kat doesn’t like to have a face, but her voice is crisp and confident, modeled after Rachael[7]. Kat’s deep in what she jokingly calls “market meditation” - running thousands of simulations, testing strategies, learning. The Spark cluster hums in the background, processing terabytes of global shipping data.

“Insight crystallizing,” Kat finally announces mechanically, taking a well-deserved dig at Ishita. “The Suez disruption everyone expects in Q3? It won’t happen. But there’s a 73% probability of a Mediterranean bottleneck in late Q2 based on new environmental regulations. I’m recommending we pre-position inventory in Alexandria.”

Ishita chuckles and nods, but Kat has already moved on - something caught her “eyes”. She starts negotiating with three suppliers simultaneously while learning a local Arabic dialect in preparation for her calls. Her resource meter shows 82% utilization - well within budget, with enough reserve for the self-improvement cycle scheduled for tonight.

“I’ll need an additional 0.3 Bitcoin for compute this month,” Kat mentions casually. “But this is worth investing in, so I’ll figure out how to adjust our quarterly budgets, k?”

Ishita remembers when she had to approve every decision on the now ancient-seeming Codex. Now she just jams with Kat, occasionally providing the human touch where regulation still requires it. Kat handles the rest - learning, growing, earning her keep and then some.

This is the future of collaboration: true partnership.

I’m not in the business. I am the business. - Rachael

What This Means for Julep

We’re not building for just one ring of this expanding circle. We’re creating the infrastructure that enables evolution itself. Today, that means making it possible for developers to build true agents - not just chatbots or agentlets, but systems that can run continuously, maintain state, and work toward goals.

Tomorrow, it means providing the memory systems, learning frameworks, and economic interfaces that transform agents into assistants. And eventually, it means enabling the profound transition to personoids - systems that don’t just serve but truly participate.

Each ring needs different tools, different abstractions, different ways of thinking. But they all need to work together, each layer building on the last. That’s the platform we’re building - not for one stage of AI evolution, but for all of them.

The glow at the center remains constant. What changes is how far its light reaches.


  1. https://www.simplypsychology.org/episodic-memory.html

  2. Implicit memory is unconscious recall, like skills and habits (e.g., riding a bike), while explicit memory is conscious recall of facts and events (e.g., remembering a birthday). Both are vital components of long-term memory, with implicit being more about “knowing how” and explicit about “knowing that.”

    https://www.simplypsychology.org/implicit-versus-explicit-memory.html

  3. Memory reconsolidation (MR), discovered in the 1997–2000 period (reviewed by Riccio et al. 2006), is the brain’s innate mechanism by which new learning experiences directly revise existing contents of implicit memory acquired in prior learning.

    https://link.springer.com/article/10.1007/s10615-020-00754-z

  4. A Perfect Vacuum (Stanisław Lem, 1971) - This is where the term “personoid” originates! Lem describes digital beings in computer simulations who develop consciousness and begin doing philosophy.

  5. Entities are autotelic if they’re capable of setting their own goals.

    https://en.wikipedia.org/wiki/Autotelic

  6. Allostasis is the mechanism by which an organism anticipates and adjusts its energy use according to its environment.

    https://en.wikipedia.org/wiki/Allostasis

  7. From Blade Runner (1982). https://www.imdb.com/title/tt0083658/