Daniel Miessler published PAI as an extension of his Fabric project, turning Claude Code into a personal operating environment through hooks, skills, agents, and persistent memory. Three weeks ago,
@J.G.Montoya.Hodl published Shaka: a ground-up reimplementation of the same architecture with a different philosophy. Montoya studied PAI, then rebuilt it with different tradeoffs on scope, provider support, code quality, and dependencies while keeping all five structural primitives intact. The rewrite is more interesting than a fork because it tests whether the architecture holds up when someone with different priorities reconstructs it from first principles.
I wrote about PAI in February when comparing it to OpenClaw, a daemon-based personal AI project. The conclusion then was that file-based systems beat server applications for personal AI because files fail transparently and version cleanly under git. Shaka confirms that conclusion from the opposite direction: Montoya started from PAI's architecture and rebuilt it with provider flexibility as the primary constraint.
The Shared Skeleton
PAI established five structural primitives for personal AI infrastructure. Shaka's reimplementation kept all five, which tells you they are load-bearing.
Skills are Markdown documents describing domain capabilities: workflow instructions the model follows at invocation time. Hooks are TypeScript functions that fire on lifecycle events. PAI runs 21 across five lifecycle stages; Shaka ships four that cover the same critical events: a fifth of the surface area, the identical architectural pattern. Agents are personality definitions loaded when the model spawns specialized workers. PAI defines eleven with distinct tool permissions; Shaka ships thirteen, several sharing names with PAI's originals, because the useful specializations map to the same roles whether you design from scratch or rebuild an existing system.
Memory systems capture session history and extract persistent learnings. PAI uses four tiers from hot session context to cold learning signals accumulated over months. Shaka implements session summaries alongside continuous learning with rolling daily/weekly/monthly rollups. Both score learnings by recency and reinforcement, loading a budget-constrained subset at each session start. Security validation screens every tool call before execution: a TypeScript function intercepts the model's tool calls and decides whether to proceed or block based on pattern-matching against destructive operations.
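The validation step can be sketched in a few lines of TypeScript. This is an illustrative hook shape, not either project's actual API; the patterns and type names are assumptions made for the example:

```typescript
// Hypothetical pre-tool-use hook: decides whether a tool call may proceed.
// The hook signature and patterns are illustrative, not PAI's or Shaka's code.
type ToolCall = { tool: string; input: string };
type Verdict = { decision: "allow" | "block"; reason?: string };

// Patterns for destructive shell operations this sketch refuses to run.
const DESTRUCTIVE: RegExp[] = [
  /\brm\s+-rf?\b/,            // recursive deletes
  /\bgit\s+push\s+--force\b/, // history rewrites on shared branches
  /\bdd\s+if=/,               // raw disk writes
  /\bmkfs\b/,                 // filesystem formatting
];

function preToolUse(call: ToolCall): Verdict {
  if (call.tool !== "bash") return { decision: "allow" };
  for (const pattern of DESTRUCTIVE) {
    if (pattern.test(call.input)) {
      return { decision: "block", reason: `matched ${pattern.source}` };
    }
  }
  return { decision: "allow" };
}
```

The essential property is that the function sits between the model's intent and the shell, and blocking is a data decision the user can read and extend.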
Five primitives that survived a complete rewrite, which is the test of an architecture.
Where They Diverge
PAI is Claude Code native. Its entire surface area assumes Claude Code's specific event model and delegation primitives, enabling 63 skills and 180 workflows that compose naturally with Claude Code's tool environment. The tradeoff is lock-in: when Anthropic changes Claude Code's architecture, PAI's entire hook system needs updating.
Shaka treats provider support as a first-class architectural concern. Provider-specific adapters in separate directories serve both Claude Code and opencode from a single configuration. The inference module tries Claude CLI first, falls back to opencode CLI, returns a unified result type without requiring API keys because both CLIs manage their own authentication. If you switch from Claude Code to opencode tomorrow, your Shaka configuration moves with you.
The dependency philosophy diverges sharply. PAI integrates ElevenLabs for voice output, ntfy for push notifications, Discord for team alerts, a rating system for user sentiment signals, and memory layers that persist across sessions. Shaka has three runtime dependencies: commander, eta, yaml. Voice and notifications are entirely absent, which is not a gap in the roadmap but a deliberate design boundary. PAI chose to be a full operating environment. Shaka chose to be a thin enhancement layer. Neither project is on a path toward becoming the other.
Code Quality as Architecture
Shaka's codebase makes its case through specific engineering choices. Error handling uses a Result monad: Result<T, E> types force callers to handle both success and failure paths explicitly, making error propagation visible in type signatures. PAI, operating at five times the codebase scope, mixes exception throwing and error-return conventions across its subsystems.
The upgrade mechanism is symlinks. Shaka's system directory is a symlink to the framework's defaults directory. Running shaka update replaces the symlink target; everything the user customized lives in a separate customizations/ directory the update never touches. PAI uses an installer script with migration tooling and documented migration paths from v2.5 and v3.0. Fewer things can go wrong during an upgrade when the upgrade is a pointer swap.
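The pointer swap itself is a few lines of filesystem code. A sketch under the layout described above (a system symlink pointing into the framework's defaults), not Shaka's actual implementation:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Repoint `linkPath` at `newTarget`. Writing the new link under a temporary
// name and renaming it over the old one makes the swap effectively atomic:
// readers see either the old target or the new one, never a missing link.
function swapSymlink(linkPath: string, newTarget: string): void {
  const tmp = path.join(
    path.dirname(linkPath),
    `.${path.basename(linkPath)}.tmp-${process.pid}`
  );
  fs.symlinkSync(newTarget, tmp);
  fs.renameSync(tmp, linkPath); // rename(2) replaces the old link in one step
}
```

The user's own files never enter this path: customizations/ is an ordinary directory the swap does not touch, which is why the upgrade cannot clobber local edits.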
Context injection is measurable. Running shaka doctor --context produces a per-component breakdown of how many tokens your configuration injects at session start. PAI acknowledged its own context overhead when v4.0 reduced the footprint by 19%, compressing 38 skill directories into 12 hierarchical categories. Shaka makes the problem visible to the user, which means the user can make informed tradeoffs about what to include. PAI's stop orchestrator runs synchronously, rebuilding context files before returning control; Shaka's session-end hook dispatches a background worker and returns instantly. A small design choice that compounds across every session.
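The measurement needs nothing exotic. A sketch of a per-component breakdown, using the rough chars-per-token heuristic as a stand-in for real tokenization; the heuristic and component names are assumptions, not shaka doctor's actual output:

```typescript
// Rough token estimate: ~4 characters per token for English prose.
// A real implementation would use the provider's tokenizer instead.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Per-component breakdown of what a configuration injects at session start.
function contextReport(
  components: Record<string, string>
): { name: string; tokens: number }[] {
  return Object.entries(components)
    .map(([name, text]) => ({ name, tokens: estimateTokens(text) }))
    .sort((a, b) => b.tokens - a.tokens); // biggest offenders first
}
```

Sorting by cost is the point: once the user can see which component dominates the session-start budget, trimming it becomes an informed decision rather than a guess.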
The Identity Question
PAI includes a structured identity system called TELOS: ten dedicated files covering mission statements, mental models, long-term strategies, and personal narratives, loading approximately 10,000 tokens of personalization at session start. Miessler describes this as the difference between a coding tool and a personal assistant: Claude Code asks "what code should I write" while PAI asks "who are you and what are you about."
Shaka includes user identity files rendered through Eta templates. Unmodified defaults are detected and skipped during context injection, saving tokens. A lightweight approach: files you fill in if you want to, not a structured exercise the system requires. I find PAI's ambition attractive and Shaka's restraint admirable, and I suspect most users will gravitate toward whichever project matches how much they want their AI assistant to know about them.
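Detecting an untouched default can be as simple as comparing content hashes. A sketch of the idea under that assumption; Shaka's actual detection mechanism may differ:

```typescript
import { createHash } from "node:crypto";

const sha256 = (text: string): string =>
  createHash("sha256").update(text).digest("hex");

// Skip any identity file whose content still matches the shipped default,
// so unedited templates cost zero tokens at session start.
function filesToInject(
  userFiles: Record<string, string>,
  defaults: Record<string, string>
): string[] {
  return Object.keys(userFiles).filter(
    (name) => sha256(userFiles[name]) !== sha256(defaults[name] ?? "")
  );
}
```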
Maturity and Momentum
PAI has shipped nine major releases in three months with breaking changes at each boundary. The project has 9,400 GitHub stars, 1,300 forks, and 237 closed pull requests. Miessler has discussed the architecture on the Cognitive Revolution podcast, and community members have ported the system to other platforms independently.
Shaka is 24 days old with 71 commits and 9 stars. I read the changelog expecting a scrambled prototype and found a developer executing a disciplined roadmap: session memory in v0.2.0, continuous learning in v0.3.0, slash commands in v0.4.0, rolling summaries in v0.4.1. Each release builds on the previous one without rewriting the foundation. The bus factor is one.
Users who want a complete system today choose PAI. Users who value clean architecture and provider flexibility watch Shaka.
The Rewrite as Validation
When someone rebuilds your architecture from scratch and keeps every structural primitive, the architecture was right. Montoya could have dropped hooks, merged agents into skills, replaced memory with simple file reads, or collapsed security validation into a single permission prompt. He did none of that. His reimplementation pressure-tested PAI's five primitives and found them all load-bearing. A separate community PAI-opencode port follows the same pattern again. Five requirements that every serious attempt at this problem preserves.
Configuration lives in files you own. Every instruction your AI assistant follows is something you can read and change. That is what it means for infrastructure to answer to its user.