HoEasy, your multi-agent workspace.
Dispatch specialist agents across code, docs, calendars, channels, and research while a dedicated PM agent gathers every thread into one clear briefing for you.
Research, code review, scheduling, docs, and channel replies are coordinated before the human handoff.
From a message in any channel
to work that actually ships.
Watch a request travel through HoEasy: it lands on a channel, the agent recalls relevant memory, plans, runs your tools, fans out to teammates when needed, files progress to a project, and reports back—on the same channel it came from.
Memory, planning, and a context budget
that doesn't run out.
Under the hood, HoEasy treats tokens like cash. It recalls only what's relevant, plans before it acts, and quietly keeps the conversation tidy—so threads can run for hours without losing the plot.
Auto recall, auto save.
Before every turn, the agent pulls relevant facts from session and permanent memory. After every turn, a cheap model extracts what mattered—decisions, preferences, key IDs—and stores them, deduped by similarity.
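A rough sketch of that loop, assuming illustrative names rather than HoEasy's actual API, and with string similarity standing in for real embeddings:

```python
# Minimal sketch of auto-recall / auto-save; names are illustrative,
# and difflib stands in for embedding-based similarity.
from difflib import SequenceMatcher

class Memory:
    def __init__(self):
        self.facts: list[str] = []

    def search(self, query: str, top_k: int = 5) -> list[str]:
        # Before the turn: recall the facts most similar to the incoming message.
        ranked = sorted(self.facts,
                        key=lambda f: SequenceMatcher(None, query, f).ratio(),
                        reverse=True)
        return ranked[:top_k]

    def save(self, fact: str, threshold: float = 0.9) -> None:
        # After the turn: dedupe by similarity before storing.
        if all(SequenceMatcher(None, fact, f).ratio() < threshold
               for f in self.facts):
            self.facts.append(fact)

def run_turn(memory: Memory, user_message: str) -> str:
    recalled = memory.search(user_message)
    reply = f"(agent reply, grounded in {len(recalled)} recalled facts)"
    # Stand-in for the cheap extractor model pulling out decisions and key IDs.
    for fact in [f"user said: {user_message}"]:
        memory.save(fact)
    return reply
```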
A real plan, lived live.
HoEasy proposes a plan, marks the active step, and updates progress as it goes. You can intervene any time—approve, redirect, or cancel—without losing the thread.
Context, kept healthy.
A built-in context guard keeps the conversation tidy as it grows, and auto memory quietly remembers the things that matter. Hours-long threads, week-old projects, picked-up-mid-sentence chats—nothing falls through.
One agent for fast jobs. A team for the hard ones.
When work fans out, HoEasy spawns sub-agents—researcher, writer, reviewer, or whatever role you've set up. They run in parallel, share progress, and roll their results back to the lead.
A swarm tool, not a separate product.
Sub-agents are just sessions—same memory, hooks, and permissions. Spawn them with task_create, watch progress with task_get, stop them with task_stop (see the sketch after this list). Define a team and the leader automatically knows who can do what.
- ↳ Parallel work, with progress streaming back to the lead.
- ↳ Shared workspace—files, projects, and memory all live in the same account.
- ↳ Survive restarts: in-flight tasks reconcile against persisted run status.
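A minimal sketch of that fan-out. The task_* signatures shown are assumptions for illustration, not HoEasy's exact schema, and a tiny in-process registry stands in for the real runtime:

```python
# Hypothetical task_* tools: a lead agent fans work out to parallel sub-agents.
import concurrent.futures, itertools, time

_ids = itertools.count(1)
_tasks: dict[int, dict] = {}
_pool = concurrent.futures.ThreadPoolExecutor()

def task_create(role: str, prompt: str) -> int:
    """Spawn a sub-agent session; returns a task id the lead can poll."""
    task_id = next(_ids)
    _tasks[task_id] = {"status": "running"}
    def work():
        time.sleep(0.1)  # stand-in for the sub-agent actually doing the work
        _tasks[task_id]["result"] = f"{role}: finished '{prompt}'"
        _tasks[task_id]["status"] = "done"
    _tasks[task_id]["future"] = _pool.submit(work)
    return task_id

def task_get(task_id: int) -> dict:
    """Progress check; the lead calls this while results stream back."""
    return {k: v for k, v in _tasks[task_id].items() if k != "future"}

def task_stop(task_id: int) -> None:
    """Cancel an in-flight sub-agent."""
    _tasks[task_id]["future"].cancel()
    _tasks[task_id]["status"] = "stopped"

# The lead fans out to two roles in parallel, then collects the results.
ids = [task_create("researcher", "competitor pricing"),
       task_create("writer", "summary memo")]
while any(task_get(i)["status"] == "running" for i in ids):
    time.sleep(0.05)
print([task_get(i).get("result") for i in ids])
```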
Work shows up in a project,
not a wall of chat.
Every long-running ask becomes a project with tasks, comments, and a live timeline. You see exactly what shipped, what's blocked, and what your agent is touching right now.
Sandboxed, governed, observable—
so you can actually let it run.
Every account gets its own container, its own sandbox roots, and its own permission rules. Hooks let you veto a tool call before it fires; logs let you see exactly what happened.
Run in default mode, strict mode that asks on anything unknown, or full-trust bypass, set per account.
Pin the agent to specific paths. Mark roots read-only. bwrap enforces it at the process level.
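For a sense of what process-level enforcement looks like, here is a hedged sketch of launching a skill under bubblewrap with pinned, read-only roots; the wrapper function and the paths are illustrative, not HoEasy's internals:

```python
# Launch a command inside a bwrap sandbox that only sees the pinned roots.
import subprocess

def run_sandboxed(cmd: list[str], ro_roots: list[str], rw_roots: list[str]) -> int:
    wrapper = ["bwrap", "--unshare-all", "--die-with-parent",
               "--proc", "/proc", "--dev", "/dev"]
    for path in ro_roots:
        wrapper += ["--ro-bind", path, path]   # visible to the skill, but read-only
    for path in rw_roots:
        wrapper += ["--bind", path, path]      # the only writable roots
    return subprocess.run(wrapper + cmd).returncode

# Example (needs bubblewrap installed; paths are illustrative):
#   run_sandboxed(["ls", "/data"], ro_roots=["/usr", "/etc"], rw_roots=["/data"])
```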
Allow / deny / ask on tool name, file globs, or shell command regexes. Sensitive paths and dangerous commands are blocked by default.
Plug in pre_tool_use, pre_compact, and friends—HTTP or shell. Block, log, or page yourself.
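As one illustration, a pre_tool_use hook served over HTTP might veto dangerous shell calls like this; the request and response shapes are assumptions for the sketch, not HoEasy's actual hook contract:

```python
# Hypothetical HTTP pre_tool_use hook: block risky shell commands before they fire.
import json, re
from http.server import BaseHTTPRequestHandler, HTTPServer

DANGEROUS = re.compile(r"rm\s+-rf|mkfs|dd\s+if=")

class PreToolUseHook(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        command = body.get("args", {}).get("command", "")
        blocked = body.get("tool") == "shell" and DANGEROUS.search(command)
        verdict = ({"decision": "block", "reason": "dangerous shell command"}
                   if blocked else {"decision": "allow"})
        payload = json.dumps(verdict).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Point the pre_tool_use hook at this address to block, log, or page.
    HTTPServer(("127.0.0.1", 8080), PreToolUseHook).serve_forever()
```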
Pay for usage, not seats.
Start free in the cloud. Self-host on your own hardware whenever you're ready. Bring your own LLM keys, or use ours.
Hobby
- 1 account, 1 default node
- BYO LLM keys
- WhatsApp + Web channels
- Community plugins
Team
- Multi-agent teams & delegation
- All channels (Slack, Email, WS, WhatsApp)
- Cron jobs, projects, KB
- Shared governance & hooks
- SSO & audit logs
Self-host
- Run in your VPC / on-prem
- Per-account Docker isolation
- Custom plugin marketplace
- Priority support
Things people ask before they trust it.
What kind of automation is this actually good for?
Anything that involves reading messages, doing a few steps with your tools, and reporting back. Customer triage, vendor follow-ups, weekly reports, on-call summarisation, content drafting, project bookkeeping, scheduled audits—if it's "messages in, work out", HoEasy fits.
Does it work with my existing systems?
Yes. The default node ships with shell, files, web fetch, RAG, and coding tools. You can also write custom skills at runtime, install plugin packages, or connect external nodes via the SDK. Everything talks over a small WebSocket protocol.
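For flavour, an external node might register a custom skill over WebSocket roughly like this; the endpoint, message shapes, and skill name are illustrative assumptions, and the real contract is defined by the SDK:

```python
# Sketch of an external node announcing one skill and answering tool calls.
import asyncio, json
import websockets  # pip install websockets

async def serve_node():
    async with websockets.connect("ws://localhost:9000/nodes") as ws:
        # Announce the skill this node contributes.
        await ws.send(json.dumps({"type": "register",
                                  "skills": [{"name": "crm.lookup"}]}))
        async for raw in ws:  # answer tool calls as they arrive
            call = json.loads(raw)
            await ws.send(json.dumps({"type": "result", "id": call["id"],
                                      "output": f"looked up {call['args']['customer']}"}))

asyncio.run(serve_node())
```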
How does it stay safe when it can run shell commands?
Three layers: per-account Docker containers, per-skill bubblewrap sandboxing, and a permission layer that evaluates every tool call against your allow/deny rules and sandbox roots. Sensitive paths and dangerous commands are blocked by default.
Will I lose context after a long conversation?
No. The harness applies a context guard before every LLM call and compacts older turns at 60% of the model's window. Memory is automatic—relevant facts are recalled before each turn and key decisions are saved after.
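A rough sketch of that 60% rule, with a crude token estimate and a placeholder summary standing in for the real compaction:

```python
# Compact older turns once the thread crosses 60% of the model's window.
def guard_context(turns: list[str], window: int, threshold: float = 0.6) -> list[str]:
    count = lambda text: len(text) // 4       # crude ~4-characters-per-token estimate
    if sum(count(t) for t in turns) <= threshold * window:
        return turns                          # under budget: leave the thread alone
    keep: list[str] = []
    for turn in reversed(turns):              # keep the newest turns that fit half the window
        if sum(count(t) for t in keep) + count(turn) > window * 0.5:
            break
        keep.insert(0, turn)
    # Stand-in for an LLM-written summary of everything that was dropped.
    summary = f"[compacted {len(turns) - len(keep)} older turns]"
    return [summary] + keep
```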
Can I bring my own model?
Yes. Configure OpenAI, Anthropic, z.ai, MiniMax, or any compatible provider. The runtime fails over between providers automatically and cools down failing ones for five minutes.
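Failover with a cooldown can be pictured like this; the provider objects and call signature are placeholders, not the runtime's actual interface:

```python
# Try providers in priority order; bench a failing one for five minutes.
import time

COOLDOWN_SECONDS = 5 * 60
_cooling: dict[str, float] = {}

def complete(prompt: str, providers: list) -> str:
    last_error = None
    for provider in providers:
        if time.time() < _cooling.get(provider.name, 0):
            continue                                    # still cooling down, skip it
        try:
            return provider.complete(prompt)
        except Exception as err:                        # any failure benches the provider
            _cooling[provider.name] = time.time() + COOLDOWN_SECONDS
            last_error = err
    raise RuntimeError("all providers are failing or cooling down") from last_error
```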
What happens to the agent when I'm offline?
It keeps working. Cron schedules trigger sessions, channels still receive messages, sub-agents run in the background. When you come back, the project view shows you exactly what happened.
Hire the teammate
who never drops the ball.
Spin up a workspace, paste an LLM key, and connect a channel. You'll be delegating real work in under ten minutes.
