Describing bugs is the worst part of debugging
You hit a weird network error. The request body looks fine in your head, the status code makes no sense, and there's a JS exception somewhere that may or may not be related. You open lac agent, and then you spend five minutes typing out a description of what happened — pasting curl commands, copying headers, explaining the sequence of clicks that got you there.
That whole ritual is the part I hate most. Not the bug itself — the overhead of translating what I just saw into text that an AI can actually reason about.
/watch removes that step entirely.
What /watch actually does
When you run /watch inside a lac agent session, it spins up a browser monitoring layer that tracks everything happening in your active browser session. We're talking:
- Every click and input event, with element context
- Full navigation history
- XHR and fetch requests — including headers and request/response bodies
- JavaScript errors with stack traces
- A timestamped timeline of the whole session
It also activates voice recognition, so you can narrate while you're clicking around — something like "okay this is the step where it breaks" — and that commentary gets woven into the timeline too.
Then, whenever you're ready, there's a floating Send to AI button overlaid on your browser. One click, and the entire session timeline gets shipped to your lac agent context. The agent sees what you saw, in order, with full network detail.
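To make that concrete, here's roughly what that timeline looks like. I haven't seen the payload format documented, so the field names below are my own invention, but conceptually the agent receives a flat, ordered list of events along these lines:

# Illustrative only: the structure and field names are my guesses, not lac's real schema.
session_timeline = [
    {"t": "00:00.4", "type": "click", "target": "button#submit-btn-3", "page": "/checkout"},
    {"t": "00:01.1", "type": "fetch", "method": "POST", "url": "/api/submit",
     "request_headers": {"content-type": "application/x-www-form-urlencoded"},
     "request_body": {"email": "me@example.com"},
     "status": 400,
     "response_body": {"errors": {"email": ["Enter a valid email address."]}}},
    {"t": "00:01.2", "type": "js_error", "message": "TypeError: Cannot read properties of undefined",
     "stack": "handleResponse (checkout.js:88)"},
    {"t": "00:01.5", "type": "narration", "text": "okay this is the step where it breaks"},
]

The point isn't the exact fields. It's that clicks, requests, errors, and narration arrive as one ordered stream, so the agent never has to guess what happened in what order.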
A real workflow: debugging a broken API call
Here's how I used it last week. I had a Django backend with a route that was returning a 400 on a specific form submission — but only in production, not locally. Classic.
I started a lac agent session in my project directory:
lac agent
Then typed /watch to start monitoring. I opened the production URL in Chrome, walked through the form submission — filling in the fields, hitting submit, watching it fail — and hit Send to AI.
The agent immediately had the full picture: the POST request to /api/submit, the exact request body, the 400 response with the validation error message buried in JSON, and a JS exception that fired afterward when the frontend failed to parse the response correctly.
Without /watch, I'd have had to manually copy the request from DevTools, paste the response, describe the JS error, and explain the click sequence. Instead, I just used the app like a user and let the monitoring do the work.
The agent traced it to a CSRF token field that wasn't being included in the production form — a one-line fix. Total time from /watch to fix: under four minutes.
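If you haven't seen this failure shape before, here's a minimal sketch of the kind of route handler the agent was cross-referencing. It isn't the actual project code (the names are made up), but it's the standard Django pattern: validate the POST body with a form, return the errors as JSON with a 400.

# Illustrative sketch, not the real project code: a typical Django view that
# produces exactly the kind of 400-with-JSON-errors response the capture showed.
from django import forms
from django.http import JsonResponse
from django.views.decorators.http import require_POST

class SubmitForm(forms.Form):
    email = forms.EmailField()
    message = forms.CharField()

@require_POST
def submit(request):
    form = SubmitForm(request.POST)
    if not form.is_valid():
        # Any required field missing from the POST body ends up here,
        # with the validation errors buried in the JSON response.
        return JsonResponse({"errors": form.errors.get_json_data()}, status=400)
    # ... handle the valid submission ...
    return JsonResponse({"ok": True})

With the captured request body and a handler like that side by side, figuring out which required field is missing in production is a mechanical diff, not an investigation.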
The network capture is the killer feature
Most browser devtools will show you network requests. What they won't do is automatically correlate those requests with the sequence of user actions that triggered them, attach the relevant JS errors, and then feed all of that to an AI that also has your codebase open.
That last part — the agent having your files and the session data at the same time — is what makes /watch genuinely useful rather than just a fancier DevTools. It can look at your actual route handler, your serializer, your middleware, and cross-reference that against what the network capture shows. That's the combination that saves time.
Voice narration is more useful than it sounds
I was skeptical about the voice recognition piece at first. But it turns out that narrating while you click is a natural thing to do when you're investigating a bug — you're already talking yourself through it half the time. Having that commentary timestamped in the session timeline means the agent has your intent, not just the raw events.
"Clicking submit here — this is where it should redirect but doesn't" is more useful context than just a click event on a button with ID submit-btn-3.
How to get started
You need lac-cli v0.3.5 or later. Install it with:
pip install lac-cli
Or grab the shell installer:
curl -fsSL https://lacai.io/install.sh | bash
Then inside any lac agent session, just type /watch. It'll confirm that monitoring is active. From there, use your browser normally, narrate if you want, and hit Send to AI when you've reproduced the issue.
If you want to bring your own model — Ollama for local/offline, Claude, or OpenAI — configure that in ~/.lac/config.json before you start.
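I'm not going to reproduce the schema here because I'd be guessing at the key names; check the lac-cli docs for the real thing. Mechanically, though, it's just a small JSON file in your home directory, something like:

# Hypothetical only: the keys and values below are assumptions for illustration.
# Check the lac-cli docs for the actual config schema before copying this.
import json
from pathlib import Path

config_path = Path.home() / ".lac" / "config.json"
config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps(
    {"provider": "ollama", "model": "llama3.1"},  # assumed keys and values
    indent=2,
))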
One practical tip
Don't wait until you've fully reproduced the bug to send. Send early, after the first failure, and let the agent start reasoning. You can always do another /watch pass if it needs more context. Sending a shorter, focused session is usually more useful than a long one with a lot of noise around the actual problem.
The agent can ask follow-up questions and you can keep the session going — it's not a one-shot thing. Think of the Send to AI action as "here's what I've got so far" rather than a final report.