# Agent Training

Improve your agent's accuracy and reliability by providing example interactions.
## Why Train Agents
Browser agents learn from examples. Training helps them:
- Navigate complex UIs more accurately
- Handle edge cases you've encountered before
- Recover from failures by learning correction patterns
- Adapt to your application's specific behavior
Untrained agents rely solely on general knowledge. Trained agents understand your app's unique patterns.
## Creating Training Datasets

### Navigate to Datasets
- Open your project in the DebuggAI dashboard
- Select the agent you want to train
- Click Datasets in the agent sidebar
### Add Example Interactions
Each dataset entry captures a complete interaction:
```
Scenario: "User logs in with valid credentials"

Steps:
1. Navigate to /login
2. Fill email field with test@example.com
3. Fill password field
4. Click Submit button
5. Verify redirect to /dashboard
```
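An entry like the one above can be thought of as structured data. The sketch below is purely illustrative; the field names are assumptions, not DebuggAI's actual dataset schema:

```python
# Hypothetical sketch of a dataset entry as structured data.
# Field names ("scenario", "steps", "action", ...) are illustrative
# assumptions, not DebuggAI's actual schema.
login_example = {
    "scenario": "User logs in with valid credentials",
    "steps": [
        {"action": "navigate", "target": "/login"},
        {"action": "fill", "target": "email field", "value": "test@example.com"},
        {"action": "fill", "target": "password field", "value": "<secret>"},
        {"action": "click", "target": "Submit button"},
        {"action": "verify", "target": "redirect to /dashboard"},
    ],
}

# Each step pairs one action with one target, so a failure can later
# be pinned to a specific step.
assert len(login_example["steps"]) == 5
```

Keeping one action per step makes it easy to label success and failure at step granularity later.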
### Organize by Feature
Create separate datasets for different workflows:
| Dataset | Purpose |
|---|---|
| Authentication | Login, logout, password reset |
| Checkout | Cart, payment, confirmation |
| Navigation | Menu, search, filtering |
| Forms | Validation, submission, errors |
## Labeling Data
Accurate labels are essential for effective training.
### Mark Correct Actions
For successful interactions, label each step:
- Action type: click, fill, navigate, wait
- Target element: selector or description
- Expected result: what should happen next
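Putting those three label fields together, a single labeled step might look like the following. The field names are illustrative assumptions, not DebuggAI's actual label format:

```python
# One labeled step covering action type, target, and expected result.
# Field names are illustrative assumptions, not DebuggAI's schema.
labeled_step = {
    "action_type": "fill",                  # click, fill, navigate, or wait
    "target": "input[name='email']",        # selector or plain description
    "expected_result": "field contains test@example.com",
}
```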
### Identify Failure Points

When interactions fail, document:

- Where it failed: which step broke
- Why it failed: element not found, timeout, or wrong state
- Actual vs. expected: what happened versus what should have happened
### Provide Correction Examples
Show the agent how to recover:
```
Failed: Clicked "Submit" but form didn't submit
Reason: Button was disabled due to validation error
Correction: Fill required "Phone" field first, then click Submit
```
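A failure and its correction can be captured together as one record, so the agent sees both what went wrong and the recovery path. This is a sketch with assumed field names, not DebuggAI's actual format:

```python
# Sketch of a failure-plus-correction record.
# All field names and values here are illustrative assumptions.
correction_example = {
    "failed_step": {"action": "click", "target": "Submit button"},
    "failure_reason": "Button was disabled due to a validation error",
    "actual": "Form did not submit",
    "expected": "Form submits and navigates to the next page",
    # Recovery path: satisfy the validation requirement, then retry.
    "correction": [
        {"action": "fill", "target": "Phone field", "value": "555-0100"},
        {"action": "click", "target": "Submit button"},
    ],
}
```

Pairing each failure with an explicit correction sequence gives the agent a concrete recovery pattern rather than just a negative example.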
## Running Training Sessions

### Select Training Data
- Go to Agent Settings > Training
- Choose datasets to include
- Set training parameters:
  - Iterations: 1-5 (more iterations = more refinement)
  - Validation split: the percentage of examples held out for testing
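The validation split simply holds back a fraction of your examples from training so accuracy can be measured on data the agent has not seen. A minimal sketch of the concept (not DebuggAI's implementation):

```python
import random

def split_dataset(examples, validation_pct=20, seed=42):
    """Hold out validation_pct percent of examples for evaluation.

    A conceptual sketch of a validation split, not DebuggAI's
    actual implementation.
    """
    shuffled = examples[:]                    # don't mutate the caller's list
    random.Random(seed).shuffle(shuffled)     # fixed seed -> reproducible split
    n_val = max(1, len(shuffled) * validation_pct // 100)
    return shuffled[n_val:], shuffled[:n_val]  # (train, validation)

train, val = split_dataset([f"example-{i}" for i in range(10)], validation_pct=20)
# 10 examples at a 20% split -> 8 for training, 2 held out
```

The held-out examples never influence training, which is what makes validation accuracy a fair estimate of real-world performance.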
### Monitor Progress
During training, track:
- Loss metrics: should decrease over iterations
- Validation accuracy: performance on held-out examples
- Step-by-step logs: detailed training decisions
### Evaluate Improvements
After training completes:
- Run the same tests that previously failed
- Compare success rates before vs after
- Check for regressions on previously working tests
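The before/after comparison boils down to diffing per-test pass/fail results from two runs. A sketch, assuming you can export each run as a test-name-to-result mapping (this is not a DebuggAI API):

```python
def compare_runs(before, after):
    """Compare per-test pass/fail results from two runs.

    `before` and `after` map test name -> True (pass) / False (fail).
    A conceptual sketch, not a DebuggAI API.
    """
    fixed = [t for t in before if not before[t] and after.get(t)]
    regressed = [t for t in before if before[t] and not after.get(t, True)]
    success_rate = lambda run: sum(run.values()) / len(run)
    return {
        "fixed": fixed,                         # failed before, passes now
        "regressed": regressed,                 # passed before, fails now
        "success_before": success_rate(before),
        "success_after": success_rate(after),
    }

before = {"login": False, "checkout": True, "search": True}
after = {"login": True, "checkout": True, "search": False}
report = compare_runs(before, after)
# fixed: ["login"]; regressed: ["search"]
```

A non-empty `regressed` list is the signal to fold the previously passing examples back into the training data before retraining.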
## Best Practices

### Quality Over Quantity
10 well-labeled examples beat 100 sloppy ones. For each example:
- Verify the steps are reproducible
- Include clear success criteria
- Document any prerequisite state
### Include Edge Cases
Train on scenarios that commonly fail:
- Slow-loading elements
- Dynamic content
- Modal dialogs and popups
- Form validation errors
- Multi-step wizards
### Retrain When UI Changes
Schedule retraining after:
- Major UI redesigns
- New feature launches
- Framework or library updates
- Changes to form fields or navigation
A good rule: if your E2E tests need updating, your agent probably needs retraining too.
## Troubleshooting Training
| Issue | Solution |
|---|---|
| Training doesn't improve accuracy | Add more diverse examples |
| Agent regresses on old tests | Include old successful examples in training data |
| Training takes too long | Reduce dataset size, focus on problem areas |
| Validation accuracy low | Check for inconsistent labels |
Next: Return to the Browser Agents Overview or explore Creating Agents.