Agent Training

Improve your agent's accuracy and reliability by providing example interactions.

Why Train Agents

Browser agents learn from examples. Training helps them:

  • Navigate complex UIs more accurately
  • Handle edge cases you've encountered before
  • Recover from failures by learning correction patterns
  • Adapt to your application's specific behavior

Untrained agents rely solely on general knowledge. Trained agents understand your app's unique patterns.

Creating Training Datasets

  1. Open your project in the DebuggAI dashboard
  2. Select the agent you want to train
  3. Click Datasets in the agent sidebar

Add Example Interactions

Each dataset entry captures a complete interaction:

Scenario: "User logs in with valid credentials"

Steps:
1. Navigate to /login
2. Fill email field with test@example.com
3. Fill password field
4. Click Submit button
5. Verify redirect to /dashboard
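The entry above can also be thought of as structured data. Here is a minimal sketch of that shape in Python; the field names (`scenario`, `steps`, `action`, `target`, `value`) are illustrative only, not DebuggAI's actual dataset format:

```python
# Hypothetical shape of one dataset entry; field names are illustrative,
# not DebuggAI's actual schema.
login_example = {
    "scenario": "User logs in with valid credentials",
    "steps": [
        {"action": "navigate", "target": "/login"},
        {"action": "fill", "target": "email field", "value": "test@example.com"},
        {"action": "fill", "target": "password field", "value": "<password>"},
        {"action": "click", "target": "Submit button"},
        {"action": "verify", "target": "redirect to /dashboard"},
    ],
}
```

Keeping entries in a uniform shape like this makes them easy to review, diff, and reuse across datasets.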

Organize by Feature

Create separate datasets for different workflows:

Dataset          Purpose
Authentication   Login, logout, password reset
Checkout         Cart, payment, confirmation
Navigation       Menu, search, filtering
Forms            Validation, submission, errors

Labeling Data

Accurate labels are essential for effective training.

Mark Correct Actions

For successful interactions, label each step:

  • Action type: click, fill, navigate, wait
  • Target element: selector or description
  • Expected result: what should happen next
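Taken together, those three labels describe one step. A sketch of a labeled step, again assuming a hypothetical record shape (the keys `action_type`, `target`, and `expected_result` are illustrative):

```python
# Illustrative label for one successful step (hypothetical field names).
labeled_step = {
    "action_type": "click",    # one of: click, fill, navigate, wait
    "target": "button#submit", # CSS selector or plain-language description
    "expected_result": "form submits and browser redirects to /dashboard",
}
```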

Identify Failure Points

When interactions fail, document:

  • Where it failed: which step broke
  • Why it failed: element not found, timeout, wrong state
  • Actual vs expected: what actually happened versus what should have happened

Provide Correction Examples

Show the agent how to recover:

Failed: Clicked "Submit" but form didn't submit
Reason: Button was disabled due to validation error
Correction: Fill required "Phone" field first, then click Submit
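The same failure-and-correction pair can be captured as a record. A sketch mirroring the example above, with hypothetical field names (`failed_step`, `reason`, `correction`):

```python
# Illustrative failure report with its correction (hypothetical schema).
correction_example = {
    "failed_step": 'Clicked "Submit" but form didn\'t submit',
    "reason": "Button was disabled due to a validation error",
    "correction": [
        {"action": "fill", "target": "Phone field", "value": "<required value>"},
        {"action": "click", "target": "Submit button"},
    ],
}
```

Pairing each documented failure with an explicit recovery sequence is what lets the agent learn correction patterns rather than just avoid the failing step.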

Running Training Sessions

Select Training Data

  1. Go to Agent Settings > Training
  2. Choose datasets to include
  3. Set training parameters:
    • Iterations: 1-5 (more iterations = more refinement)
    • Validation split: percentage held for testing
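To make the validation split concrete: it holds back a fraction of your examples so training is scored on data it never saw. A minimal sketch of that mechanic (this is a generic illustration, not DebuggAI's internal implementation):

```python
import random

def split_dataset(examples, validation_split=0.2, seed=42):
    """Hold out a fraction of examples for validation.

    validation_split is the fraction held back for testing, matching
    the "Validation split" parameter described above. Generic sketch,
    not DebuggAI's actual implementation.
    """
    rng = random.Random(seed)      # fixed seed for a reproducible split
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * validation_split)
    return shuffled[n_val:], shuffled[:n_val]  # (train, validation)

train, val = split_dataset(list(range(10)), validation_split=0.2)
# with 10 examples and a 20% split: 8 for training, 2 for validation
```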

Monitor Progress

During training, track:

  • Loss metrics: should decrease over iterations
  • Validation accuracy: performance on held-out examples
  • Step-by-step logs: detailed training decisions
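A quick way to sanity-check the loss trend is to compare the first and last iterations rather than every adjacent pair, since loss can fluctuate step to step. A small helper sketch (illustrative, not part of the DebuggAI API):

```python
def loss_is_decreasing(losses):
    """Coarse health check: did loss drop from the first iteration to the last?

    Endpoint comparison tolerates small step-to-step fluctuations.
    Illustrative helper, not part of the DebuggAI API.
    """
    return len(losses) >= 2 and losses[-1] < losses[0]

loss_is_decreasing([0.92, 0.61, 0.48, 0.44])  # healthy run
loss_is_decreasing([0.50, 0.55, 0.58])        # flat or rising: investigate
```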

Evaluate Improvements

After training completes:

  1. Run the same tests that previously failed
  2. Compare success rates before vs after
  3. Check for regressions on previously working tests
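Steps 2 and 3 amount to diffing two sets of test results. A sketch of that comparison, assuming each run is a simple mapping from test name to pass/fail (a generic illustration, not a DebuggAI API):

```python
def compare_runs(before, after):
    """Diff per-test pass/fail results from before and after training.

    before/after map test names to True (pass) or False (fail).
    Returns (fixed, regressed): tests newly passing, and tests that
    passed before but fail now. Generic illustration, not a DebuggAI API.
    """
    fixed = [t for t in after if after[t] and not before.get(t, False)]
    regressed = [t for t in before if before[t] and not after.get(t, True)]
    return fixed, regressed

before = {"login": False, "checkout": True, "search": True}
after = {"login": True, "checkout": True, "search": False}
fixed, regressed = compare_runs(before, after)
# fixed == ["login"], regressed == ["search"]
```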

Best Practices

Quality Over Quantity

10 well-labeled examples beat 100 sloppy ones. For each example:

  • Verify the steps are reproducible
  • Include clear success criteria
  • Document any prerequisite state

Include Edge Cases

Train on scenarios that commonly fail:

  • Slow-loading elements
  • Dynamic content
  • Modal dialogs and popups
  • Form validation errors
  • Multi-step wizards
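For example, an edge-case entry for a slow-loading modal might pair an explicit wait with the dismissal, so the agent learns not to click before the element exists. Same hypothetical schema as the earlier sketches; the `timeout_ms` field is illustrative:

```python
# Illustrative edge-case entry: a slow-loading modal (hypothetical schema).
modal_example = {
    "scenario": "Dismiss newsletter modal before starting checkout",
    "steps": [
        {"action": "navigate", "target": "/checkout"},
        {"action": "wait", "target": "newsletter modal", "timeout_ms": 5000},
        {"action": "click", "target": "modal close button"},
        {"action": "verify", "target": "checkout form is interactive"},
    ],
}
```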

Retrain When UI Changes

Schedule retraining after:

  • Major UI redesigns
  • New feature launches
  • Framework or library updates
  • Changes to form fields or navigation

A good rule: if your E2E tests need updating, your agent probably needs retraining too.

Troubleshooting Training

Issue                               Solution
Training doesn't improve accuracy   Add more diverse examples
Agent regresses on old tests        Include old successful examples in training data
Training takes too long             Reduce dataset size; focus on problem areas
Validation accuracy low             Check for inconsistent labels

Next: Return to the Browser Agents Overview or explore Creating Agents.