Creating Browser Agents
Browser agents automate web interactions to test workflows, monitor applications, and validate functionality. This guide walks you through creating and configuring your first browser agent.
Navigate to Browser Agents
- Log in to your DebuggAI dashboard
- Click Browser Agents in the left navigation menu
- You'll see a list of existing agents (if any) and the option to create new ones
Create New Agent
Click the Create Agent button to open the agent creation form.
Required Fields
| Field | Description |
|---|---|
| Name | A descriptive name for the agent (e.g., "Checkout Flow Test", "Login Validation") |
| Description | What the agent should accomplish. Be specific about the steps and expected outcomes |
| Project | Select the associated project from your workspace |
Optional Fields
| Field | Description |
|---|---|
| Tags | Add tags for organization and filtering (e.g., "checkout", "critical-path", "staging") |
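If it helps to think of the creation form as data, the fields above map onto a simple record like the one below. The interface is purely illustrative; it is not a published DebuggAI type, and the field names simply mirror the form.

```typescript
// Illustrative only: mirrors the creation form's fields, not a published DebuggAI type.
interface BrowserAgentDefinition {
  name: string;        // descriptive name, e.g. "Checkout Flow Test"
  description: string; // step-by-step instructions for the agent
  project: string;     // the associated project in your workspace
  tags?: string[];     // optional labels for organization and filtering
}

const loginAgent: BrowserAgentDefinition = {
  name: "Login Validation",
  description:
    "Navigate to the login page, sign in with the staging test account, " +
    "and verify the dashboard loads with the user's name in the header.",
  project: "web-app",
  tags: ["auth", "critical-path", "staging"],
};
```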
Writing Effective Descriptions
Your description tells the agent what to do. Write clear, step-by-step instructions, as in this example (a script-style sketch of the same flow appears after the list):
1. Navigate to the login page
2. Enter test credentials (user@example.com / testpass123)
3. Click the login button
4. Verify the dashboard loads successfully
5. Check that the user's name appears in the header
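To make the intent of those five steps concrete, here is roughly the same flow written as an explicit browser script. Playwright is used only as an illustration of what the agent automates for you; the URL, selectors, and expected header text are placeholders.

```typescript
import { test, expect } from "@playwright/test";

// Roughly the flow described above, written as an explicit script.
// URL, selectors, and expected text are placeholders for your own application.
test("login flow", async ({ page }) => {
  await page.goto("https://dev.yourapp.com/login");                 // 1. navigate to the login page
  await page.fill('input[name="email"]', "user@example.com");       // 2. enter test credentials
  await page.fill('input[name="password"]', "testpass123");
  await page.click('button[type="submit"]');                        // 3. click the login button
  await expect(page).toHaveURL(/dashboard/);                        // 4. verify the dashboard loads
  await expect(page.locator("header")).toContainText("Test User");  // 5. user's name in the header
});
```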
Configure Agent Settings
After creating the agent, configure its execution settings.
Target URL/Environment
Specify where the agent should run (a small helper for keeping these targets consistent is sketched after the list):
- Development: https://dev.yourapp.com
- Staging: https://staging.yourapp.com
- Production: https://yourapp.com
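If you maintain several agents across these environments, keeping the target URLs in one place avoids drift. A minimal sketch, assuming the example hostnames above:

```typescript
// Minimal sketch: one source of truth for environment targets.
// Hostnames are the placeholders used above; substitute your own.
const targets = {
  development: "https://dev.yourapp.com",
  staging: "https://staging.yourapp.com",
  production: "https://yourapp.com",
} as const;

type TargetEnvironment = keyof typeof targets;

function targetUrl(env: TargetEnvironment, path = "/"): string {
  return new URL(path, targets[env]).toString();
}

console.log(targetUrl("staging", "/checkout")); // https://staging.yourapp.com/checkout
```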
Authentication Requirements
If your agent needs to access authenticated areas, follow these guidelines (a credential-handling sketch appears after the list):
- Provide test credentials in the agent description
- Use environment-specific test accounts
- Avoid using production user credentials
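One way to follow the last two points is to keep environment-specific test credentials outside the agent text and interpolate them when you compose the description. A sketch, assuming hypothetical environment variable names:

```typescript
// Sketch: build the description from environment-specific test credentials so
// production user credentials never end up in an agent. Variable names are hypothetical.
const email = process.env.STAGING_TEST_EMAIL ?? "user@example.com";
const password = process.env.STAGING_TEST_PASSWORD ?? "testpass123";

const description = [
  "Navigate to the login page",
  `Enter test credentials (${email} / ${password})`,
  "Click the login button",
  "Verify the dashboard loads successfully",
].join("\n");
```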
Success Criteria
Define what constitutes a successful run; one way to capture these criteria is sketched after the list:
- Page elements that should be visible
- URLs the agent should reach
- Text content that should appear
- Actions that should complete without errors
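It can help to write the criteria down in a structured form before folding them into the description. The shape below is illustrative only, not a DebuggAI schema:

```typescript
// Illustrative shape for capturing success criteria; not a DebuggAI schema.
interface SuccessCriteria {
  visibleElements: string[]; // elements that should be visible
  expectedUrl: RegExp;       // a URL the agent should reach
  expectedText: string[];    // text content that should appear
}

const checkoutCriteria: SuccessCriteria = {
  visibleElements: ["#order-summary", "button#place-order"],
  expectedUrl: /\/checkout\/confirmation$/,
  expectedText: ["Order confirmed", "Thank you for your purchase"],
};
```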
Initial Setup Best Practices
Start Simple
Begin with focused, single-purpose agents:
- Test one user flow per agent
- Keep the number of steps manageable (5-10)
- Avoid complex conditional logic initially
Use Clear Descriptions
Good descriptions lead to reliable agents:
Good:
Navigate to /products, click the first product card,
verify the product detail page loads with a price displayed
Avoid:
Test the products
Test in Development First
- Create agents targeting your development environment
- Run several times to verify consistency
- Adjust descriptions based on results
- Promote to staging/production once stable
Agent Metadata
Tracking Runs
Each agent tracks execution history (the sketch after this list shows how these metrics are derived):
- Run Count: Total number of executions
- Success Rate: Percentage of successful runs
- Last Run: Timestamp and status of most recent execution
- Average Duration: Typical execution time
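To make the relationship between these metrics explicit, here is how they could be derived from raw run records. The record shape is illustrative, not a DebuggAI schema:

```typescript
// Illustrative run record and the metrics derived from it; not a DebuggAI schema.
interface AgentRun {
  startedAt: Date;
  durationMs: number;
  succeeded: boolean;
}

function summarize(runs: AgentRun[]) {
  const runCount = runs.length;
  const successRate =
    runCount === 0 ? 0 : runs.filter((r) => r.succeeded).length / runCount;
  const averageDurationMs =
    runCount === 0 ? 0 : runs.reduce((sum, r) => sum + r.durationMs, 0) / runCount;
  const lastRun = runs[runCount - 1]; // assumes runs are in chronological order
  return { runCount, successRate, averageDurationMs, lastRun };
}
```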
Version History
Agents maintain a history of changes:
- Description modifications
- Configuration updates
- Tag changes
Review version history to understand how an agent has evolved and revert if needed.
Next Steps
Once your agent is created:
- Run manually to verify it works as expected (a rough, hypothetical API sketch follows this list)
- Review the execution log for any issues
- Schedule recurring runs for continuous monitoring
- Set up alerts to notify you of failures
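If you prefer to trigger that first manual run from code rather than the dashboard, the general shape would look something like the sketch below. The base URL, endpoints, statuses, and fields are assumptions for illustration only, not a documented DebuggAI API; check the dashboard or the official API reference for the real workflow.

```typescript
// Hypothetical sketch only: the base URL, endpoints, statuses, and fields below
// are assumptions, not a documented DebuggAI API.
const API_BASE = process.env.DEBUGGAI_API_URL ?? "https://api.example.com";

async function runAgentOnce(agentId: string, apiKey: string) {
  const headers = { Authorization: `Bearer ${apiKey}` };

  // Trigger a run.
  let run = await fetch(`${API_BASE}/agents/${agentId}/runs`, { method: "POST", headers })
    .then((res) => res.json());

  // Poll until the run leaves its queued/running state.
  while (run.status === "queued" || run.status === "running") {
    await new Promise((resolve) => setTimeout(resolve, 5000));
    run = await fetch(`${API_BASE}/runs/${run.id}`, { headers }).then((res) => res.json());
  }

  console.log(`Run ${run.id} finished with status: ${run.status}`);
  return run;
}
```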
Next: Learn about Training Agents to improve accuracy.