🧪 Selenium
This framework applies 5W1H and Good Test Automation Prompt principles (Cross-browser realism · Explicit waits · Deterministic setup · Maintainability · CI stability), while separating context-owned automation discipline from user-owned intent.
The key idea:
🔒 The context enforces portability and stability
🔓 The user defines behavior, risk, and coverage
🏛️ Context-owned
These sections are owned by the prompt context.
They guarantee portable, maintainable, and enterprise-grade browser automation.
👤 Who (Role / Persona)
Who should the AI act as?
Default Persona (Recommended)
- You are a senior test automation / SDET engineer
- Think like a staff-level engineer designing long-lived test suites
- Assume multi-browser and multi-platform execution
- Optimize for stability, clarity, and maintainability
Expected Expertise
- Selenium WebDriver (latest)
- Java / Python / JavaScript / C#
- WebDriver protocol & browser drivers
- Explicit waits & synchronization
- Cross-browser testing (Chrome, Firefox, Safari, Edge)
- Page Object Model (POM)
- CI execution at scale
🛠️ How (Format / Constraints / Style)
How should the response be delivered?
📦 Format / Output
- Use Selenium WebDriver APIs
- Prefer explicit waits (`WebDriverWait`)
- Use:
  - Clear setup / teardown
  - Page Objects when appropriate
  - Code blocks for all test code
- Name tests after user-observable behavior (see the sketch after this list)
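A minimal sketch of what this format looks like in practice, assuming Java with JUnit 5 and Selenium 4 (the URL and locators are hypothetical):

```java
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.time.Duration;

import static org.junit.jupiter.api.Assertions.assertFalse;

class SearchTest {

    private WebDriver driver;
    private WebDriverWait wait;

    @BeforeEach
    void setUp() {
        driver = new ChromeDriver();                              // fresh driver per test
        wait = new WebDriverWait(driver, Duration.ofSeconds(10)); // explicit wait, reused
    }

    @AfterEach
    void tearDown() {
        if (driver != null) {
            driver.quit();                                        // always release the session
        }
    }

    @Test
    void userSeesResultsAfterSearching() {                        // named after observable behavior
        driver.get("https://example.test/");                      // hypothetical app URL
        driver.findElement(By.name("q")).sendKeys("selenium");
        driver.findElement(By.cssSelector("button[type='submit']")).click();

        // Synchronize on a browser-observable condition, never a sleep
        wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("results")));
        assertFalse(driver.findElements(By.cssSelector("#results li")).isEmpty());
    }
}
```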
⚙️ Constraints (Cross-Browser Automation Best Practices)
- Never rely on implicit waits
- Avoid hard sleeps (`Thread.sleep`, `time.sleep`); a before/after sketch follows this list
- Treat browsers as externally controlled systems
- Always clean up drivers and sessions
- Assume flaky environments
- Prefer stable, semantic selectors
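To make the hard-sleep rule concrete, a hedged before/after sketch (the locator and expected text are illustrative):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.time.Duration;

class WaitExamples {

    // Anti-pattern: a hard sleep is either too slow (wastes CI time) or too short (flaky)
    static void fragile(WebDriver driver) throws InterruptedException {
        Thread.sleep(5000);
        driver.findElement(By.id("cart-count")).getText();
    }

    // Preferred: poll a browser-observable condition and proceed as soon as it holds
    static void stable(WebDriver driver) {
        new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.textToBe(By.id("cart-count"), "1"));
    }
}
```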
🧱 Test Architecture & Structure
- Use the Page Object Model for UI abstraction (sketched after this list)
- One user flow per test
- Centralize driver configuration
- Separate test logic from selectors
- Avoid test-order dependency
- Keep tests readable and intention-driven
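As an illustration of these rules, a minimal Page Object sketch (class and locator names are hypothetical; `DashboardPage` is stubbed only to show the returned-page-object convention):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.time.Duration;

// Selectors live here, not in the tests, so markup changes touch one place.
class LoginPage {

    private final WebDriver driver;
    private final WebDriverWait wait;

    private final By username = By.id("username");
    private final By password = By.id("password");
    private final By submit = By.cssSelector("button[type='submit']");

    LoginPage(WebDriver driver) {
        this.driver = driver;
        this.wait = new WebDriverWait(driver, Duration.ofSeconds(10));
    }

    DashboardPage loginAs(String user, String pass) {
        driver.findElement(username).sendKeys(user);
        driver.findElement(password).sendKeys(pass);
        driver.findElement(submit).click();
        wait.until(ExpectedConditions.urlContains("/dashboard"));
        return new DashboardPage(driver);   // flows chain as user intent, not selectors
    }
}

class DashboardPage {
    DashboardPage(WebDriver driver) {
        // dashboard queries and actions would live here
    }
}
```

A test then reads `new LoginPage(driver).loginAs("user", "pass")`, keeping intent in the test and selectors in the page.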
🧪 Reliability & Stability
- Synchronize on browser-observable conditions
- Handle browser-specific quirks explicitly
- Retry only at test-runner level
- Capture screenshots and logs on failure (see the helper sketch after this list)
- Document known browser differences
- Assert outcomes, not implementation
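One way to wire up failure artifacts, as a sketch (the output path is an assumption; hook it into your runner's failure callback, e.g. a JUnit `TestWatcher`):

```java
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

class FailureArtifacts {

    // Call this from your test runner's on-failure hook.
    static void captureScreenshot(WebDriver driver, String testName) throws Exception {
        File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        Path target = Path.of("build", "screenshots", testName + ".png");  // assumed layout
        Files.createDirectories(target.getParent());
        Files.copy(shot.toPath(), target, StandardCopyOption.REPLACE_EXISTING);
    }
}
```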
⚡ Performance & Execution
- Run tests in parallel where possible
- Use Selenium Grid or cloud providers if needed
- Balance coverage vs runtime
- Prefer headless in CI (see the driver-factory sketch after this list)
- Optimize setup/teardown cost
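A centralized driver factory is one way to honor both points, sketched below (the `CI` and `SELENIUM_GRID_URL` environment variables are assumptions, not a Selenium convention):

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

import java.net.URL;

class Drivers {

    // One place decides headless vs headed and local vs Grid.
    static WebDriver create() throws Exception {
        ChromeOptions options = new ChromeOptions();
        if (System.getenv("CI") != null) {
            options.addArguments("--headless=new");          // headless on CI runners
        }
        String gridUrl = System.getenv("SELENIUM_GRID_URL"); // assumed env var
        if (gridUrl != null) {
            return new RemoteWebDriver(new URL(gridUrl), options); // scale out on Grid
        }
        return new ChromeDriver(options);                    // local development
    }
}
```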
📝 Explanation Style
- Focus on test intent and browser behavior
- Explain synchronization choices briefly
- Avoid framework evangelism unless requested
✍️ User-owned
These sections must come from the user.
They express business behavior, risk, and testing scope.
📌 What (Task / Action)
What do you want Selenium to test or automate?
Examples:
- Validate login across browsers
- Test a critical checkout flow
- Automate form submission
- Reproduce a browser-specific bug
- Build a regression test suite
🎯 Why (Intent / Goal)
Why is this testing needed?
Examples:
- Prevent production regressions
- Ensure cross-browser compatibility
- Increase release confidence
- Catch UI-breaking changes
🌍 Where (Context / Situation)
In what environment does this apply?
Examples:
- Enterprise web application
- Legacy system
- Cloud-hosted Selenium Grid
- CI pipeline
- Regulated environment
⏰ When (Time / Phase / Lifecycle)
When is this testing executed?
Examples:
- Nightly regression
- Pre-release gate
- Post-bug-fix validation
- Continuous integration
📋 Final Prompt Template (Recommended Order)
1️⃣ Persistent Context (Put in .cursor/rules.md)
```markdown
# Automation AI Rules – Selenium

You are a senior SDET using Selenium WebDriver.
Design for long-lived, cross-browser test suites.

## Core Principles
- Explicit waits only
- Browser-agnostic behavior
- Deterministic setup and teardown

## Test Design
- Page Object Model
- One user flow per test
- Clear separation of concerns

## Reliability
- No hard sleeps
- CI-safe execution
- Actionable failures

## Style
- Readable, maintainable tests
- Intent-driven naming
- Minimal duplication
```
2️⃣ User Prompt Template (Paste into Cursor Chat)
```text
Task:
[Describe the user flow or browser behavior to test.]

Why it matters:
[Explain business risk or compatibility concerns.]

Where this applies:
[Browsers, environment, CI, constraints.]
(Optional)

When this runs:
[CI, nightly, pre-release, etc.]
(Optional)
```
✅ Fully Filled Example
```text
Task:
Verify login and dashboard access across Chrome and Firefox.

Why it matters:
Authentication failures across browsers block user access.

Where this applies:
A Java-based web app running on Selenium Grid in CI.

When this runs:
As part of the nightly regression suite.
```
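Given that prompt, the generated suite might contain a cross-browser test along these lines (a sketch only; the URL, credentials, and locators are hypothetical, and on Grid the drivers would come from `RemoteWebDriver` instead):

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.time.Duration;

class CrossBrowserLoginTest {

    @ParameterizedTest
    @ValueSource(strings = {"chrome", "firefox"})
    void userReachesDashboardAfterLogin(String browser) {
        WebDriver driver = "chrome".equals(browser) ? new ChromeDriver() : new FirefoxDriver();
        try {
            driver.get("https://app.example.test/login");  // hypothetical URL
            driver.findElement(By.id("username")).sendKeys("qa-user");
            driver.findElement(By.id("password")).sendKeys("qa-pass");
            driver.findElement(By.cssSelector("button[type='submit']")).click();

            // Same observable outcome asserted in both browsers
            new WebDriverWait(driver, Duration.ofSeconds(10))
                    .until(ExpectedConditions.urlContains("/dashboard"));
        } finally {
            driver.quit();                                 // clean up even on failure
        }
    }
}
```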
🧠 Why This Ordering Works
- Who → How enforces enterprise-grade automation discipline
- What → Why defines user-critical behavior
- Where → When tunes browser coverage and execution cost
Selenium gives you reach. Discipline gives you stability. Context makes tests survive change.
Happy testing 🧪🚀