Appev isn't just a screenshot comparator. It is a multimodal reasoning engine that combines computer vision, DOM analysis, and behavioral simulation.
We don't use hard-coded scripts. We inject System Prompts that define user behavior. This allows the agent to make autonomous decisions based on its "personality" traits.
Agents don't just follow happy paths. They explore edge cases based on curiosity parameters.
We measure "Time to Joy." If a "Novice" persona takes 4x longer than a "Pro" to find a button, we flag it.
"persona_id": "grandpa_joe_v1",
"traits": {
"visual_acuity": "low", // Flags contrast < 4.5:1
"technical_literacy": "none",
"patience_threshold": 3000 // ms
},
"simulation_rules": {
"mouse_speed": "erratic_slow",
"click_accuracy": 0.85,
"confused_by_icons": true
}
"Grandpa Joe" failed to identify the 'Hamburger Menu' icon. Suggest adding text label.
To understand an app like a human, you need more than just code. Appev synthesizes four distinct data streams in real time:

GPT-4o Vision processes screenshots to understand layout, hierarchy, and aesthetics.
The DOM analyzer inspects the HTML structure for hidden inputs, aria-labels, and semantic validity.
The network layer intercepts API calls to correlate UI spinners with 500 errors or slow latency.
The accessibility pass parses the Accessibility Tree to ensure screen readers can navigate the flow.
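As an illustration of how those four streams could be captured in a single pass, here is a sketch using Playwright; the collectStreams helper is an assumption for illustration, not Appev's documented internals, and the vision-model call is left out.

import { chromium } from "playwright";

// Hypothetical one-pass collector for the four streams.
async function collectStreams(url: string) {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  // Network stream: record status codes for later correlation with UI spinners.
  const network: { url: string; status: number }[] = [];
  page.on("response", (res) => network.push({ url: res.url(), status: res.status() }));

  await page.goto(url, { waitUntil: "networkidle" });

  const screenshot = await page.screenshot({ fullPage: true }); // Vision stream (fed to a VLM)
  const dom = await page.content();                             // DOM stream (HTML, aria-labels)
  const axTree = await page.accessibility.snapshot();           // Accessibility stream

  await browser.close();
  return { screenshot, dom, axTree, network };
}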
Dropping Appev into CI takes a few lines of YAML:

name: QA Audit
on: [deployment_status]
jobs:
  appev-check:
    runs-on: ubuntu-latest
    steps:
      - uses: appev/action@v2
        with:
          url: ${{ github.event.deployment_status.target_url }}
          persona: 'random_mix'
          fail-on-score: 80
Appev sits directly in your pipeline. Block deployments if the UX score drops, or simply receive a report on Slack.
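If you prefer notifying over blocking, a minimal sketch of posting a run summary to a Slack incoming webhook might look like this; the AppevReport shape is an assumption about the report output, not a documented schema.

// Hypothetical report shape; Slack incoming webhooks accept a simple JSON "text" payload.
// Assumes Node 18+ for the global fetch.
interface AppevReport {
  score: number;
  findings: string[];
}

async function postToSlack(webhookUrl: string, report: AppevReport): Promise<void> {
  const text = [
    `Appev UX score: ${report.score}/100`,
    ...report.findings.map((f) => `• ${f}`),
  ].join("\n");

  await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
}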