Alyvix is an open source application performance monitoring (APM) tool for visual monitoring.
Build end-user bots that visually interact with any Windows application, like an ERP or your favourite browser. Run and measure business-critical workflows like a human would, but continuously.
Measure end-user experiences: Alyvix records the click-to-appearance responsiveness of each transaction. Report IT service quality to support technical and business actions.
Visually define end-user workflows: Alyvix Editor lets you build test cases in a visual way, interaction after interaction
Automate any GUI-based (even streamed) Windows application: Alyvix works by processing screen frames, without being hardwired to APIs
Run visual test cases that interact with a machine just as a real user would: Alyvix uses mouse and keyboard like a person
Measure transaction time click-to-appearance: Alyvix measures how long each transaction takes to complete after the last interaction
Record the availability and responsiveness of each transaction: Alyvix allows you to monitor the performance of end-user experiences
Provide demonstrable and indisputable proof with annotated screenshots that Alyvix produces whenever the visual response times out
The first step in automating end-user workflows is to visually define their transactions, and Alyvix Editor gives you the tools to do so. Just point and click to select GUI components from an application view, creating an Alyvix Object. For each component you can set its type (image, rectangle or text), its actions (mouse and keyboard) and its click-to-appearance time thresholds (warning, critical and timeout).
Having created a set of Alyvix Objects, you can drag and drop them into the scripting panel to compose the desired end-user workflow, creating an Alyvix visual test case: a sequence of visual transactions. Conditionals and loops help implement more complex logic.
The Alyvix Robot CLI tool runs visual test cases saved as .alyvix files: the command executes end-user bots that reproduce the recorded end-user workflows. The resulting transaction performance measures are both displayed in the CLI and saved as human-readable output files, with annotated screenshots that provide demonstrable and indisputable proof in case of failure.
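As a minimal sketch of such a run (the executable path, test case name and exact flag spelling are illustrative assumptions; check your installed version's CLI help for the authoritative options):

```shell
# Run a visual test case with Alyvix Robot.
# The path and file name below are hypothetical examples.
alyvix_robot.exe --filename C:\Alyvix\testcases\erp_login.alyvix

# Transaction measures print to the console and are also written as
# human-readable output files, with annotated screenshots on failure.
```

A run like this is what you would later schedule to turn one-off tests into continuous monitoring.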
The end goal is to show trends over time in your end-user workflow performance. This requires you to schedule test cases regularly and continuously, integrate their output into your own monitoring system and, finally, analyse latency and downtime to assess IT service quality. Contact us if you need support in building, integrating or maintaining test cases.
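One simple way to schedule runs regularly on Windows is the built-in Task Scheduler. The sketch below uses the standard `schtasks` command; the task name, paths and five-minute interval are hypothetical examples, not Alyvix defaults:

```shell
# Create a scheduled task that runs the bot every 5 minutes.
# Task name, executable path, test case path and interval are
# hypothetical examples - adapt them to your installation.
schtasks /create /tn "Alyvix ERP login" ^
    /tr "C:\Alyvix\alyvix_robot.exe --filename C:\Alyvix\testcases\erp_login.alyvix" ^
    /sc minute /mo 5
```

Your monitoring system can then collect the output files each run produces and chart availability and responsiveness over time.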