Senior QA Engineer (AI Systems & Automation)

🌍 Remote, USA 🎯 Full-time 🕐 Posted Recently

Job Description

About the position

Essential Software Inc. is a trusted partner to federal agencies, including the National Cancer Institute (NCI), delivering secure, cloud-based platforms that support large-scale cancer data and biomedical research. As a Senior QA Engineer (AI Systems & Automation), you will lead quality strategy and test automation for critical data platforms and AI-powered experiences, ensuring that both traditional software and AI/agentic systems are reliable, explainable, and safe in a federal, mission-driven environment.

You will:
• Own end-to-end quality for complex web, API, data, and AI/ML-powered features
• Design AI-aware test strategies and automation that leverage GenAI and agentic frameworks
• Mentor QA engineers and collaborate closely with cross-functional teams and government partners

Responsibilities
• Develop and maintain test plans, test cases, traceability, and test data for product and AI features
• Execute manual and automated tests for web applications, RESTful APIs, data workflows, and AI/ML features
• Own automated regression suites, define release readiness criteria, and provide clear go / no-go quality signals
• Participate in agile ceremonies, validate end-to-end functionality, and ensure user stories (including AI features) meet acceptance criteria
• Manage the full defect lifecycle, including triage, prioritization, root cause analysis, and verification of fixes
• Maintain QA documentation, runbooks, and quality dashboards
• Design and execute test strategies for AI/LLM-powered capabilities, including virtual agents, chatbots, copilots, and RAG-based systems
• Use LLM-powered tools (e.g., ChatGPT, Claude, Copilot) to accelerate test design, data generation, exploratory testing, and script authoring
• Build and refine QA-focused AI agents (see the sketch after this list) that can:
  - Scrape UI and verify DOM structures
  - Validate data against backend or ground-truth sources
  - Auto-generate and maintain test scripts
  - Run self-correcting / autonomous test flows
• Evaluate and integrate agentic frameworks (e.g., OpenAI Assistants API, AWS Bedrock Agents, LangGraph, MCP) into QA workflows
• Define and monitor AI-specific quality metrics (accuracy vs. ground truth, hallucination and error rates, safety / policy adherence)
• Ensure AI and virtual agent experiences are accurate, consistent, and high quality in a federal context
• Plan and execute performance, load, and scalability testing (e.g., JMeter or equivalent)
• Validate data integrity and transformation quality across complex biomedical data pipelines and AI-enhanced workflows
• Partner with engineers and data scientists to ensure AI/ML models and integrations are testable, observable, and measurable post-deployment
• Mentor QA team members in both traditional and AI-augmented QA practices
• Collaborate with development, DevOps, product, UX, and data teams to improve testability, shift quality left, and increase automated coverage
• Integrate automation into CI/CD (e.g., GitHub Actions, Jenkins, Azure DevOps, GitLab CI), monitor test health and flakiness, and address coverage gaps
• Communicate quality risks, trends, and mitigation plans to technical and non-technical stakeholders, including government partners
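
As a concrete illustration of the "QA-focused AI agents" bullet above, here is a minimal sketch of a single agent check that scrapes a value from the rendered DOM with Playwright and validates it against a backend ground-truth API. The URL, CSS selector, and endpoint are hypothetical placeholders, not this role's actual stack; it assumes the playwright and requests packages are installed.

    # Minimal sketch of one QA-agent check: scrape a value from the UI and
    # validate it against a backend "ground truth" endpoint. The URL, CSS
    # selector, and API route below are hypothetical placeholders.
    import requests
    from playwright.sync_api import sync_playwright

    APP_URL = "https://example.gov/studies/123"        # hypothetical page
    API_URL = "https://example.gov/api/studies/123"    # hypothetical ground truth

    def check_study_title() -> None:
        # Ground truth from the backend API.
        expected = requests.get(API_URL, timeout=10).json()["title"]

        # Scrape the rendered DOM with Playwright.
        with sync_playwright() as p:
            browser = p.chromium.launch(headless=True)
            page = browser.new_page()
            page.goto(APP_URL)
            actual = page.locator("h1.study-title").inner_text().strip()
            browser.close()

        # An agentic runner could log this result, retry, or open a defect.
        assert actual == expected, f"UI shows {actual!r}, API says {expected!r}"

    if __name__ == "__main__":
        check_study_title()

In practice, a check like this would live in the automated regression suite and feed the go / no-go quality signals described above.
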
Requirements
• Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field
• 5+ years of software QA experience (manual and automation) in production environments
• 2+ years providing technical or process leadership (e.g., lead QA, primary product QA owner, mentor, or manager)
• Strong experience with UI automation tools (Selenium WebDriver, Playwright, or Cypress)
• Experience testing RESTful APIs and microservices architectures
• Hands-on experience integrating automated tests into CI/CD pipelines (GitHub Actions, Jenkins, Azure DevOps, or GitLab CI)
• Professional proficiency in Python or JavaScript for test automation
• Hands-on use of GenAI tools (e.g., ChatGPT, Claude, Copilot) for QA tasks such as test-case generation, data creation, and exploratory testing
• Understanding of AI/agentic concepts (see the sketch after this list):
  - Tool-calling / function invocation
  - Multi-step / chain-of-thought workflows
  - Autonomous / self-healing test flows
  - AI-driven data comparison and validation
• Experience with performance / load testing (e.g., JMeter or equivalent)
• Proficiency with Jira or similar issue-tracking tools
• Strong written and verbal communication skills, including the ability to explain AI-related quality risks to stakeholders
• Ability to prioritize, multitask, and operate effectively in complex, mission-driven environments
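
For the agentic concepts listed above, the following framework-free sketch shows the tool-calling / function-invocation pattern: a runner routes a model's structured tool choice to a registered Python function and records the observation. The stubbed "model" and tool names are illustrative assumptions, not any vendor's API.

    # Framework-free sketch of tool-calling / function invocation. The
    # stubbed "model" and tool return values are placeholders; a real agent
    # would delegate the tool choice to an LLM.
    import json
    from typing import Callable

    def get_dom_text(selector: str) -> str:
        """Stub tool: return rendered text for a CSS selector."""
        return "Phase II Trial Results"  # placeholder value

    def get_ground_truth(record_id: str) -> str:
        """Stub tool: return the expected value from a backend source."""
        return "Phase II Trial Results"  # placeholder value

    TOOLS: dict[str, Callable[..., str]] = {
        "get_dom_text": get_dom_text,
        "get_ground_truth": get_ground_truth,
    }

    def fake_model_decision(step: int) -> dict:
        """Stand-in for an LLM choosing which tool to invoke next."""
        plan = [
            {"tool": "get_dom_text", "args": {"selector": "h1.study-title"}},
            {"tool": "get_ground_truth", "args": {"record_id": "123"}},
        ]
        return plan[step]

    # The agent loop: the model picks a tool, the runner invokes it and
    # records the observation, then the results are compared.
    observations = []
    for step in range(2):
        call = fake_model_decision(step)
        result = TOOLS[call["tool"]](**call["args"])
        observations.append(result)
        print(json.dumps({"call": call, "result": result}))

    assert observations[0] == observations[1], "UI and ground truth disagree"

Agentic frameworks such as the OpenAI Assistants API, AWS Bedrock Agents, and LangGraph implement this loop for you; the value for QA is that every tool invocation is observable and can be asserted against.
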
Nice-to-haves
• AWS Cloud Practitioner certification
• Experience with modern automation stacks (Playwright or Cypress) and API testing tools (Postman, REST-assured, pytest, or similar)
• Experience testing AI/ML-powered features (LLM applications, RAG systems, agents, recommendation engines, or chatbots)
• Experience with one or more of:
  - LangChain or LangGraph
  - AWS Bedrock Agents or OpenAI Assistants API
  - MCP (Model Context Protocol) or similar orchestration frameworks
• Experience designing or testing internal QA copilots or automation bots for test authoring or execution
• Familiarity with test management tools (e.g., TestRail, Zephyr)
• Knowledge of accessibility standards (WCAG) and basic security testing practices
• Prior QA experience in healthcare, life sciences, biomedical informatics, or other regulated data environments
• ISTQB or similar certification

Benefits
• Competitive benefits
• Professional development opportunities
• Collaborative, supportive culture

Ready to Apply?

Don't miss out on this amazing opportunity!

🚀 Apply Now
