
Session-Based Exploratory Testing: Balancing Structure with Creative Freedom


Nandini Yadav
Author
Aug 18, 2025
7 min read

Can exploratory testing really balance creativity with structure?


For years, exploratory testing was seen as an informal approach—testers would interact with the product freely, guided by instinct and curiosity. This flexibility helped uncover bugs that rigid test cases often missed. However, without structure, exploratory testing could become difficult to measure, repeat, or scale across teams.

This is where Session-Based Exploratory Testing (SBET) provides a solution. SBET introduces discipline into exploratory testing by time-boxing efforts, defining clear missions (charters), and ensuring that results are properly documented. Instead of replacing creativity, SBET amplifies it by giving testers focus, accountability, and visibility into their efforts.


Table of Contents

  • What is Session-Based Exploratory Testing?
  • Why SBET Works for Agile and CI/CD Teams
  • Core Elements of SBET
  • The SBET Workflow Explained
  • Examples of SBET Charters
  • Proven Impact of SBET in Real Projects
  • Challenges and How to Overcome Them
  • Tools That Support SBET
  • Best Practices for Effective SBET
  • FAQs
  • Final Thoughts
  • Contact Us

What is Session-Based Exploratory Testing?

Session-Based Exploratory Testing (SBET) is a structured approach that combines the creativity of exploratory testing with the discipline of session management. Introduced by James and Jon Bach, SBET is designed to make exploratory testing measurable, traceable, and repeatable—qualities that many teams felt were missing from traditional exploratory efforts.

A session in SBET is time-boxed, usually lasting between 60 and 90 minutes. Testers begin with a charter, which defines the goal of the session, such as “explore the new checkout flow under poor network conditions.” Within this boundary, testers explore freely, follow leads, and investigate potential problem areas.

The session ends with a debrief, where results are documented, shared, and reviewed. This ensures that findings don’t just stay with one tester but contribute to collective team knowledge.


Why SBET Works for Agile and CI/CD Teams

Agile and CI/CD environments thrive on speed and adaptability. Features are released frequently, and QA must keep up without becoming a bottleneck. Traditional scripted testing can feel too slow, while unstructured exploratory testing can lack accountability.

SBET addresses both issues by providing:

  • Focus: Every session has a clear goal, keeping exploration aligned with sprint objectives.
  • Traceability: Notes, observations, and bug reports make testing efforts visible and auditable.
  • Flexibility: Testers are free to investigate beyond the script, increasing defect discovery rates.
  • Collaboration: Findings are shared across the team, reducing knowledge silos.

In this way, SBET fits seamlessly into agile workflows and CI/CD pipelines. It provides just enough structure to be repeatable while preserving the human intuition that makes exploratory testing powerful.


Core Elements of SBET

SBET revolves around a few key elements that transform exploratory testing into a structured discipline.

  1. Charter – A clear mission that defines the purpose of the session.
  2. Timebox – A set duration, usually 60–90 minutes, that keeps sessions focused.
  3. Notes – Testers record observations, unusual behaviors, and questions in real-time.
  4. Bug Reports – Confirmed issues are logged with evidence such as screenshots or logs.
  5. Debrief – A structured review where testers share outcomes, risks, and recommendations.

These elements make sessions repeatable and accountable without diluting tester creativity.
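These elements map naturally onto a simple data model. As an illustrative sketch only (the class and field names below are our own, not part of any SBET standard or tool):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional


@dataclass
class Session:
    """One time-boxed exploratory testing session (illustrative model)."""
    charter: str                                    # the mission for this session
    timebox: timedelta = timedelta(minutes=90)      # typical 60-90 minute limit
    started_at: Optional[datetime] = None
    notes: list = field(default_factory=list)       # real-time observations
    bugs: list = field(default_factory=list)        # confirmed issues, with evidence refs
    debrief: str = ""                               # summary written after the session

    def start(self) -> None:
        self.started_at = datetime.now()

    def time_remaining(self) -> timedelta:
        """How much of the timebox is left; never negative."""
        if self.started_at is None:
            return self.timebox
        elapsed = datetime.now() - self.started_at
        return max(self.timebox - elapsed, timedelta(0))


session = Session(charter="Explore the checkout flow under unstable network conditions")
session.start()
session.notes.append("Cart total briefly shows 0 after a request retry")
session.bugs.append("BUG-421: duplicate order created on double-submit (video attached)")
```

Keeping the charter, notes, and bugs in one record is what makes the later debrief cheap: everything the reviewer needs is already in one place.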

The SBET Workflow Explained

The SBET workflow begins with planning. Testers and QA leads identify critical areas of the application to explore and craft charters that guide the session. For example, a new release might require charters around onboarding, payments, or mobile responsiveness.

Next comes execution, where the tester follows the charter but remains free to chase unexpected leads. Observations are recorded continuously, ensuring no insights are lost.

During bug logging, any confirmed issues are thoroughly documented. Testers often include reproduction steps, videos, or screenshots to support developers in fixing the problem quickly.

Finally, the debrief stage enables teams to review coverage, discuss findings, and determine whether new charters are necessary. This loop ensures that SBET continuously drives learning and improvement.
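The debrief-to-planning loop can be sketched in a few lines: open risks and charters that ran out of timebox become the next cycle's backlog. The function name and dictionary keys here are our own illustration, not a standard SBET artifact:

```python
def plan_follow_ups(debriefs):
    """Turn debrief outcomes into the next cycle's charter backlog.

    Each debrief records the charter, any open risks the tester flagged,
    and a rough coverage estimate for how much of the charter was explored.
    """
    next_charters = []
    for d in debriefs:
        # Unresolved risks each get a dedicated follow-up charter.
        for risk in d["open_risks"]:
            next_charters.append(f"Investigate: {risk}")
        # Charters cut short by the timebox are continued in a new session.
        if d["coverage"] < 1.0:
            next_charters.append(f"Continue: {d['charter']}")
    return next_charters


debriefs = [
    {"charter": "Explore checkout on flaky networks",
     "open_risks": ["duplicate orders on retry"], "coverage": 0.6},
    {"charter": "Explore admin role permissions",
     "open_risks": [], "coverage": 1.0},
]
backlog = plan_follow_ups(debriefs)
# backlog now holds one new risk charter and one continuation charter
```

The point of the sketch is the feedback loop itself: debriefs are not just reports, they are the input to the next round of planning.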


Examples of SBET Charters

A good charter balances focus with flexibility. Here are some practical examples:

  • Investigate login and password reset flows for edge-case failures.
  • Explore the checkout process under unstable mobile network conditions.
  • Test the file upload and import features for data consistency across formats.
  • Examine the admin dashboard for role-based access issues.
  • Explore the first-time onboarding journey for usability and friction points.

Well-written charters prevent aimless wandering while still allowing testers to follow their instincts.


Proven Impact of SBET in Real Projects

Real-world projects have demonstrated the value of SBET.

In one SaaS CRM platform, SBET uncovered UI freezes when switching between grid and list views in Firefox—an issue missed by automation. In a fintech application, exploratory sessions revealed a security loophole when editing profiles in multiple tabs simultaneously. In a healthcare portal, SBET detected silent data loss when switching file types mid-import, preventing potential compliance violations.

These examples show that SBET reaches beyond scripted coverage and catches critical bugs that would otherwise slip through the cracks.


Challenges and How to Overcome Them

Like any methodology, SBET comes with challenges. One common issue is time management—testers may feel rushed within a strict timebox. This can be solved by adjusting the duration based on complexity.

Another challenge is documentation fatigue. Testers sometimes resist taking detailed notes, fearing it slows down exploration. Using lightweight tools for note-taking or pairing testers can help strike a balance.

Finally, some teams struggle with adoption. Developers and managers may see SBET as less formal than scripted tests. Demonstrating SBET’s defect discovery rates and adding structured debriefs often helps gain buy-in.


Tools That Support SBET

Modern tools make SBET easier to manage and share. Platforms like TestBuddy provide charter templates, timers, and structured note-taking. Xray Exploratory App integrates directly with Jira for seamless bug reporting. Screen recording tools such as BugReplay and Loom preserve context.

Even lightweight tools like Notion or Google Docs can serve small teams by capturing charters and findings in a centralised space. The key is ensuring results are visible and reusable across the team.


Best Practices for Effective SBET

To maximise SBET’s effectiveness, teams should adopt a few best practices:

  • Maintain a charter backlog aligned with sprint goals.
  • Schedule at least one exploratory session per sprint for high-risk features.
  • Encourage pair testing, where two testers (or a tester and a developer) collaborate.
  • Capture evidence—screenshots, videos, logs—during the session for smoother bug fixes.
  • Store findings in a knowledge base to inform future regression planning.

These practices ensure SBET doesn’t just find bugs but also contributes to long-term quality maturity.


FAQs

Q: Isn’t exploratory testing supposed to be unstructured?
Exploratory testing thrives on freedom, but SBET ensures that freedom delivers results. By time-boxing sessions and aligning them with charters, SBET makes testing repeatable while preserving creativity.

Q: How long should each session last?
Most sessions last between 60 and 90 minutes. This is long enough for deep exploration but short enough to prevent fatigue. Complex areas can be split into multiple sessions to ensure depth.

Q: Can SBET be integrated with automation?
Yes. Many scenarios uncovered during SBET are ideal candidates for future automation. For example, if testers repeatedly find edge cases in login flows, those scenarios can be automated for regression coverage.
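For instance, an edge case found during a session, say, "login rejects email addresses containing a plus sign", can be pinned down as a regression test. A pytest-style sketch, where `is_valid_login_email` is a hypothetical stand-in for whatever validator your application actually uses:

```python
import re


def is_valid_login_email(email: str) -> bool:
    """Hypothetical stand-in for the application's login email validator."""
    # '+' must be accepted: the exploratory session showed it was being rejected.
    return re.fullmatch(r"[\w.+-]+@[\w-]+\.[\w.]+", email) is not None


# Regression tests derived from the exploratory session's bug report.
def test_plus_addressing_is_accepted():
    assert is_valid_login_email("user+qa@example.com")


def test_plain_address_still_accepted():
    assert is_valid_login_email("user@example.com")


def test_missing_domain_rejected():
    assert not is_valid_login_email("user@")
```

Once such a test is in the regression suite, future sessions can spend their timebox on new risks instead of re-checking old ones.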

Q: Who can run SBET sessions?
While QA engineers typically lead, SBET can involve developers, designers, or product owners. Diverse perspectives often reveal issues a single tester might miss.

Q: Is SBET useful for remote teams?
Absolutely. Remote teams can collaborate through shared tools, video recordings, and centralised documentation. SBET works just as effectively for distributed teams as it does for co-located ones.


Final Thoughts

Session-Based Exploratory Testing (SBET) shows that QA doesn’t need to choose between creativity and structure. It provides a framework that lets testers explore freely while ensuring results are documented, shared, and actionable.

For agile teams operating in fast-paced environments, SBET is a powerful bridge between exploratory testing and structured QA. It enhances defect discovery, strengthens collaboration, and ensures quality keeps pace with speed.


Contact Us

If your QA process feels scattered or you want to increase defect discovery without slowing development, Testriq can help. We specialise in designing SBET frameworks tailored for startups and enterprises, ensuring your exploratory testing adds measurable value.




About Nandini Yadav

Expert in Exploratory Testing with years of experience in software testing and quality assurance.
