
SENG-637 Assignment 1

Topic - Introduction to Testing and Defect Tracking

Table of Contents

  1. Introduction
  2. Video demo
  3. Description of exploratory testing plan
  4. Comparison of exploratory and scripted testing
  5. Bug report
  6. Discussion of peer reviews of defect reports
  7. Managing pair testing and division of team work
  8. Difficulties, challenges, and lessons learned
  9. Comments and feedback
  10. Contributors

Introduction

In this lab, we perform exploratory tests and scripted tests on the given ATM simulation system v1.0. It is available inside the zip archive assignment1-artifacts.zip, under the file name "ATM System - Lab 1 Version 1.0.jar".

Any bug we find during exploratory or scripted testing is logged in Backlog. Our Backlog is available at https://seng637g5.backlog.com/dashboard.

Then, for each bug found, we perform regression testing on the updated version of the given system and update its status in Backlog. The updated system is available inside the same zip archive assignment1-artifacts.zip, under the file name "ATM System - Lab 1 Version 1.1.jar".

Before this lab, we knew nothing about exploratory tests and scripted tests or the differences between them. The only experience we had with testing was unstructured testing of our own programs in other courses, simply checking whether the features worked as intended.

Video demo

The link to the video demonstration of our testing is here.

Description of exploratory testing plan

The points below summarize, at a high level, the exploratory testing plan that each pair followed.

Approach being taken

  1. Explore the common routines that a user would perform when visiting the ATM
  2. Also explore what a user could potentially do at the ATM, such as pressing incorrect buttons
  3. Explore every function at least briefly
  4. After every transaction, check the account balances and the logs to verify that everything has been recorded correctly
  5. Check the generated receipt to see whether it captures the correct details

Functionalities being targeted

  1. Viewing the balance of available accounts
  2. Deposit of cash
  3. Withdrawal of cash
  4. Transfer of money
  5. Verifying the logs

Comparison of exploratory and scripted testing

With exploratory testing, our team found many more bugs than during the scripted testing sessions. We believe this is because each team member found innovative and diverse ways of operating the program, and these diverse usages allowed us to observe more bugs than the 40 scripted test cases did. This is the biggest benefit of exploratory testing: we are not restricted during the testing. Exploratory testing is also effective when the scripted tests are not exhaustive; for example, the 40 given manual scripted tests were simply not enough to cover most of the bugs.

The trade-off with exploratory testing is that it was difficult to keep track of our progress in testing the program's functionality, since nothing was planned out. Several bugs could also have been missed. Occasionally a bug would be noticed during exploratory testing but could not be replicated, because the steps had not been planned out in advance.

On the contrary, scripted testing is more efficient for keeping track of progress and for recording bugs. Scripted tests can easily be followed by anyone to ensure that the program meets the minimum expected quality. Once written, scripted tests do not rely on the ideas of the tester, since the tester has to follow the exact steps during testing.

The trade-off with scripted testing is that it relies on the imagination of the test writer. If a test case is left out, it is missing from every round of testing.

Bug report

  1. The bug report generated by exporting the Backlog issues is available here.

  2. The summary of the MFT tests is available here.

Discussion of peer reviews of defect reports

The group's peer review consists of the two pairs reviewing each other's observations from the exploratory and scripted tests.

An interesting discovery we made as a group is that each of us has a distinct approach to operating the ATM program. This was especially evident in the exploratory testing, where we each found unique bugs with little overlap.

We split our peer review into two major components: the first being the exploratory testing and the other being the scripted testing. For each component, each pair first reviewed the list of bugs reported by the other pair and tried to reproduce them in the program. After finishing the review, we met online as a group to discuss our feedback, particularly the items that were not reproducible during the peer review. This led to valuable information, such as the display-setting differences between team members.

For example, for bug ATM_1-20, in which the word "transaction" appears truncated in certain situations, the group had a thorough discussion because some members could not reproduce the issue. We discovered that our team members use a wide range of display resolutions and scaling settings, and this is why some members could reproduce the issue while others could not.

Managing pair testing and division of team work

For exploratory testing, each of us did individual tests for approximately 30 minutes (not including the time spent recording results). We recorded all of the bugs found in the process, then collected every team member's recorded bugs and compiled them into a single list. Finally, the bugs that were found were also tested in version 1.1 of the SUT. General functionality was tested again as well to ensure no new bugs were introduced; given the large number of bugs in version 1.0, most of this was already covered.

For MFT and regression testing, we split the forty test cases evenly between the two pairs, with cases 1-20 done by Drew and Okeoghenemarho and cases 21-40 completed by Bhavyai and Michael. In both pairs, while one member was testing, the other was recording the steps and bugs, and vice versa. For each of the forty test cases, we first recorded what we observed in version 1.0 and then what we observed in version 1.1 of the SUT.

Finally, the two pairs did a peer review of the MFT and regression testing. This entailed reviewing the records for each case in both version 1.0 and version 1.1 to see whether they could be reproduced in the ATM program; a sketch of the kind of per-case record we kept is shown below.
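
Purely as an illustration, the sketch below shows the shape of the per-case record each pair maintained. Our actual records were kept as plain text in a shared document and in Backlog, not in code, and the class name, field names, and outcome values here are our own assumptions.

```java
import java.util.List;

// Illustrative only: one MFT/regression test case and what we noted for it
// in each version of the SUT. Names and values are assumptions.
public class TestCaseRecord {
    public enum Outcome { PASS, FAIL, BLOCKED }

    private final int caseId;             // MFT case number, 1-40
    private final List<String> steps;     // scripted steps the tester followed
    private final String expectedResult;
    private final Outcome resultV10;      // observed on "ATM System - Lab 1 Version 1.0.jar"
    private final Outcome resultV11;      // observed on "ATM System - Lab 1 Version 1.1.jar"

    public TestCaseRecord(int caseId, List<String> steps, String expectedResult,
                          Outcome resultV10, Outcome resultV11) {
        this.caseId = caseId;
        this.steps = steps;
        this.expectedResult = expectedResult;
        this.resultV10 = resultV10;
        this.resultV11 = resultV11;
    }

    // A regression: the case passed in version 1.0 but fails in version 1.1
    public boolean isRegression() {
        return resultV10 == Outcome.PASS && resultV11 == Outcome.FAIL;
    }

    // A fix: the case failed in version 1.0 but passes in version 1.1
    public boolean isFixed() {
        return resultV10 == Outcome.FAIL && resultV11 == Outcome.PASS;
    }
}
```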

Difficulties, challenges, and lessons learned

  1. One of the difficulties we faced was related to the exploratory tests. Each pair reported a similar set of bugs, so we had multiple bugs reported twice, often with varying descriptions of the same bug. Some of the reported bugs could not be reproduced.

    Combining all those bugs and keeping only the unique ones was a challenge. We overcame this by coming together virtually over Discord, discussing what each pair had found, and then having one person go over all the issues to consolidate them into one document while the others helped. We retested all the reported bugs during this process and removed the ones that could not be reproduced at all.

    The lesson learned from this was teamwork. The whole team should be on the same page: everyone must know what needs to be done and how the other team members are contributing to the same assignment. We should maintain a central document or place where each team member checks whether an issue has already been reported before adding their own. This would reduce duplication and the unnecessary effort of removing that duplication later.

  2. The other challenge was to make sure that all defects were reported with the same level of detail. Without a proper format to follow, it can be difficult to ensure that every tester logs their testing in detail. For this, we used a very simple reporting format that was followed by every tester in our group; a sketch of the fields it captured is shown after this list. The format was based on the instructions and requirements in the assignment description document.

    The lesson learned from this was to have a guideline for the process being followed. This guideline made sure that the bugs we raised not only captured detailed information but were also easy to follow.
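
As a rough sketch only (the actual reports were plain-text Backlog issues, and the field names below are an approximation rather than the exact format), each defect entry captured roughly the following fields:

```java
import java.util.List;

// Approximation of the fields our reporting guideline asked every tester to
// fill in for each Backlog issue. Field names are assumptions, not the exact format.
public record DefectReport(
        String issueKey,               // Backlog key, e.g. "ATM_1-20"
        String summary,                // one-line description of the bug
        List<String> stepsToReproduce, // exact steps so the bug can be replicated
        String expectedBehaviour,
        String actualBehaviour,
        String sutVersion,             // "1.0" or "1.1"
        String severity                // e.g. low / medium / high
) { }
```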

Comments and feedback

  1. Backlog was used as the tool for reporting and managing bugs. In our opinion, it was perfect for this assignment. We are now familiar with three issue-tracking tools, the other two being Jira and GitHub.

  2. The SUT chosen for this assignment was a great example for exploratory, scripted, and regression testing alike. It was simple enough to be understood by anyone, yet it had plenty of bugs to discover.

  3. The assignment description document Assignment_Description.md is very detailed, comprehensive, and easy to follow.

Contributors

We are Group 5: Bhavyai, Drew, Michael, and Okeoghenemarho.
