MANUAL TESTING (2)

Perusal Tech Pvt Ltd
  • 01 May, 2021

TYPES OF MANUAL TESTING :

(1) WHITE BOX TESTING : White box testing, also known as glass box or transparent testing, is an approach in which the QA is familiar with the internal code or structure of the application. It is primarily used for unit testing. White box testing also covers specific techniques such as data flow testing, control flow testing, decision coverage, path testing, and a few others.
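
As a rough illustration (the function below is hypothetical, and the tests follow Python's pytest conventions), a white-box tester who can read the code might write unit tests that deliberately exercise both branches of a decision :

# The tester can read the code of this hypothetical helper, so the unit
# tests below are written to cover both branches of its decision
# (decision/branch coverage). Run with pytest.

def classify_discount(order_total):
    # Hypothetical business rule : orders of 100 or more get a 10% discount.
    if order_total >= 100:
        return order_total * 0.9   # discounted branch
    return order_total             # full-price branch

def test_discount_branch():
    # Exercises the order_total >= 100 branch.
    assert classify_discount(200) == 180

def test_full_price_branch():
    # Exercises the fall-through (full price) branch.
    assert classify_discount(50) == 50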

(2) BLACK BOX TESTING : Black-box testing is a test approach in which the QA doesn’t have any knowledge about the underlying code or structure of the application. The QA interacts with the software application just like an end-user to test its functional and non-functional behavior. This helps to discover some bugs typically overlooked in the earlier stages.
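
By contrast, a black-box style check relies only on the documented behaviour of the software. The sketch below assumes a hypothetical myapp.auth.login function that returns True or False; the tester never looks at how it is implemented :

# Black-box style : only the documented contract is used. The module and
# function below are hypothetical stand-ins for the application under test.
from myapp.auth import login   # hypothetical : returns True on success, False otherwise

def test_valid_credentials_are_accepted():
    assert login("alice", "correct-password") is True

def test_wrong_password_is_rejected():
    assert login("alice", "wrong-password") is False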

(3) GREY BOX TESTING : Grey-Box test approach is the combination of both white box and black box testing techniques. The main aim of this approach is to identify any bugs present either due to inappropriate usage or any structural flaws.

(4) SMOKE TESTING : It is a high-level type of manual testing used to evaluate whether the software conforms to its principal objectives without critical defects. Smoke testing is a non-exhaustive approach because it is restricted to verifying only the core functionality of the software. It is often used to verify a build once new functionality is introduced in a piece of software. The QA team generally determines which parts of the software need to be evaluated before running a suite of smoke tests.

Smoke tests are a preliminary type of testing that runs ahead of more critical, in-depth testing. For Example : Testing a new feature, such as the ability to add multiple items to a shopping cart on an e-commerce site.
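
As a rough sketch of the idea, a smoke suite might only confirm that a handful of core pages respond at all. The site URL and paths below are hypothetical, and the example uses Python's requests library :

# A non-exhaustive smoke check : only a few core pages of a hypothetical
# e-commerce site are verified to respond with HTTP 200.
import requests

BASE_URL = "https://shop.example.com"              # hypothetical deployment
CORE_PATHS = ["/", "/login", "/cart", "/checkout"]

def test_core_pages_respond():
    for path in CORE_PATHS:
        response = requests.get(BASE_URL + path, timeout=10)
        assert response.status_code == 200, f"{path} failed the smoke check"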

(5) CROSS BROWSER TESTING : There is no guarantee that a website will look the same on each and every browser, because each browser may respond differently and render the webpage according to its own interpretation. These variables make it highly important that cross browser testing is performed before a website is released to production. This testing is done to ensure a consistent experience across every browser.

Browser testing checks the design, functionality, responsiveness and accessibility of an application. Beginning cross browser testing towards the end of the development cycle is preferable, so that most, if not all, core functionality can be assessed for how it renders across multiple web browsers. Cross browser testing is usually conducted by the QA team and/or designers. Since the design team is intimately familiar with every pixel, it can be beneficial to have them involved. For Example : Testing that the UI responds appropriately across all browsers.
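
A simple way to picture this is the sketch below, which repeats one basic check in Chrome and Firefox using Selenium WebDriver's Python bindings. The site and expected title are hypothetical, and browser drivers are assumed to be available locally :

# The same basic check is repeated in two browsers. The URL and expected
# title are hypothetical; ChromeDriver and geckodriver are assumed to be
# installed locally.
from selenium import webdriver

def check_homepage(driver):
    try:
        driver.get("https://shop.example.com")   # hypothetical site
        assert "Shop" in driver.title            # same expectation in every browser
    finally:
        driver.quit()

def test_homepage_in_chrome():
    check_homepage(webdriver.Chrome())

def test_homepage_in_firefox():
    check_homepage(webdriver.Firefox())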

(6) ACCEPTANCE TESTING : While the main aim of most types of manual testing is to find bugs, this kind of testing is different. The purpose of this testing is to reveal how closely the application conforms to the user's expectations and needs; it is usually referred to as User Acceptance Testing (UAT). Acceptance testing is performed once all bugs have been addressed and confirmed. The product should be ready for market during acceptance testing, because this type of testing is designed to give the user a clear view of how the software application will look and behave in reality. Acceptance testing is generally done by the client or an actual user of the product. It is one of the most important types of testing because it is performed after development and bug fixes, as the last testing process before going into production. For Example : Testing the end-to-end flow of a piece of software, like a real estate application that allows users to upload photos and create real estate listings – acceptance testing should verify this can be done.

(7) BETA TESTING : This kind of testing is a common practice for obtaining feedback from actual users during a soft launch before the product is finally made available to the general public. It permits software teams to gain valuable insights from a broad range of users through real-world use cases of the application. Following the completion of testing by internal teams, the product can be sent for beta testing. At this point, the application must be assumed to be able to manage a high volume of traffic, especially if the beta testing audience is open. The practicalities involved in both closed and open beta testing can require intensive planning. Closed beta testing is where access to the application is provided to a restricted group of users that have been selected and defined, perhaps through a submission and approval process. Open beta testing means anyone interested can use the software in its unreleased form, which brings the advantage of obtaining feedback from a wide and varied group of testers. For Example : A new integration for use with a third-party localization tool is ready for launch following months of development. To beta test the integration, 100 volunteer users have signed up. As early users, they will be testing the integration and providing feedback on usability and reliability issues.

(8) EXPLORATORY TESTING : It has minimal guidelines or structure. Instead of following a set script for each test case, the tester is free to follow their own initiative and curiosity where they “explore” and learn about the application while conducting tests on the fly. Exploratory testing is a form of ad-hoc testing that can be used during the entire development and testing phase at times when the team feels it is required. Because of the lack of formality involved, it is often performed by those other than testers such as designers, product managers, or developers. For Example : A new feature is close to being released, and the support team conducts exploratory testing to discover if all scenarios have been anticipated in the test cases. Exploratory testing gives them the opportunity to identify any critical bugs or usability issues that had been missed earlier.

(9) NEGATIVE TESTING : It verifies how an application responds to purposely invalid inputs. Negative testing can be conducted during various stages throughout the development and testing phases, but only once error handling and exceptions have been introduced. This type of testing is typically done by the QA team or engineers and often involves working alongside copywriters to ensure proper messaging is included for each exception. For Example : To log in to a website, we would generally expect to enter a username and a password in two data fields. Negative testing seeks to find out what happens when the Enter button is deliberately pressed after only one field has been filled.
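
As an illustrative sketch, a negative test can feed deliberately invalid inputs to a form validator and assert that each one is rejected. The validate_login_form helper and its result object below are hypothetical; the test uses pytest's parametrize feature :

# Deliberately invalid inputs are fed to a hypothetical validator, and each
# one must be rejected with an error message rather than accepted.
import pytest
from myapp.forms import validate_login_form   # hypothetical module under test

@pytest.mark.parametrize("username,password", [
    ("", "secret"),         # missing username
    ("alice", ""),          # missing password
    ("", ""),               # both fields empty
    ("a" * 300, "secret"),  # unreasonably long username
])
def test_invalid_logins_are_rejected(username, password):
    result = validate_login_form(username, password)   # hypothetical result object
    assert result.is_valid is False
    assert result.error_message                        # a helpful message should be shown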

(10) USABILITY TESTING : It is the most psychologically engaging of the manual testing types because it concerns how a user feels when engaging with your product. This type of testing assesses the user-friendliness of your application by observing the behavior and emotional reaction of the user. Are they confused or frustrated? Does your product allow them to achieve their aims with minimal steps? Feedback and learnings can then be used to improve the user experience. Usability testing can take place during any phase of the development process, so specific features, or an entire application depending on the size, can be checked and assessed. When administering usability testing, engage genuine users of the application who have not been involved with its production to get real-world feedback, which you can use to improve the application. For Example : You’re developing a new game for an e-learning platform, and you want to test the user experience of starting, playing, and ending a game. Can they quickly locate what to press and when? Do they feel satisfied with the experience?

TOOLS FOR MANUAL TESTING :

(1) LOADRUNNER : It is a software testing tool from Micro Focus. One can test performance, system behavior and applications under load with LoadRunner.

Previously an HP product, it can simulate user activity through interactions with user interfaces. It also records and analyses the key components of the application.

Because of its ability to simulate user activity between interfaces, many still prefer this tool for their software testing.
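
The snippet below is not LoadRunner syntax; it is only a conceptual Python sketch of what "applications under load" means, where each thread stands in for one simulated user hitting a hypothetical endpoint :

# Conceptual only, not LoadRunner : each thread simulates one user hitting
# a hypothetical endpoint, and response times are collected for a rough
# average.
import threading
import requests

URL = "https://shop.example.com/search?q=shoes"   # hypothetical endpoint
VIRTUAL_USERS = 20
response_times = []

def simulated_user():
    response = requests.get(URL, timeout=10)
    response_times.append(response.elapsed.total_seconds())

threads = [threading.Thread(target=simulated_user) for _ in range(VIRTUAL_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"average response time : {sum(response_times) / len(response_times):.3f}s")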

(2) JMETER : It is another popular and widely preferred open-source tool. JMeter is designed as a pure Java application to measure and assess performance as well as functional behavior. Initially designed for web-based applications, it has since been extended to other test functions as well. It can be used with both dynamic and static resources and applications.

(3) SELENIUM : It is one of the most popular open-source web-based testing tools; it provides a portable software testing framework for web applications.

One does not need to learn a test scripting language; instead, Selenium provides a record-and-playback tool for authoring tests. It is simple, quick and easy to use because of this feature. Selenium also provides a test domain-specific language. Any web developer can download and use it, since it is open-source software available free of charge.
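
For a flavour of scripted use, here is a minimal sketch with Selenium WebDriver's Python bindings. The site and element name are hypothetical, and a local browser driver is assumed :

# Opens a hypothetical site, searches for a product and checks the page
# title. Requires the selenium package and a local Chrome driver.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://shop.example.com")          # hypothetical site
    search_box = driver.find_element(By.NAME, "q")  # hypothetical element name
    search_box.send_keys("running shoes")
    search_box.submit()
    assert "results" in driver.title.lower()        # hypothetical expectation
finally:
    driver.quit()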

(4) QTP : Also known as UFT (Unified Functional Testing), it provides regression and functional testing for software applications. QTP is used for quality assurance and has a graphical user interface along with keyword and scripting interfaces. It uses a scripting language to specify test procedures and to manipulate the objects and controls of the application under test.

(5) TEST LINK : This is a web-based test management system developed by Teamtest. TestLink facilitates software quality assurance and offers support for test suites, test cases, test plans, user management, and reports and statistics. Since it is a web-based tool, one needs access to a web browser and a database to install and run it.

HOW TO PERFORM MANUAL TESTING ?

• Analyze requirements from the software requirement specification document.

• Create a clear test plan.

• Write test cases that cover all the requirements defined in the document (a minimal test-case sketch follows this list).

• Get test cases reviewed by the QA lead.

• Execute test cases and detect any bugs.

• Report bugs, if any; once fixed, run the failed tests again to re-verify the fixes.
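
As a rough sketch only, the kind of information a written test case captures can be modelled like this in Python; the field names are illustrative, not a formal standard :

# Illustrative only : a minimal record of what a written test case captures.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str
    requirement: str                       # which requirement from the SRS it covers
    steps: list = field(default_factory=list)
    expected_result: str = ""
    status: str = "not run"                # e.g. "passed", "failed", "blocked"

tc = TestCase(
    case_id="TC-001",
    requirement="User can log in with valid credentials",
    steps=["Open the login page", "Enter a valid username and password", "Click Login"],
    expected_result="User lands on the dashboard",
)
print(tc)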

* Check out our previous "MANUAL TESTING (1)" blog to learn more about Manual Testing.