Testing with a Low-Code Platform, whilst using Scrum.
Fairly new Scrum Master here with a massive headache at the moment in relation to testing. I have visited other forums; however, I am often left with more questions than answers. So far, at a high level, I have assumed:
1. There is NO formal documentation which clearly outlines the type of testing that is to be performed in order to release a successful increment during a sprint.
2. Testing, and the type of testing needed, depends on the platform you are using to build with and the type of product you are building. As we are using a low/no-code platform, can I assume there is less testing to do, or that we can omit certain types of testing? In that case, will there never be any formal documentation in Scrum with regard to testing?
3. Testing is NOT to be done outside the Sprint.
4. There are hundreds of different types of testing for Agile Software Development, for example, TDD, BDD, Functional Tests, Examples, Story Tests, Prototypes, Simulations, Exploratory Testing, Scenarios, UAT, Unit Testing, Component Testing, Performance/Load Testing and 'Ility' Testing, which is slowly making me lose my mind.
How on earth are we supposed to navigate through this QA quagmire?
Any help would genuinely be appreciated
Have you identified any organizational standards for ensuring that work is of usable quality, and that company brand and reputation will not be put at risk?
The Developers will have to meet this standard as a minimum, since they are accountable for quality, by ensuring that at least one Done, usable Increment is provided every Sprint. Do they have any insights into this standard? If the quality isn't there, they will be collectively accountable and will have to fix it.
So, have you found anything in the Scrum Guide or forums that says what languages have to be used for writing software? Or what applications have to be used to manage the code repositories? Or any formal documentation that outlines the formatting and coding standards that must be followed in order to produce usable, valuable increments? That is because Scrum does not dictate any of those things. They are something that each organization and team has to decide on their own. The same goes for testing. Testing is something that is done by the Developers to ensure that the code works and produces the expected results. How that testing is done is entirely up to the Developers. As @Ian points out, the larger organization might have some standards around how tests are done and what qualifies as a "good test". The Developers should abide by those standards, but they are free to add more stringent criteria if they choose.
How on earth are we supposed to navigate through this QA quagmire?
This "quagmire" you mention has never been a problem for any organization in which I have worked. I have spent a large part of my career with "QA" in my job title, but I have never had any issues with incorporating quality assurance into various software development models. The context of having a separate QA organization that "tests" everything after the "developers are done" worked in the days when a software release could be delivered after months of work. Today, people with QA in their titles are mostly embedded in teams alongside others who have Software Engineer or Software Developer as their titles. The testing is done as the coding is done. Tests are identified before the coding begins and they are executed continuously as the coding is undertaken. It can be done via automation or manually, whichever the team decides is best.
In the case you described, what has the Scrum Team discussed on the topics of what constitutes a good test and how to test? What has been tried? Are you adapting as you learn more from the attempts you have made? That is how successful teams do this. Since you have your PSD I certification, you should understand that concept.
which is slowly making me lose my mind.
Haha, you have my sympathy 😊
As the posts above mention, Agile and Scrum do not dictate what testing should be done. There are recommendations, but the types of testing depend on your situation.
From your question I assume you have a large code base, but with few tests and the tests are likely not automated.
You are very likely doing manual testing and exploratory testing already: physically going through every page or app and testing. This is tedious and will need to be repeated regularly, so it is advisable to automate the tests, or at least create tests that can be run on demand. Exploratory testing is still useful for trying out things that the automated tests do not (yet) cover.
I had to look up the meaning of a low-code platform. On such a platform, writing tests will likely be more time-consuming than developing the functionality. However, even with low-code platforms, unit testing, API testing and UI testing are still advised. These three form a suite of regression tests, which makes it easy to show whether new code has broken existing functionality. Unit tests are small and test a small unit of code. API tests work via a programming interface, and UI tests are often similar to API tests but work via the user interface or browser.
These will take work and time, but it will pay off in the long run.
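To make "unit test" concrete, here is a minimal sketch in Python (the language is just for illustration; the `discount` function and its 10% rule are invented, not taken from any real platform). The point is a small, fast, repeatable check of one piece of business logic:

```python
import unittest

# Hypothetical business rule pulled out of the app for illustration:
# orders of 100 or more get a 10% discount, smaller orders get none.
def discount(order_total: float) -> float:
    """Return the discount amount for an order total."""
    return round(order_total * 0.10, 2) if order_total >= 100 else 0.0

class DiscountTests(unittest.TestCase):
    def test_no_discount_below_threshold(self):
        self.assertEqual(discount(99.99), 0.0)

    def test_discount_at_threshold(self):
        self.assertEqual(discount(100.00), 10.0)

if __name__ == "__main__":
    unittest.main()
```

A suite of tests like this can be run on every change, which is exactly the regression safety net described above.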
In Agile/Scrum the testing needed is usually documented in the Definition of Done (DoD). If you do not already have one, create a DoD with the team(s).
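Purely as an illustration (the exact items are a team and organization decision, not prescribed by Scrum), the testing part of a DoD might look something like:

```text
Definition of Done (example only, to be agreed by the team):
- Acceptance criteria on the Product Backlog item are met
- Unit tests written/updated and passing
- Regression suite (unit + API + UI) passing
- Exploratory testing performed on the changed areas
- Peer review completed
- No known critical defects outstanding
```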
Scrum does not prohibit documentation. Document as much as the Team sees necessary. (Jira is often used and a QA comment on a Jira ticket can often be enough.)
I am unsure about the meaning of "outside the Sprint", but testing can be done over multiple Sprints if the work rolls over; the DoD determines when work is Done. In larger companies, the Support Team or Release Team might do their "own" testing, likely UAT, and that might be outside of the Scrum Team's domain, if that was the question.
With traditional programming languages (C#, Java etc.) the base of the testing pyramid is unit tests (e.g. NUnit for C#, JUnit for Java), then API tests, and finally UI tests.
Since you work in a low-code environment I am not sure about the feasibility, but I would recommend asking the team to investigate setting up unit tests (e.g. with NUnit) and then UI testing with Selenium or Playwright (if your UI runs in a web browser).
API tests access and test code functionality, e.g. load a client record and update the client details via code in a test.
UI tests can usually test everything accessible via the UI, but they generally run longer and take more time. Unit tests are smaller and more economical.
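To illustrate the "load a client record and update the details via code" example, here is a sketch in Python. `ClientService` is a hypothetical in-memory stand-in for the platform's programming interface; a real API test would call the actual service, but the shape of the test is the same:

```python
import unittest

# Hypothetical in-memory service standing in for the platform's API layer.
class ClientService:
    def __init__(self):
        self._clients = {}

    def create(self, client_id, name):
        self._clients[client_id] = {"id": client_id, "name": name}
        return self._clients[client_id]

    def update_name(self, client_id, name):
        self._clients[client_id]["name"] = name
        return self._clients[client_id]

    def load(self, client_id):
        return self._clients[client_id]

# The API-style test: drive the functionality through code, no UI involved.
class ClientApiTest(unittest.TestCase):
    def test_load_and_update_client(self):
        svc = ClientService()
        svc.create(7, "Acme Ltd")
        svc.update_name(7, "Acme Holdings")
        self.assertEqual(svc.load(7)["name"], "Acme Holdings")

if __name__ == "__main__":
    unittest.main()
```

Because it skips the browser entirely, a test like this runs in milliseconds, which is why the middle of the pyramid is cheaper than UI tests.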
With TDD, the test is written first and then the coding follows. BDD you are likely using already, and it can be useful in a low-code environment as a collaborative effort to determine what should be tested, what the acceptance criteria are, and also what type of test to use in each scenario. UAT determines whether the client/user can accept the functionality or application. Performance and load testing stresses the system with, say, many concurrent users, but can be focussed on later once the other tests are more mature; you probably need some additional infrastructure for load testing anyway. Ignore the other terms for now. Don't get caught up in all the terminology. Just use what you need.
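To make the TDD cycle concrete, here is a minimal red/green sketch in Python (`parse_client_id` and its "CL-" format are invented for illustration). The test is written first to specify the behaviour; only then is just enough code written to make it pass:

```python
import unittest

# Step 1 (red): specify the behaviour in a test before any implementation
# exists. Running it at this point would fail, which is the point of TDD.
class ParseClientIdTest(unittest.TestCase):
    def test_strips_prefix_and_returns_int(self):
        self.assertEqual(parse_client_id("CL-00042"), 42)

    def test_rejects_malformed_input(self):
        with self.assertRaises(ValueError):
            parse_client_id("00042")

# Step 2 (green): write just enough code to make the tests pass.
def parse_client_id(raw: str) -> int:
    if not raw.startswith("CL-"):
        raise ValueError(f"not a client id: {raw!r}")
    return int(raw.removeprefix("CL-"))

if __name__ == "__main__":
    unittest.main()
```

The third step, refactor, then tidies the code while the tests keep it honest.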
This may be a bit outside of Scrum and only a rough, hand-waving explanation, but I hope it is helpful.