
Saturday, 2 April 2016

How do you view the role of assurance in the design, development and delivery of technology enabled change?



       Software quality assurance (SQA) is a means of monitoring the software engineering processes and methods used to ensure quality. The methods by which this is accomplished are many and varied, and may include ensuring conformance to one or more standards, such as ISO 9000, or to a model such as CMMI.
       SQA encompasses the entire software development process, which includes processes such as requirements definition, software design, coding, source code control, code reviews, software configuration management, testing, release management, and product integration. SQA is organized into goals, commitments, abilities, activities, measurements, and verifications.
       Software quality assurance, according to ISO/IEC 15504 v.2.5 (SPICE), is a supporting process that provides independent assurance that all work products, activities and processes comply with the predefined plans and with ISO 15504.
Role of Assurance in Software Design
       The quality assurance (QA) role is responsible for guaranteeing a level of quality for the end client and for helping the software development team to identify problems early in the process. It is not surprising that people in this role are often known as "testers". Of course, the role is more than just testing: it is about contributing to the quality of the final product.
       The QA role is focused on creating a quality deliverable. In other words, it is the responsibility of the QA role to make sure that the software development process doesn't sacrifice quality in the name of completing objectives.
       The QA role works with the Functional Analyst (FA) and the Solutions Architect (SA) to convert the requirements and design documents into a set of test cases and scripts, which can be used to verify that the system meets the client's needs. This collection of test cases and scripts is collectively referred to as a test plan. The test plan document itself is often simple, providing an overview of each of the test cases. The test cases and scripts are also used to validate that there are no unexplained errors in the system.
       The test plan is approved by the Subject Matter Experts (SMEs) and represents the criteria for reaching project closure. If the test cases and scripts in the test plan are the agreed-upon acceptance criteria for a project, then all that is necessary for project closure is to demonstrate that all of the test cases and scripts have been executed successfully with passing results.
       A test case is a general-purpose statement that maps to one or more requirements and design points. It is the overall item being tested: it may be a specific usability feature, or a technical feature that was supposed to be implemented as part of the project.
       Test scripts fit into test cases by validating them. Test scripts are step-by-step instructions on what to do, what to look for, and what should happen. While test cases can be created with almost no input from the architecture or design, test scripts are specific to how the problem was solved by the software development team, and therefore require an understanding not only of the requirements but also of the architecture, design, and detailed design.
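       To make the distinction concrete, here is a minimal sketch in Python of one test case and its scripts, written as pytest tests. Everything in it is hypothetical: FakeLoginPage stands in for the real system, and the requirement ID REQ-042 is an invented example of how a case traces back to a requirement.

# A minimal pytest sketch of one test case and its test scripts.
# FakeLoginPage, the fixture and REQ-042 are all hypothetical stand-ins.
import pytest


class FakeLoginPage:
    """Stand-in for the system under test."""

    def __init__(self):
        self.user = None

    def submit(self, username, password):
        # Hypothetical rule: any non-empty credentials succeed.
        if username and password:
            self.user = username
            return "dashboard"
        return "error"


@pytest.fixture
def login_page():
    return FakeLoginPage()


# Test case: "Registered users can log in" (maps to requirement REQ-042).
# Each test below is a script: step-by-step inputs and expected results.
def test_registered_user_can_log_in(login_page):
    # Step 1: submit valid credentials.
    destination = login_page.submit("alice", "s3cret")
    # Step 2: expect to land on the dashboard, logged in as alice.
    assert destination == "dashboard"
    assert login_page.user == "alice"


def test_empty_password_is_rejected(login_page):
    # Negative script for the same case: a missing password must fail.
    assert login_page.submit("alice", "") == "error"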
The assurance role is split into three parts:
       First, the role creates test cases and scripts.
       Second, the role executes or supervises the execution of those test cases and scripts.
       Third, the role facilitates or performs random testing of all components to ensure that there isn't a random bug haunting the system.
       In some organizations, the quality assurance role has two specializations. The first is the classic functional testing and quality assurance described above. The second is a performance quality assurance role, where the performance of the completed solution is measured and quantified. The performance QA role is an important part of the quality assurance process for large system development.
How QA fits in the Organization

Role of Assurance in Development

Role of Assurance in Software Delivery
       Everybody would agree that quality is an important part of the software development process. However, the complexity involved in delivering quality is often poorly understood and the amount of effort it requires tends to be underestimated.
10 myths about Quality Assurance in software development

1. Quality assurance is testing
       You need to worry if people start using “quality assurance” and “testing” as interchangeable terms. The reality is that testing is just one part of quality assurance.
       Good quality assurance should encompass the entire development process from the very start of requirements gathering all the way to maintenance. Not only does this involve a range of different test techniques but it should also take in the standards, processes, documentation and sign-off gates that are used throughout the entire development life-cycle.
2. You can eliminate all the bugs from a system
       Expectations need to be managed. One of the great unprovable laws of computing is that all systems have bugs. You will never eliminate all of the bugs in a system; it's just a matter of chasing them down to an acceptable level.
       The testing expert Boris Beizer estimated that his private bug rate was 1.5 bugs per line of executable code, including typing errors. The majority of these bugs are found and corrected by the developer as the code is being written, and much of testing can be seen as weeding out as many of the remaining bugs as possible.
       On larger systems, the maintenance phase of the life-cycle is principally concerned with managing an on-going bug list. There is a list of “known defects” that are tolerated because the overall level of system quality is regarded as sufficient. It’s important that everybody understands this and agrees what an acceptable level of defects might be.
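       To illustrate, the agreed "acceptable level of defects" can be written down explicitly. The Python sketch below checks the open defect list against negotiated ceilings per severity; the severity names and thresholds are illustrative assumptions, not a standard.

# A sketch of an agreed "acceptable defect level" check. The ceilings
# below are illustrative assumptions a project would negotiate itself.
from collections import Counter

ACCEPTABLE_OPEN_DEFECTS = {"critical": 0, "major": 2, "minor": 10}


def release_ready(open_defects):
    """Return True if open defects stay within the agreed ceilings.

    open_defects: an iterable of severity strings, e.g. pulled from
    a bug tracker's list of known defects.
    """
    counts = Counter(open_defects)
    return all(
        counts.get(severity, 0) <= ceiling
        for severity, ceiling in ACCEPTABLE_OPEN_DEFECTS.items()
    )


print(release_ready(["major", "minor", "minor"]))  # True
print(release_ready(["critical"]))                 # False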
3. You should automate all of your testing
       Automated testing can accelerate the overall testing effort, but it does not eliminate the need for manual testing. Effective testing is best achieved through a combination of automated and manual testing.
       Automated tests can reduce the need for some repetitive manual testing but they tend to use the same set of inputs on each occasion. A piece of software that consistently passes a rigid set of automated tests may not fare so well once it is subjected to the more random and unpredictable inputs of human testers. The expert eye of a seasoned quality assurance professional will provide a more rigorous test than an automated script.
       It can also be very difficult to bed in any kind of reliable automated testing in the early stages of a project or for new functionality. Most development is in flux at first and it can be tough to decide when best to start building in test automation. Some software platforms suffer from a relative shortage of test frameworks which can further undermine the scope of automation.
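       One way to get some of that human unpredictability into automation is property-based testing. The Python sketch below uses the hypothesis library to generate hundreds of arbitrary inputs instead of one fixed set; the slugify function is a hypothetical stand-in for code under test.

# Property-based testing: hypothesis generates arbitrary inputs,
# including edge cases a fixed script would never try.
# slugify is a hypothetical example of code under test.
import re

from hypothesis import given, strategies as st


def slugify(text):
    """Make a URL-safe slug from arbitrary text."""
    text = re.sub(r"[^a-z0-9]+", "-", text.lower())
    return text.strip("-")


@given(st.text())
def test_slug_is_always_url_safe(text):
    slug = slugify(text)
    # Property: the result is empty or dash-separated lowercase ASCII.
    assert slug == "" or re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", slug)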
4. Testing is easy
       Quality assurance professionals are often underestimated and undervalued, mainly by people who do not quite understand what value they bring to a project.
       Really good quality assurance professionals are like gold dust. They combine deep knowledge of test techniques with a genuine enthusiasm for quality. They can find faults that anybody else in the project would overlook. They will make your testing more efficient by second-guessing where defects are most likely to be found. They also bring a broader perspective to the project, based on a deep understanding of both the business requirements and the development process.
5. Only the quality assurance team need to be involved in testing
       Quality assurance professionals really add value because they care deeply about quality and have a superior grasp of what to look for when testing a system. However, quality should be something that everybody takes some responsibility for.
       It can be dangerous to leave quality assurance to a separate team of testers, as it helps to enforce the idea that only a specialist can usefully test software. It also implies a sequential model of development based on functional silos, where business analysts write requirements, technical architects design solutions, developers write code and quality assurance tests the end result.
       This sequential model feels dated and it can encourage team members to absolve themselves of responsibility for the overall quality of the system. More modern, agile development approaches help to counter this by encouraging a more collaborative approach. Techniques such as continuous integration and iterative releases can also help to foster a shared responsibility for system quality.
6. The more testing you do the better
       Many projects start with the intention of having 100% test coverage for the system. This is usually unrealistic and rarely achieved, as coverage tends to shrink in response to changing development schedules. This can lead to decisions about which areas to test being made on the fly rather than through a more systematic approach to determining priorities.
       Any decisions about priority should take into account risk and business imperatives, so that the areas with the greatest potential impact receive the greatest coverage. This risk-based approach accepts that complete test coverage is unrealistic, but it prepares you to make more informed decisions about the most sensible areas to concentrate on.
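       As a sketch of the idea in Python: score each area by likelihood of defects times business impact, and spend test effort from the top of the list down. The areas and scores below are invented for illustration; real numbers would come from the business.

# Risk-based test prioritization: rank areas by likelihood x impact.
# The areas and 1-5 scores below are illustrative assumptions.
areas = [
    # (area, likelihood of defects 1-5, business impact 1-5)
    ("payment processing", 4, 5),
    ("report formatting", 3, 2),
    ("user registration", 2, 4),
    ("admin audit log", 1, 3),
]

# Highest risk first, so test effort lands where it matters most.
for area, likelihood, impact in sorted(
    areas, key=lambda a: a[1] * a[2], reverse=True
):
    print(f"{area:20s} risk score = {likelihood * impact}")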
7. You can leave quality assurance to the end
       A lot of projects are planned with a certain amount of testing to be carried out once development has been completed. This can seem sensible as it allows you to test and fix the completed system in its entirety through a number of quality assurance cycles.
       The catch is that the time available for these quality assurance cycles tends to get squeezed as the project wears on. The inevitable delays that creep into development can make the later stages a rushed affair. When you are faced with the choice between a round of testing and the addition of a new feature it’s easy to skimp on the quality assurance.
       It is also a very inefficient approach to testing, as major bugs can be left to fester in the system until the later stages of the project. It is always cheaper to fix bugs earlier in the development cycle than to wait until the end, by which point they are likely to have become more deep-seated and the code will no longer be fresh in the developer's mind.
8. Performance testing is only worth doing on a production environment
       Performance testing is often left to a set of load tests at the tail-end of a development schedule. This approach tends to concentrate on finding the points at which a system crashes rather than ensuring an acceptable level of performance throughout the system. It also leaves things far too late, as by this stage any serious performance problems will be costly and time-consuming to fix.
       It's always best to work performance testing into the development life-cycle. Use code profiling tools to check for bottlenecks in code that may come back to haunt you. Define metrics for performance during the design phase and use prototypes to evaluate architectural choices. Above all, plan and monitor system performance throughout the entire development rather than waiting for the "big bang" of load testing.
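       As a small illustration of profiling during development, the Python sketch below uses the standard-library cProfile to find a hot spot. The report_totals function is a deliberately naive, hypothetical stand-in for a suspected bottleneck.

# Profile a suspected bottleneck with the standard-library cProfile.
import cProfile
import pstats


def report_totals(rows):
    """Hypothetical hot spot: rebuilding the list each pass is quadratic."""
    totals = []
    for row in rows:
        totals = totals + [sum(row)]  # copies the whole list every pass
    return totals


def main():
    rows = [[i % 7 for i in range(200)] for _ in range(5000)]
    report_totals(rows)


profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# Show the ten most expensive calls by cumulative time.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)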
9. Security and quality are different activities
       Security testing is often relegated to a single audit of a system just before it goes live. As with any last-minute testing, this only creates extra cost, as issues are far cheaper to fix if they are caught earlier in the development process. Last-minute assessments such as penetration tests can provide valuable assurance before go-live, but they should not be the first test of security vulnerabilities.
       A genuine commitment to security requires something more substantial than an audit. Ideally, a risk-based approach to identifying and remedying vulnerabilities should be used throughout the development process. Security audits should be built into the architecture design and code review processes. Above all, you should develop a coherent idea of what the risks are and how they have been addressed by the system.
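       A small example of building security into routine testing rather than leaving it to a final audit: the Python sketch below pins down that user input travels as a bound parameter and is never interpolated into SQL. The build_user_query helper is hypothetical.

# A security-minded unit test that runs on every build, not just at
# a final audit. build_user_query is a hypothetical helper.
def build_user_query(username):
    """Return a parameterized query and its bound arguments."""
    # User input is never interpolated into the SQL string itself.
    return "SELECT id FROM users WHERE name = ?", (username,)


def test_user_input_is_never_interpolated():
    malicious = "x'; DROP TABLE users; --"
    sql, params = build_user_query(malicious)
    # The attack string must travel as a bound parameter, not as SQL.
    assert malicious not in sql
    assert params == (malicious,)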
10. Quality assurance adds cost
       It can be tempting to see quality assurance as an overhead, and when schedules start to slip it looks like an obvious place to cut, but this is a false economy.
       I generally find that a willingness to skimp on quality assurance is a sign of inexperience. If you have ever witnessed a project sink into a quagmire of endless bug-fixing then you would never try to cut back on quality assurance. There is no such thing as a “quick and dirty” project – the “dirty” always remains long after the “quick” has been forgotten.
       A project that is beset by quality assurance difficulties is a grueling experience. It's also an expensive one, as you end up pouring resources into fixing bugs that could and should have been caught earlier in the development process. It blows a hole in profitability, damages the reputation of your business, undermines user confidence and demoralizes development teams. Quality assurance really isn't a luxury.
What are the limits of accountability for the assurance function in delivering technology solutions?
       Defining accountability more precisely also means asking: accountability for what? Three general categories emerge from answering this question.
       Financial Accountability: Financial accountability concerns tracking and reporting on allocation, disbursement, and utilization of financial resources, using the tools of auditing, budgeting, and accounting.
       Performance Accountability: Performance accountability refers to demonstrating and accounting for performance in light of agreed-upon performance targets. Its focus is on the services, outputs, and results of public agencies and programs. Performance accountability is linked to financial accountability in that the financial resources to be accounted for are intended to produce goods, services, and benefits for citizens, but it is distinct in that financial accountability's emphasis is on procedural compliance, whereas performance accountability concentrates on results.
       Political/Democratic Accountability: In essence, political/democratic accountability has to do with the institutions, procedures, and mechanisms that seek to ensure that government delivers on electoral promises, fulfills the public trust, aggregates and represents citizens' interests, and responds to ongoing and emerging societal needs and concerns. The political process and elections are the main avenues for this type of accountability. In many countries, both developing and developed, health care issues often figure prominently in political campaigns, and building health facilities or providing affordable drugs can be attractive options for politicians in generating electoral support. Beyond elections, however, political/democratic accountability encompasses citizen expectations for how public officials act to formulate and implement policies, provide public goods and services, fulfill the public trust, and implement the social contract through policy-making and service delivery.
       The assurance function also supports delivery throughout the technology life-cycle, providing technical direction for service improvement and upgrade projects. This includes vetting and approving the continued evolution and upgrade of existing services, reviewed against the IT road-map.
       It also explores and promotes the business advantages of technology innovations.
