When the Documented Requirements Just Don’t Provide the Story That Needs to Be Told and Other Not-So-Short Stories of Woe

So here’s a challenging situation that Insight Quality (IQ) came across a year or so ago while working for a large Fortune 10 client. I want to illustrate this problem for two reasons. First, I want to share with you our “smart flexibility” solutions. Second, I want to get your feedback on the challenges that we faced as well as the solutions we provided for Requirements Management.

Project Background

Insight Quality was engaged to lead the testing strategy and implementation for a large-scale, complex COTS product that the software vendor was essentially redesigning (read: redeveloping and recoding) for IQ’s client, for use in the U.S. market. The software had previously been sold and used in European markets. The system was replacing 32 legacy systems over the span of 3+ years and integrating with 10+ systems through an SOA solution that was being developed. All in all, it was a large, complex $43M project.

IQ put together an overall testing strategy and began implementing it.  Part of that strategy consisted of assessing the testability of the existing requirements & design documentation, reviewing the document management process, and assessing the Change Management process.  After all, we couldn’t properly scope the test effort without first understanding the scope, quality, and accuracy of the requirements and design.

Challenge #1: Requirements Documentation Management

In our assessment, our first obstacle was simply acquiring all of the existing requirements and design documentation. Although the organization had a central document repository in place for the project, there was no oversight or accountability to ensure the documentation stored there was a) current, b) complete, or c) accurate. Check-in and check-out procedures had not been implemented, let alone used, within the document repository. Additionally, no one on the project had a master list of all of the documents that were approved, completed, in process, or not started. Therefore, no one, including management, had any idea what percentage of the requirements were completed or how much work was left to do.

Now, this may not seem like a challenge, but let me throw out some numbers to put it into perspective: there were 3000 business requirements and 75 functional design documents (yes, I said seventy-five); some of the documents were 20 pages long, while others were over 100 pages. Also, these functional design documents covered only the new functionality that was being (re)developed (the gaps between what my client needed and what the vendor’s vanilla system could do right out of the box); they did not represent the overall functionality of the system. Sigh.

Document Repository Management

The first thing IQ tackled was the need for a truly centralized document repository that was current, accurate, and all-encompassing. The challenge we faced when we assessed the requirements and design documentation was that, well, we couldn’t. We couldn’t acquire the documentation in order to assess it.

Once we started interviewing the business and system analysts who were responsible for authoring the documents, we learned that many of them did not understand that other project team members, beyond themselves and their business users, were also consumers of these documents and needed access to them prior to development. They were working in their own microcosms, emailing the documents to the business owners, then to senior management, then to the vendor developers, and so on. Everything was emailed, so the analysts all assumed this was fine. If you have read my prior blog about “Process”, then you’ll know where I stand on this: for a small project with 2 people and a couple of documents, sure, email is fine. But for a large, complex, constantly churning project with 60+ consumers of these documents, email is not an acceptable means of document sharing.

Unless every single person on the project team receives every single email.

And unless, every time a new project member joins the group, every single past email is forwarded to that person.

Email is acceptable as a communication vehicle, but it is NOT a substitute for a repository.

We had additional conversations with the senior management team, outlining the pitfalls of not properly utilizing and managing the document repository. We explained that they were wasting time and money developing code and test cases against outdated information, not to mention that they were, in effect, developing defects into the system. The senior management team took our advice. They assigned one person to compile a list of all existing, in-process, and planned documentation, manage that list, and ensure that the most recent documentation was maintained in the repository.

We had a small success story!

Version Control

So the next action we took was to work with the senior management team to educate them on the need to version documents and keep a history of document changes. When 60+ consumers rely on the accuracy of these documents for development, testing, and configuration, the documents require oversight and more stringent management. This ensures that everyone works off of the most recent and correct documentation, prevents rework caused by the lack of a “managed” document management process, and keeps defects from being developed into the application. “Managed” is the keyword here, because if you asked the project team, they most certainly had a document management process. It just wasn’t managed.

You might think that this conversation would be an easy, logical one to have with the senior management.  But it was not.  They agreed that going forward—for all newly created documents—this would be a good idea, but they didn’t want to invest the time to add version control to the currently “approved” documents.  We all know that “currently approved documents” can quickly become rewritten and reapproved…

Planned Release & Prioritization

Once we started performing a cursory review of the documents, we began noting and communicating discrepancies between them: one document would state that a feature was in scope while another explicitly stated that the feature was out of scope. Both documents were considered “final and approved.” We dug a little deeper and found that, although it was not documented in either document, there was a verbal agreement among the various authors: the feature IS something that needs to be in the system, but NOT for the initial release to the users. The feature could wait until a later deployment. We then asked how features and requirements were marked for “current release” and “later release”. For the most part, this information resided in people’s heads.

We were able to successfully convince the management team to have the analysts assign an accurate release to every one of the 3000 requirements, which was no small feat. This immensely helped both the development and testing teams narrow focus for their respective workloads.
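To make the release tagging concrete, here is a minimal sketch in Python of the kind of structure we pushed for; the requirement IDs, descriptions, and release labels are entirely hypothetical, not from the actual project. Once every requirement carries a release tag, narrowing the workload becomes a mechanical filter instead of guesswork.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    description: str
    release: str  # e.g., "R1" = initial go-live, "R2" = later deployment

# Hypothetical requirements, each tagged with the release it belongs to.
requirements = [
    Requirement("REQ-0001", "Capture customer billing address", "R1"),
    Requirement("REQ-0002", "Support multi-currency invoicing", "R2"),
    Requirement("REQ-0003", "Validate order totals against tax tables", "R1"),
]

# Development and testing can now narrow focus to the initial release.
initial_release = [r for r in requirements if r.release == "R1"]
for r in initial_release:
    print(r.req_id, "-", r.description)
```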

Additionally, almost all of the 3000 business requirements were marked with a “priority” of “critical.” Very few were marked as “High”, “Medium”, or “Low” (<17% combined). We tried to explain that when you have a pre-designated go-live date, which this project had, you have only two other levers to pull to meet it: scope and resources. The project team needed to manage requirement priorities in order to protect the timeline, i.e., reduce the priority of many requirements so that, if time ran short, some could more easily be negotiated into a later deployment release. Management needed to achieve a balance of requirement priorities to set the project up for success. This not only sets the expectations of the end users in advance, but is a far better alternative to simply not delivering “critical” requirements that were expected.

However, this was a battle that we lost. Over 83% of the initial release’s requirements remained at critical priority.

Challenge #2: Requirements and Design Explosion!

While the assessment of requirements and design management was occurring, over a couple of months, mind you, the scope exploded. When IQ was first brought onto the project, we had to quickly provide a cursory test estimate based on interviews with senior management; we could not wait until our requirements assessment was complete. There was simply no time. Our client had provided an extremely small budget for testing resources, but based on the scope conveyed to us at the time, we believed we could build the right team.

Well, the scope at that time was not the 3000 business requirements and 75+ documents that it ended up being during the aforementioned requirements assessment.  During our interviews with the management team, it was a little over 1000 business requirements and around 20-25 documents. So, the scope increased three-fold.  The testing budget, however, remained the same.  As did the timeline.

We were stuck with the typical dilemma: too much testing work, not enough resources, and not enough time. We knew there was no way our now-too-small testing team could review and synthesize the requirements & design documentation and also design manual test cases. Even narrowing the scope to just the “critical” requirements provided no relief (remember, almost all 3000 were deemed critical, so this method didn’t help us much).

We decided that we needed to do 3 things:

1)    Shorten the time the test team needed to synthesize the requirements, design, and business information

2)    Accelerate the test team’s understanding of the system

3)    Minimize the back-and-forth Q&A between the test team and the analysts

How would we do this? Insight Quality decided on a different approach and sold it to the senior management team. We were going to take a Pairwise approach, matching up the Test Designer and the Analyst, for what we now call Test Case Modeling. (Note: This should not be confused with the “all-pairs” or “pairwise” method of test design using orthogonal arrays. It’s our play on the word “pairwise.”)

Pairwise Test Case Modeling

Yes, Test Case Modeling. We decided to combine static testing with high-level test case design, occurring simultaneously, in many working sessions with both a tester and an analyst. It was a brainstorm, requirements review, and Q&A session wrapped into one working meeting. Each working meeting had a facilitator documenting high-level test case names on a whiteboard and tracing them to the requirements. The goals were to utilize the analyst’s intimate knowledge of the business and functional design, combine it with the tester’s knowledge of the testing craft, and have these team members agree on the scope of the tests.
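For a feel of the artifact these sessions produced, here is a minimal sketch, with hypothetical test case names and requirement IDs, of the whiteboard output: high-level test case names traced to requirements, plus a quick check for requirements no session has covered yet.

```python
# Hypothetical sketch of a session's whiteboard output: high-level test
# case names traced back to the requirement IDs they exercise.
traceability = {
    "TC-Order-Create-Happy-Path": ["REQ-0001", "REQ-0003"],
    "TC-Order-Create-Missing-Address": ["REQ-0001"],
    "TC-Order-Tax-Boundary-Values": ["REQ-0003"],
}

all_requirements = {"REQ-0001", "REQ-0002", "REQ-0003"}

# Requirements touched by at least one high-level test case so far.
covered = {req for reqs in traceability.values() for req in reqs}

# Gaps surface immediately and become the agenda for the next session.
print("Uncovered requirements:", sorted(all_requirements - covered))
```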

I bet you are intrigued as to how this worked out, aren’t you?  Well, there were good results and bad results.  Here is what we learned.  Bad stuff first.

1)    Our test leads were provided by the offshore testing vendor (the client had us build the team using a pre-approved offshore vendor for many reasons, including budget). Unfortunately, the functional test leads/managers were not skilled or qualified to be in a lead/manager role on a project of this complexity and magnitude. This proved disastrous during the brainstorming sessions. Instead of bringing knowledge of the testing craft to the table, the Test Leads essentially stopped thinking analytically and simply scribed what the analysts thought should be tested, and how.

2)    The analysts felt they were wasting their time, explaining the same concepts over and over again to the testers. There was also a slight communication and language barrier between the test team and the analysts: if the analysts were not literal in their explanations and instead used allegories, metaphors, or sarcasm, the translation failed.

3)    The combination of the test leads’ lack of skills and the analysts’ lack of support caused irreparable damage to the functional test team’s ability to perform the task at hand.

Wow. That’s a lot of bad, isn’t it? What were the good things, you ask? Well, even though the requirements and functional design had already gone through rigorous reviews and approvals by senior management and the business end users, there were evidently many, many missed opportunities to uncover defects designed right into the requirements and design documentation.

By having the test team and the analysts work together performing static testing during these Test Case Modeling working sessions, we uncovered A LOT of gaps, assumptions, and merely alluded-to requirements. Because these defects in the requirements and design were mitigated, we effectively prevented defects from being programmed into the system.

For those of you who aren’t familiar with the term “static testing,” it is testing of the documentation without actually executing code. Since requirements are frequently cited as the source of the majority of defects (figures around 80% are common), performing a detailed review of the documentation is an essential, proactive tool for limiting the defects designed into the system. Static testing aims to identify ambiguous requirements (those a programmer could interpret differently than the analyst intended), gaps, missing information, contradicting requirements, and the like. It’s an essential process in application development.

So, although we had some personnel challenges during the Test Case Modeling effort between the client’s offshore testing team and the client’s staff, ultimately, it was successful in that we uncovered many, many critical defects.  Additionally, we will likely use the Pairwise Test Case Modeling approach again, but the team dynamic needs to be improved in terms of skills and communication.

Challenge #3: All of This Documentation Exists, But We’re Still Missing the Main Story?!?

As we stated previously, the Pairwise Test Case Modeling uncovered many critical defects.  What Insight Quality uncovered were a couple of key items:

1)    The analysts were working in functional silos and not together, so there were many major inaccurate assumptions designed into the functional requirements.

2)    The analysts were designing requirements for a complex data-driven system built on many objects, their respective states, and the relationships between one object’s states and another’s. Yet this crucial information, the crux of all of the logic and rules in the application, had been neither thought through nor documented. These requirements were simply missing.

What did we do?  Insight Quality decided to escalate this to the senior management team and gained their support to lead all the analysts in a 3-week effort to systematically work through the challenge.  We were well into development at this point.  How did we take this on?

State Transition & Object Interrelationship Exercise

Insight Quality started by having all of the analysts identify every state each object could be in. For example, if we were talking about an order system such as Amazon.com, we would need to identify all of the states of the order object: new, in-process, saved, confirmed, billed, pending payment, paid, pending fulfillment, fulfilled, etc. There were many objects in this system, each with many different states.
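As an illustration only, here is a minimal sketch in Python of this first step for the hypothetical Amazon-style order object; the transition table is the piece the analysts had to produce, and every missing arrow in it is a question that otherwise gets answered by a developer’s guess.

```python
# Hypothetical sketch of step one: enumerate every state an object can
# be in, then make the legal transitions between states explicit.
from enum import Enum

class OrderState(Enum):
    NEW = "new"
    IN_PROCESS = "in-process"
    SAVED = "saved"
    CONFIRMED = "confirmed"
    BILLED = "billed"
    PENDING_PAYMENT = "pending payment"
    PAID = "paid"
    PENDING_FULFILLMENT = "pending fulfillment"
    FULFILLED = "fulfilled"

# Which state changes are legal? Each entry is a requirement the
# analysts must answer before a developer guesses at it.
TRANSITIONS = {
    OrderState.NEW: {OrderState.IN_PROCESS, OrderState.SAVED},
    OrderState.IN_PROCESS: {OrderState.SAVED, OrderState.CONFIRMED},
    OrderState.SAVED: {OrderState.IN_PROCESS},
    OrderState.CONFIRMED: {OrderState.BILLED},
    OrderState.BILLED: {OrderState.PENDING_PAYMENT},
    OrderState.PENDING_PAYMENT: {OrderState.PAID},
    OrderState.PAID: {OrderState.PENDING_FULFILLMENT},
    OrderState.PENDING_FULFILLMENT: {OrderState.FULFILLED},
}

def can_transition(src: OrderState, dst: OrderState) -> bool:
    return dst in TRANSITIONS.get(src, set())

print(can_transition(OrderState.CONFIRMED, OrderState.BILLED))  # True
print(can_transition(OrderState.FULFILLED, OrderState.NEW))     # False
```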

The next step was to systematically identify which data fields could be modified (or not) while an object was in a specific state. In the prior example, a user who is initially creating an order can easily go back and change items in his/her shopping cart; this is all before placing the order. Once the order has been placed, however, a user who decides to decrease the quantity of an item within the placed order will notice that the field cannot be modified. This is because the workflow of that order is already moving through the sales process, and allowing the user to make that change at that particular state of the order might have detrimental ripple effects throughout the system. These workflows, limitations, and rules should be defined in the requirements and design.
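Continuing the hypothetical order example, this second step boils down to a simple table of which fields may be edited in which state; the field names below are illustrative, not from the actual system.

```python
# Hypothetical sketch of step two: for each state, which fields may be
# modified? Any field not listed is read-only while in that state.
EDITABLE_FIELDS = {
    "new": {"items", "quantities", "shipping_address", "payment_method"},
    "in-process": {"items", "quantities", "shipping_address"},
    "confirmed": {"shipping_address"},  # quantities are now locked
    "billed": set(),                    # nothing may change once billed
}

def can_edit(state: str, field: str) -> bool:
    return field in EDITABLE_FIELDS.get(state, set())

print(can_edit("new", "quantities"))        # True: the cart is still open
print(can_edit("confirmed", "quantities"))  # False: the order is placed
```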

The last step was to systematically identify the relationships between the states of two or more objects and surface any additional layers of logic that needed to be analyzed. This proved to be the most valuable exercise, as it captured the crux of the entire system’s logic.
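The third step, again sketched with hypothetical objects and rules, reduces to predicates that span the states of two or more objects; each one below is exactly the kind of statement that, we found, lived only in someone’s head.

```python
# Hypothetical sketch of step three: rules relating the state of one
# object to the state of another. Each predicate is a requirement that
# must be written down rather than assumed.
def shipment_may_be_created(order_state: str, payment_state: str) -> bool:
    # A shipment may only be created for a confirmed order that is paid.
    return order_state == "confirmed" and payment_state == "paid"

def order_may_be_cancelled(order_state: str, shipment_state: str) -> bool:
    # An order cannot be cancelled once its shipment leaves the warehouse.
    return (order_state != "fulfilled"
            and shipment_state not in {"shipped", "delivered"})

print(shipment_may_be_created("confirmed", "paid"))    # True
print(order_may_be_cancelled("confirmed", "shipped"))  # False
```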

What we found at the start of the 3-week exercise was that the analysts themselves could not explain or define these requirements for their own subject areas. The analysts had trouble not only identifying all of the data fields available for an object, but also determining whether a data element could be modified when the object’s state changed, and mapping the interrelationships between the various data objects.

Although not all of the analysts appreciated the identification of missing crucial logic in their “final” work product, they all understood the importance of what we were trying to do—proactively prevent missing functionality and defects from being designed into the system.

Had Insight Quality not intervened with senior management, all of this missing logic would never have been provided to the developers to code; it would never have been developed. It would also never have been provided to the test team to validate, and therefore would never have been tested. Since users typically focus on happy-path scenarios during User Acceptance Testing (UAT), tons of crucial logic would likely have slipped through there as well. The more likely scenario: this essential logic would have been discovered missing in Production, and the entire system would have had to be rolled back. Yes, this missing logic was that crucial.

We believe that our state transition and object interrelationship exercise proved extremely valuable.

Final Thoughts

In sum, Insight Quality provided a number of “smart flexibility” solutions to address an ailing Requirements and Design Management process: managing the existing document repository, adopting version control, assigning releases and priorities, experimenting with Pairwise Test Case Modeling, and conquering the state transition & object interrelationship quandary.

Now, we’d like to hear from you.  What do you think about the challenges Insight Quality faced?  What do you think about the solutions we provided?  What would you do differently?  If you have encountered any of these situations, would you apply any of the solutions that we used?

Thanks for reading.

-Virginia
