What is the Weighted Scoring Method?

Practical Definition

A method of scoring options or solutions against a prioritized requirements list to determine which option best fits the selection criteria. It can be used by a Project Management Office (PMO) when determining the suitability of a proposed project for implementation, when selecting new products or services, when evaluating responses to RFPs, and even when comparing cars to replace your current vehicle.

Note: This is the first in a two-part series of articles describing the Weighted Scoring Model, a.k.a. the Weighted Scoring Method – a technique under the category of Multi-Criteria Decision Analysis. The second article in the series, Plugging the Holes in the Weighted Scoring Method, describes some of the shortcomings or “holes” of this method.

The Weighted Scoring Method

Weighted Scoring is a technique for putting a semblance of objectivity into a subjective process. Using a consistent list of criteria, weighted according to the importance or priority of the criteria to the organization, a comparison of similar “solutions” or options can be completed. If numerical values are assigned to the criteria priorities and the ability of the product to meet a specific criterion, a “weighted” value can be derived. By summing the weighted values, the product most closely meeting the criteria can be determined.

Ok, that sounds a bit confusing, so let’s make it simpler.

When choosing between Product A, B, or C, which product most closely matches your needs? Most people and organizations simply guess. “This one just seems to be the best.” “This is the number one product out there, so it must be good.” “But my brother-in-law sells this one and he tells me it is really great.” There is no objectivity, no way to tell what is fact and what is fiction.

The Weighted Scoring Method can be used when selecting projects, or in any situation where we must compare one item to another.

For example, when purchasing a new car, how do you pick the one you want? You might make a list of items the car must definitely have to be considered. Then you write down additional options you’d like to have. And you leave a few spaces to note features one car has that the others don’t.

After trips to the various dealers, you tally up the list of matches and buy the car that best meets the list. While you might not be this formal, you do it mentally. You are simply weighting some features and functions of the car as more important than others, and if a car does not meet one of those important criteria, it is thrown out of the running.

When selecting technology for your organization or projects in your portfolio, there is usually far more money at stake and a far greater impact on the business. A wrong decision can have dire consequences.

Real-Life Example

Here’s an example that shows how powerful this approach can be.

We were called by a potential client who wanted to select some technology for their company. They had formed a task force from three different areas of the company. For the past 18 months, each area had a different idea of the right technology to select. None were willing to compromise. They had actually become hostile toward one another.

Our job was to get this task force to consensus on a solution within two days. If we were successful, we got paid. If we failed, there would be no check in the mail.

Using a weighted scoring model and our prepared requirements list, we had the client review and weight the requirements’ priorities. Using the products selected by the three factions, we compared the functionality of each to the requirements and gave them a score. We multiplied the priority with the score to compute the weighted value, summed the weighted values and determined which product best fit their organization.

After careful analysis of the results, each member of the task force agreed we had our selection and to move forward with the implementation. Total time to consensus: 1.5 days. We collected our check.

The Method

The Weighted Scoring Method is best done in a spreadsheet where the requirements can be listed, a priority entered, and the products to be compared recorded.

Here’s a screen shot of a typical spreadsheet format:

[Image: Simple Weighted Scoring Sheet – a simplified weighted scoring template]

Column A: Requirement Number – uniquely identifies each requirement. Numbering can be strictly numeric or alphanumeric to indicate the requirement’s business unit, initiative classification, budget account, etc.

Column B: Requirement – a short “title” which describes the need to be met. A simple title makes it easier to discuss the requirement rather than having to recite the entire description.

Column C: Requirement Description – The description should be detailed enough that others can understand the requirement. It may be tied to more complete descriptions in other documents via the requirement number.

Column D: Requirement Category – Good business analysts will create a Requirements Breakdown Structure which breaks the requirements into categories. The categories help those involved in the requirements definition and elicitation sessions consider various aspects of the system being discussed.

Column E: Requirement Priority – The value assigned by the team to the requirement. Priority values are the backbone of this method. Requirements should align with the company’s objectives and goals, match key stakeholders’ expectations and requests, and be prioritized and agreed to by the team of key stakeholders. Priority values can be either numeric (cardinal scale) or descriptive (ordinal scale). Let’s describe both formats for clarity.

Numeric or Cardinal Priority Scale

Priority values can be numeric. Based upon practical experience, values of 0, 1, 3, 5 work best.

  • 0 (zero) means the requirement or criterion does not apply to this particular study or project. Philosophically, why would the list contain a requirement rated zero? Once built, the template used for product, project, or technology selection can be re-used for many purposes, and the standard set of criteria may contain some entries that are not applicable to this particular application of the weighted scoring method. If the requirement list is standardized, removing a requirement rather than rating it zero may generate questions later about why it is missing. The 0 (zero) documents that it was purposely excluded from the rating.
  • 1 means the requirement is of low importance at this time.
  • 3 means the requirement is important and should be met.
  • 5 means the requirement is essential.

We have seen implementations of this methodology using scales of 0–10. If too many choices are given, the prioritization will bog down into bickering between one value and the next. Fewer choices produce faster, and just as accurate, results.

Descriptive or Ordinal Priority Scale

Over the years, we have grown fond of a descriptive scale. With the numeric scale, we had to remind the team repeatedly that 5 was the highest value and 1 the lowest. Using a descriptive scale, we eliminate the confusion. Since we use Microsoft Excel to implement the rating system, we can easily translate each descriptive value to a numerical one so the scoring can be calculated (a small sketch of this translation follows the list). Here is the scale we like to use, with the corresponding numerical values:

  • Not Applicable (0): the requirement or criterion does not apply to this study.
  • Nice to Have (1): the requirement is lower in need and is nice to have. If the solution or option being scored contains it, it is a bonus.
  • Important (3): the requirement is important; the solution or option should contain it. If the option does not, it will impact the score.
  • Essential (5): the solution must contain this requirement. If the solution or option doesn’t at least meet the requirement, the gap must be highlighted, discussed, and resolved for the option to be considered viable.
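
As mentioned above, the descriptive labels translate directly to numbers behind the scenes. Here is a minimal sketch of that translation in Python rather than Excel; the dictionary name is ours, not part of the method:

```python
# Translate the descriptive (ordinal) priority labels above
# into their numeric (cardinal) values used for calculation.
PRIORITY_VALUES = {
    "Not Applicable": 0,
    "Nice to Have": 1,
    "Important": 3,
    "Essential": 5,
}

print(PRIORITY_VALUES["Essential"])  # 5
```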

Listing The Options To Be Compared

The template uses two columns for each option compared. The first column holds the “raw score,” which is based upon the team’s judgment of how closely the option meets the criterion listed in Column B. The second column under an option stores the weighted score, which is the priority value multiplied by the raw score. We use multiplication rather than addition because it provides better differentiation between requirements met and requirements not met.
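
To see why multiplication differentiates better, consider a hypothetical pair of numbers: an Essential requirement (priority 5) that an option meets (raw score 4) yields 5 × 4 = 20 under multiplication but only 5 + 4 = 9 under addition, while a Nice to Have requirement (priority 1) met the same way yields 1 × 4 = 4 versus 1 + 4 = 5. Multiplication amplifies performance on high-priority requirements; addition flattens it.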

Column F: Option 1 Raw Score – the first column for Project A, Product A, or whatever is being compared. The product or project is scored against the requirement independently of the priority. Does the item meet or exceed the requirement, or does the requirement not even show up on the item’s radar? The score can be either numeric or descriptive.

The scoring values are No Support (0), Supports Criterion (2), Meets Criterion (4), and Exceeds Criterion (6).

  • No Support (0) means the item doesn’t include the requirement at all.
  • Supports Criterion (2) means the item may meet some of the requirement, but not all of it.
  • Meets Criterion (4) means it meets the requirement.
  • Exceeds Criterion (6) means the item exceeds the requirement.

Again, we limit the number of choices to speed the rating process. We use the values 0, 2, 4, and 6 so people don’t confuse them with the priorities. Yes, it happens, so to avoid the confusion it is simplest to use different values; the final results are the same. Better still, use the descriptive (ordinal) terminology to avoid confusion altogether. The numerical value is hidden underneath and is used to calculate the next column, the weighted score.

Column G: Option 1 Weighted Score – multiplies the requirement or criterion priority by the raw score, yielding the weighted score.
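
For example, an Important requirement (priority 3) that an option exceeds (raw score 6) contributes 3 × 6 = 18 to that option’s total, while the same requirement with No Support (0) contributes nothing.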

Conduct the scoring for each requirement as a team and let the spreadsheet calculate the weighted score. When all requirements have been considered for each candidate, simply add up the weighted scores for each item compared. Again, the spreadsheet does this easily. The product or project with the highest total most closely matches the prioritized requirements.
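
For readers who prefer code to a spreadsheet, here is a minimal, self-contained sketch of the same calculation. The requirements, priorities, and raw scores below are invented for illustration; only the two scales come from the article:

```python
# Minimal sketch of the weighted scoring calculation.
# The requirements, priorities, and raw scores are hypothetical;
# the two scales match the ones described in the article.

PRIORITY = {"Not Applicable": 0, "Nice to Have": 1, "Important": 3, "Essential": 5}
SCORE = {"No Support": 0, "Supports Criterion": 2,
         "Meets Criterion": 4, "Exceeds Criterion": 6}

# Each requirement carries a descriptive priority (Column E).
requirements = {
    "Data export": "Essential",
    "Mobile access": "Important",
    "Custom themes": "Nice to Have",
}

# Each option carries a descriptive raw score per requirement (Column F).
options = {
    "Product A": {"Data export": "Meets Criterion",
                  "Mobile access": "Supports Criterion",
                  "Custom themes": "Exceeds Criterion"},
    "Product B": {"Data export": "Exceeds Criterion",
                  "Mobile access": "Meets Criterion",
                  "Custom themes": "No Support"},
}

def weighted_total(raw_scores):
    """Sum of priority x raw score across all requirements (Column G, totaled)."""
    return sum(PRIORITY[requirements[req]] * SCORE[label]
               for req, label in raw_scores.items())

for name, raw_scores in options.items():
    print(name, weighted_total(raw_scores))
# Product A: 5*4 + 3*2 + 1*6 = 32
# Product B: 5*6 + 3*4 + 1*0 = 42  <- best fit
```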

Understanding the Result

Once all the options (projects, products, services, decision points, etc.) are compared, the spreadsheet sums the weighted scores. The option with the highest weighted score is the one that most closely fits the requirements. At the same time, we compare that value against the other options’ weighted scores. Options whose weighted scores are close to the highest remain candidates for consideration; options with low scores can be discarded from further consideration. If this technique is used inside a PMO to determine which projects should be implemented, then a score threshold needs to be defined, and projects with a weighted score above that threshold should be implemented.
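
A tiny sketch of that threshold rule, with an invented cut-off and invented project totals:

```python
# Hypothetical PMO portfolio filter: implement projects whose
# weighted score clears a defined threshold.
THRESHOLD = 30  # assumed cut-off; each PMO must choose its own
project_scores = {"Project A": 42, "Project B": 18, "Project C": 35}

approved = [name for name, score in project_scores.items() if score > THRESHOLD]
print(approved)  # ['Project A', 'Project C']
```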

When interpreting the results, the difference between weighted score values is called the “velocity.” The larger the difference, the greater the velocity (high velocity) with which one option outpaces the others in meeting the list of prioritized requirements. If the difference is small (low velocity), the options are essentially the same, and further study is needed to differentiate them.

We say to look less at the actual scores and more at the velocity of each score compared to the others.
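
A minimal sketch of that comparison, with invented totals that show one high-velocity gap and one low-velocity gap:

```python
# Hypothetical weighted totals for three options.
totals = {"Option A": 48, "Option B": 45, "Option C": 20}

best = max(totals, key=totals.get)
for name, total in totals.items():
    if name != best:
        print(f"{best} vs {name}: velocity = {totals[best] - total}")
# Option A vs Option B: velocity = 3   (low: study further)
# Option A vs Option C: velocity = 28  (high: C can be discarded)
```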

To better understand the velocity of the scores, here is an example we used to help a team decide among three options. The project was a software development project that needed to implement two portions of the software. Management preferred that we release the two portions in the same year. The developers, because the decision was dragging and the requirements for the second portion were not defined, preferred releasing the defined portion that year and the other portion, after further definition, early the next year (thus the values of 2016 and 2017 in the picture below).

We did a study based upon several factors:

  • Impact on Development
  • Impact on Testing
  • Impact on Risks

We developed three options:

  • Option 1: release both portions in 2016
  • Option 2: release one portion in 2016 and the second in early 2017
  • Option 3: release both portions in 2016 with a back-out plan if negative risks impacted development or test.

In the picture below, the first set of columns shows the criteria and the best rating for each criterion, along with the Confidence score (the weighted score). The three options are listed with their ratings (raw scores) and respective weighted scores. As can be seen, the maximum score an option could obtain is 55. The second option, releasing the portions separately, scored the highest at 49, while the other two options scored 17 and 23, respectively. The velocity between the high scorer and the other two options clearly shows which option to select.
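
Working the arithmetic from those totals: Option 2 leads Option 3 by 49 − 23 = 26 points and Option 1 by 49 − 17 = 32 points. Against a maximum possible score of 55, those are high-velocity gaps, so no further study was needed to separate the options.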

After discussing the method of analysis and scoring for each option, the team concluded splitting the two portions was the wiser way to implement. For this example, that would seem to be the obvious choice, but until we had conclusive evidence to that fact, management was not willing to entertain any option other than releasing both in 2016. Showing the data won the day, and the weighted scoring method was the determining factor.

Caution: After picking the top scorers, review their scores against the requirements. The top scorer may have some low or zero scores against high-priority items. A zero against a high-priority requirement doesn’t necessarily knock the winner out of consideration, but knowing it doesn’t meet or exceed a requirement is crucial for future planning. Since the deficiency is known, work-arounds and other approaches can be taken or planned proactively, rather than reactively when the gap is discovered later.

See our article, Plugging the Holes in the Weighted Scoring Model, to learn about the deficiencies of the Weighted Scoring Model and how to overcome them.

Conclusion

The Weighted Scoring Method is a powerful but flexible method of comparing similar items against a standard, prioritized list of requirements or criteria. We’ve used this method in less formal ways when buying personal items without even recognizing it.

It provides a level of objectivity in matters where subjectivity can have major negative consequences. It can be used for technology, project, and product selection, risk response analysis, and solution design. The method described here has been proven in real-world scenarios and is structured to use participants’ time efficiently.

In the end, the choice is yours. The weighted scoring model is simply a tool, a technique to help guide your decision making.

Plug the holes in this model for better decision making.
