Different languages have different sayings for the attempt to compare incomparable items. English and French refer to the comparison of apples and oranges, while Dutch and German refer to the more similar apples and pears. Spanish stays in the fruit and vegetable department by comparing potatoes and sweet potatoes, Serbian compares grandmothers and toads (?!), and Welsh emphasizes the difference between honey and butter. This shows that there are many ways to make the same point. And none of them seems to have been coined by sourcing consultants. Had that been the case, the saying would probably have been ‘comparing quality with price’.
To me that would be an eloquent way of stating the message, although admittedly the clarity of the statement would be somewhat compromised for those without any background in tender processes.
A key prerequisite in a tender is the ability to compare
A significant challenge in any tender process is to come up with an objective framework for rating the proposals of different vendors. This is especially true in public tenders, since EU legislation dictates that the rating, weighting and comparing of proposals have to be objective and transparent, and to some extent have to be communicated upfront to all participants. This legal requirement has created an almost scientific approach towards a consistent rating methodology that can stand up in court. Although quite complex in its elaboration, the main principles are simple: agree on minimum requirements, appreciate only what has true business value and allow yourself to be surprised in the process.
For each question in a Request for Proposal, a list of topics is created that can earn points for a vendor when incorporated in its response. As long as you only list and reward the topics that add real value to your specific requirements, you will end up with the vendors ranked in the order in which their offers will support your business. A lot of the topics you expect in a supplier response can (and in an EU tender should) be pre-listed, but you should also keep some ‘spare points’ in reserve for unexpected strengths. For instance, on a 10-point scale, 8 points are for a vendor that gives the right answer; the remaining 2 are for topics in an answer you didn’t predict, but only if these positive surprises add real value.
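The scoring rule above can be sketched in a few lines of Python. This is a minimal illustration, not a prescribed model: the function name, the proportional scoring of pre-listed topics, and the example figures are all assumptions layered on top of the 8-plus-2 split described in the text.

```python
def score_answer(listed_hits, listed_total, surprise_points, scale=10, spare=2):
    """Score one RFP answer on a `scale`-point scale.

    listed_hits     -- number of pre-listed topics the vendor addressed
    listed_total    -- number of pre-listed topics for this question
    surprise_points -- points awarded for unexpected, genuinely valuable topics
    """
    base_max = scale - spare                      # e.g. 8 of 10 points
    base = base_max * listed_hits / listed_total  # proportional to topics covered
    surprise = min(surprise_points, spare)        # never exceed the spare budget
    return base + surprise

# A vendor covers 6 of 8 expected topics and earns 1 surprise point:
print(score_answer(6, 8, 1))  # 7.0
```

Capping the surprise points keeps the ‘spare’ budget honest: an answer full of pleasant but marginal extras cannot outscore an answer that actually addresses the pre-listed requirements.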
When adding up the individual ratings for all replies, weighting factors become important. It is good practice to first determine a number of points per category before attributing these amounts to individual questions in that area. That way you avoid putting too much weight on a topic just because you have a high number of questions on it.
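As a quick sketch of this practice: fix a point budget per category first, then divide it over the questions within the category. The category names, budgets and question counts below are hypothetical.

```python
# Point budget per category is decided first (hypothetical figures).
category_budget = {"functionality": 400, "implementation": 200, "support": 100}
# Number of RFP questions within each category.
questions_per_category = {"functionality": 20, "implementation": 4, "support": 2}

# Points available per individual question: the budget is divided, so a
# category with many questions does not automatically dominate the total.
points_per_question = {
    cat: category_budget[cat] / questions_per_category[cat]
    for cat in category_budget
}
print(points_per_question)
# {'functionality': 20.0, 'implementation': 50.0, 'support': 50.0}
```

Note how ‘functionality’ carries the largest overall weight yet each of its many questions is worth less than a single ‘support’ question; weighting by category budget rather than by question count is exactly the point.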
But some stuff is just incomparable
The real challenge is in assessing both quality and price in one evaluation model. Arithmetically the easiest solution is to divide the total number of points for quality by the total price and compare the quality/price ratios of the different vendors, where the highest ratio wins. A prerequisite for this approach is that all essential requirements are listed as knock-out criteria (so that meeting them does not discriminate between vendors) and that additional points for quality represent real added value.
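The ratio comparison can be sketched as follows, assuming every vendor in the comparison has already passed the knock-out criteria. Vendor names and figures are made up for illustration.

```python
# Hypothetical bids: total price and total quality points per vendor.
bids = {
    "vendor_a": {"price": 500_000, "quality_points": 80},
    "vendor_b": {"price": 400_000, "quality_points": 60},
}

# Quality per euro: the higher the ratio, the more value for money.
ratios = {name: b["quality_points"] / b["price"] for name, b in bids.items()}
winner = max(ratios, key=ratios.get)
print(winner)  # vendor_a (80/500000 = 0.00016 beats 60/400000 = 0.00015)
```

The example also shows why this method is crude: vendor_a wins despite being 25% more expensive, purely because its extra quality points outpace its extra price, which is only defensible if those points truly represent added value.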
A less crude but slightly more complex method is to translate prices into points by plotting the most expensive offer on the low end of a scale and the least expensive offer on the high end of that same scale. This way a relative difference in price can be translated into a distinct number of points for each vendor, which allows for the calculation of a weighted average of price and quality, all expressed in the same units (i.e. points).
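A minimal sketch of this translation, assuming linear interpolation between the cheapest and most expensive offer and a hypothetical 60/40 quality/price weighting (the scale, the figures and the weighting are all illustrative choices, not part of the method itself):

```python
def price_to_points(price, prices, low=0, high=100):
    """Map a price onto a point scale: cheapest offer -> high, dearest -> low."""
    cheapest, dearest = min(prices), max(prices)
    if cheapest == dearest:  # all offers cost the same
        return high
    # Linear interpolation between the two ends of the scale.
    return low + (high - low) * (dearest - price) / (dearest - cheapest)

prices = [400_000, 500_000, 450_000]   # hypothetical total prices
quality = [60, 80, 75]                 # quality scores on the same 100-point scale

# Weighted average of quality and price, both now expressed in points.
for p, q in zip(prices, quality):
    total = 0.6 * q + 0.4 * price_to_points(p, prices)
    print(f"price {p}: weighted total {total:.1f}")
```

Because price is now just another point score, the weighting between quality and price becomes an explicit, upfront choice rather than an accident of the arithmetic.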
While the above principles are simple, the devil is in the detail. Getting the weighting factors right for all components of a vendor proposition will require some creativity. Drafting a pricing and evaluation model that cannot be tricked by attempts to manipulate the outcome of a selection process is certainly an art form. So be sure you don’t try this at home without proper guidance from an experienced consultant in this field!