Evaluating bids from multiple vendors can be a daunting process unless you plan your strategy carefully. A streamlined evaluation methodology developed by the Radiology Consulting Group (RCG) can be a real time-saver and help ensure that your decision is the best one.
Choosing a PACS vendor is a multifaceted decision that must incorporate the perceived goals, objectives, and constraints of the stakeholders. Sifting through tomes of vendor responses to Requests for Proposals (RFPs) to determine which vendors' attributes should influence the decision is a formidable task, and organizing the results into a concise framework that lets the decision makers quickly compare and score each vendor across key attributes can be equally challenging. A streamlined method for modeling the vendor selection process addresses both problems.
CONTROLLING THE RFP RESPONSE
Typically, vendors respond to RFPs with a two- to three-inch binder of information, usually accompanied by a CD intended to make the process less painful. The problem with most vendor responses is that they tend to be standardized and may not be in a format that facilitates comparison across vendors. Many times, a vendor will respond to a requirement by simply referring to a section of the RFP response that contains a preprinted slick describing some aspect of the product or service (e.g., training) rather than addressing your specific requirements.
Unless you require the vendor to respond to a specific question in a specific format, you will receive as many different formats and interpretations of your requirements as the number of RFPs you sent out. Be specific. For example, a series of training requirements that specifies the number of days of onsite training and the type of training (e.g., train the trainer, classroom-based) will elicit a response that can be directly compared with your criteria and with other vendors' responses.
In addition to the full RFP response, the functional and technical requirements that matter most to your decision makers should be included in a set of summary response templates that each vendor is required to complete. Each template requires the vendor to answer questions about one attribute in a specific format and should be limited to the four or five questions that most influence that attribute.
Typical attributes include but are not limited to pricing, training, support, vendor viability, client base, architecture, and technical parameters. For example, a pricing attribute template may solicit specific, comparable line item pricing for each of the PACS hardware elements. In addition, there may be a specific line item for support and service.
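As a minimal sketch of what a summary response template might capture (the line items and field names here are hypothetical, not a prescribed structure):

```python
# Hypothetical pricing-attribute template: every vendor fills in the same
# line items in the same units, so responses compare directly.
pricing_template = {
    "workstations_total_usd": None,  # line-item price for PACS workstations
    "archive_total_usd": None,       # line-item price for the image archive
    "network_total_usd": None,       # line-item price for network hardware
    "annual_support_usd": None,      # separate line item for support and service
}
```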
THE EVALUATION
The first step in assimilating the RFP results is to incorporate the responses to the summary templates into a decision matrix. A single worksheet per attribute works well, with vendors listed along the vertical axis and the attribute's questions along the horizontal axis. Each cell in the matrix is then populated with the vendor's response to the corresponding question.
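As a minimal sketch (the vendor names and responses are hypothetical), one attribute's worksheet can be modeled as one row per vendor and one column per question:

```python
# One worksheet per attribute: vendors down the rows, the attribute's
# questions across the columns, cells holding each vendor's response.
training_matrix = {
    "Vendor A": {"onsite_days": 10, "materials": "manuals + CD", "training_type": "train the trainer"},
    "Vendor B": {"onsite_days": 5,  "materials": "manuals",      "training_type": "classroom-based"},
}
```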
The matrices can then be transformed into a Multi-Attribute Utility (MAU) model. The steps for developing the model are outlined below:
1. Assign a relative weight to each attribute. Gain consensus on which attribute is most important. Assign a weight of 10 to the most important attribute.
2. Facilitate the discussion by asking the decision makers to determine the second most important attribute. Ask the participants how important the attribute is relative to the most important attribute. For example, if the second attribute is half as important as the first, assign it a weight of 5. Complete the exercise for the remaining attributes.
3. Weight each of the questions that define an attribute. Allocate a total of 100 points across the attribute's questions. For example, if the training attribute has three questions - number of days onsite, training materials, and type of training - the assigned weights might be 50, 20, and 30, respectively. If an attribute is based on two questions, the weighting might be 70 and 30. In either case, the total weight for an attribute's questions must equal 100, as in the sketch following this list.
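A minimal sketch of the resulting weighting structure (the attribute names and the support and training weights are illustrative; the question weights mirror the training example above):

```python
# Steps 1-2: relative attribute weights; the most important attribute gets 10,
# and the others are weighted relative to it.
attribute_weights = {"pricing": 10, "support": 5, "training": 4}

# Step 3: question weights within each attribute must total 100.
question_weights = {
    "training": {"onsite_days": 50, "materials": 20, "training_type": 30},
    # ... one entry per attribute
}

# Sanity check: every attribute's question weights sum to 100.
for attr, weights in question_weights.items():
    assert sum(weights.values()) == 100, f"{attr} weights must total 100"
```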
SCORING THE MODEL
With the model constructed, the decision makers can be brought together to begin the scoring process. First, obtain consensus on the assigned weights; refinements to the initial weighting scheme might be necessary. Once the model has been finalized, each decision maker is polled for a score on each question within each attribute.
The scoring scale should range from 1 to 5, with 5 being the highest score. The process continues until there is agreement on the score for each question within each attribute. For certain questions, it may be useful to define scores in advance based on predefined response ranges. For example, a vendor whose total price is under $2,000,000 might receive a 5, whereas a vendor over $2,000,000 might receive a 3.
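Such predefined ranges can be captured in a small scoring rule. A minimal sketch, reusing the $2,000,000 threshold from the example above (the function name and band structure are otherwise illustrative):

```python
def price_score(total_price_usd: float) -> int:
    """Map a vendor's total price to a 1-5 score using predefined ranges."""
    return 5 if total_price_usd < 2_000_000 else 3

print(price_score(1_850_000))  # -> 5
print(price_score(2_400_000))  # -> 3
```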
Scoring for each question may be based on consensus or on the average score of the respondents. Results are calculated by multiplying each question's score by its question weight and summing the products to yield a total for the attribute. That total is then multiplied by the attribute's relative weight, and the process is repeated for each of the attributes. Finally, the weighted scores for all attributes are totaled for each vendor. The vendor(s) with the highest scores typically proceed to the next round: vendor site visits. It is useful to keep two vendors in the running throughout the contract process.
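Putting the pieces together, here is a minimal sketch of the roll-up under the weighting structure sketched earlier (the scores shown are illustrative; only the relative totals across vendors matter):

```python
def vendor_total(scores, question_weights, attribute_weights):
    """Roll 1-5 question scores up into a single weighted vendor total."""
    total = 0.0
    for attr, attr_weight in attribute_weights.items():
        # Weighted sum of the attribute's question scores.
        attr_score = sum(
            scores[attr][q] * w for q, w in question_weights[attr].items()
        )
        # Scale the attribute total by its relative weight.
        total += attr_score * attr_weight
    return total

# Example: one vendor's consensus scores for the training attribute alone.
scores = {"training": {"onsite_days": 4, "materials": 3, "training_type": 5}}
weights = {"training": {"onsite_days": 50, "materials": 20, "training_type": 30}}
print(vendor_total(scores, weights, {"training": 4}))  # -> 1640.0
```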
CONCLUSION
Developing a decision model for selecting a PACS vendor can simplify the decision process and rationally account for the specific biases that each organization has regarding the decision criteria that matter most. Creating standardized and specific response templates that obligate each vendor to respond to the questions in the same format and at the same level of aggregation allows the responses to be compared with little or no ambiguity.
At RCG, we recommend that a maximum of four vendors be included in the selection process. After the model has been scored, the top two vendors are contacted for site visits. Typically, the model yields few surprises and validates the decision makers' gut feelings. When the outcome does hold surprises, however, a second iteration of the weighting and scoring may be needed. The modeling process described here keeps all decision makers literally on the same page throughout the decision.
Mr. Levine is senior business systems consultant for The Radiology Consulting Group and senior project manager in informatics in the radiology department at Massachusetts General Hospital. He can be reached by e-mail at llevine@partners.org.