An important part of any system selection process is the vendor demonstration. This is a pivotal moment, when the dry responses to the RFP become something tangible and the staff can begin to visualize themselves using the system in their daily work. Selection team members walk out of a demonstration with their preconceptions turned into expectations of what the product can or cannot do, and what benefits it may bring to the organization. These impressions stick with the audience; it is hard to move someone away from what they’ve seen or heard during a demonstration.
I’ve always considered the demonstration, along with the setup and coordination activities around it, to be where I earn most of my fee for managing a selection process. It is important not to view the demonstration as a one-off meeting or standalone activity, but as integral to the overall selection process: it uses information already collected and feeds the next steps, as well as the final decision.
Steps prior to the demonstration include defining and prioritizing the business requirements, creating a potential product list, developing and distributing an RFP, and assessing the vendor responses. That assessment should narrow the field to the two to four vendors that best meet your baseline requirements and are most worthy of being invited in for a demonstration.
Recommended activities surrounding the demonstration include:
Schedule: I try to group the demonstrations within a one- to two-week period, without significant gaps between sessions. This is rough on the individual calendars of those attending, but worth it to keep the purpose, critical requirements, and comparisons top of mind throughout.
Agenda: Using the most critical requirements identified previously, the agenda is set to walk through all key aspects of the functionality, with added focus on any areas of particular concern to the selection committee. The agenda is also structured to let users manage their time, so they are present only when the demo covers their functional areas, without tying them up for the full session. Importantly, a well-thought-out agenda ensures the vendor spends adequate time on all the aspects of the system the team is interested in, with little opportunity to gloss over areas of weakness.
Scorecard: Every attendee should complete a scorecard for the parts of the demo they participated in. The scorecards must be completed before the participant leaves the room; thoughts quickly blur between systems, and other priorities pull attention and time away from finishing them later. The scorecard is never overly long, but serves to provide a quantitative view of the participant’s impression of specific functionality in the system, and to capture any comments or questions still pending at the end of the demo. To avoid skewing the quantitative results, participants should score only those sections in which they have expertise. Entries on the scorecard are aligned with the agenda for easy following, and are weighted by priority for quantitative comparison across products (see the scoring sketch after this list).
Attendees: I discourage selection team members from looking at the systems early in the process, before their requirements are known and prioritized, to avoid any preset leanings in one direction or another. The size of the group varies with the size of the organization, the breadth of functionality for the system being selected, and the amount of time devoted to the selection. The preference is to keep the participating audience at a manageable size and consistent across all systems being considered. All audience members should be briefed beforehand on how the meeting will run, the agenda, and the scorecard.
Facilitation: The facilitator role is an active one, ensuring the focus remains on the agenda and covers all the topics in the scorecard. Questions may be tabled, conversations cut short (particularly those that serve only a small part of the audience present at that time), and information prompted out of the audience or the vendor. Another role is that of translator and interpreter. It always stuns me how we all say the same things in entirely different ways within and across financial sectors. It is important that the vendor’s presentation is translated into the audience’s terminology whenever possible for maximum appreciation of what is being presented, and equally important to interpret what the vendor says into how the audience members think. The facilitator’s knowledge of the industry, the available products, implementation, maintenance, and so on is leveraged to steer the discussion so that the audience appreciates not only what they are seeing, but also what they will need to contribute for configuration and maintenance, and whether the system has the flexibility to meet their needs in different ways. This leads to a more mutually fulfilling discussion between the vendor and the audience, as everyone ends up on the same page.
Post-Meeting Roundtable: A facilitated session of key audience members should quickly follow each demonstration (to mitigate crossover confusion about which system offered what functionality, or when a particular comment came up). A review of the scorecards should be completed prior to this session, so disparities can be addressed. This meeting is the opportunity to discuss the demo, work through questions raised, and establish a general consensus about where the product stands and that the functionality means similar things to everyone. It is not unknown to find a score of 1 and a score of 5 (on a 1-5 scale) for the same functionality line item on the scorecards of two different participants. There is no expectation that everyone will score things the same, but scores should be in a similar ballpark. Large disparities like this indicate a misunderstanding by one or both team members, and those need to be put on the table for clarification as soon as possible, before perceptions are cemented and expectations set that cannot be met. A simple check, like the one sketched below, makes such disparities easy to surface before the roundtable.
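To make the scorecard mechanics concrete, here is a minimal sketch of the weighted tally, assuming a 1-5 scale and priority weights per agenda line item. All names, weights, and scores below are illustrative, not part of any client's actual scorecard:

```python
# Hypothetical sketch: weighted scorecard tally for one product.
# Each participant's scores are keyed by agenda line item; None means
# the participant lacked expertise in that area and skipped it.
scores = {
    "Pat": {"loan_origination": 4, "reporting": 5, "workflow": None},
    "Sam": {"loan_origination": 2, "reporting": 4, "workflow": 3},
}

# Priority weights mirror the agenda, so higher-priority requirements
# count more in the cross-product comparison.
weights = {"loan_origination": 3, "reporting": 2, "workflow": 1}

def weighted_score(scores, weights):
    """Average each line item across those who scored it, then apply weights."""
    total, weight_sum = 0.0, 0
    for item, weight in weights.items():
        rated = [s[item] for s in scores.values() if s.get(item) is not None]
        if rated:
            total += weight * (sum(rated) / len(rated))
            weight_sum += weight
    return total / weight_sum if weight_sum else 0.0

print(f"Weighted product score: {weighted_score(scores, weights):.2f} out of 5")
```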
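And to surface the 1-versus-5 situations described above before the roundtable, a spread check can run over the same `scores` structure. The two-point threshold is my illustrative choice, not a rule of the process:

```python
# Hypothetical sketch: flag scorecard line items whose scores diverge
# widely between participants, so the roundtable can address them first.
def flag_disparities(scores, threshold=2):
    """Return line items where the max-min score spread meets the threshold."""
    items = {item for participant in scores.values() for item in participant}
    flagged = {}
    for item in items:
        rated = [s[item] for s in scores.values() if s.get(item) is not None]
        if len(rated) > 1 and max(rated) - min(rated) >= threshold:
            flagged[item] = (min(rated), max(rated))
    return flagged

for item, (low, high) in flag_disparities(scores).items():
    print(f"Discuss '{item}' at the roundtable: scores ranged {low} to {high}")
```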
I know of companies that failed to follow one or more of the steps above during their selection process, and the result was typically missed expectations and buyer’s regret. Allowing the vendors free rein in their demonstrations causes confusion when comparing products, as the vendors may approach the discussion from totally disparate functional areas. Lack of a schedule requires a larger investment of time, as people with only a small area of functionality to observe sit in for much longer periods (or the meeting keeps stopping and starting while new people are called in and others leave). Most importantly, what someone hears versus what was intended may be completely different messages that go uncaught prior to a final recommendation. The result: “not getting what you thought you were getting.”
While key to the overall selection process, the demonstration is not the final task. A quantitative comparison of RFP responses and demo results can be used to further reduce the short list of candidates prior to moving into an in-depth due diligence process. Targeted system demonstrations or question-and-answer sessions with the vendor may occur during this period to collect additional information or clarify any points.
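As one illustration of that quantitative comparison, normalized RFP and demo scores can be blended to rank the remaining candidates. The scales and the 60/40 split here are assumptions for the example, not a prescribed formula; the team should agree on the actual weighting before scoring begins:

```python
# Hypothetical sketch: blend normalized RFP and demo scores to rank the
# short list. All vendors, scores, and weights below are illustrative.
candidates = {
    "Vendor A": {"rfp": 82, "demo": 3.5},  # RFP scored 0-100, demo 1-5
    "Vendor B": {"rfp": 74, "demo": 4.2},
}

RFP_WEIGHT, DEMO_WEIGHT = 0.6, 0.4

def combined_score(result):
    """Normalize each score to 0-1, then apply the agreed weights."""
    return RFP_WEIGHT * (result["rfp"] / 100) + DEMO_WEIGHT * (result["demo"] / 5)

for name, result in sorted(candidates.items(),
                           key=lambda kv: combined_score(kv[1]), reverse=True):
    print(f"{name}: {combined_score(result):.3f}")
```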
Once the due diligence is completed, the qualitative and quantitative results are assessed to identify the final recommendation from the selection process.