Healthy Skepticism

Healthy Skepticism Library item: 17622

Warning: This library includes all items relevant to health product marketing that we are aware of regardless of quality. Often we do not agree with all or part of the contents.

 

Publication type: news

Roner L
Demonstrating the impact of training and development programs on SFE
Eyeforpharma.com 2009 Aug 9
http://social.eyeforpharma.com/story/demonstrating-impact-training-and-development-programs-sfe


Full text:

Surprisingly, only three to 10 percent of companies actually evaluate the impact of their training and development programs on sales force effectiveness, says Nick Pope, global director of learning and sales force training at Bausch & Lomb. But now, more than ever, it’s important to demonstrate the value of anything companies are spending significant chunks of money on, he told attendees at eyeforpharma’s recent Sales Force Effectiveness USA 2009 conference in Princeton.

So how can and should pharmas be evaluating their training and development programs?

“We often know intuitively that things we do make a difference,” says Pope. “We see the change we bring about in people with training. We know we’re making a difference, but we can’t prove it.”

But Pope says that with companies spending an average of more than $7 million annually on training, it’s not unreasonable for the spotlight to be on exactly how effective such programs are.

Stumbling blocks

“Seven million dollars buys 46 new reps or gives a $552 bonus to every employee in the average sized company,” Pope says. “And especially in a climate where merit increases are being frozen, that’s a lot of money.”
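Pope’s comparison figures can be sanity-checked with simple division; in the sketch below, the implied cost per rep and company headcount are back-solved from his numbers, not stated in the article:

```python
# Back-solving Pope's comparison figures (assumed: the $7M budget is
# divided evenly; per-rep cost and headcount are implied, not stated).
TRAINING_BUDGET = 7_000_000  # average annual training spend, per Pope

cost_per_rep = TRAINING_BUDGET / 46   # implied fully loaded cost per new rep
headcount = TRAINING_BUDGET / 552     # implied staff at the "average" company

print(f"Implied cost per rep: ${cost_per_rep:,.0f}")   # ~$152,174
print(f"Implied headcount: {headcount:,.0f}")          # ~12,681
```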

But companies that want to measure the effectiveness of their programs face a series of obstacles, including:

Time – Pope says that by the time a program is finished, training teams are well on their way to delivering the next one, making it difficult to time assessments
Methodology – With no single, reliable methodology, program designers struggle to find a consistent way to evaluate training approaches
No request – More often than not, Pope says, managers don’t ask for program evaluation and so there’s no top level support for conducting such analysis
Attribution of results – Pope says it’s often difficult to prove a link between results and the training that’s been given
Fear of results – Because training teams and managers are often unsure of the impact of training programs, they hesitate to find out
But Pope says if those making training budget decisions in pharma would begin to treat their expenditures as if they were pension investments or decisions about their children’s college funds, they would make better decisions.

Tips and tricks for assessing training programs

In practical terms, Pope suggests that companies begin by categorizing their training into two types. The first is training on required skills, such as initial training, product training, foundation selling skills and coaching sales performance skills. These are areas that are quite basic in nature, Pope says, and for which there are proven impacts of training on results.

“This kind of training is, quite simply, a transfer of learning,” he says. “And because there’s a ton of data out there that provides proof of concept, this is the kind of training for which we probably don’t need or want to measure results. For instance, there’s already a proven relationship between how much time a manager spends in the field and the average sales versus target of his reps. With these programs, you want to confirm learning transfer, but not evaluate the effectiveness.”

Learning transfer with Type 1 programs can be confirmed with what Pope calls “happy sheets” – post-course surveys of participants – and end-of-course skills assessments. At Bausch & Lomb, he says, the company always does line manager pre- and post-course briefings for senior programs, since the role of the line manager is proven to be the single most important factor in influencing training transfer.

“Eighty-three percent of programs will fail to demonstrate ROI if the line manager isn’t briefed before and after the program,” Pope says.

Tips for Type 1 evaluations, he says, include:

Timing – Pope suggests waiting a week to ask participants to complete “happy sheets” to avoid the desire to just tick off what they think you want to hear in order to make a quick exit
Scales – Using a 1-4 assessment scale eliminates any “middle ground” and results in a truer assessment of the real impact of programs
Question types – Pope suggests asking participants to rate their skills before and after the course to get an idea of the baseline starting point
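The pre- and post-course self-rating comparison Pope describes amounts to a small calculation; a minimal sketch, with invented skill ratings for illustration and a forced-choice 1–4 scale as he recommends:

```python
# Sketch of Pope's Type 1 evaluation tips: a forced-choice 1-4 scale
# (no midpoint) with pre-/post-course self-ratings per skill.
# The participant ratings below are invented for illustration.

def mean_shift(pre, post):
    """Average rating change across participants for one skill."""
    assert all(1 <= r <= 4 for r in pre + post), "scale is 1-4, no middle ground"
    return sum(post) / len(post) - sum(pre) / len(pre)

# One hypothetical skill, five participants:
pre_course = [2, 1, 2, 3, 2]
post_course = [3, 3, 3, 4, 3]
print(f"Mean shift: {mean_shift(pre_course, post_course):+.1f}")  # +1.2
```

Asking for both ratings after the course, as Pope suggests, avoids the baseline being inflated by pre-course optimism.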
The second type of training, however, is value-added in nature and is used to teach things such as negotiation, advanced selling, category management and leadership skills. To fully assess the impacts of such programs, it is necessary to examine return on investment – or, better yet, Pope says, return on expectation.

Assessing return on expectations is advantageous because it:

Doesn’t usually require a complex calculation
Forces pharmas to set a baseline of activity in advance of the training
Focuses the training objectives and design
Provides greater flexibility to incorporate qualitative measures
But Pope says that to measure and record return on expectation, stakeholders must not only measure the baseline before training but also agree on a minimum threshold of return and a reasonable time frame.

“You must all understand and agree that no method of assessing the impact of Type 2 programs is ‘perfect’ and instead you must look for trends in the desired direction,” Pope says.

And when calculating the impact, pharmas must consider both the benefit and the confidence level – for example, noting $75,000 of extra business that is 80% likely due to the training.
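In effect, the claimed benefit is discounted by the probability that the training caused it – an expected-value calculation. A one-line sketch using the figures from the text:

```python
# Discount a claimed benefit by the confidence that training caused it.
def expected_benefit(benefit, confidence):
    """Benefit in dollars; confidence as a probability between 0 and 1."""
    return benefit * confidence

# The example in the text: $75,000 extra business, 80% likely due to training.
print(expected_benefit(75_000, 0.80))  # 60000.0
```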

Practical magic

Pope points to an instance in which a group at Bausch & Lomb sought to enhance the negotiation skills of key account managers. The team had two expectations: increased profit margin and reduced payment terms. With £12,000 invested in the program, they expected a £48,000 return in six months. Instead, the group saw an actual £211,000 increase that could be attributed to the training.
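The article does not show the ROI arithmetic behind this case study; a sketch using the reported numbers and the standard (return - cost) / cost formula, which is an assumption on our part (Pope may net the figures differently):

```python
# Return-on-expectation figures for the negotiation-skills case study.
# Numbers are as reported in the text; the ROI formula is the standard
# (return - cost) / cost, which the article does not state explicitly.
invested = 12_000   # GBP spent on the program
expected = 48_000   # GBP return agreed as the six-month threshold
actual = 211_000    # GBP increase attributed to the training

expected_roi = (expected - invested) / invested   # 3.0, i.e. 300%
actual_roi = (actual - invested) / invested       # ~16.6, i.e. ~1,658%
print(f"Expected ROI: {expected_roi:.0%}, actual ROI: {actual_roi:.0%}")
```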

“Training budgets are under pressure and what our internal customers expect for their investment is growing,” Pope says. “So understanding and being able to demonstrate the impact of training and development programs is more crucial than ever.”

 

Cases of wilful misrepresentation are a rarity in medical advertising. For every advertisement in which nonexistent doctors are called on to testify or deliberately irrelevant references are bunched up in [fine print], you will find a hundred or more whose greatest offenses are unquestioning enthusiasm and the skill to communicate it.

The best defence the physician can muster against this kind of advertising is a healthy skepticism and a willingness, not always apparent in the past, to do his homework. He must cultivate a flair for spotting the logical loophole, the invalid clinical trial, the unreliable or meaningless testimonial, the unneeded improvement and the unlikely claim. Above all, he must develop greater resistance to the lure of the fashionable and the new.
- Pierre R. Garai (advertising executive) 1963