Some people are pretty bad at describing what they do. I count myself in this category, especially when it comes to translating the meaning and value of research to a broader demographic.
Measuring impact in the academy is baked into several processes: appraisals, evaluations, and performance expectations. Yet the language we use, and the assumptions we make, don’t always serve our individual or collective purposes. Take it from someone who works in the humanities, which needs a rallying cry just to gain others’ attention. It’s hard enough to attract interest internally in the importance of our work; it’s extra hard to disseminate that knowledge to the communities and societies we purport to be influencing and serving.
Having worked in a niche research field for decades, I’ve always considered it unrealistic to expect people to understand or care about my innermost thoughts. We need to show why our work matters, and why anyone should care. Its importance can’t be taken for granted.
Let me offer a classic example. When I lived and worked overseas, I frequently submitted research grant applications to the country’s sole national funding agency. It was part of my job to do so, or at least strongly encouraged as a demonstration of research ‘activity’. And, of course, I both wanted and needed the money to conduct the proposed projects.
My work competed against every other academic discipline and scholar. I was scarcely surprised, once the results were announced, to see biomedical and health research funded over my curiosity-driven work. How could I blame an agency or branch of government for not handing over hundreds of thousands of dollars to help me travel the Old World, visiting archives, libraries, monuments, and the like?
The point is: I couldn’t fairly demonstrate impact. The shortcoming was mine alone. I always struggled to define the benefit of my research to the wider community of citizens and taxpayers. At times I didn’t have the language to explain it to myself, let alone to a broader audience.
Teaching is another good example of where academics often fail to show their work. In my current role, I chair and (co-)evaluate colleagues for pay increases, tenure, promotion, and other awards. In the traditional academic position, teaching accounts for forty percent of the job. Yet in practice our achievements are better measured, and more easily demonstrated, in research and service, to our detriment and to the detriment of our programs and students.
Faculty teach and usually teach well. I can honestly say that, over the years and across the many institutions where I’ve worked, I’ve only encountered a few individuals who don’t take pride in the instructional side of work. Their expertise informs the classroom and the overall student experience. Their knowledge holds the power to influence and change lives. Imparting wisdom is a large part of why they do what they do.
So why, then, are we so bad at demonstrating our teaching impact?
First of all, while weighted equally alongside research at forty percent of the role, teaching is seldom counted in the same way. There are scholars who specialize in the scholarship of teaching and learning, which crosses boundaries, but for the most part teaching is so expected, so much an assigned part of being an academic, that it’s simply done. It’s a primary duty and responsibility of the role. It’s part of a collective endeavour to deliver an academic program. And, in very real financial terms, it is a principal reason that universities exist.
Whether it’s done well is a whole other story. Where I currently work, there is no mechanism for evaluating courses, classes, or teachers. The truth is that faculty members have few methods for assessing their progress, for seeking constructive feedback, and ultimately for demonstrating the impact of their teaching.
We document quantity but cannot account for quality. The narrative of teaching is often absent or diminished as a result. There might be a solid teaching philosophy, which is itself a noble and worthy exercise, but it still doesn’t demonstrate ‘impact’ in any explicit or objective way.
Someone can say they’re a good teacher. They can claim to use cutting-edge pedagogy. And they can espouse a command of the discipline that they believe delivers exactly what students want and need. But can they really prove any of it? Not in a system that doesn’t train teachers, doesn’t develop them professionally as an expectation of the role, and isn’t allowed to evaluate them in any formal manner for the sake of improvement.
What remains are words alone: spoken and sometimes poorly written claims of strength and value, which, again, don’t translate into measurable impact. Promotion and tenure applications that list the number of courses taught, with no other supporting evidence, fall well short of being convincing.
The burden of proof remains largely unaddressed. Committees need evidence to make an informed decision. Academic programs need assurances of quality in their accountability to students, taxpayers, and government. And colleagues need to know that the rigour and standards of their discipline are being advanced holistically and collectively by all members of their respective departments.
I can hear the naysayers contesting everything I’ve just stated. And so they should. Because what’s needed is a change to the system. As it stands, most internal university processes don’t help faculty develop their full potential. And when the system limits them as teachers, the real, almost palpable, impact falls on the students in their classrooms. The ones on the receiving end. The biggest stakeholders in the whole damn enterprise.