Research Excellence that Matters

Dr. Ayman ElTarabishy, President & CEO, International Council for Small Business (ICSB), and Deputy Chair, Department of Management at the George Washington University School of Business (GWSB).

Dr. Herman Aguinis, Avram Tucker Distinguished Scholar and Professor of Management at the George Washington University School of Business (GWSB).

 

What does it mean to demonstrate “scholarship excellence and impact” for tenure and promotion, specifically in small business and entrepreneurship research?

 

Most universities use a combination of the following criteria to assess excellence in research:

 

  1. Publications in high-quality journals,
  2. Citation metrics, and
  3. Impartial evaluations by recognized experts in the candidate’s field.

 

Yet there is no agreed-upon definition of a “high-quality journal,” and junior faculty often ask for guidance on what counts as one. In the absence of clear guidelines, many schools rely on journal lists such as the UT-Dallas list or the FT-50 list. This is a consequential choice because it drives decisions about tenure, promotion, and salary increases, as well as other important rewards such as teaching reductions, additional financial support in the form of summer salary, and research funds.

 

I asked Dr. Herman Aguinis, one of the world’s leading experts on scholarly impact, who has been ranked among the world’s top 100 most influential Economics & Business researchers every year since 2018, to help shed light on these issues.

 

Dr. El Tarabishy: The UT-Dallas journal list seems to be used by many universities. Who created it?

 

Dr. Aguinis: I like reading history, so I looked into this issue a few years ago. Interestingly, this list was created in a study published in a 2000 Academy of Management Journal article. If you read that article closely, you will see that the authors defined “high-quality journals” as those with the highest impact factors in 1995, 1996, and 1997. I am sure many business school deans have no idea they are imposing a list of journals based on citation data from the late 1990s. We should not use impact factor data from 25 years ago because many journals are no longer as impactful as they used to be, while others that were very young at the time are now just as impactful as, or more impactful than, those on the list. Would you make decisions about valuing a car, a house, or anything else based on data that are a quarter of a century old? Probably not. So we should not value journals based on data that are a quarter of a century old, either.

 

Dr. El Tarabishy: How about the FT-50 list? This one is a lot more current, right?

 

Dr. Aguinis: Yes, it was created in 2016 as an update of the previous FT-45 list, which started in 2007 and was then updated in 2010. As its name indicates, the list was created by the Financial Times, which compiled it based on a journal reputation survey of about 200 business schools participating in the FT Global MBA, Executive MBA, or Online MBA rankings. In addition, journalists at the Financial Times decided to drop some journals from the original FT-45 and add others to create the FT-50 list, but we don’t know the criteria they used. I am not sure it is a good idea to assess the value of our research based on the opinion of journalists, especially when the criteria used to select journals are not sufficiently clear. I believe many business school deans push faculty to publish in journals on this list because doing so positively affects a business school’s FT ranking, which is an essential component of a dean’s performance evaluation. But this does not mean that articles published in these journals are necessarily of higher quality or greater impact than those in other journals. I can see why many deans promote the use of this list based on their own self-interest, though.

 

Dr. El Tarabishy: Regardless of specific lists, is there any value in having them? Or should we eliminate them?

 

Dr. Aguinis: This question needs to be answered within the strategic goals of each specific business school. First, if there are no research standards whatsoever, a list may be useful. A list also protects junior faculty who have published a sufficient number of articles in listed journals from vague reviewer statements during the tenure review, such as “this research is not of sufficient quality to merit tenure.” On the other hand, pushing faculty to publish in only a small number of journals “by any means necessary” creates problems because faculty, who are very smart, will obviously be highly motivated to find ways to publish in those journals. This shifts the goal from making research contributions that produce meaningful and important improvements in society to just “getting another hit.” We discussed the pros and cons of journal lists in detail in our Academy of Management Perspectives article “An A is an A”: The new bottom line for valuing academic research.

 

Dr. El Tarabishy: Many schools have moved away from lists and use journals’ impact factors to decide whether a specific article is impactful. What are your thoughts on this practice of evaluating research?

 

Dr. Aguinis: The impact factor is the average number of citations received by the articles in a journal within a particular time window. In other words, it is a journal-level measure of impact, not an article-level measure. A small minority of the articles published in top journals accounts for the lion’s share of those journals’ citations, and most articles in leading journals are cited about as often as, and sometimes less often than, articles published in journals not considered “top” journals. We make many mistakes when we confuse the journal and article levels of analysis. We collected quite a bit of data that support this conclusion empirically in our Academy of Management Learning and Education article titled Defining, measuring, and rewarding scholarly impact: Mind the level of analysis. You can also watch a five-minute video explaining these issues.
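
To make the journal-level nature of the metric concrete, here is the standard two-year impact factor formula as computed, for example, in Clarivate’s Journal Citation Reports (the year 2024 is used purely for illustration):

\[
\mathrm{IF}_{2024} \;=\; \frac{\text{citations received in 2024 by items the journal published in 2022 and 2023}}{\text{number of citable items the journal published in 2022 and 2023}}
\]

Because this is an average across all of a journal’s recent articles, a handful of highly cited papers can raise it considerably, so it says little about how often any individual article in that journal will be cited.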

 

Dr. El Tarabishy: What is your advice for those of us interested in improving the impact of our research?

 

Dr. Aguinis: In thinking about the impact of our research, we need to consider the broader impact of our work across multiple stakeholders: other researchers, students, and society at large. Fortunately, university administrators, including deans and department chairs, as well as we researchers, have several tools to do just that. These tools are derived from the performance management and talent management literature, and we described many of them in our 2021 article titled How to enhance scholarly impact: Recommendations for university administrators, researchers, and educators.

 

Dr. El Tarabishy: Any final thoughts you would like to share with us?

 

Dr. Aguinis: Scholarly impact is a journey, not a destination. We need to think carefully about impact before, during, and after our research is completed. We need to ask questions such as: Whom am I trying to benefit with my research, and why? What can I do to improve the impact of my research in the future? How can I make my research more relevant and valuable for society? If you are interested in answering these and other related questions, please see our Business & Society article titled If you are serious about impact, create a personal impact development plan.

 

Dr. El Tarabishy: Thank you for sharing your insights with us!

 

Dr. Aguinis: You are most welcome. Thank you for the opportunity to talk about these issues, which I believe are critical for the success and long-term sustainability of business schools.

 

 
