
Q: What does good research mean?


Asked by Editage Insights on 04 Jul, 2019

Good-quality research provides robust and ethical evidence. Good research revolves around a novel question and is based on a feasible study plan. It makes a significant contribution to scientific development by addressing an unanswered question or by solving a problem or difficulty that exists in the real world.

Good research involves systematic planning and setting time-based, realistic objectives. It entails feasible research methods, based on a methodology that best suits the nature of your research question. It is built on sufficient relevant data and is reproducible and replicable. It rests on a suitable rationale and can suggest directions for future research.

Moreover, all relevant ethical guidelines must be followed while conducting, reporting, and publishing good-quality research. Good research benefits the various stakeholders in society and contributes to the overall development of mankind.

Related reading:

  • VIDEO: What every researcher should know to conduct research ethically

Answered by Editage Insights on 09 Jul, 2019


The Top 5 Qualities of Every Good Researcher


What makes a good researcher? Is it some undefinable, innate genius, or is it something that we can practice and build upon? If it were just the former, then there would be far fewer innovations in the history of humankind than there have been. A careful look at researchers through the ages reveals that they all have certain attributes in common that have helped contribute to their success.

The characteristics of a good researcher:

1. Curiosity

They ask questions. An endless thirst for knowledge is what sets the best of the best apart from the others. Good researchers constantly strive to learn more, not just about their own field, but about other fields as well. The world around us is fascinating, be it the physics behind the way light refracts, or the anthropological constructions of our society. A good researcher keeps exploring the world and keeps searching for answers.

2. Analytical ability and foresight

They look for connections. Information is useless without interpretation. What drives research forward is finding meaning in our observations and data. Good researchers evaluate data from every angle and search for patterns. They explore cause and effect and untangle the tricky web that interconnects everyday phenomena. Then they take it one step further and ask, ‘What is the bigger picture? How will the research develop in the future?’

3. Determination

They try, try, and try again. Research can be a frustrating experience. Experiments may not pan out how we expect them to. Even worse, sometimes experiments may run smoothly until they are 95% complete before failing. What sets an average researcher apart from a truly good one? The truly good researcher perseveres. They accept this disappointment, learn from the failure, reevaluate their experiment, and keep moving forward.

4. Collaboration

Teamwork makes the dream work. Contrary to the common perception of the solitary genius in their lab, research is an extremely collaborative process. There is simply too much to do for just one person to do it all. Moreover, research is becoming increasingly multidisciplinary, and it is impossible for just one person to have expertise in all the relevant fields. In general, research is conducted in teams, with each researcher having individual roles and responsibilities. Being able to coordinate, communicate, and get along with team members is a major factor that can contribute to one’s success as a researcher.

5. Communication

They get their message across. Communication skills are an essential asset for every researcher. Not only do they have to communicate with their team members, but they also have to communicate with co-authors, journals, publishers, and funders. Whether it is writing a crisp and effective abstract, presenting at a conference, or writing a persuasive grant proposal to secure research funding, communication appears everywhere in a researcher’s life. The message in the old adage, ‘If a tree falls in the forest, but no one is around to hear it, does it make a sound?’ applies to research too. A discovery could be groundbreaking, but what is the use if the researcher can’t communicate this discovery to the rest of the world?

These are just a few of the skills required by researchers to make it to the top of their field. Other attributes like creativity and time management are also worth mentioning. Nevertheless, having one or more of these top five characteristics will make the research process smoother for you and increase the chances of positive results. Set yourself up for success by building up these skills, focusing on excellence, and asking for help when you need it. Elsevier Author Services is here to aid you at every step of the research journey. From translation services by experts in the field, to preparing your manuscript for publication, to helping you submit the best possible grant proposal, you can trust us to guide you in your journey to doing great research.


What Constitutes Good Research?


The Declining Art of Good Research

We seem to be compromising our commitment to good research in favor of publishable research, and a combination of trends is responsible for this.

The first is the continued pressure of “publish or perish” for young academics seeking to move forward on the track for fewer and fewer tenured positions (or increasingly draconian renewable contracts).

The second is that the open access model of research publication has created a booming population of academic journals with pages to fill and new researchers willing to pay article processing charges (APCs).

The third is that budget-strapped institutions have been aggressively targeting doctoral research candidates and the higher fees they bring to the table.

When these three trends are combined, the resulting onslaught of quantity over quality leads us to question what “good” research looks like anymore.

Is it the institution from which the research originated, or the debatable rank of the journal that published it?

Good Research as a Methodological Question

When looking to learn how to recognize what “good” research looks like, it makes sense to start at the beginning with the basic scope of the project:

  • Does the research have a solid hypothesis?
  • Is there evidence of a comprehensive literature review from reputable sources that clearly defines a target area for valuable research?
  • Is the research team allocating sufficient time/resources to do the job properly, or were compromises made in order to accommodate the available funding?
  • Is there evidence of a willingness to refine the hypothesis and research strategy if needed?
  • Are the expectations of the implications of the research realistic?

Characteristics of Good Research

To conduct systematic research, it is important to understand the characteristics of good research:

  • It is relevant to existing research conducted by other researchers.
  • It is doable and replicable in the future.
  • It is based on a logical rationale and tied to theory.
  • It generates new questions or hypotheses for incremental work in the future.
  • It directly or indirectly addresses a real-world problem.
  • It clearly states the variables of the experiment.
  • It concludes with valid and verifiable findings.

Good Research as an Ethical Question

The question as to whether or not the research is worth conducting at all could generate an extended and heated debate. Researchers are expected to publish, and research budgets are there to be spent.

We can hope that there was some degree of discussion and oversight before the research project was given the green light by a Principal Investigator or Research Supervisor, but those decisions are often made in a context of simple obligation rather than perceived need.

Consider the example of a less-than-proactive doctoral student with limited time and resources to complete a dissertation. A suggestion is made by the departmental Research Supervisor to pick a dissertation from a decade ago and simply repeat it. The suggestion meets the need for expediency and simplicity, but raises as many questions as it answers:

  • What is the validity of the study – just because it can be repeated, should it?
  • What was the contribution of the original study to the general body of knowledge? Will this additional data be an improvement?
  • Given the lack of interest among academic journals in replicated studies, is the suggestion denying the student the opportunity to get published?
  • Does directing a student to replication in the interests of expediency meet the broader academic goal of graduating proficient researchers?

The Building Blocks of “Good” Research

There is no shortage of reputable, peer-reviewed journals that publish first-rate research material for new researchers to model.

That doesn’t mean you should copy the research topic or the methodology, but it wouldn’t hurt to examine the protocol in detail and make note of the specific decisions made and criteria put in place when that protocol was developed and implemented.

The challenge lies in sticking to those tried-and-true methodologies when your research data doesn’t prove to be as rich and fruitful as you had hoped.

Have you ever been stuck in the middle of conducting research? How did you cope with it? Let us know your approach to conducting good research in the comments section below!

You can also visit our  Q&A forum  for frequently asked questions related to different aspects of research writing and publishing answered by our team that comprises subject-matter experts, eminent researchers, and publication experts.


What is Research: Definition, Methods, Types & Examples


The search for knowledge is closely linked to the object of study; that is, to the reconstruction of the facts that explain an observed event which, at first sight, may be considered a problem. It is very human to seek answers and satisfy our curiosity. Let’s talk about research.


Research is the careful consideration of study regarding a particular concern or research problem using scientific methods. According to the American sociologist Earl Robert Babbie, “research is a systematic inquiry to describe, explain, predict, and control the observed phenomenon. It involves inductive and deductive methods.”

Inductive methods analyze an observed event, while deductive methods verify the observed event. Inductive approaches are associated with qualitative research, and deductive methods are more commonly associated with quantitative analysis.

Research is conducted with a purpose to:

  • Identify potential and new customers
  • Understand existing customers
  • Set pragmatic goals
  • Develop productive market strategies
  • Address business challenges
  • Put together a business expansion plan
  • Identify new business opportunities
What are the characteristics of research?

  • Good research follows a systematic approach to capture accurate data. Researchers need to practice ethics and a code of conduct while making observations or drawing conclusions.
  • The analysis is based on logical reasoning and involves both inductive and deductive methods.
  • Real-time data and knowledge is derived from actual observations in natural settings.
  • There is an in-depth analysis of all data collected so that there are no anomalies associated with it.
  • It creates a path for generating new questions. Existing data helps create more research opportunities.
  • It is analytical and uses all the available data so that there is no ambiguity in inference.
  • Accuracy is one of the most critical aspects of research. The information must be accurate and correct. For example, laboratories provide a controlled environment to collect data. Accuracy is measured in the instruments used, the calibrations of instruments or tools, and the experiment’s final result.

What is the purpose of research?

There are three main purposes:

  • Exploratory: As the name suggests, researchers conduct exploratory studies to explore a group of questions. The answers and analytics may not offer a conclusion to the perceived problem. It is undertaken to handle new problem areas that haven’t been explored before. This exploratory data analysis process lays the foundation for more conclusive data collection and analysis.


  • Descriptive: It focuses on expanding knowledge on current issues through a process of data collection. Descriptive research describes the behavior of a sample population. Only one variable is required to conduct the study. The three primary purposes of descriptive studies are describing, explaining, and validating the findings. For example, a descriptive study might examine whether top-level management leaders in the 21st century possess the moral right to receive a considerable sum of money from the company profit.


  • Explanatory: Causal research, or explanatory research, is conducted to understand the impact of specific changes in existing standard procedures. Running experiments is the most popular form. For example, an explanatory study might be conducted to understand the effect of rebranding on customer loyalty.


It begins by asking the right questions and choosing an appropriate method to investigate the problem. After collecting answers to your questions, you can analyze the findings or observations to draw reasonable conclusions.

When it comes to customers and market studies, the more thorough your questions, the better the analysis. You get essential insights into brand perception and product needs by thoroughly collecting customer data through surveys and questionnaires . You can use this data to make smart decisions about your marketing strategies to position your business effectively.

To make sense of your study and get insights faster, it helps to use a research repository as a single source of truth in your organization and manage your research data in one centralized data repository .

Types of research methods and Examples


Research methods are broadly classified as qualitative and quantitative.

Both methods have distinctive properties and data collection methods.

Qualitative research is a method that collects data using conversational methods, usually open-ended questions . The responses collected are essentially non-numerical. This method helps a researcher understand what participants think and why they think in a particular way.

Types of qualitative methods include:

  • One-to-one Interview
  • Focus Groups
  • Ethnographic studies
  • Text Analysis

Quantitative methods deal with numbers and measurable forms . It uses a systematic way of investigating events or data. It answers questions to justify relationships with measurable variables to either explain, predict, or control a phenomenon.

Types of quantitative methods include:

  • Survey research
  • Descriptive research
  • Correlational research


Remember, research is only valuable and useful when it is valid, accurate, and reliable. Incorrect results can lead to customer churn and a decrease in sales.

It is essential to ensure that your data is:

  • Valid – founded, logical, rigorous, and impartial.
  • Accurate – free of errors and including required details.
  • Reliable – other people who investigate in the same way can produce similar results.
  • Timely – current and collected within an appropriate time frame.
  • Complete – includes all the data you need to support your business decisions.

8 tips for conducting accurate research

  • Identify the main trends and issues, opportunities, and problems you observe. Write a sentence describing each one.
  • Keep track of the frequency with which each of the main findings appears.
  • Make a list of your findings from the most common to the least common (a minimal counting sketch follows this list).
  • Evaluate the strengths, weaknesses, opportunities, and threats identified in a SWOT analysis.
  • Prepare conclusions and recommendations about your study.
  • Act on your strategies.
  • Look for gaps in the information, and consider doing additional inquiry if necessary.
  • Plan to review the results and consider efficient methods to analyze and interpret them.
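
As a concrete illustration of the two frequency-related tips above, here is a minimal sketch in Python. The findings and their wording are hypothetical; the point is simply tallying how often each finding appears and listing them from most to least common.

```python
from collections import Counter

# Hypothetical list of findings noted while reviewing responses;
# duplicates indicate that the same finding appeared more than once.
findings = [
    "price seen as too high",
    "checkout flow confusing",
    "price seen as too high",
    "support response too slow",
    "checkout flow confusing",
    "price seen as too high",
]

# Tally how often each finding appears, then list them most to least common.
frequency = Counter(findings)
for finding, count in frequency.most_common():
    print(f"{count}x  {finding}")
# Output:
# 3x  price seen as too high
# 2x  checkout flow confusing
# 1x  support response too slow
```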

Review your goals before making any conclusions about your study. Remember how the process you have completed and the data you have gathered help answer your questions. Ask yourself if what your analysis revealed facilitates the identification of your conclusions and recommendations.


What is good research practice?

Research practice encompasses the generic methodologies that are common to all fields of research and scholarly endeavor. The term 'good research practice' describes the expected norms of professional behavior of researchers.

As Royal Society Te Apārangi, we are legislated to “provide infrastructure and other support for the professional needs of scientists, technologists and humanities scholars”.  

This section of the website aims to be a point of reference for researchers and scholars seeking to identify good-quality research practices to follow. Topics covered include:

  • Ethics and integrity
  • Research Charter for Aotearoa New Zealand
  • Researcher identification
  • Research workforce issues
  • Mentoring guidelines for researchers
  • Public engagement guidelines
  • Dealing with issues and concerns about conduct
  • Publishing guidance
  • Enhancing evidence-based decision making


What is meaningful research and how should we measure it?

  • Open access
  • Published: 05 August 2020
  • Volume 125, pages 153–169 (2020)


  • Sven Helmer (ORCID: orcid.org/0000-0002-9666-1932)
  • David B. Blumenthal (ORCID: orcid.org/0000-0001-8651-750X)
  • Kathrin Paschen


We discuss the trend towards using quantitative metrics for evaluating research. We claim that, rather than promoting meaningful research, purely metric-based research evaluation schemes potentially lead to a dystopian academic reality, leaving no space for creativity and intellectual initiative. After sketching what the future could look like if quantitative metrics are allowed to proliferate, we provide a more detailed discussion on why research is so difficult to evaluate and outline approaches for avoiding such a situation. In particular, we characterize meaningful research as an essentially contested concept and argue that quantitative metrics should always be accompanied by operationalized instructions for their proper use and continuously evaluated via feedback loops. Additionally, we analyze a dataset containing information about computer science publications and their citation history and indicate how quantitative metrics could potentially be calibrated via alternative evaluation methods such as test of time awards. Finally, we argue that, instead of over-relying on indicators, research environments should primarily be based on trust and personal responsibility.


Introduction

Quantitative metrics are used for managing education, evaluating health services, and measuring employee performance in corporations, e.g. see Austin ( 1996 ). This trend does not spare academia: there is a perception among researchers that appraisal of their work is focusing more and more on quantitative metrics. Participants in a series of workshops organized by the Royal Society reported that “current measures of recognition and esteem in the academic environment were disproportionately based on quantitative metrics” such as publication and citation count, h-index, i10-index, number of Ph.D. students graduated, and grant income (Royal Society 2017 ). These numbers are used to rank individuals in job applications or promotion procedures, departments in national research quality assessments (e.g. the Research Excellence Framework (REF) in the United Kingdom, the Valutazione della Qualità della Ricerca (VQR), or Research Quality Assessment, in Italy, and the Excellence in Research for Australia (ERA) in Australia), or even universities in international ranking lists (e.g. the Times Higher Education (THE) or Quacquarelli Symonds (QS) World University Rankings). This is because research funding needs to be managed, often by people unfamiliar with the research itself. Metrics provide the necessary simplification, and they promise to be impartial, deterministic, and decision-friendly.
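
For readers unfamiliar with the indicators listed above, the following minimal sketch (ours, not taken from any of the cited assessment schemes) shows how an h-index and an i10-index are typically computed from a list of per-paper citation counts; the citation numbers are hypothetical.

```python
# Minimal sketch: computing two common citation-based metrics from a
# hypothetical list of per-paper citation counts.

def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break  # counts are non-increasing, so no later rank can qualify
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for count in citations if count >= 10)

citations = [45, 33, 12, 11, 9, 4, 3, 1, 0]  # hypothetical publication record
print(h_index(citations))    # -> 5 (five papers have 5 or more citations each)
print(i10_index(citations))  # -> 4 (four papers have 10 or more citations)
```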

However, there is rising recognition that these metrics do not adequately capture research excellence and are not effective at promoting it (Royal Society 2017 ). In this paper, we provide arguments for this view. In the “ Current trends ” section, we argue that using metrics in performance management in an area as risky and as unclearly defined as research has an impact: researchers adapt to the metrics used to manage them and this adaptation changes the research practice. For instance, the most interesting research questions often tend to be challenging and therefore involve a high risk of failure. The current system discourages researchers from tackling these questions, as the short- and mid-term rewards under a metric-based scheme are low.

Our investigation of research evaluation processes was triggered by a discussion in which we tried to imagine academia as a topic of a “Black Mirror” episode. “Black Mirror” is a dystopian science-fiction series that originated on British television in 2011, often describing scenarios in the near future employing technology that is already available or might soon be reality. According to Singh (2014), “Black Mirror” stands in the tradition of the best science-fiction works, which “study life as it is now, in our time, through the speculative lens of changing technology”. We found this setting a fitting way to illustrate what being an academic could look like in a dystopian environment in the near future if the current development of relying increasingly on quantitative metrics continues. In such a scenario (depicted in the “A dystopian future” section), the process of conducting and publishing research would be entirely gamified. In the “Meaningful research as an essentially contested concept” section, we look at the notion of research from the point of view of essentially contested concepts. We argue that people disagreeing on how to define high-quality research is not due to inconsistent use of terminology or the failure to understand each other’s definitions, but is an inherent property of complex and abstract concepts, such as research. This implies that we cannot expect to find an all-encompassing definition, but need to engage in an ongoing discussion. We come back to quantitative metrics in the “Quantitative metrics as proxy indicators” section, arguing that the most widely used metrics for research quality are proxy indicators, i.e., indirect measures. We understand that it is very difficult to come up with direct measures, but it would already help if the people designing indicators were aware of this fact and put mechanisms in place to evaluate the quality of the indicators at regular intervals to see if they are fit for purpose. Additionally, we argue that using fewer performance measures and avoiding over-regulation of an environment can have beneficial effects. In the “An analysis of quantitative publication metrics” section we look at concrete numbers and analyze citation numbers for a computer science bibliography dataset. Although our findings are inconclusive, we think that contextual information, such as the test of time awards we investigate, can help in designing better indicators. We give an overview of several initiatives arguing for a more conscientious use of indicators and show how they relate to our work in the “Related Initiatives” section. Finally, in the “Conclusions and outlook” section, we conclude by recommending evaluation practices based on trusting the researchers and discussing the goals of research explicitly to improve the management of research.

Current trends

Since evaluation metrics affect who will have a successful career in science, as well as which research gets funded, individuals and institutions adapt to these metrics in order to receive a better evaluation. Parnas (2007) points out that measuring research output by numbers rather than peer-assessed quality has a profound impact on research itself: there is a tendency to produce more publications, as this will increase the measured values. This increase is mainly accomplished by writing shallower papers without investing a lot of time in carefully conducted research. Common techniques are doing small empirical studies based on a few observations, specifying systems and languages without actually implementing them, going for the “smallest publishable unit” [already mentioned in Broad (1981)], and working in large groups, adding the names of group members without actual contributions to publications. For instance, Ioannidis et al. (2018) have identified scientists who publish a paper every 5 days, calling them hyperprolific authors. Between 2001 and 2014 the number of hyperprolific authors increased from 4 to 81, while the total number of authors increased by a factor of 2.5. The methods for gaming the system go even further: Weingart (2005) mentions that standards for PhD candidates have been lowered, and that there is a tendency towards unambitious but safe research proposals.

There are also cases of highly questionable behavior. Hvistendahl uncovered practices involving researchers buying author slots on papers written by others or buying papers from online brokers (Hvistendahl 2013 ). Another practice is researchers turning to predatory publishers to get low-quality work published, i.e., by paying dubious publishers and bypassing the peer-review process (Beall 2012 ). Plagiarism and duplicate publication is also on the rise; Steen et al. ( 2013 ) report increasing numbers for these types of offenses. Gewin ( 2012 ) discusses forging experimental data, creating fake data, and similar misconduct: according to the Thomson Reuters Web of Science database (formerly known as Web of Knowledge), 381 journal articles were retracted in 2011, up from 22 in 2001. Noorden ( 2011 ) reports a ten-fold rise in the number of retractions between 2001 and 2010 in Thomson Reuters’ Web of Science, even though the number of published papers only went up by 44%.

A dystopian future

While attending a Re-coding Black Mirror workshop organized by Troullinou et al. ( 2018 ), we identified an underlying pattern of many “Black Mirror” episodes: take a scenario involving human interactions that is awkward, annoying, or creepy when encountered in a face-to-face situation and use technology to amplify the effect and/or scale it up to millions of people. Below, we sketch a few scenes extrapolating the impact of technology and methods from not so far in the future on evaluating research work. Let us look at the typical day of a researcher, called Dr X, in the near future.

Scene 1: At the desk with a PhD student

Morning Anna, how’s your first month?

[ Dr X doesn’t wait for Anna’s answer .]

So, your draft ...

Did you like my ideas?

Well, I ran your draft through A4 ...

Sorry, “A” what?

Academia 4.0. A tool to help you polish your paper, it’s connected to a huge database of publications and their stats.

[ Turns towards monitor , points at a window on the screen .]

Here you see some general stuff about the paper: probabilities of getting accepted at different venues, likelihood of attracting citations, just everything. And even better: it comes with a recommender system and uses machine learning to improve your stats.

As you can see, these numbers don’t look too great at the moment. So, let’s get started.

You see, when I hover over this paragraph, it suggests a few ways to increase impact. Use active voice, cut down on adverbs, that sort of thing. It can also generate text passages for you.

Here, for example, this is a pretty weak statement. “Possible to achieve a speedup”? Your work is better than that, so say it!

Well, um, I guess ...

Well, as it stands, this section has a very low likelihood of attracting citations, see that gauge up there? Can you make the statement stronger?

[ Dr X highlights the section; a gauge at top of the page flutters briefly and settles to a low reading .]

So, the speedup, it’s, in some cases, it’s as much as 8.

Ah. So, write “we’ve observed a speedup by as much as one order of magnitude”!

Eight, not ten ...

[ Dr X gives Anna a look , Anna shuts up . Dr X changes the text , highlights the modified section . The gauge at the top flutters again and settles at a higher reading .]

See, now that’s much better. Ok, ...

[ Dr X scrolls through the text , stopping at parts that the tool is highlighting .]

...ah, right, definitely need to be citing Prof. W’s group here. Make a note of that.

I read their work, it’s not really relevant for our work.

Well, we owe them.

[ Anna makes a note on her tablet . Dr X notes a red warning on the screen .]

I almost missed that. Need to improve your RGE factor.

Respectable graphs and equations, throw a few more fancy mathematical formulas into the text.

I tried to keep the paper accessible...

You do want an A* paper, don’t you? So, think of something.

[ Anna hesitates for a moment .]

Um, one more thing ...I know it isn’t really related to my PhD, but I had a look at this new algorithm ...

Not now, first you have to hit your targets. Look:

[ Dr X brings up another screen .]

This is your current likelihood of continued funding.

And besides, not hitting your target will make me and the department look bad in the next evaluation.

[ Dr X ’ s smartphone chimes .]

Got to go, see you later!

Scene 2: Hiring committee meeting

Thanks for joining me to discuss the application of Dr M. What are your opinions?

[ A group of academics sits around a table in a meeting room to discuss a job applicant . A pan around the room shows everyone using a tablet or a laptop (or both) , usually with some form of dashboard showing some metrics .]

I had a look at her work, seems very interesting. I also liked her presentation and she has 3 years of industry experience. Our students could really profit from someone like this.

That’s all well and good, but A4 tells me that her h-index is rather low.

[ Dr X clicks on a button .]

And the amount of funding she has attracted so far, let’s not even go there.

Still, the topics she is working on...

[ Prof Y , who has been typing on his laptop, interrupts .]

I just ran the numbers through A4. If we hire her, our average quality level will drop by a few points, meaning our department will lose two ranks in the national ranking.

I double-checked our planned application to the new excellence framework with A4 yesterday. Department-wide we’re short four A* publications ...Not pointing any fingers here, but I guess everyone knows their own numbers...Anyway, Dr M does not have enough of those. So this will jeopardize our application.

How come the target was 20, though? Wasn’t it 15 last year?

We opened two new researcher positions, they do come with obligations.

[ Prof Y looks at his colleagues .]

So, I think you’ll all agree that hiring Dr M will not work out.

Scene 3: Back at home

How was your day, honey?

[ Dr X is seen at the dinner table with their partner . Dr X sighs .]

Nothing special really ...

Oh, ok, ...do you think we could ...

[ The smartphone of Dr X makes a sound .]

Sorry, got to take this.

[Dr X unlocks phone and selects the Academia 4.0 app. Suddenly a fanfare sounds and Dr X punches the air in triumph.]

Yes, A4 tells me that I’ve been promoted to researcher level 2.4.1!

Congratulations, I guess ...

I can finally apply for positions of category C3.

[ A window pops open in the app stating the targets for the next level: must have published at least ten more papers, must have published at least three more papers in A * venues, must have submitted grant applications for a total of $2,000,000, must have secured grants for a total of $500,000.]

Meaningful research as an essentially contested concept

As exemplified in the story about Dr X, one striking feature of purely metric-based research evaluation practices is that both researchers and evaluators have less and less autonomy and personal responsibility. On the one hand, researchers are institutionally mistrusted in that they are denied the capacity to decide autonomously whether a research project is worth pursuing. On the other hand, evaluators tend to assess research by blindly checking if it fulfills a set of predefined criteria, and hence abdicate their own authority to actually judge the content of research.

We do not propose to abolish all forms of research evaluation and to let researchers pursue whichever paths they wish to follow without any accountability. The resources for funding research are limited after all, so someone has to decide which research is worth financing (Armstrong 2012 ). However, we believe that before developing evaluation tools for decision-makers, we have to think about what we value. What should be the role of universities and academic research? In “The Guardian”, Swain ( 2011 ) quotes Stephen Anderson, director of the Campaign for Social Science, saying that while the government has created a market economy in higher education it is not yet clear how that constantly moderated market will work. He suggests that potentially far-reaching changes are being made for reasons of financial expediency, without any thought of what their wider effect will be and goes on to state “what we are looking for is a greater vision for what the end product might look like” and asking “what is it we are all trying to do?”

Essentially contested concepts

We believe that the goal of managing research should be to make sure that meaningful research is done. However, it can be argued (Ferretti et al. 2018 ) that the concept of meaningful research is essentially contested in the sense introduced by Gallie ( 1955 ). According to Gallie, a concept is essentially contested if its “proper use [...] inevitably involves endless disputes about [its] proper uses on the part of their users” (Gallie 1955 ), and he mentions the concepts of art, democracy, and social justice as typical examples. It is important to note that the root cause of the disputes does not lie in an inconsistent use of terminology or the failure of people to understand someone else’s definitions, but is intrinsic to the concept. In the following, we also refer to an extensive discussion of Gallie’s work by Collier et al. ( 2006 ). According to Collier et al. the goal of Gallie was “to provide a rigorous, systematic framework for analyzing contested concepts”; although this framework is sometimes discussed controversially, it offers important tools for making sense of complex concepts.

Let us now have a closer look at the framework. Gallie defines seven conditions that a concept has to satisfy to be considered essentially contested. First, it has to “accredit some kind of valued achievement”. In the words of Collier et al. this means that a concept “generally implies a positive normative valence”. In the case of meaningful research, we claim that it has value in itself, by being intellectually stimulating, and/or that it has a positive impact on society in the form of practically useful research, so this clearly meets the first criterion. Second, the concept has to be internally complex, i.e., it is made up of multiple components. This is also fulfilled by (meaningful) research, as it is not a simple and straightforward task, but made up of many interrelated activities. Third, the concept has to be describable in various ways, which is closely related to the second criterion: if a concept comprises many different components, it is likely that users put different emphases on these components. This also holds for research: the methodologies used in physics differ from the ones used in social sciences or literary research. Even in a single field, the approaches taken by different researchers vary, as research is made up of many different interrelated activities. Fourth, the meaning of a contested concept is open and may evolve. Gallie states that “accredited achievement must be of a kind that admits of considerable modification in the light of changing circumstances” or as Collier et al. formulate it: they are “subject to periodic revision in new situations” and that this “revision cannot be predicted in advance”. Many of the ground-breaking results in research have triggered a paradigm shift, changing the way that other researchers conduct their work. Gallie states that the first four conditions are the most important and necessary ones for a concept to be called essentially contested, but that they do not provide a sufficient definition yet. He goes on to describe three additional conditions. The fifth condition asserts that different persons or groups not only have their own opinions about the correct use of a concept, but that they are aware of other uses and defend their way of doing things against these alternatives. This is true for meaningful research as well: researchers have their reasons for applying certain methodologies, or for pursuing certain theories, and will justify their choices. Gallie introduces the final two characteristics to distinguish essentially contested concepts from situations in which a dispute is caused by confusion about terminology. The sixth characteristic assumes that the concept originates from an authority, or exemplar, that is acknowledged by all the users of a concept, even by groups that disagree on its proper use. According to Collier et al. ( 2006 ), “the role of exemplars in Gallie’s framework has generated much confusion. This is due, in part, to his own terminology and to inconsistencies in his presentation”. In its narrow interpretation, this refers to a single, original exemplar. In its wider interpretation, Gallie asserts that it can include “a number of historically independent but sufficiently similar traditions”. The important point here is that the concept under question has one underlying idea as a common core and is not rooted in multiple different ideas. 
We believe that this point is covered by a long list of well-known researchers and role models in history, who have employed a wide range of methodologies, some of them outdated by now, and who may have also propagated some erroneous views, but there is consensus about those researchers having conducted meaningful research that has profoundly advanced their field or even society as a whole. Finally, the seventh characteristic maintains that different uses of a concept competing against and acknowledging each other can advance and improve the use of a concept as a whole. Collier et al. also call this progressive cooperation and allege that this may (but does not have to) lead to an eventual decontestation by initiating more meaningful discussions. We believe that this is true of the concept of meaningful research: in order to properly understand what meaningful research is, we continuously have to engage in discussion with others and adapt our current understanding to a changing sociocultural environment.

As we can see, the notion of meaningful research satisfies all the conditions formulated by Gallie. Next, we discuss some of the implications [Gallie states them as outstanding questions in Gallie ( 1955 )]. The most obvious consequence is that once we have identified an essentially contested concept, we know that it may not be possible to find a single general principle or best use of it. However, that does not mean that it is impossible to do meaningful research on an individual level. Gallie asserts that for an individual there can very well be rational arguments to use a certain variant of a concept and even switch to another variant when the circumstances change. In an optimistic setting, a participant recognizing a concept as essentially contested allows them to respect a rival use rather than discrediting it. In turn, this could help raise a discussion about different aspects of such a concept to a higher level, acknowledging that different strands may actually help in advancing the whole concept. In a pessimistic setting, this may lead towards more aggressive behavior of a participant who may try to sideline other approaches after realizing that they cannot convince their adversaries by reasoning. This may result in a situation in which different parties revert to political campaigning to gain influence.

Concepts and conceptions

There is also a different school of thought postulating that the dispute identified by Gallie is actually caused by superimposing two different meanings in the term concept. On the one hand, there is the concept itself, which is an abstract and idealized notion of something, while, on the other hand, there are different conceptions, or instantiations, of this concept. Dworkin ( 1972 , 1978 ) uses “fairness” as an example and goes on to explain the important distinction between concept and conception: “members of [a] community who give instructions or set standards in the name of fairness may be doing two different things”. If they ask people to treat others fairly and do not give specific and detailed instructions, they use the idealized concept and it is up to each person to decide how to actually act fairly. On the other hand, they could formulate specific instructions on how to behave fairly; Dworkin mentions the application of the utilitarian ethics of Jeremy Bentham here. In this case a particular conception of fairness is used to instruct people. Dworkin emphasizes that this is a difference “not just in the detail ” of the instructions given but in the kind of instructions given. In the case of invoking the concept of fairness, the instructor does not attach particular value to their own views, i.e., they do not deem their views to be superior. When specifying a particular conception of fairness, this sends out the signal that the instructor believes that their views have a higher standing. Dworkin argues that “when I appeal to fairness I pose a moral issue; when I lay down my conception of fairness I try to answer it”.

Criley ( 2007 ) proposes that “a concept F simply is a cluster of norms providing standards for the correct employment of the corresponding linguistic term F ”. This raises the question which norms belong to or are part of a concept? For an essentially contested concept, this question can of course never be answered conclusively; otherwise, the concept would not be essentially contested. On the contrary, it is crucial for the understanding of essentially contested concepts that they always involve a normative dimension which cannot be laid down into fixed principles and guidelines. According to Criley, a conception is also a cluster of norms. However, in contrast to a concept the provided norms are much more concrete, resolving some of the vagueness or conflicts found in concept clusters, even up to the point of stating explicit rules. Criley ( 2007 ) goes on to discuss the relationship between concepts and conceptions: “Notice, however, that the point of having a distinction between concepts and conceptions becomes much clearer once we turn our attention to the possibility of distinct rival candidate conceptions of a concept. If we focus exclusively on those concepts that have a single, uncontroversial, determinate conception that is implicit in any thinker who is competent with respect to that concept, then it becomes hard to see the importance of a distinction between concepts and conceptions”.

We find this observation particularly useful when trying to understand the concept of meaningful research. As we have already seen in the last section, the concept of meaningful research is multi-faceted and highly complex. In their work, researchers are striving for something that cannot be defined conclusively, as meaningful research is an essentially contested concept. In essence, researchers follow different conceptions of meaningful research, depending on their research area and their personal experience. We would even expect a certain degree of rivalry between groups using different conceptions. Also, individual researchers might switch from one conception to another during their careers. Following the interpretation of Dworkin, encouraging researchers to do meaningful research by invoking the abstract concept hands the responsibility for how to achieve this to the individual researchers. If a certain conception of meaningful research is laid down, though, to describe what high-quality research looks like, then this particular conception is enforced as a standard, pushing researchers in a certain direction.

Multidimensionality of meaningful research

In a series of studies, meaningful, high-quality, or excellent research is characterized as a multidimensional concept. In particular, various empirical studies (Aksnes and Rip 2009; Bazeley 2010; Hug et al. 2013; Mårtensson et al. 2016) have revealed that “researchers’ conceptions of research quality [include] a multitude of notions [which] span from correctness, rigor, clarity, productivity, recognition, novelty, beauty, significance, autonomy, difficulty, and relevance to ethical/sustainable research” (Aksnes et al. 2019). Which of these dimensions of quality or meaningfulness is predominant in a specific assessment of research quality depends on the context of the assessment. Moreover, characterizing the concept of meaningful research as multidimensional implies that it has no abstract meaning detached from the specific dimensions. Rather, it should be viewed as a “boundary object that [...] offers some ground for constructive discussion via a shared framework” (Hellström 2011).

This characterization of meaningful research as multidimensional nicely fits within the conceptual framework developed in the previous sections: since meaningful research is essentially contested, researchers in practice adhere to different conceptions of meaningful research, which, in turn, emphasize and deemphasize different dimensions of research quality. The concept of meaningful research hence becomes meaningless if we abstract from these dimensions. However, characterizing meaningful research as essentially contested also implies that it is more than an umbrella term for the various, often pairwise incompatible dimensions of research quality. Rather, it entails that researchers following a specific conception of meaningful research must in principle be willing to defend their conception in discussion with others. Or put differently: a researcher who states that they simply follow their conception of meaningful research and does not care about what the community says or thinks about it is no longer involved in the progressive cooperation of doing research.

A conception of meaningful research

As a basis for discussion, let us briefly give a, necessarily inconclusive, description of our conception of meaningful research. For a start, we would like researchers to work on questions the answers to which would help solve major problems faced by society. Also, we would like researchers to attempt to solve the really challenging problems and not get sidetracked by minor details that have no or very low impact. We call such research practically useful . Identifying meaningful with practically useful research would be too narrow, though. It would rule out a lot of fundamental research that helps us gain a deeper understanding of a field. Someone reading about fundamental research of this kind should be intellectually stimulated or conceptually enriched by it. We call this research stimulating . Ideally, we would like research to be stimulating also for readers from a different research community, otherwise we could end up with closed research communities who develop very esoteric questions that thrill their members but are irrelevant or even unintelligible to the rest of the world. So, from our point of view, a promising starting point for characterizing meaningful research could be to require that it should be stimulating or practically useful.

Note that this description leaves out important dimensions such as ethics (according to our characterization, the Manhattan Project or a project on drone warfare would be classified as meaningful), as well as methodology (we assume that researchers adhere to sound and scientific methods). Moreover, recall again that our characterization of meaningful research as practically useful or stimulating should not be misread as an attempt at defining meaningful research. Rather, it should be understood as a starting point for a discussion in which stakeholders should engage continuously.

After discussing meaningful research from a theoretical point of view, we now turn to practical aspects of evaluating research, namely the use of metrics, which we view as proxy indicators, and the issues associated with them.

Quantitative metrics as proxy indicators

Metrics are quantitative measures used for evaluating work artifacts such as academic publications. We focus on publication- and citation-based metrics, which are very common (see Aagaard et al. 2015). We argue that these metrics are proxy indicators, i.e., indirect measures that need to be applied and interpreted carefully, and we propose a feedback-loop approach.

If we are right in characterizing meaningful research as essentially contested, then there cannot be a metric that captures it. A set of reasonably good proxy indicators is the best we can get. Moreover, proxy indicators target a particular conception instead of measuring the general concept. Trying to explicitly formulate the conception that forms the basis of an indicator adds context and helps in understanding what we are actually measuring. This also brings hidden conflicts between different conceptions into the open, and can be used as a starting point for discussions.

Publication- and citation-based metrics can be weighted based on the type of publication (journal article, book chapter, monograph, etc.), a quality rating of the publication venue (which can be decided by committee or taken from a trusted source such as the ISI Web of Science by Thomson Reuters), and the number of authors. Different weighting schemes are in use, and a lot has been written about how to implement them fairly, for instance by Aagaard et al. (2015) and Piro et al. (2013). It is plausible for publication- and citation-based metrics to be positively correlated with other metrics targeting research quality, and indeed this correlation has been shown to exist, e.g. by Jarwal et al. (2009). However, that study also showed that while there is a correlation, the bibliometric values they studied did not account for the full variance of the quality ratings assigned by independent peers. Michels and Schmoch (2014) have shown that citation counts are generally lower for journals published in non-English languages. In certain areas, such as linguistics and cultural studies, it makes perfect sense to publish in another language, but this says nothing about the quality of a publication. Additionally, as Nygaard and Bellanova (2017) point out, publication metrics crucially depend on the criteria for deciding what counts as a scientific publication and how it is valued. For example, among computer science researchers, conference papers, especially those published in top-tier conferences, are considered to be on par with journal papers, whereas designers and artists are often evaluated by their portfolios rather than their publications. Numerous other studies have argued that bibliometrics do not capture research quality fully or fairly [see Aksnes et al. (2019) as well as Grimson (2014) and the papers cited there] and, moreover, that bibliometric processes influence the object of their measurements; we come back to this in the “Current Trends” section, and Michels and Schmoch (2014) also discuss that effect.
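To make the idea of such a weighting scheme concrete, here is a minimal Python sketch; the publication types, venue ratings, weights, and records are invented for illustration and do not correspond to any particular evaluation system.

```python
# Toy illustration of a weighted publication score.
# Weights, venue ratings, and records are invented for this sketch.

TYPE_WEIGHTS = {"journal": 1.0, "conference": 0.8, "book_chapter": 0.7, "monograph": 3.0}
VENUE_WEIGHTS = {"A": 1.5, "B": 1.0, "C": 0.5}  # quality rating of the publication venue

def publication_score(pub_type, venue_rating, num_authors):
    """Weight a single publication by type, venue rating, and author count."""
    base = TYPE_WEIGHTS[pub_type] * VENUE_WEIGHTS[venue_rating]
    return base / num_authors  # fractional counting across authors

publications = [
    {"type": "journal", "venue": "A", "authors": 3},
    {"type": "conference", "venue": "B", "authors": 2},
    {"type": "book_chapter", "venue": "C", "authors": 1},
]

total = sum(publication_score(p["type"], p["venue"], p["authors"]) for p in publications)
print(f"Weighted publication score: {total:.2f}")
```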

The considerations above underline that metrics should be chosen and applied carefully. But what does this mean? First, we need to specify what we want to measure. It is very hard to define a metric if the goals we want to achieve are vague and unclear. For instance, in their report about the development of indicators for research excellence by the European Commission, Ferretti et al. (2018) highlight that, when asked to define research excellence, many stakeholders were not able to give an answer. This is not an ideal starting point for choosing or defining metrics. We also have to be aware that metrics are often applied by administrators or scientists rather than by bibliometricians. This is sometimes called citizen bibliometrics (Leydesdorff et al. 2016). According to Hammarfelt and Rushforth (2017), a considerable number of citizen bibliometricians are aware of the shortcomings and apply metrics thoughtfully. However, we believe that they could get better operational support, e.g. in the form of training, frameworks, and documented best practices. We agree with Wang and Schneider (2020), who, in the context of interdisciplinary measures, argue that “the operationalization of interdisciplinary measures in scientometric studies is relatively chaotic”.

Our recommendation is to evaluate quantitative metrics periodically to ensure that they are well chosen to meet requirements and that they are being applied correctly. This is the feedback-loop idea formulated by O’Neil (2016), who argues that we need to evaluate an algorithmic evaluation process itself from time to time to check whether it is (still) fit for purpose by comparing metric outcomes with other assessment techniques. This allows us to notice when metric results are wrong and to take corrective action. The idea of a feedback loop also appears in the context of concepts and conceptions: Criley (2007) calls this reflective equilibrium, which is “a method for inducing or restoring coherence between general principles and particular judgments through a process of mutual adjustment of the conflicting principles and particular judgments”. Proposing a feedback loop raises the question: how do we evaluate proxy indicators against a notion of quality? This implies that we need an alternative way of assessing how well a given piece of research matches our conception of research quality. This is difficult; some authors use independent peer review (e.g. Jarwal et al. 2009), but of course peer review itself is a fraught process (see Brezis and Birukou 2020; Krummel et al. 2019). We investigate test of time awards in the “An analysis of quantitative publication metrics” section; in some fields, retractions may also provide a useful signal. Rafols et al. (2012) argue for indicators that do not reduce the quality of research to a single number but provide contrasting perspectives. They call this opening up the decision-making process and argue that these indicators should be embedded into an assessment or policy context, so that they can be used to interpret the data and not as a substitute for judgment. We believe that such indicators could also be helpful in implementing a feedback loop.
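As a rough illustration of what such a feedback loop might look like in practice, the sketch below compares a metric-based score with an independent assessment (for instance, peer-review ratings) of the same papers using a rank correlation; all numbers and the alert threshold are invented, and a real implementation would of course be embedded in a much richer qualitative process.

```python
# Minimal feedback-loop check: compare a proxy indicator against an
# independent assessment (e.g., peer-review ratings) for the same papers.
# The numbers and the alert threshold are invented for illustration.
from scipy.stats import spearmanr

citation_scores = [120, 45, 300, 10, 75, 220, 5, 60]        # proxy indicator
peer_ratings    = [4.0, 3.5, 4.5, 2.0, 4.0, 3.0, 2.5, 3.5]  # alternative assessment

rho, p_value = spearmanr(citation_scores, peer_ratings)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.3f})")

# If the proxy drifts away from the independent assessment, flag it for review.
ALERT_THRESHOLD = 0.5
if rho < ALERT_THRESHOLD:
    print("Proxy indicator and independent assessment diverge: revisit the metric.")
```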

Moreover, we advocate more personal responsibility for researchers, creating space for them to apply their own intellectual judgment. Luhmann (2017) states that the two most important concepts making complex social systems feasible are trust and power. It follows that the alternative to putting faith in researchers is to have an authority deciding unilaterally and controlling the researchers’ actions. However, Pollitt (1993) points out that treating staff as “work units to be incentivized and measured” instead of as “people to be encouraged and developed” will lead to demoralized and demotivated employees by taking away their intrinsic motivation (cf. also Shore and Wright 2000). In their review on the evolution of performance management, Pulakos et al. (2019) come to the conclusion that “formal performance management processes disengage employees, cost millions, and have no impact on performance”. In the context of taxpayer honesty, Kucher and Götte (1998) have shown that observing tax laws and regulations is not just a matter of how strictly taxpayers are controlled. When taxpayers have trust in a government and in return are trusted by the government, taxpayers feel more obliged to follow the rules. Additionally, when people have the possibility to participate in a decision process and have some control over the outcomes, they are more willing to accept these decisions and outcomes. We believe that leveraging intrinsic motivation and allowing researchers to act more independently is the way to go, since demotivating staff will hardly result in meaningful research. Dance (2017) reports that a number of academics have already taken matters into their own hands by not joining traditional academia at all or by leaving and working as independent researchers.

An analysis of quantitative publication metrics

We want to round off our work by taking a closer look at some concrete numbers, i.e., by analyzing citation counts and investigating an alternative evaluation method. As two of the authors belong to the computer science research community, we decided to use the data set provided by the dblp computer science bibliography originally created by Ley (2009), which tracks all major computer science publications. At the time of writing, dblp indexed over 4.7 million publications. For our analysis, we used a version with citation numbers created for ArnetMiner by Tang et al. (2008) containing more than 3 million publications (Footnote 2). In a first step, we did some exploratory data analysis to get a better feel for the data and to check its consistency. Figure 1 shows the average and maximum citation counts for papers arranged by years since their publication. The older papers have a higher average citation count, but there is some bias here. First, they have been around longer, which means they have had more opportunities to be cited, and second, there are fewer older papers, so outliers have more influence. The smaller number of older papers is also visible in the graph for average citation counts: it is very smooth on the left-hand side and gets more and more ragged towards the right, as the number of papers in the denominator used for computing the average decreases with the years. Additionally, dblp did not collect old papers as systematically as newer ones, so it is likely that only influential old papers made it into the collection. While many papers reach their maximum citation count in the first 10 years after publication (Footnote 3), a considerable number of papers do not. Most evaluation schemes, such as the REF and the VQR, only look at papers published in the last 5–7 years, though, which means that they may overlook important publications. What we do not show in a graph here is the (unsurprising) fact that citations are not uniformly distributed: a small number of papers get the most citations.

Figure 1: Average versus maximum citations (by years since publication).
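The aggregation behind Figure 1 can be sketched roughly as follows; the column names and toy rows are assumptions made for illustration, whereas the actual analysis runs over the dblp/ArnetMiner dump via the Jupyter notebooks referenced in the footnotes.

```python
# Sketch of the aggregation behind Figure 1: average and maximum citation
# counts grouped by years since publication. Column names and toy data are
# assumptions; the real analysis uses the dblp/ArnetMiner dump.
import pandas as pd

papers = pd.DataFrame({
    "year":      [1985, 1992, 2001, 2001, 2010, 2010, 2015],
    "citations": [450,  80,   1200, 30,   95,   5,    40],
})

CURRENT_YEAR = 2020  # reference year assumed for the sketch
papers["years_since_publication"] = CURRENT_YEAR - papers["year"]

stats = (papers.groupby("years_since_publication")["citations"]
               .agg(avg_citations="mean", max_citations="max")
               .sort_index())
print(stats)
```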

For the next step, we asked ourselves how we could analyze the suitability of citations for determining the quality of publications and look at an alternative. The validation of metrics performed in Jarwal et al. (2009) uses independent peer reviews for this purpose. This is useful but labor-intensive; Tennant (2018) estimates that more than 2.5 million English-language research papers are published annually and that the rate is still increasing. Considering that usually two to three reviewers are needed for each reviewed manuscript, and that the number above reflects only accepted publications, i.e., the number of submissions going through the process is even higher, this puts an increasing burden on reviewers [sometimes referred to as reviewer fatigue; see Breuning et al. (2015) and Fox et al. (2017)]. Not only is there a lack of time: Tennant (2018) goes on to state that there is no proper incentivization for reviewers to do a good job. Moreover, even if a peer review is done to a high standard, some research needs time to be appreciated; a peer review performed now may come to different conclusions than one performed 5 or 10 years later.

A number of publication venues have introduced test of time awards, which retrospectively identify important and influential work done 10 or even 20 years ago. We find this an interesting approach by the scientific community to acknowledging quality in hindsight (Footnote 4). For our study, we selected the Test of Time Award of the International Conference on Very Large Data Bases (VLDB), a prestigious publication venue in the area of database research. Figure 2 shows the citation counts for the award winners compared to the average citation count of all VLDB papers.

Figure 2: VLDB Test of Time Award winners versus all VLDB papers. One paper has so many citations that including its peak would make the bottom third of the graph unreadable.

While award winners tend to be cited more often than the average VLDB paper, there are also award winners whose citation numbers are below the average. This shows that citation numbers are not the only criterion used by the committee for selecting the winners, otherwise the papers with the lower citation counts would not have received the award. This is also reflected by the number of times that the award winners were cited after receiving the award; for VLDB the award is usually given 10–12 years after publication and many papers still have a considerable number of citations after this period.
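The comparison behind Figure 2 can be sketched as follows; the paper records and citation counts are invented for illustration, while the real analysis uses the dblp-based data set described above.

```python
# Sketch of the Figure 2 comparison: citation counts of Test of Time Award
# winners versus the average VLDB paper of the same year. All numbers are
# invented for illustration.
vldb_papers = [
    {"title": "Paper A", "year": 2005, "citations": 900, "award": True},
    {"title": "Paper B", "year": 2005, "citations": 120, "award": False},
    {"title": "Paper C", "year": 2005, "citations": 60,  "award": False},
    {"title": "Paper D", "year": 2006, "citations": 40,  "award": True},
    {"title": "Paper E", "year": 2006, "citations": 150, "award": False},
]

by_year = {}
for p in vldb_papers:
    by_year.setdefault(p["year"], []).append(p)

for year, papers in sorted(by_year.items()):
    avg = sum(p["citations"] for p in papers) / len(papers)
    for winner in (p for p in papers if p["award"]):
        relation = "above" if winner["citations"] > avg else "below"
        print(f"{year}: award winner '{winner['title']}' has {winner['citations']} "
              f"citations, {relation} the yearly average of {avg:.0f}")
```

Note that in this toy data, one award winner falls below its yearly average, mirroring the observation that citation counts alone do not determine the award.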

While not conclusive, we believe this data supports the hypothesis that citation counts are not a complete measure of research quality on their own. We think that other indicators, such as test of time awards, should be investigated and that this would be a promising area for future research. However, since these awards provide recognition for research with a time delay, there are many contexts where they cannot be applied directly. They can be used for calibrating other proxy indicators, though.

Related initiatives

We are far from the only ones criticizing the effects of quantitative metrics on research and calling for improvements in the area of research evaluation. For instance, there are the San Francisco Declaration on Research Assessment (DORA) (DORA 2012), the Leiden Manifesto for research metrics (Hicks et al. 2015), and the Metric Tide report (Wilsdon et al. 2015).

While we agree with the sentiments expressed in DORA, we want to focus on the operationalization of research metrics, and we believe this aligns with arguments made in the Leiden Manifesto and the Metric Tide report. The authors of the Leiden Manifesto call for open and transparent high-quality processes in the context of decision-making, part of which is the regular scrutiny of indicators and their improvement if they are found lacking. This is in line with the feedback loop we are proposing. Other important points are that quantitative assessment should go hand in hand with qualitative evaluation and that it should consist of a suite of indicators reflecting different aspects of research. Similar points were made in the Metric Tide report and its notion of responsible metrics. The report noted that “there is potential for the scientometrics community to play a more strategic role in informing how quantitative indicators are used across the research system and by policymakers”. We think that a useful next step toward operationalizing bibliometric indicators is to propose and evaluate feedback-loop mechanisms. These mechanisms should not only consist of quantitative evaluation methods, but also include qualitative ones.

Conclusions and outlook

Applying purely metric-based indicators for evaluating research has the potential to lead to a dystopian research environment in the style of a “Black Mirror” episode. There is a hidden danger in just accepting certain indicators, such as citation numbers in Google Scholar or Scopus, because they are convenient. Many researchers are left with the impression that important decisions are taken over their heads and that they have no say in what is happening. This situation is at risk of devolving into a low-trust environment, which cannot be in anyone’s interest. In our view, over-reliance on indicators also hands too much power to the institutions managing these indicators and makes it difficult to introduce changes, locking in a particular conception of research. Stilgoe (2014) expressed this provocatively: “[research] excellence tells us nothing about how important the science is and everything about who decides”.

Given the diversity of academic disciplines, it is difficult to come up with a set of universally applicable methods for assessing research. Nevertheless, the question of what we want research to achieve has to be asked and discussed explicitly. Over the course of history, the role of universities has changed before: in the nineteenth century, for example, they were transformed from educational institutions into organizations that also pursue research (Willetts 2017). It seems that at the moment universities are undergoing another transformation, but it is far from clear where they are heading (Swain 2011).

Moreover, the discussion we are asking for is not a one-off event: because research is an essentially contested concept and the environment is constantly changing, this needs to be an ongoing public debate. As we expect quantitative metrics to stay with us as an evaluation tool for some time to come, we at least need more transparency in how they are created and which policies drive them.

Footnotes

1. Publications done by large international teams, which tend to have a large number of authors, as well as Chinese and Korean authors whose names could not be disambiguated, were excluded.

2. DBLP-Citation-network V10 is available at https://aminer.org/citation; the Jupyter notebooks used for the analysis can be found at https://github.com/kpaschen/spark/tree/master/dblp/jupyter.

3. Please note that the average and maximum citation counts use different scales. Clearly, the maximum count, which reaches ten for 80 years since publication, is always greater than the average count.

4. Note that we are still using publications as a proxy for scientific accomplishment here.

Aagaard, K., Bloch, C., & Schneider, J. W. (2015). Impacts of performance-based research funding systems: The case of the Norwegian publication indicator. Research Evaluation , 24 , 106–117.


Aksnes, D. W., Langfeldt, L., & Wouters, P. (2019). Citations, citation indicators, and research quality: An overview of basic concepts and theories. SAGE Open , 9 (1), 1–17.

Aksnes, D. W., & Rip, A. (2009). Researchers’ perceptions of citations. Research Policy , 38 (6), 895–905.

Armstrong, J. (2012). A question universities need to answer: Why do we research? https://theconversation.com/a-question-universities-need-to-answer-why-do-we-research-6230 .

Austin, R. D. (1996). Measuring and managing performance in organizations . New York: Dorset House Publishing.

Bazeley, P. (2010). Conceptualising research performance. Studies in Higher Education , 35 (8), 889–903.

Beall, J. (2012). Predatory publishers are corrupting open access. Nature News , 489 (7415), 179.

Breuning, M., Backstrom, J., Brannon, J., Gross, B. I., & Widmeier, M. (2015). Reviewer fatigue? Why scholars decline to review their peers’ work. Political Science and Politics , 48 (4), 595–600.

Brezis, E. S., & Birukou, A. (2020). Arbitrariness in the peer review process. Scientometrics , 123 , 393–411.

Broad, W. J. (1981). The publishing game: Getting more for less. Science , 211 (4487), 1137–1139.


Collier, D., Daniel Hidalgo, F., & Olivia Maciuceanu, A. (2006). Essentially contested concepts: Debates and applications. Journal of Political Ideologies , 11 (3), 211–246.

Criley, M. E. (2007). Contested concepts and competing conceptions . Ph.D. thesis, University of Pittsburgh.

Dance, A. (2017). Flexible working: Solo scientist. Nature , 543 , 747–749.

DORA. (2012). San Francisco declaration on research assessment . https://sfdora.org/ .

Dworkin, R. M. (1972). The jurisprudence of Richard Nixon. The New York Review of Books , 18 , 27–35.

Dworkin, R. M. (1978). Taking rights seriously: New impression with a reply to critics . Oxford: Duckworth.

Ferretti, F., Pereira, Â. G., Vértesy, D., & Hardeman, S. (2018). Research excellence indicators: Time to reimagine the ‘making of’? Science and Public Policy , 45 (5), 1–11.

Fox, C. W., Albert, A. Y. K., & Vines, T. H. (2017). Recruitment of reviewers is becoming harder at some journals: A test of the influence of reviewer fatigue at six journals in ecology and evolution. Research Integrity and Peer Review , 2 , 3.

Gallie, W. B. (1955). Essentially contested concepts. Proceedings of the Aristotelian Society , 56 , 167–198.

Gewin, V. (2012). Research: Uncovering misconduct. Nature , 485 , 137–139.

Grimson, J. (2014). Measuring research impact: Not everything that can be counted counts, and not everything that counts can be counted. In W. Blockmans, L. Engwall, & D. Weaire (Eds.), Bibliometrics: Use and abuse in the review of research performance (Wenner-Gren International Series, Vol. 87, pp. 29–41). London: Portland Press.

Hammarfelt, B., & Rushforth, A. D. (2017). Indicators as judgment devices: An empirical study of citizen bibliometrics in research evaluation. Research Evaluation , 26 (3), 169–180.

Hellström, T. (2011). Homing in on excellence: Dimensions of appraisal in center of excellence program evaluations. Evaluation , 17 (2), 117–131.

Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden manifesto for research metrics. Nature , 520 , 429–431.

Hug, S. E., Ochsner, M., & Daniel, H.-D. (2013). Criteria for assessing research quality in the humanities: A Delphi study among scholars of English literature, German literature and art history. Research Evaluation , 22 (5), 369–383.

Hvistendahl, M. (2013). China’s publication bazaar. Science , 342 (6162), 1035–1039.

Ioannidis, J. P. A., Klavans, R., & Boyack, K. W. (2018). Thousands of scientists publish a paper every five days. Nature , 561 , 167–169.

Jarwal, S. D., Brion, A. M., & King, M. L. (2009). Measuring research quality using the journal impact factor, citations and ’ranked journals’: Blunt instruments or inspired metrics? Journal of Higher Education Policy and Management , 31 , 289–300.

Krummel, M., Blish, C., Kuhns, M., Cadwell, K., Oberst, A., Goldrath, A., et al. (2019). Universal principled review: A community-driven method to improve peer review. Cell , 179 , 1441–1445.

Kucher, M., & Götte, L. (1998). Trust me—An empirical analysis of taxpayer honesty. Finanzarchiv , 55 (3), 429–444.

Ley, M. (2009). DBLP: Some lessons learned. Proceedings of the VLDB Endowment , 2 (2), 1493–1500.

Leydesdorff, L., Wouters, P., & Bornmann, L. (2016). Professional and citizen bibliometrics: Complementarities and ambivalences in the development and use of indicators–a state-of-the-art report. Scientometrics , 109 , 2129–2150.

Luhmann, N. (2017). Trust and power . Cambridge: Polity.

Mårtensson, P., Fors, U., Wallin, S.-B., Zander, U., & Nilsson, G. H. (2016). Evaluating research: A multidisciplinary approach to assessing research practice and quality. Research Policy , 45 (3), 593–603.

Michels, C., & Schmoch, U. (2014). Impact of bibliometric studies on the publication behaviour of authors. Scientometrics , 98 , 369–385.

Noorden, R. V. (2011). Science publishing: The trouble with retractions. Nature , 478 , 26–28.

Nygaard, L. P., & Bellanova, R. (2017). Lost in quantification: Scholars and the politics of bibliometrics. In M. J. Curry & T. Lillis (Eds.), Global academic publishing: Policies, perspectives and pedagogies (pp. 23–36). Bristol: Multilingual Matters.

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy . New York: Crown.


Parnas, D. L. (2007). Stop the numbers game. Communications of the ACM , 50 (11), 19–21.

Piro, F. N., Aksnes, D. W., & Rørstad, K. (2013). A macro analysis of productivity differences across fields: Challenges in the measurement of scientific publishing. Journal of the American Society for Information Science and Technology , 64 , 307–320.

Pollitt, C. (1993). Managerialism and the public services: Cuts or cultural change in the 1990s? . Oxford: Blackwell.

Pulakos, E. D., Mueller-Hanson, R., & Arad, S. (2019). The evolution of performance management: Searching for value. Annual Review of Organizational Psychology and Organizational Behavior , 6 (1), 249–271.

Rafols, I., Ciarli, T., van Zwanenberg, P., & Stirling, A. (2012). Towards indicators for ‘opening up’ science and technology policy. In The internet, policy and politics conference 2012 . Oxford, UK.

Royal Society. (2017). Research culture embedding inclusive excellence . https://royalsociety.org/-/media/policy/Publications/2018/research-culture-workshop-report.pdf .

Shore, C., & Wright, S. (2000). Coercive accountability: The rise of audit culture in higher education (pp. 57–89). London: Routledge.

Singh, G. (2014). Recognition and the image of mastery as themes in Black Mirror (Channel 4, 2011–present): An eco-Jungian approach to ‘always on’ culture. International Journal of Jungian Studies , 6 , 120–132.

Steen, R. G., Casadevall, A., & Fang, F. C. (2013). Why has the number of scientific retractions increased? PLoS ONE , 8 (7), e68397.

Stilgoe, J. (2014). Against excellence . https://www.theguardian.com/science/political-science/2014/dec/19/against-excellence .

Swain, H. (2011). What are universities for? https://www.theguardian.com/education/2011/oct/10/higher-education-purpose .

Tang, J., Zhang, J., Yao, L., Li, J., Zhang, L., & Su, Z. (2008). ArnetMiner: Extraction and mining of academic social networks. In Proceedings of the 14th ACM international conferecne on knowledge discovery and data mining (SIGKDD’08), Las Vegas, Nevada, (pp. 990–998).

Tennant, J. P. (2018). The state of the art in peer review. FEMS Microbiology Letters , 365 (19), 204.

Troullinou, P., d’Aquin, M., & Tiddi, I. (2018). Re-coding Black Mirror: Chairs’ welcome & organization. In Companion of the Web Conference (WWW ’18) (pp. 1527–1528). Lyon, France.

Wang, Q., & Schneider, J. W. (2020). Consistency and validity of interdisciplinary measures. Quantitative Science Studies , 1 (1), 239–263.

Weingart, P. (2005). Impact of bibliometrics upon the science system: Inadvertent consequences? Scientometrics , 62 (1), 117–131.

Willetts, D. (2017). A university education . New York: Oxford University Press.

Wilsdon, J., Allen, L., Belfiore, E., Campbell, P., Curry, S., Hill, S., Jones, R., Kain, R., Kerridge, S., Thelwall, M., Tinkler, J., Viney, I., Wouters, P., Hill, J., & Johnson, B. (2015). The metric tide: Report of the independent review of the role of metrics in research assessment and management . HEFCE.


Acknowledgements

Open access funding provided by University of Zurich. We would like to thank an anonymous reviewer for very helpful comments and many pointers to relevant literature.

Author information

Authors and affiliations.

Department of Informatics, University of Zurich, Zurich, Switzerland

Sven Helmer

Chair of Experimental Bioinformatics, Technical University of Munich, Freising, Germany

David B. Blumenthal

Nephometrics GmbH, Zurich, Switzerland

Kathrin Paschen


Corresponding author

Correspondence to Sven Helmer .

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Helmer, S., Blumenthal, D.B. & Paschen, K. What is meaningful research and how should we measure it?. Scientometrics 125 , 153–169 (2020). https://doi.org/10.1007/s11192-020-03649-5


Received: 14 November 2019

Published: 05 August 2020

Issue Date: October 2020

DOI: https://doi.org/10.1007/s11192-020-03649-5


Keywords: Research evaluation, Quantitative metrics


How to Write a Research Paper | A Beginner's Guide

A research paper is a piece of academic writing that provides analysis, interpretation, and argument based on in-depth independent research.

Research papers are similar to academic essays , but they are usually longer and more detailed assignments, designed to assess not only your writing skills but also your skills in scholarly research. Writing a research paper requires you to demonstrate a strong knowledge of your topic, engage with a variety of sources, and make an original contribution to the debate.

This step-by-step guide takes you through the entire writing process, from understanding your assignment to proofreading your final draft.

Table of contents

  • Understand the assignment
  • Choose a research paper topic
  • Conduct preliminary research
  • Develop a thesis statement
  • Create a research paper outline
  • Write a first draft of the research paper
  • Write the introduction
  • Write a compelling body of text
  • Write the conclusion
  • The second draft
  • The revision process
  • Research paper checklist
  • Free lecture slides

Completing a research paper successfully means accomplishing the specific tasks set out for you. Before you start, make sure you thoroughly understand the assignment task sheet:

  • Read it carefully, looking for anything confusing you might need to clarify with your professor.
  • Identify the assignment goal, deadline, length specifications, formatting, and submission method.
  • Make a bulleted list of the key points, then go back and cross completed items off as you’re writing.

Carefully consider your timeframe and word limit: be realistic, and plan enough time to research, write, and edit.


There are many ways to generate an idea for a research paper, from brainstorming with pen and paper to talking it through with a fellow student or professor.

You can try free writing, which involves taking a broad topic and writing continuously for two or three minutes to identify absolutely anything relevant that could be interesting.

You can also gain inspiration from other research. The discussion or recommendations sections of research papers often include ideas for other specific topics that require further examination.

Once you have a broad subject area, narrow it down to choose a topic that interests you, meets the criteria of your assignment, and is possible to research. Aim for ideas that are both original and specific:

  • A paper following the chronology of World War II would not be original or specific enough.
  • A paper on the experience of Danish citizens living close to the German border during World War II would be specific and could be original enough.

Note any discussions that seem important to the topic, and try to find an issue that you can focus your paper around. Use a variety of sources , including journals, books, and reliable websites, to ensure you do not miss anything glaring.

Do not only verify the ideas you have in mind, but look for sources that contradict your point of view.

  • Is there anything people seem to overlook in the sources you research?
  • Are there any heated debates you can address?
  • Do you have a unique take on your topic?
  • Have there been some recent developments that build on the extant research?

In this stage, you might find it helpful to formulate some research questions to help guide you. To write research questions, try to finish the following sentence: “I want to know how/what/why…”

A thesis statement is a statement of your central argument — it establishes the purpose and position of your paper. If you started with a research question, the thesis statement should answer it. It should also show what evidence and reasoning you’ll use to support that answer.

The thesis statement should be concise, contentious, and coherent. That means it should briefly summarize your argument in a sentence or two, make a claim that requires further evidence or analysis, and make a coherent point that relates to every part of the paper.

You will probably revise and refine the thesis statement as you do more research, but it can serve as a guide throughout the writing process. Every paragraph should aim to support and develop this central claim.


A research paper outline is essentially a list of the key topics, arguments, and evidence you want to include, divided into sections with headings so that you know roughly what the paper will look like before you start writing.

A structure outline can help make the writing process much more efficient, so it’s worth dedicating some time to create one.

Your first draft won’t be perfect — you can polish later on. Your priorities at this stage are as follows:

  • Maintaining forward momentum — write now, perfect later.
  • Paying attention to clear organization and logical ordering of paragraphs and sentences, which will help when you come to the second draft.
  • Expressing your ideas as clearly as possible, so you know what you were trying to say when you come back to the text.

You do not need to start by writing the introduction. Begin where it feels most natural for you — some prefer to finish the most difficult sections first, while others choose to start with the easiest part. If you created an outline, use it as a map while you work.

Do not delete large sections of text. If you begin to dislike something you have written or find it doesn’t quite fit, move it to a different document, but don’t lose it completely — you never know if it might come in useful later.

Paragraph structure

Paragraphs are the basic building blocks of research papers. Each one should focus on a single claim or idea that helps to establish the overall argument or purpose of the paper.

Example paragraph

George Orwell’s 1946 essay “Politics and the English Language” has had an enduring impact on thought about the relationship between politics and language. This impact is particularly obvious in light of the various critical review articles that have recently referenced the essay. For example, consider Mark Falcoff’s 2009 article in The National Review Online, “The Perversion of Language; or, Orwell Revisited,” in which he analyzes several common words (“activist,” “civil-rights leader,” “diversity,” and more). Falcoff’s close analysis of the ambiguity built into political language intentionally mirrors Orwell’s own point-by-point analysis of the political language of his day. Even 63 years after its publication, Orwell’s essay is emulated by contemporary thinkers.

Citing sources

It’s also important to keep track of citations at this stage to avoid accidental plagiarism . Each time you use a source, make sure to take note of where the information came from.

You can use our free citation generators to automatically create citations and save your reference list as you go.


The research paper introduction should address three questions: What, why, and how? After finishing the introduction, the reader should know what the paper is about, why it is worth reading, and how you’ll build your arguments.

What? Be specific about the topic of the paper, introduce the background, and define key terms or concepts.

Why? This is the most important, but also the most difficult, part of the introduction. Try to provide brief answers to the following questions: What new material or insight are you offering? What important issues does your essay help define or answer?

How? To let the reader know what to expect from the rest of the paper, the introduction should include a “map” of what will be discussed, briefly presenting the key elements of the paper in chronological order.

The major struggle faced by most writers is how to organize the information presented in the paper, which is one reason an outline is so useful. However, remember that the outline is only a guide and, when writing, you can be flexible with the order in which the information and arguments are presented.

One way to stay on track is to use your thesis statement and topic sentences . Check:

  • topic sentences against the thesis statement;
  • topic sentences against each other, for similarities and logical ordering;
  • and each sentence against the topic sentence of that paragraph.

Be aware of paragraphs that seem to cover the same things. If two paragraphs discuss something similar, they must approach that topic in different ways. Aim to create smooth transitions between sentences, paragraphs, and sections.

The research paper conclusion is designed to help your reader out of the paper’s argument, giving them a sense of finality.

Trace the course of the paper, emphasizing how it all comes together to prove your thesis statement. Give the paper a sense of finality by making sure the reader understands how you’ve settled the issues raised in the introduction.

You might also discuss the more general consequences of the argument, outline what the paper offers to future students of the topic, and suggest any questions the paper’s argument raises but cannot or does not try to answer.

You should not :

  • Offer new arguments or essential information
  • Take up any more space than necessary
  • Begin with stock phrases that signal you are ending the paper (e.g. “In conclusion”)

There are four main considerations when it comes to the second draft.

  • Check how your vision of the paper lines up with the first draft and, more importantly, that your paper still answers the assignment.
  • Identify any assumptions that might require (more substantial) justification, keeping your reader’s perspective foremost in mind. Remove these points if you cannot substantiate them further.
  • Be open to rearranging your ideas. Check whether any sections feel out of place and whether your ideas could be better organized.
  • If you find that old ideas do not fit as well as you anticipated, you should cut them out or condense them. You might also find that new and well-suited ideas occurred to you during the writing of the first draft — now is the time to make them part of the paper.

The goal during the revision and proofreading process is to ensure you have completed all the necessary tasks and that the paper is as well-articulated as possible.

Global concerns

  • Confirm that your paper completes every task specified in your assignment sheet.
  • Check for logical organization and flow of paragraphs.
  • Check paragraphs against the introduction and thesis statement.

Fine-grained details

Check the content of each paragraph, making sure that:

  • each sentence helps support the topic sentence.
  • no unnecessary or irrelevant information is present.
  • all technical terms your audience might not know are identified.

Next, think about sentence structure , grammatical errors, and formatting . Check that you have correctly used transition words and phrases to show the connections between your ideas. Look for typos, cut unnecessary words, and check for consistency in aspects such as heading formatting and spellings .

Finally, you need to make sure your paper is correctly formatted according to the rules of the citation style you are using. For example, you might need to include an MLA heading  or create an APA title page .


Checklist: Research paper

I have followed all instructions in the assignment sheet.

My introduction presents my topic in an engaging way and provides necessary background information.

My introduction presents a clear, focused research problem and/or thesis statement .

My paper is logically organized using paragraphs and (if relevant) section headings .

Each paragraph is clearly focused on one central idea, expressed in a clear topic sentence .

Each paragraph is relevant to my research problem or thesis statement.

I have used appropriate transitions  to clarify the connections between sections, paragraphs, and sentences.

My conclusion provides a concise answer to the research question or emphasizes how the thesis has been supported.

My conclusion shows how my research has contributed to knowledge or understanding of my topic.

My conclusion does not present any new points or information essential to my argument.

I have provided an in-text citation every time I refer to ideas or information from a source.

I have included a reference list at the end of my paper, consistently formatted according to a specific citation style .

I have thoroughly revised my paper and addressed any feedback from my professor or supervisor.

I have followed all formatting guidelines (page numbers, headers, spacing, etc.).




Research: Definition, Characteristics, Goals, Approaches

Research is an original and systematic investigation undertaken to increase existing knowledge and understanding of the unknown to establish facts and principles.

Let’s understand research:

What is Research?

Research is a voyage of discovery of new knowledge. It comprises creating ideas and generating new knowledge that leads to new and improved insights and the development of new materials, devices, products, and processes.

It should have the potential to produce sufficiently relevant results to increase and synthesize existing knowledge or correct and integrate previous knowledge.

Good reflective research produces theories and hypotheses and benefits any intellectual attempt to analyze facts and phenomena.

Where did the word Research Come from?

The word ‘research’ perhaps originates from the old French word “recerchier,” which meant ‘to search again.’ It implicitly assumes that the earlier search was not exhaustive and complete; hence, a repeated search is called for.

In practice, ‘research’ refers to a scientific process of generating an unexplored horizon of knowledge, aiming at discovering or establishing facts, solving a problem, and reaching a decision. Keeping the above points in view, we arrive at the following definition of research:

Research Definition

Research is a scientific approach to answering a research question , solving a research problem, or generating new knowledge through a systematic and orderly collection, organization, and analysis of data to make research findings useful in decision-making.

When do we call research scientific? Any research endeavor is said to be scientific if

  • It is based on empirical and measurable evidence subject to specific principles of reasoning;
  • It consists of systematic observations, measurement, and experimentation;
  • It relies on the application of scientific methods and harnessing of curiosity;
  • It provides scientific information and theories for the explanation of nature;
  • It makes practical applications possible, and
  • It ensures adequate analysis of data employing rigorous statistical techniques.

The chief characteristic that distinguishes the scientific method from other methods of acquiring knowledge is that scientists seek to let reality speak for itself, supporting a theory when a theory’s predictions are confirmed and challenging a theory when its predictions prove false.

Scientific research has multidimensional functions, characteristics, and objectives.

Keeping these issues in view, we assert that research in any field or discipline:

  • Attempts to solve a research problem;
  • Involves gathering new data from primary or first-hand sources or using existing data for a new purpose;
  • Is based upon observable experiences or empirical evidence;
  • Demands accurate observation and description;
  • Employs carefully designed procedures and rigorous analysis;
  • Attempts to find an objective, unbiased solution to the problem and takes great pains to validate the methods employed;
  • Is a deliberate and unhurried activity that is directional but often refines the problem or questions as the research progresses.

Characteristics of Research

Keeping this in mind that research in any field of inquiry is undertaken to provide information to support decision-making in its respective area, we summarize some desirable characteristics of research:

  • The research should focus on priority problems.
  • The research should be systematic. It emphasizes that a researcher should employ a structured procedure.
  • The research should be logical. Without manipulating ideas logically, the scientific researcher cannot make much progress in any investigation.
  • The research should be reductive. This means that one researcher’s findings should be made available to other researchers to prevent them from repeating the same research.
  • The research should be replicable. This asserts that there should be scope to confirm previous research findings in a new environment and different settings with a new group of subjects or at a different point in time.
  • The research should be generative. This is one of the valuable characteristics of research because answering one question leads to generating many other new questions.
  • The research should be action-oriented. In other words, it should be aimed at solving a problem and implementing its findings.
  • The research should follow an integrated multidisciplinary approach, i.e., research approaches from more than one discipline are needed.
  • The research should be participatory, involving all parties concerned (from policymakers down to community members) at all stages of the study.
  • The research must be relatively simple, timely, and time-bound, employing a comparatively simple design.
  • The research must be as cost-effective as possible.
  • The research results should be presented in formats most useful for administrators, decision-makers, business managers, or community members.

3 Basic Operations of Research

Scientific research in any field of inquiry involves three basic operations:

  • Data collection;
  • Data analysis;
  • Report writing .


  • Data collection refers to observing, measuring, and recording data or information.
  • Data analysis, on the other hand, refers to arranging and organizing the collected data so that we may be able to find out what their significance is and generalize about them.
  • Report writing is the ultimate step of the study. Its purpose is to convey the information contained in it to the readers or audience.

If you note down, for example, the reading habit of newspapers of a group of residents in a community, that would be your data collection.

If you then divide these residents into three categories, ‘regular,’ ‘occasional,’ and ‘never,’ you have performed a simple data analysis. Your findings may now be presented in a report form.

A reader of your report knows what percentage of the community people never read any newspaper and so on.
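A minimal Python sketch of these three operations for the newspaper example might look as follows; the residents’ names and reading habits are, of course, invented.

```python
# The three basic operations on the newspaper-reading example:
# data collection, data analysis, and (a very small) report.
# Names and reading habits are invented for illustration.
from collections import Counter

# 1. Data collection: record each resident's reading habit.
observations = {
    "Rahim": "regular", "Karim": "never", "Sultana": "occasional",
    "Fatema": "regular", "Jamal": "never", "Nasrin": "regular",
}

# 2. Data analysis: arrange the data into the three categories.
counts = Counter(observations.values())
total = len(observations)

# 3. Report writing: convey the findings to the reader.
for category in ("regular", "occasional", "never"):
    share = 100 * counts.get(category, 0) / total
    print(f"{category:>10}: {counts.get(category, 0)} residents ({share:.0f}%)")
```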

Here are some examples that demonstrate what research is:

  • A farmer is planting two varieties of jute side by side to compare yields;
  • A sociologist examines the causes and consequences of divorce;
  • An economist is looking at the interdependence of inflation and foreign direct investment;
  • A physician is experimenting with the effects of multiple uses of disposable insulin syringes in a hospital;
  • A business enterprise is examining the effects of advertisement of their products on the volume of sales;
  • An economist is doing a cost-benefit analysis of reducing the sales tax on essential commodities;
  • The Bangladesh Bank is closely observing and monitoring the performance of nationalized and private banks;
  • Based on some prior information, Bank Management plans to open new counters for female customers.
  • Supermarket Management is assessing the satisfaction level of the customers with their products.

The above examples are all research, whether the instrument is an electron microscope, hospital records, a microcomputer, a questionnaire , or a checklist.

Research Motivation – What makes one motivated to do research?

A person may be motivated to undertake research activities because

  • He might have genuine interest and curiosity in the existing body of knowledge and understanding of the problem;
  • He is looking for answers to questions that have remained unanswered so far and trying to unfold the truth;
  • The existing tools and techniques are accessible to him, and others may need modification and change to suit the current needs.

One might also undertake research to secure:

  • Better livelihood;
  • Better career development;
  • Higher position, prestige, and dignity in society;
  • Academic achievement leading to higher degrees;
  • Self-gratification.

At the individual level, the results of the research are used by many:

  • A villager is drinking water from an arsenic-free tube well;
  • A rural woman is giving more green vegetables to her child than before;
  • A cigarette smoker is actively considering quitting smoking;
  • An old man is jogging for cardiovascular fitness;
  • A sociologist is using newly suggested tools and techniques in poverty measurement.

The above activities are all outcomes of the research.

All involved in the above processes will benefit from the research results. There is hardly any action in everyday life that does not depend upon previous research.

Research in any field of inquiry provides us with the knowledge and skills to solve problems and meet the challenges of a fast-paced decision-making environment.

9 Qualities of Research

Good research generates dependable data. It is conducted by professionals and can be used reliably for decision-making. It is thus of crucial importance that research be acceptable to its intended audience, and for this, it should possess some desirable qualities.

The 9 qualities of good research are:

  • Purpose clearly defined
  • Research process detailed
  • Research design thoroughly planned
  • Ethical issues considered
  • Limitations revealed
  • Adequate analysis ensured
  • Findings unambiguously presented
  • Conclusions and recommendations justified
  • Researcher’s experience reflected

We enumerate below a few qualities that good research should possess.

Good research must have its purposes clearly and unambiguously defined.

The problem involved or the decision to be made should be sharply delineated as clearly as possible to demonstrate the credibility of the research.

The research procedures should be described in sufficient detail to permit other researchers to repeat the research later.

Failure to do so makes it difficult or impossible to estimate the validity and reliability of the results. This weakens the confidence of the readers.

Any recommendations from such research justifiably get little attention from policymakers and implementers.

The procedural design of the research should be carefully planned to yield results that are as objective as possible.

In doing so, care must be taken so that the sample’s representativeness is ensured, relevant literature has been thoroughly searched, experimental controls, whenever necessary, have been followed, and the personal bias in selecting and recording data has been minimized.

A research design should always safeguard against causing mental and physical harm not only to the participants but also those who belong to their organizations.

Careful consideration must also be given to research situations when there is a possibility for exploitation, invasion of privacy, and loss of dignity of all those involved in the study.

The researcher should report with complete honesty and frankness any flaws in procedural design; he followed and provided estimates of their effects on the findings.

This enhances the readers’ confidence and makes the report acceptable to the audience. One can legitimately question the value of research where no limitations are reported.

Adequate analysis reveals the significance of the data and helps the researcher to check the reliability and validity of his estimates.

Data should, therefore, be analyzed with proper statistical rigor to assist the researcher in reaching firm conclusions.

When statistical methods have been employed, the probability of error should be estimated, and criteria of statistical significance applied.
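As a small, hypothetical illustration of applying such a criterion, the sketch below estimates the probability of error with an independent-samples t-test; the groups, values, and significance level are invented, and the t-test is only one of many possible tests.

```python
# Illustration of estimating the probability of error and applying a
# criterion of statistical significance. Groups and values are invented.
from scipy.stats import ttest_ind

group_a = [12.1, 11.8, 13.0, 12.5, 11.9, 12.7]
group_b = [10.9, 11.2, 10.5, 11.0, 11.4, 10.8]

statistic, p_value = ttest_ind(group_a, group_b)
ALPHA = 0.05  # pre-specified significance level

print(f"t = {statistic:.2f}, p = {p_value:.4f}")
if p_value < ALPHA:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No statistically significant difference at the 5% level.")
```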

The presentation of the results should be comprehensive, easily understood by the readers, and organized so that the readers can readily locate the critical and central findings.

Proper research always specifies the conditions under which the research conclusions seem valid.

Therefore, it is important that any conclusions drawn and recommendations made should be solely based on the findings of the study.

No inferences or generalizations should be made beyond the data. If this is not followed, the objectivity of the research tends to decrease, resulting in reduced confidence in the findings.

Finally, the researcher’s experience should be reflected in the report.

The research report should contain information about the qualifications of the researchers .

If the researcher is experienced, has a good reputation in research, and is a person of integrity, his report is likely to be highly valued. The policymakers feel confident in implementing the recommendations made in such reports.

4 Goals of Research


The primary goal or purpose of research in any field of inquiry is to add to what is known about the phenomenon under investigation by applying scientific methods. Though each research project has its own specific goals, we may enumerate the following 4 broad goals of scientific research:

  • Exploration (explorative research)
  • Description (descriptive research)
  • Causal explanation (causal research)
  • Prediction (predictive research)

Each of these goals is linked to the kinds of questions raised in reaching it.

Let’s try to understand the 4 goals of the research.

Exploration is finding out about some previously unexamined phenomenon. In other words, an explorative study structures and identifies new problems.

The explorative study aims to gain familiarity with a phenomenon or gain new insights into it.

Exploration is particularly useful when researchers lack a clear idea of the problems they meet during their study.

Through exploration, researchers attempt to

  • Develop concepts more clearly;
  • Establish priorities among several alternatives;
  • Develop operational definitions of variables;
  • Formulate research hypotheses and sharpen research objectives;
  • Improve the methodology and modify (if needed) the research design .

Exploration is achieved through what we call exploratory research.

The end of an explorative study comes when the researchers are convinced that they have established the major dimensions of the research task.

Many research activities consist of gathering information on some topic of interest. Description refers to these data-based information-gathering activities. Descriptive studies portray precisely the characteristics of a particular individual, situation, or group.

Here, we attempt to describe situations and events through studies, which we refer to as descriptive research.

Such research is undertaken when much is known about the problem under investigation.

Descriptive studies try to discover answers to the questions of who, what, when, where, and sometimes how.

Such research studies may involve the collection of data and the creation of a distribution of the number of times the researcher observes a single event or characteristic, known as a research variable.

A descriptive study may also involve the interaction of two or more variables and attempt to observe whether there is any relationship between the variables under investigation.

Research that examines such a relationship is sometimes called a correlational study. It is correlational because it attempts to relate (i.e., co-relate) two or more variables.
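
For illustration only, the short R sketch below builds a frequency distribution for a single research variable and then co-relates two variables; the data are simulated and the variable names (years of experience and initial salary) are hypothetical, chosen only to echo the kinds of questions listed below.

```r
# Illustrative only: simulated data for a simple descriptive and correlational analysis.
set.seed(1)
experience <- round(runif(100, min = 0, max = 20))                        # years of prior experience (simulated)
salary     <- 30000 + 800 * experience + rnorm(100, mean = 0, sd = 3000)  # initial salary (simulated)

# Frequency distribution of a single research variable
table(cut(experience, breaks = 4))

# Correlational study: relate (co-relate) the two variables
cor.test(experience, salary)   # Pearson correlation with a significance test
```

A significant correlation of this kind describes an association only; as noted below, descriptive and correlational findings do not by themselves establish that one variable causes the other.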

A descriptive study may be feasible to answer the questions of the following types:

  • What are the characteristics of the people who are involved in city crime? Are they young? Middle-aged? Poor? Muslim? Educated?
  • Who are the potential buyers of the new product? Men or women? Urban people or rural people?
  • Are rural women more likely to marry earlier than their urban counterparts?
  • Does previous experience help an employee to get a higher initial salary?

Although the data description in descriptive research is factual, accurate, and systematic, the research cannot describe what caused a situation.

Thus, descriptive research cannot be used to establish a causal relationship in which one variable affects another.

In other words, descriptive research can be said to have a low requirement for internal validity . In sum, descriptive research deals with everything that can be counted and studied.

But there are limits to what description alone can achieve, because research should ultimately impact the lives of the people around us.

For example, finding the most frequent disease that affects the people of a community falls under descriptive research.

But readers of the research will want to know why this has happened and what can be done to prevent the disease so that more people can live healthy lives.

This calls for a causal explanation of the situation under reference and, accordingly, for a causal study, i.e., causal research.

Explanation reveals why and how something happens.

An explanatory study goes beyond description and attempts to establish a cause-and-effect relationship between variables. It explains the reason for the phenomenon that the descriptive study observed.

Thus, if a researcher finds that communities with larger family sizes have higher child deaths or that smoking correlates with lung cancer, he is performing a descriptive study.

If he explains why it is so and tries to establish a cause-and-effect relationship, he is performing explanatory or causal research. The researcher uses theories, or at least hypotheses, to account for the factors that caused a certain phenomenon.

Look at the following examples that fit causal studies:

  • Why are people involved in crime? Can we explain this as a consequence of the present job market crisis or lack of parental care?
  • Will the buyers be motivated to purchase the new product in a new container? Can an attractive advertisement motivate them to buy a new product?
  • Why has the share market shown the steepest-ever fall in stock prices? Is it because of the IMF’s warnings and prescriptions on the commercial banks’ exposure to the stock market or because of an abundant increase in the supply of new shares?

Prediction seeks to answer when and in what situations an event will occur, provided we can give a plausible explanation for the event in question.

However, the precise nature of the relationship between explanation and prediction has been a subject of debate.

One view is that explanation and prediction are the same phenomena, except that prediction precedes the event while the explanation takes place after the event has occurred.

Another view is that explanation and prediction are fundamentally different processes.

We need not be concerned with this debate here but can simply state that in addition to being able to explain an event after it has occurred, we would also be able to predict when it will occur.

Research Approaches

There are two main approaches to doing research.

The first is the basic approach, which mostly pertains to academic research. Many people view this as pure research or fundamental research.

The research implemented through the second approach is variously known as applied research, action research, operations research, or contract research.

Also, the third category of research, evaluative research, is important in many applications. All these approaches have different purposes influencing the nature of the respective research.

Lastly, precautions in research are required for thorough research.

So, 4 research approaches are;

  • Basic Research .
  • Applied Research .
  • Evaluative Research .
  • Precautions in Research.

Areas of Research

The most important fields or areas of research, among others, are;

  • Social Research .
  • Health Research .
  • Population Research .
  • Business Research .
  • Marketing Research .
  • Agricultural Research .
  • Biomedical Research.
  • Clinical Research .
  • Outcomes Research.
  • Internet Research.
  • Archival Research.
  • Empirical Research.
  • Legal Research .
  • Education Research .
  • Engineering Research .
  • Historical Research.

Check out our article describing all 16 areas of research .

Precautions in Research

Whether a researcher is doing applied or basic research or research of any other form, he or she must take necessary precautions to ensure that the research he or she is doing is relevant, timely, efficient, accurate, and ethical .

  • Research is considered relevant if it anticipates the kinds of information that decision-makers, scientists, or policymakers will require.
  • Research is timely when it is completed in time to influence decisions.
  • Research is efficient when it is of the best quality for the minimum expenditure and the study is appropriate to the research context.
  • Research is considered accurate or valid when the interpretation can account for both consistencies and inconsistencies in the data.
  • Research is ethical when it can promote trust, exercise care, ensure standards, and protect the rights of the participants in the research process.

What is the definition of research?

Research is a systematic inquiry that applies scientific methods to add to what is known about the phenomenon under investigation.

What are the characteristics of good research?

Good research should be systematic, logical, reductive, replicable, generative, action-oriented, integrated, participatory, simple, timely, cost-effective, and presented in formats useful for decision-makers.

What are the three basic operations involved in scientific research?

The three basic operations of scientific research are data collection, data analysis, and report writing.

What are the four broad goals of scientific research?

The four broad goals of scientific research are exploration, description, causal explanation, and prediction.

What distinguishes the scientific method from other methods of acquiring knowledge?

The chief characteristic that distinguishes the scientific method from other methods is that scientists seek to let reality speak for itself, supporting a theory when its predictions are confirmed and challenging a theory when its predictions prove false.

What is the origin of the word ‘research’?

The word ‘research’ is believed to originate from the old French word “recerchier,” which means to ‘search again.’ This implies that an initial search was not exhaustive or complete, necessitating a repeated search.

How is “research methodology” defined?

Research methodology is a way to systematically study the various steps adopted by a researcher in studying research problems. It encompasses the logic, assumptions, justification, and rationale behind the chosen methods. It seeks to answer questions related to the choice of research methods, the definition of the research problem, the formulation of hypotheses, and the techniques used for data analysis.

How does research methodology ensure the appropriateness of a research method?

Research methodology ensures that the correct procedures are employed to address research problems. It provides the justification for the choice of a particular research method over others and explains the logic behind such choices.

After discussing the research definition and knowing the characteristics, goals, and approaches, it’s time to delve into the research fundamentals. For a comprehensive understanding, refer to our detailed research and methodology concepts guide .

Research should be relevant, timely, efficient, accurate, and ethical. It should anticipate the information required by decision-makers, be completed in time to influence decisions, be of the best quality for the minimum expenditure, and protect the rights of participants in the research process.

The two main approaches to research are the basic approach, often viewed as pure or fundamental research, and the applied approach, which includes action research, operations research, and contract research.

Having touched upon research, take next steps with our comprehensive resources on research and research methodology concepts .


The Most Important Research Skills (With Examples)


Research skills are the ability to find out accurate information on a topic. They include being able to determine the data you need, find and interpret those findings, and then explain that to others. Being able to do effective research is a beneficial skill in any profession, as data and research inform how businesses operate.

Whether you’re unsure of your research skills or are looking for ways to improve them further, this article will cover important research skills and how to become even better at research.

Key Takeaways

Having strong research skills can help you understand your competitors, develop new processes, and build your professional skills in addition to aiding you in finding new customers and saving your company money.

Some of the most valuable research skills you can have include goal setting, data collection, and analyzing information from multiple sources.

You can and should put your research skills on your resume and highlight them in your job interviews.

The Most Important Research Skills

  • What are research skills?
  • Why are research skills important?
  • 12 of the most important research skills
  • How to improve your research skills
  • Highlighting your research skills in a job interview
  • How to include research skills on your resume
  • Resume examples showcasing research skills
  • Research skills FAQs


Research skills are the necessary tools to be able to find, compile, and interpret information in order to answer a question. Of course, there are several aspects to this. Researchers typically have to decide how to go about researching a problem — which for most people is internet research.

In addition, you need to be able to interpret the reliability of a source, put the information you find together in an organized and logical way, and be able to present your findings to others. That means research skills comprise both hard skills — knowing your subject and what’s true and what isn’t — and soft skills: you need to be able to interpret sources and communicate clearly.

Research skills are useful in any industry, and have applications in innovation, product development, competitor research, and many other areas. In addition, the skills used in researching aren’t only useful for research. Being able to interpret information is a necessary skill, as is being able to clearly explain your reasoning.

Research skills are used to:

Do competitor research. Knowing what your biggest competitors are up to is an essential part of any business. Researching what works for your competitors, what they’re doing better than you, and where you can improve your standing with the lowest resource expenditure are all essential if a company wants to remain functional.

Develop new processes and products. You don’t have to be involved in research and development to make improvements in how your team gets things done. Researching new processes that make your job (and those of your team) more efficient will be valued by any sensible employer.

Foster self-improvement. Folks who have a knack and passion for research are never content with doing things the same way they’ve always been done. Organizations need independent thinkers who will seek out their own answers and improve their skills as a matter of course. These employees will also pick up new technologies more easily.

Manage customer relationships. Being able to conduct research on your customer base is positively vital in virtually every industry. It’s hard to move products or sell services if you don’t know what people are interested in. Researching your customer base’s interests, needs, and pain points is a valuable responsibility.

Save money. Whether your company is launching a new product or just looking for ways to scale back its current spending, research is crucial for finding wasted resources and redirecting them to more deserving ends. Anyone who proactively researches ways that the company can save money will be highly appreciated by their employer.

Solve problems. Problem solving is a major part of a lot of careers, and research skills are instrumental in making sure your solution is effective. Finding out the cause of the problem and determining an effective solution both require accurate information, and research is the best way to obtain that — be it via the internet or by observation.

Determine reliable information. Being able to tell whether or not the information you receive seems accurate is a very valuable skill. While research skills won’t always guarantee that you can judge the reliability of information at first glance, they’ll prevent you from being too trusting. And they’ll give you the tools to double-check.

Experienced researchers know that worthwhile investigation involves a variety of skills. Consider which research skills come naturally to you, and which you could work on more.

Data collection . When thinking about the research process, data collection is often the first thing that comes to mind. It is the nuts and bolts of research. How data is collected can be flexible.

For some purposes, simply gathering facts and information on the internet can fulfill your need. Others may require more direct and crowd-sourced research. Having experience in various methods of data collection can make your resume more impressive to recruiters.

Data collection methods include:
  • Observation
  • Interviews
  • Questionnaires
  • Experimentation
  • Conducting focus groups

Analysis of information from different sources. Putting all your eggs in one source basket usually results in error and disappointment. One of the skills that good researchers always incorporate into their process is an abundance of sources. It’s also best practice to consider the reliability of these sources.

Are you reading about U.S. history on a conspiracy theorist’s blog post? Taking facts for a presentation from an anonymous Twitter account?

If you can’t determine the validity of the sources you’re using, it can compromise all of your research. That doesn’t mean you should disregard everything on the internet, but you should double-check your findings. In fact, quadruple-check. You can make your research even stronger by turning to references outside of the internet.

Examples of reliable information sources include:
  • Published books
  • Encyclopedias
  • Magazines
  • Databases
  • Scholarly journals
  • Newspapers
  • Library catalogs

Finding information on the internet. While it can be beneficial to consult alternative sources, strong internet research skills drive modern-day research.

One of the great things about the internet is how much information it contains; however, this comes with digging through a lot of garbage to get to the facts you need. The ability to efficiently use the vast database of knowledge that is on the internet without getting lost in the junk is very valuable to employers.

Internet research skills include:
  • Source checking
  • Searching relevant questions
  • Exploring deeper than the first options
  • Avoiding distraction
  • Giving credit
  • Organizing findings

Interviewing. Some research endeavors may require a more hands-on approach than just consulting internet sources. Being prepared with strong interviewing skills can be very helpful in the research process.

Interviews can be a useful research tactic to gain first-hand information and being able to manage a successful interview can greatly improve your research skills.

Interviewing skills involve:
  • A plan of action
  • Specific, pointed questions
  • Respectfulness
  • Considering the interview setting
  • Actively listening
  • Taking notes
  • Gratitude for participation

Report writing. Possessing skills in report writing can assist you in job and scholarly research. The overall purpose of a report in any context is to convey particular information to its audience.

Effective report writing is largely dependent on communication. Your boss, professor , or general reader should walk away completely understanding your findings and conclusions.

Report writing skills involve:
  • Proper format
  • Including a summary
  • Focusing on your initial goal
  • Creating an outline
  • Proofreading
  • Directness

Critical thinking. Critical thinking skills can aid you greatly throughout the research process, and as an employee in general. Critical thinking refers to your data analysis skills. When you’re in the throes of research, you need to be able to analyze your results and make logical decisions about your findings.

Critical thinking skills involve:
  • Observation
  • Analysis
  • Assessing issues
  • Problem-solving
  • Creativity
  • Communication

Planning and scheduling. Research is a work project like any other, and that means it requires a little forethought before starting. Creating a detailed outline map for the points you want to touch on in your research produces more organized results.

It also makes it much easier to manage your time. Planning and scheduling skills are important to employers because they indicate a prepared employee.

Planning and scheduling skills include:
  • Setting objectives
  • Identifying tasks
  • Prioritizing
  • Delegating if needed
  • Vision
  • Communication
  • Clarity
  • Time-management

Note-taking. Research involves sifting through and taking in lots of information. Taking exhaustive notes ensures that you will not neglect any findings later and allows you to communicate these results to your co-workers. Being able to take good notes helps summarize research.

Examples of note-taking skills include:
  • Focus
  • Organization
  • Using short-hand
  • Keeping your objective in mind
  • Neatness
  • Highlighting important points
  • Reviewing notes afterward

Communication skills. Effective research requires being able to understand and process the information you receive, either written or spoken. That means that you need strong reading comprehension and writing skills — two major aspects of communication — as well as excellent listening skills.

Most research also involves showcasing your findings. This can be via a presentation, report, chart, or Q&A. Whatever the case, you need to be able to communicate your findings in a way that educates your audience.

Communication skills include:
  • Reading comprehension
  • Writing
  • Listening skills
  • Presenting to an audience
  • Creating graphs or charts
  • Explaining in layman’s terms

Time management. We’re, unfortunately, only given 24 measly hours in a day. The ability to effectively manage this time is extremely powerful in a professional context. Hiring managers seek candidates who can accomplish goals in a given timeframe.

Strong time management skills mean that you can organize a plan for how to break down larger tasks in a project and complete them by a deadline. Developing your time management skills can greatly improve the productivity of your research.

Time management skills include:
  • Scheduling
  • Creating task outlines
  • Strategic thinking
  • Stress-management
  • Delegation
  • Communication
  • Utilizing resources
  • Setting realistic expectations
  • Meeting deadlines

Using your network. While this doesn’t seem immediately relevant to research skills, remember that there are a lot of experts out there. Knowing people’s areas of expertise and being willing to ask for help can be tremendously beneficial — especially if it’s a subject you’re unfamiliar with.

Your coworkers are going to have different areas of expertise than you do, and your network of people will as well. You may even know someone who knows someone who’s knowledgeable in the area you’re researching. Most people are happy to share their expertise, as it’s usually also an area of interest to them.

Networking involves:
  • Remembering people’s areas of expertise
  • Being willing to ask for help
  • Communication
  • Returning favors
  • Making use of advice
  • Asking for specific assistance

Attention to detail. Research is inherently precise. That means that you need to be attentive to the details, both in terms of the information you’re gathering, but also in where you got it from. Making errors in statistics can have a major impact on the interpretation of the data, not to mention that it’ll reflect poorly on you.

There are proper procedures for citing sources that you should follow. That means that your sources will be properly credited, preventing accusations of plagiarism. In addition, it means that others can make use of your research by returning to the original sources.

Attention to detail includes:
  • Double-checking statistics
  • Taking notes
  • Keeping track of your sources
  • Staying organized
  • Making sure graphs are accurate and representative
  • Properly citing sources

As with many professional skills, research skills serve us in our day to day life. Any time you search for information on the internet, you’re doing research. That means that you’re practicing it outside of work as well. If you want to continue improving your research skills, both for professional and personal use, here are some tips to try.

Differentiate between source quality. A researcher is only as good as their worst source. Start paying attention to the quality of the sources you use, and be suspicious of everything you read until you check out the attributions and works cited.

Be critical and ask yourself about the author’s bias, where the author’s research aligns with the larger body of verified research in the field, and what publication sponsored or published the research.

Use multiple resources. When you can verify information from a multitude of sources, it becomes more and more credible. To bolster your faith in one source, see if you can find another source that agrees with it.

Don’t fall victim to confirmation bias. Confirmation bias is when a researcher expects a certain outcome and then goes to find data that supports this hypothesis. It can even go so far as disregarding anything that challenges the researcher’s initial hunch. Be prepared for surprising answers and keep an open mind.

Be open to the idea that you might not find a definitive answer. It’s best to be honest and say that you found no definitive answer instead of just confirming what you think your boss or coworkers expect or want to hear. Experts and good researchers are willing to say that they don’t know.

Stay organized. Being able to cite sources accurately and present all your findings is just as important as conducting the research itself. Start practicing good organizational skills , both on your devices and for any physical products you’re using.

Get specific as you go. There’s nothing wrong with starting your research in a general way. After all, it’s important to become familiar with the terminology and the basic gist of the research findings before you dig down into all the minutiae.

A job interview is itself a test of your research skills. You can expect questions on what you know about the company, the role, and your field or industry more generally. In order to give expert answers on all these topics, research is crucial.

Start by researching the company . Look into how they communicate with the public through social media, what their mission statement is, and how they describe their culture.

Pay close attention to the tone of their website. Is it hyper professional or more casual and fun-loving? All of these elements will help decide how best to sell yourself at the interview.

Next, research the role. Go beyond the job description and reach out to current employees working at your desired company and in your potential department. If you can find out what specific problems your future team is or will be facing, you’re sure to impress hiring managers and recruiters with your ability to research all the facts.

Finally, take time to research the job responsibilities you’re not as comfortable with. If you’re applying for a job that represents increased difficulty or entirely new tasks, it helps to come into the interview with at least a basic knowledge of what you’ll need to learn.

Research projects require dedication. Being committed is a valuable skill for hiring managers. Whether you’ve had research experience throughout education or a former job, including it properly can boost the success of your resume .

Consider how extensive your research background is. If you’ve worked on multiple, in-depth research projects, it might be best to include it as its own section. If you have less research experience, include it in the skills section .

Focus on your specific role in the research, as opposed to just the research itself. Try to quantify accomplishments to the best of your abilities. If you were put in charge of competitor research, for example, list that as one of the tasks you had in your career.

If it was a particular project, such as tracking the sale of women’s clothing at a tee-shirt company, you can say that you “directed analysis into women’s clothing sales statistics for a market research project.”

Ascertain how directly research skills relate to the job you’re applying for. How strongly you highlight your research skills should depend on the nature of the job the resume is for. If research looks to be a strong component of it, then showcase all of your experience.

If research looks to be tangential, then be sure to mention it — it’s a valuable skill — but don’t put it front and center.

Example #1: Academic Research

Simon Marks
767 Brighton Blvd. | Brooklyn, NY, 27368 | (683)-262-8883 | [email protected]

Diligent and hardworking recent graduate seeking a position to develop professional experience and utilize research skills. B.A. in Biological Sciences from New York University.

PROFESSIONAL EXPERIENCE

Lixus Publishing, Brooklyn, NY
Office Assistant, September 2018-present
  • Scheduling and updating meetings
  • Managing emails and phone calls
  • Reading entries
  • Worked on a science fiction campaign by researching the target demographic
  • Organizing calendars
  • Promoted to office assistant after a one-year internship

Mitch’s Burgers and Fries, Brooklyn, NY
Restaurant Manager, June 2014-June 2018
  • Managed a team of five employees
  • Responsible for coordinating the weekly schedule
  • Hired and trained two employees
  • Kept track of inventory
  • Dealt with vendors
  • Provided customer service
  • Promoted to restaurant manager after two years as a waiter
  • Awarded a $2.00/hr wage increase

SKILLS
  • Writing
  • Scientific research
  • Data analysis
  • Critical thinking
  • Planning
  • Communication

RESEARCH
  • Worked on an ecosystem biology project with responsibilities for algae collection and research (2019)
  • Led a group of freshmen in a research project looking into cell biology (2018)

EDUCATION
New York University
Bachelor’s in Biological Sciences, September 2016-May 2020

Example #2: Professional Research

Angela Nichols
1111 Keller Dr. | San Francisco, CA | (663)-124-8827 | [email protected]

Experienced and enthusiastic marketer with 7 years of professional experience. Seeking a position to apply my marketing and research knowledge. Skills in working on a team and flexibility.

EXPERIENCE

Apples & Oranges Marketing, San Francisco, CA
Associate Marketer – April 2017-May 2020
  • Discuss marketing goals with clients
  • Provide customer service
  • Lead campaigns associated with women’s health
  • Coordinate with a marketing team
  • Quickly solve issues in service and manage conflict
  • Awarded two raises totaling $10,000 over three years

Prestigious Marketing Company, San Francisco, CA
Marketer – May 2014-April 2017
  • Working directly with clients
  • Conducting market research into television streaming preferences
  • Developing marketing campaigns related to television streaming services
  • Report writing
  • Analyzing campaign success statistics
  • Promoted to Marketer from Junior Marketer after the first year

Timberlake Public Relations, San Francisco, CA
Public Relations Intern – September 2013–May 2014
  • Working cohesively with a large group of co-workers and supervisors
  • Note-taking during meetings
  • Running errands
  • Managing email accounts
  • Assisting in brainstorming
  • Meeting work deadlines

EDUCATION
Golden Gate University, San Francisco, CA
Bachelor of Arts in Marketing with a minor in Communications – September 2009 – May 2013

SKILLS
  • Marketing
  • Market research
  • Record-keeping
  • Teamwork
  • Presentation
  • Flexibility

What research skills are important?

Goal-setting and data collection are important research skills. Additional important research skills include:

Using different sources to analyze information.

Finding information on the internet.

Interviewing sources.

Writing reports.

Critical thinking.

Planning and scheduling.

Note-taking.

Managing time.

How do you develop good research skills?

You develop good research skills by learning how to find information from multiple high-quality sources, by being wary of confirmation bias, and by starting broad and getting more specific as you go.

When you learn how to tell a reliable source from an unreliable one and get in the habit of finding multiple sources that back up a claim, you’ll have better quality research.

In addition, when you learn how to keep an open mind about what you’ll find, you’ll avoid falling into the trap of confirmation bias, and by staying organized and narrowing your focus as you go (rather than before you start), you’ll be able to gather quality information more efficiently.

What is the importance of research?

The importance of research is that it informs most decisions and strategies in a business. Whether it’s deciding which products to offer or creating a marketing strategy, research should be used in every part of a company.

Because of this, employers want employees who have strong research skills. They know that you’ll be able to put them to work bettering yourself and the organization as a whole.

Should you put research skills on your resume?

Yes, you should include research skills on your resume as they are an important professional skill. Where you include your research skills on your resume will depend on whether you have a lot of experience in research from a previous job or as part of getting your degree, or if you’ve just cultivated them on your own.

If your research skills are based on experience, you could put them down under the tasks you were expected to perform at the job in question. If not, then you should likely list it in your skills section.



Sky Ariella is a professional freelance writer, originally from New York. She has been featured on websites and online magazines covering topics in career, travel, and lifestyle. She received her BA in psychology from Hunter College.



What factors contribute to the meaning of work? A validation of Morin’s Meaning of Work Questionnaire

Anne Pignault

1 Université de Lorraine, Psychology & Neuroscience Laboratory (2LPN, EA7489), 23 boulevard Albert 1er, 54000 Nancy, France

Claude Houssemand

2 University of Luxembourg, Department of Education and Social Work, Institute for Lifelong Learning & Guidance (LLLG), 2 Avenue de l’Université, L-4365 Esch-sur-Alzette, Luxembourg

Associated Data

The datasets generated and/or analyzed during the current study are available from the corresponding author.

Considering the recent and current evolution of work and the work context, the meaning of work is becoming an increasingly relevant topic in research in the social sciences and humanities, particularly in psychology. In order to understand and measure what contributes to the meaning of work, Morin constructed a 30-item questionnaire that has become predominant and has repeatedly been used in research in occupational psychology and by practitioners in the field. Nevertheless, it has been validated only in part.

The meaning of work questionnaire was administered in French to 366 people (51.3% women; age: M = 39.11, SD = 11.25; 99.2% of whom were employed, with the remainder retired). Three sets of statistical analyses were run on the data. Exploratory and confirmatory factor analyses were conducted on independent samples.

The analyses revealed a five-factor structure. These dimensions (Success and Recognition at work and of work, α = .90; Usefulness, α = .88; Respect for work, α = .88; Value from and through work, α = .83; Remuneration, α = .85) are all attached to a general second-order latent meaning of work factor (α = .96).

Conclusions

Validation of the scale, and implications for health in the workplace and career counseling practices, are discussed.

Introduction

Since the end of the 1980s, many studies have been conducted to explore the meaning of work, particularly in psychology (Rosso, Dekas, & Wrzesniewski, 2010). A review of the bibliographical data in PsycINFO shows that between 1974 and 2006, 183 studies addressed this topic (Morin, 2006). This scholarly interest was primarily triggered by Sverko and Vizek-Vidovic’s (1995) article, which identified the approaches and models that have been used and their main results.

Whereas early studies on the meaning of work introduced the concept and its theoretical underpinnings (e.g., Harpaz, 1986 ; Harpaz & Fu, 2002 ; Morin, 2003 ; MOW International Research team, 1987 ), later research tried to connect this aspect of work with other psychological dimensions or individual perceptions of the work context (e.g., Harpaz & Meshoulam, 2010 ; Morin, 2008 ; Morin, Archambault, & Giroux, 2001 ; Rosso et al., 2010 ; Wrzesniewski, Dutton, & Debebe, 2003 ). Nevertheless, scholars, particularly those in organizational and occupational psychology, soon found it difficult to precisely identify the meaning of work because it changes in accordance with the conceptualizations of different researchers, the theoretical models used to describe it, and the tools that are available to measure it for individuals and for groups.

This article first seeks to clarify the concept of the meaning of work (definitions and models) before bringing up certain problems involved in its measurement and the diversity in how the concept has been used. Then the paper focuses on a particular meaning of work measurement tool developed in Canada, which is now widely used in French-speaking countries. At the beginning of the twenty-first century, Morin et al. (2001) developed a 30-item questionnaire to better determine the dimensions that give meaning to a person’s work. The statistical analyses needed to determine the reliability and validity of Morin et al.’s meaning of work questionnaire have never been completed. Indeed, some changes were made to the initial scale, and the analyses were based only on homogeneous samples of workers in different professional sectors. Thus, even though the meaning of work scale is used quite frequently, both researchers and practitioners have been unsure about whether or not to trust its results. The main objective of the present study was thus to provide a psychometric validation of Morin et al.’s meaning of work scale and to uncover its latent psychological structure.

Meaning of work: from definition to measurement

Meaning of work: what is it?

As many scholars have found, the concept of the meaning of work is not easy to define (e.g., Rosso et al., 2010 ). In terms of theory, it has been defined differently in different academic fields. In psychology, it refers to an individual’s interpretations of his/her actual experiences and interactions at work (Ros, Schwartz, & Surkiss, 1999 ). From a sociological point of view, it involves assessing meaning in reference to a system of values (Rosso et al., 2010 ). In this case, its definition depends on cultural or social differences, which make explaining this concept even more complex (e.g., Morse & Weiss, 1955 ; MOW International Research team, 1987 ; Steers & Porter, 1979 ; Sverko & Vizek-Vidovic, 1995 ).

At a conceptual level, the meaning of work has been defined in three different ways (Morin, 2003 ). First, it can refer to the meaning of work attached to an individual’s representations of work and the values he/she attributes to that work (Morse & Weiss, 1955 ; MOW International Research team, 1987 ). Second, it can refer to a personal preference for work as defined by the intentions that guide personal action (Super & Sverko, 1995 ). Third, it can be understood as consistency between oneself and one’s work, similar to a balance in one’s personal relationship with work (Morin & Cherré, 2004 ).

With respect to terms, some differences exist because the meaning of work is considered an individual’s interpretation of what work means or of the role it plays in one’s life (Pratt & Ashforth, 2003 ). Yet this individual perception is also influenced by the environment and the social context (Wrzesniewski et al., 2003 ). The psychological literature on the meaning of work has primarily examined its positive aspects, even though work experiences can be negative or neutral. This partiality about the nature of the meaning of work in research has led to some confusion in the literature between this concept and that of meaningful , which refers to the extent to which work has personal significance (a quantity) and seems to depend on positive elements (Steger, Dik, & Duffy, 2012 ). A clearer demarcation should be made between these terms in order to specify the exact sense of the meaning of work: “This would reserve ‘meaning’ for instances in which authors are referring to what work signifies (the type of meaning), rather than the amount of significance attached to the work” (Rosso et al., 2010 , p. 95).

The original idea of the meaning of work refers to the central importance of work for people, beyond the simple behavioral activity through which it occurs. Drawing on various historical references, certain authors present work as an essential driver of human life; these scholars then seek to understand how work is fundamental (e.g., Morin, 2006 ; Sverko & Vizek-Vidovic, 1995 ). The concept of the meaning of work is connected to the centrality of work for the individual and consequently fulfills four different important functions: economic (to earn a living), social (to interact with others), prestige (social position), and psychological (identity and recognition). In this view, the centrality of work is based on an ensemble of personal and social values that differ between individuals as well as between cultures, economic climates, and occupations (England, 1991 ; England & Harpaz, 1990 ; Roe & Ester, 1999 ; Ruiz-Quintanilla & England, 1994 ; Topalova, 1994 ; Zanders, 1993 ).

Meaning of work: which theoretical model?

The first theoretical model for the meaning of work was based on research in the MOW project (MOW International Research team, 1987 ), considered the “most empirically rigorous research ever undertaken to understand, both within and between countries, the meanings people attach to their work roles” (Brief, 1991 , p. 176). This view suggests that the meaning of work is based on five principal theoretical dimensions: work centrality as a life role, societal norms regarding work, valued work outcomes, importance of work goals, and work-role identification. A series of studies on this theory was conducted in Israel (Harpaz, 1986 ; Harpaz & Fu, 2002 ; Harpaz & Meshoulam, 2010 ), complementing the work of the MOW project (MOW International Research team, 1987 ). Harpaz ( 1986 ) empirically identified six latent factors that represent the meaning of work: work centrality, entitlement norm, obligation norm, economic orientation, interpersonal relations, and expressive orientation.

Another theoretical model on the importance of work in a person’s life was created by Sverko in 1989 . This approach takes into account the interactions among certain work values (the importance of these values and the perception of possible achievements through work), which depend on a process of socialization. The ensemble is then moderated by an individual’s personal experiences with work. In the same vein, Rosso et al. ( 2010 ) tried to create an exhaustive model of the sources that influence the meaning of work. This model is built around two major dimensions: Self-Others (individual vs. other individuals, groups, collectives, organizations, and higher powers) and Agency-Communion (the drives to differentiate, separate, assert, expand, master, and create vs. the drives to contact, attach, connect, and unite). This theoretical framework describes four major pathways to the meaning of work: individuation (autonomy, competence, and self-esteem), contribution (perceived impact, significance, interconnection, and self-abnegation), self-connection (self-concordance, identity affirmation, and personal engagement), and unification (value systems, social identification, and connectedness).

Lastly, a more recent model (Lips-Wiersma & Wright, 2012 ) converges with the theory suggested by Rosso et al. ( 2010 ) but distinguishes two dimensions: Self-Others versus Being-Doing. This model describes four pathways to meaningful work: developing the inner self, unity with others, service to others, and expressing one’s full potential.

Without claiming to be exhaustive, this brief presentation of the theoretical models of the meaning of work underscores the difficulty in precisely defining this concept, the diversity of possible approaches to identifying its contours, and therefore implicitly addresses the various tools designed to measure it.

Measuring the meaning of work

Various methodologies have been used to better determine the concept of the meaning of work and to grasp what it involves in practice. The tools examined below have been chosen because of their different methodological approaches.

One of the first kinds of measurements was developed by the international MOW project (MOW International Research team, 1987). In this study, England and Harpaz (1990) and Ruiz-Quintanilla and England (1994) used 14 defining elements to assess agreement on the perception of work of 11 different sample groups questioned between 1989 and 1992. These elements, resulting from the definition of work given by the MOW project and studied by applying multivariate analyses and textual content analyses (“When do you consider an activity as working? Choose four statements from the list below which best define when an activity is ‘working’,” MOW International Research team, 1987), can be grouped into four distinct heuristic categories (Table 1).

Items used to define the concept of work

These items were taken from Ruiz-Quintanilla and England ( 1994 ). The letter in front of each item corresponds to the initial order of the items (MOW International Research team, 1987 )

Similarly, England ( 1991 ) studied changes in the meaning of work in the USA between 1982 and 1989. He used four different methodological approaches to the meaning of work: societal norms about work, importance of work goals, work centrality, and definition of work by the labor force. In the wake of these studies, others developed scales to measure the centrality of work in people’s lives, either for the general population (e.g., Warr, 2008 ) or for specific subpopulations such as unemployed people, on the basis of a rather similar conceptualization of the meaning of work (McKee-Ryan, Song, Wanberg, & Kinicki, 2005 ; Wanberg, 2012 ).

Finally, Wrzesniewski, McCauley, Rozin, and Schwartz (1997) developed a rather unusual method for evaluating people’s relationships with their work. Although not directly connected to research on the meaning of work, this study and the questionnaire they used (University of Pennsylvania Work-Life Questionnaire) addressed some of the same concepts. Above all, they employed the concepts in a very particular way that combined psychological scales, scenarios, and sociodemographic questions. Through these scenarios (Table 2) and the extent to which the respondents felt like the described characters, their relationship to work was described as either a Job, a Career, or a Calling.

Scenarios used to measure the relationship to work

These scenarios were taken from Wrzesniewski et al. ( 1997 , p. 24)

This presentation of certain tools for measuring the meaning of work reveals a variety of methodological approaches. Nevertheless, whereas certain methods have adopted a rather traditional psychological approach, others are often difficult to use for various reasons such as their psychometrics (e.g., the use of only one item to measure a concept; England, 1991 ; Wrzesniewski et al., 1997 ) or for practical reasons (e.g., the participants were asked questions that pertained not only to their individual assessment of work but also to various other parts of their lives; England, 1991 ; Warr, 2008 ). This diversity in the possible uses of the meaning of work makes it difficult to select a tool to measure it.

In French-speaking countries (Canada and Europe primarily), the previously mentioned scale created by Morin et al. (2001) has predominated and has repeatedly been used in research in occupational psychology and by practitioners in the field. Nevertheless, there has not been a complete validation of the scale (i.e., different forms of the same tool, only the use of exploratory factor analyses, and no similar structures found), which was the motivation for the current study.

The present study

The present article conceives of the meaning of work as representing a certain consistency between what an individual wants out of work and the individual’s perception, lived or imagined, of his/her work. It thus corresponds to the third definition of the meaning of work presented above—consistency between oneself and one's work (Morin & Cherré, 2004 ). This definition is strictly limited to the meaning given to work and the personal significance of this work from the activities that the work implies. Within this conceptual framework, some older studies adopted a slightly different cognitive conception, in which individuals constantly seek a balance between themselves and their environment, and any imbalance triggers a readjustment through which the person attempts to stabilize his/her cognitive state (e.g., Heider, 1946 ; Osgood & Tannenbaum, 1955 ). Here, the meaning of work must be considered a means for maintaining psychological harmony despite the destabilizing events that work might involve. In this view, meaning is viewed as an effect or a product of the activity (Brief & Nord, 1990 ) and not as a permanent or fixed state. It then becomes a result of person-environment fit and falls within the theory of work adjustment (Dawis, Lofquist, & Weiss, 1968 ).

Within this framework, a series of recurring and interdependent studies should be noted (e.g., Morin, 2003, 2006; Morin & Cherré, 1999, 2004) because they have attempted to measure the coherence that a person finds in the relation between the person’s self and his/her work and thus implicitly the meaning of that work. Therefore, these studies make it possible to understand the meaning of work in greater detail, meaning that it could be used in practice through a self-evaluation questionnaire. The level of coherence is considered the degree of similarity between the characteristics of work that the person attributes meaning to and the characteristics that he/she perceives in his/her present work (Aronsson, Bejerot, & Häremstam, 1999; Morin & Cherré, 2004). Based on semi-structured interviews and on older research related to the quality of life at work (Hackman & Oldham, 1976; Ketchum & Trist, 1992), a model involving 14 characteristics was developed: the usefulness of work, the social contribution of work, rationalization of the tasks, workload, cooperation, salary, the use of skills, learning opportunities, autonomy, responsibilities, rectitude of social and organizational practices, the spirit of service, working conditions, and, finally, recognition and appreciation (Morin, 2006; Morin & Cherré, 1999). Then, based on this model, a 30-item questionnaire was developed to offer more precise descriptions of these dimensions. Table 3 presents the items, which were designed and administered to the participants in French.

Items from the meaning of work scale by Morin with their theoretical dimensions and exploratory factor analyses

P personal power at work, U usefulness of work, R success at work, A autonomy at work, S safety, E ethics, UT usefulness of work, VP personal value, EF personal efficacy, ET ethics of work, RT rationalization of work, IE personal influence

(*) = French version. 1 = Morin and Cherré ( 1999 ). 2 = Morin et al. ( 2001 ) and Morin ( 2003 ). 3 = Morin and Cherré ( 2004 )

Some studies for structurally validating this questionnaire have been conducted over the years (e.g., Morin, 2003 , 2006 , 2008 ; Morin & Cherré, 2004 ). However, their results were not very precise or comparable. For example, the number of latent factors in the meaning of work scale structure varied (e.g., six or eight factors: Morin, 2003 ; six factors: Morin, 2006 ; Morin & Cherré, 2004 ), the sample groups were not completely comparable (especially with respect to occupations), and finally, items were added or removed or their phrasing was changed (e.g., 30 and 33 items: Morin, 2003 ; 30 items: Morin, 2006 ; 26 items: Morin, 2008 ). Yet the most prominent methodological problem was that only exploratory analyses (most often a principal component analysis with varimax rotation) had been applied. This scale was entirely relevant from a theoretical point of view because it offered a more specific definition of the meaning of work than other scales and, mainly, because some subdimensions appeared to be linked with anxiety, depression, irritability, cognitive problems, psychological distress, and subjective well-being (Morin et al., 2001 ). It was also relevant from a practical point of view because it was short and did not take much time to complete. However, its use was questionable because it had never been validated psychometrically, and a consistent latent psychological structure had not been identified across studies.

As an example, two models representing the structure of the 30-item scale are presented in Table 3 (Morin et al., 2001; Morin, 2003, for the first model; Morin & Cherré, 2004, for the second one). This table presents the items, the meaning of work dimensions they are theoretically related to, and the solution from the principal component analysis in each study. These analyses revealed that the empirical and theoretical structures of this tool are not stable and that the latent structure suffers from the insufficient use of statistical methods. In particular, there was an important difference found between the two models in previous studies (Morin et al., 2001; Morin & Cherré, 2004). Only the “usefulness of work” dimension was found to be identical, composed of the same items in both models. Other dimensions had a maximum of only three items in common. Therefore, it is very difficult to utilize this tool both in practice and diagnostically, and complementary studies must be conducted. Even though there are techniques for replicating exploratory analyses (e.g., Osborne, 2012), such techniques could not be used here because not all the necessary information was given (e.g., all factor loadings, communalities). This is why collecting new data appeared to be the only way to analyze the scale.

More recently, two studies (which applied a new 25-item meaningful work questionnaire ) were developed on the basis of Morin’s scale (Bendassolli & Borges-Andrade, 2013 ; Bendassolli, Borges-Andrade, Coelho Alves, & de Lucena Torres, 2015 ). Even though the concepts of the “meaning of work” and “meaningful work” are close, the two scales are formally and theoretically different and do not evaluate the same construct.

The purpose of the present study was thus to determine the structure of original Morin’s 30-item scale (Morin, 2003 ; Morin & Cherré, 2004 ) by using an exploratory approach as well as confirmatory statistical methods (structural equation modeling) and in so doing, to address the lacunae in previous research discussed above. The end goal was thus to identify the structure of the scale statistically so that it can be used empirically in both academic and professional fields. Indeed, as mentioned previously, this scale is of particular interest to researchers because its design is not limited to measuring a general meaning of work for each individual; it can also be used to evaluate discrepancies or a convergence between a person’s own personal meaning of work and a specific work context (e.g., tasks, relations with others, autonomy). Finally, and with respect to previous results, the scale could be a potential predictor of professional well-being and psychological distress at work (Morin et al., 2001 ).
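
To make the planned confirmatory step more concrete, the sketch below shows what a second-order measurement model of this general kind could look like in R. It relies on the lavaan package and entirely hypothetical item-to-factor assignments; it is not the authors’ actual model or code.

```r
# Hypothetical sketch of a second-order CFA; item names and groupings are placeholders.
library(lavaan)

model <- '
  # first-order meaning-of-work dimensions (illustrative groupings only)
  F1 =~ i01 + i02 + i03 + i04
  F2 =~ i05 + i06 + i07
  F3 =~ i08 + i09 + i10
  F4 =~ i11 + i12 + i13
  F5 =~ i14 + i15 + i16
  # general second-order meaning-of-work factor
  MOW =~ F1 + F2 + F3 + F4 + F5
'

# With an item-level data frame (called mow_items here), the model would be fitted as:
# fit <- cfa(model, data = mow_items)
# fitMeasures(fit, c("cfi", "tli", "rmsea", "srmr"))
```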

Participants

The questionnaire was administered to 366 people, mainly residents of Paris and the surrounding regions in France. The gender distribution was almost equal: 51.3% of the respondents were women. The respondents’ ages ranged from 19 to 76 years (M = 39.11, SD = 11.25). The large majority of respondents were employed (99.2%); only three people were retired (0.8%). Twenty percent worked in medical and paramedical fields, 26% in retail and sales, and 17% in human resources (the other respondents worked in education, law, communication, reception, banking, and transportation). Seventy percent had fewer than 10 years of seniority in their current job (M = 8.64, SD = 9.65).

Morin’s 30-item meaning of work questionnaire (Morin, 2003; Morin et al., 2001; Morin & Cherré, 2004), along with sociodemographic questions (i.e., sex, age, job activities, and seniority at work), was administered in French through an online platform. Answers to the meaning of work questionnaire were given on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree).

Participants were recruited through various professional online social networks. This method does not provide a true random sample but, because it reaches a potentially larger range of respondents, it increases the heterogeneity of the participants, even if it cannot ensure representativeness (Barberá & Zeitzoff, 2018; Hoblingre Klein, 2018). This point is important because very homogeneous samples were used in previous studies, especially with regard to professions.

Participants were volunteers and could stop the survey at any time. They received no compensation and no individual feedback. Participants were informed of these conditions before filling out the questionnaire, and oral informed consent was obtained from all of them. Moreover, the Luxembourg Agency for Research Integrity (LARI, on which the researchers in this study depend) specified that, according to the Code de la santé publique—Article L1123-7, France does not require research ethics committee [Les Comités de Protection des Personnes (CPP)] approval if the research is non-biomedical, non-interventional, observational, and does not collect personal health information; thus, CPP approval was not required.

Participants had to answer each question in order to submit the questionnaire: if an item was left unanswered, the respondent could not proceed to the next question. Thus, the database has no missing data. An introduction presented the subject of the study and its goals and guaranteed the participants’ anonymity. The researchers’ e-mail addresses were given, and participants were informed that they could contact the researchers for more information.

Data analyses

Three sets of statistical analyses were run on the data:

  • (a) Item analyses, using traditional true score theory and item response theory, to verify the psychometric qualities of the items (mainly with the R package “psych”). The main objectives of this step were to better understand the variability of respondents’ answers, to compute the discriminatory power of the items, and to check the item distributions using the classical descriptive indicators (mean, standard deviation, skewness, and kurtosis), corrected item-total correlations, and response distribution functions.
  • (b) An exploratory factor analysis (EFA) with an oblimin rotation to define the latent structure of the meaning of work questionnaire, performed with the R packages “psych” and “GPArotation”. The retained structure was based on the fit indices of the various solutions (TLI, RMSEA, and SRMR; see the “Abbreviations” section at the end of the article) and on the R package “EFAtools”, which helps determine the appropriate number of factors to retain for the EFA solution. This step concluded with calculations of the internal consistency of each factor of the scale.
  • (c) A confirmatory factor analysis (CFA), using the R package “lavaan” and based on the results of the EFA, to verify that the latent structure revealed in step (b) was valid and relevant for this meaning of work scale. The fit between the data and the latent structure was assessed on the basis of the CFI, TLI, RMSEA, and SRMR (see the “Abbreviations” section).

For step (a), the responses of the complete sample were considered. For steps (b) and (c), 183 participants were selected randomly for each analysis from the total study sample. Thus, two subsamples comprising completely different participants were used: one for the EFA in step (b) and one for the CFA in step (c).
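As a minimal illustration of this sampling step (a sketch only, assuming the 366 responses are stored in a hypothetical data frame named mow_data with one column per item), the random split into two non-overlapping subsamples of 183 respondents could be performed in R as follows:

```r
# Sketch: split the full sample (n = 366) into two disjoint subsamples of 183.
# "mow_data" is a hypothetical data frame holding the 30 item responses.
set.seed(2020)                                    # arbitrary seed for reproducibility
efa_idx  <- sample(seq_len(nrow(mow_data)), 183)  # respondents used for the EFA (step b)
efa_data <- mow_data[efa_idx, ]
cfa_data <- mow_data[-efa_idx, ]                  # remaining 183 respondents for the CFA (step c)
```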

Because the responses were measured on an ordinal scale with a small number of categories (5-point Likert), none of the items could be normally distributed; this point was verified in step (a) of the analyses. The data therefore did not meet the assumptions required for applying factor analyses with conventional estimators such as maximum likelihood (Li, 2015; Lubke & Muthén, 2004). Because the variables were measured on ordinal scales, it was most appropriate to apply the EFA and CFA to the polychoric correlation matrix (Carroll, 1961). Then, to reduce the effects of the specific item distributions on the factor analyses, a minimum residuals extraction (MINRES; Harman, 1960; Jöreskog, 2003) was used for the EFA, and a weighted least squares estimator with degrees of freedom adjusted for means and variances (WLSMV) was used for the CFA, as recommended by psychometric studies (Li, 2015; Muthén, 1984; Muthén & Kaplan, 1985; Muthén & Muthén, 2010; Yang, Nay, & Hoyle, 2010; Yu, 2002).
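A minimal R sketch of these estimation choices is given below. It assumes the hypothetical subsamples efa_data and cfa_data introduced above and uses the packages named in the text (psych, GPArotation, lavaan); it illustrates the general approach rather than reproducing the authors’ exact syntax.

```r
library(psych)        # polychoric correlations, EFA with MINRES
library(GPArotation)  # oblimin rotation used by psych::fa
library(lavaan)       # CFA with the WLSMV estimator

# Step (b): EFA on the polychoric correlation matrix, MINRES extraction, oblimin rotation
poly_r  <- polychoric(efa_data)$rho
efa_fit <- fa(poly_r, nfactors = 5, n.obs = nrow(efa_data),
              fm = "minres", rotate = "oblimin")
print(efa_fit$loadings, cutoff = .30)

# Step (c): CFA treating the items as ordered categorical, estimated with WLSMV.
# A one-factor model is shown for brevity; the five-factor model from Table 5
# is specified the same way (see the second-order sketch later in the article).
general_model <- paste("MOW =~", paste(names(cfa_data), collapse = " + "))
cfa_fit <- cfa(general_model, data = cfa_data,
               ordered = names(cfa_data), estimator = "WLSMV")
fitMeasures(cfa_fit, c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))
```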

The sample sizes for the different analyses were also taken into consideration. A structural model with 30 observed variables requires a recommended minimum sample of 100 participants for 6 latent variables and 200 for 5 latent variables (Soper, 2019). The samples used in the present research corresponded to these a priori calculations.

Finally, according to conventional rules of thumb (Hu & Bentler, 1999; Kline, 2011), acceptable and excellent model fits are indicated by CFI and TLI values greater than .90 and .95, respectively; by RMSEA values smaller than .08 (acceptable) and .06 (excellent); and by SRMR values smaller than .08.
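As a simple illustration, these rules of thumb can be encoded in a small helper function (hypothetical, not part of the original analyses) and applied to a set of fit values, for example those reported later for the five-factor CFA:

```r
# Hypothetical helper encoding the Hu & Bentler (1999) cutoffs used in the text
fit_check <- function(cfi, tli, rmsea, srmr) {
  c(cfi_acceptable   = cfi   >= .90, cfi_excellent   = cfi   >= .95,
    tli_acceptable   = tli   >= .90, tli_excellent   = tli   >= .95,
    rmsea_acceptable = rmsea <= .08, rmsea_excellent = rmsea <= .06,
    srmr_acceptable  = srmr  <= .08)
}

# CFI, TLI, and RMSEA taken from the five-factor CFA reported in the Results;
# the SRMR value here is illustrative only (it is reported for the EFA solution)
fit_check(cfi = .989, tli = .988, rmsea = .080, srmr = .04)
```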

Item analyses

The main finding was the limited variability in the answers to each item. Indeed, as Table 4 shows, respondents mainly chose the answers agree and strongly agree, as indicated by the column of cumulated percentages for these response categories (%). Thus, for all items except item 11, the average answer was higher than 4; the median was 4; and the skewness and kurtosis indicators confirmed a systematically left-skewed, leptokurtic distribution. This lack of variability in the participants’ responses and the high average scores indicate nearly unanimous agreement with the propositions made about the meaning of work in the questionnaire.

Table 4. Distribution and analysis of the 30 items of the scale

M average of the answers to the item, SD standard deviation of the answers to the item, Med median, % cumulated percentages of answers 4 ( agree ) and 5 ( strongly agree ) for each item, skew skewness, kurt kurtosis, rit corrected item-total correlations

Table 4 also shows that the items had good discriminatory power, as expressed by corrected item-total correlations (calculated with all items), which were above .40 for all items. Finally, the item analyses were completed by applying item response theory (Excel tools using the eirt add-in; Valois, Houssemand, Germain, & Belkacem, 2011). Analyses of the item characteristic curves (bearing in mind that item response theory models are parametric and assume that item response distributions follow a logistic function; Rasch, 1980; Streiner, Norman, & Cairney, 2015, p. 297) confirmed the psychometric quality of each item and their link to a single latent dimension. These results supported keeping all the items of the questionnaire in order to measure the work-meaning construct.
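For readers who wish to reproduce the classical part of these item analyses, the following R sketch (again assuming the hypothetical mow_data data frame) computes the descriptive indicators and the corrected item-total correlations reported in Table 4; the item response theory part relied on an Excel add-in and is not reproduced here.

```r
library(psych)

# Descriptive indicators per item: mean, SD, median, skewness, kurtosis
item_desc <- describe(mow_data)
item_desc[, c("mean", "sd", "median", "skew", "kurtosis")]

# Cumulated percentage of "agree" (4) and "strongly agree" (5) answers per item
agree_pct <- sapply(mow_data, function(x) mean(x >= 4) * 100)

# Corrected item-total correlations (r.drop = correlation of each item with the
# total score computed without that item)
alpha_out <- alpha(mow_data)
alpha_out$item.stats[, "r.drop"]
```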

Exploratory analyses of the scale

A five-factor solution was identified. This solution explained 58% of the total variance in the responses to the scale items; the TLI was .885, the RMSEA was .074, and the SRMR was .04. The structure revealed by this analysis was relatively simple (one main loading factor for each item; Thurstone, 1947), and the communality of each item was high, except for item 11. The retained solution presented the best fit indices and the most coherent conceptual interpretation of the latent factors. Additionally, the “EFAtools” R package confirmed the appropriateness of the chosen solution. Table 5 shows the EFA results, which describe a five-factor structure.
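As an indication of how such a check on the number of factors could be run (a sketch under the assumption that the polychoric matrix poly_r and the subsample efa_data from the earlier sketches are available; the exact criteria used by the authors are not reported), the EFAtools package offers the N_FACTORS function, which aggregates several factor-retention criteria:

```r
library(EFAtools)

# Compare several factor-retention criteria on the polychoric correlation matrix
# (the criteria listed here are illustrative; EFAtools also supports the Hull
#  method, the Kaiser-Guttman criterion, scree-based rules, and others)
n_fac <- N_FACTORS(poly_r, N = nrow(efa_data),
                   criteria = c("PARALLEL", "EKC", "SMT"))
n_fac
```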

Table 5. Loadings and communalities of the 30 items from the meaning of work scale

EFA with five factors, oblimin rotation. Bold = loading ≥ .30. h² = communality

Nevertheless, the correlation matrix of the latent factors obtained from the EFA (see Table 6) suggested the existence of a general second-order meaning of work factor, because the five factors were all significantly correlated with each other. This result can be described as a general meaning of work factor, which alone would explain 44% of the total variance in the responses.

Table 6. Correlations between the latent factors from the EFA, Cronbach’s alpha, and McDonald’s omega for each dimension and the general factor

F1: success and recognition at work and from work; F2: usefulness; F3: respect; F4: value from and through work; F5: remuneration; general: total scale

Internal consistency of latent factors of the scale

The internal consistency of each latent factor, estimated with Cronbach’s alpha and McDonald’s omega, was high (above .80) and very high for the entire scale (α = .96 and ω = .97). Specifically, ω was .93 for the Success and Recognition at work and from work factor, .92 for the Usefulness factor, .91 for the Respect factor, slightly lower at .85 for the Value from and through work factor, and .87 for the Remuneration factor.
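These reliability estimates could be reproduced along the following lines. This is a sketch only: the column names are placeholders standing for the items assigned to each factor in Table 5, and mow_data is the same hypothetical data frame used above.

```r
library(psych)

# Reliability of one subscale (placeholder column names; substitute the actual
# items assigned to the factor in Table 5, e.g., the nine F1 items)
f1_items <- mow_data[, paste0("f1_item", 1:9)]
alpha(f1_items)$total$raw_alpha          # Cronbach's alpha for the subscale
omega(f1_items, nfactors = 1)$omega.tot  # McDonald's omega (total) for the subscale

# Reliability of the full 30-item scale
alpha(mow_data)$total$raw_alpha
omega(mow_data, nfactors = 5, poly = TRUE)$omega.tot
```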

Confirmatory factor analyses of the scale

To refine the questionnaire, we then applied a CFA to this five-factor model in order to improve the model fit and sharpen the latent dimensions of the questionnaire. We used the CFA to (a) determine the relevance of this latent five-factor structure and (b) confirm the relevance of a general second-order meaning-of-work factor. Although this procedure might appear redundant at first glance, it enabled us to select a definitive latent structure in which each item represents only one latent factor (simple structure; Thurstone, 1947), whereas the EFA computed in the previous step showed that certain items loaded on several factors. The CFA also readily verified the existence of a second-order latent meaning of work factor (the first-order loadings were .894, .920, .873, .892, and .918, respectively). Thus, this CFA complemented the previous analyses by refining the latent model proposed for the questionnaire.
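In lavaan, such a second-order specification can be sketched as follows. The item names are placeholders (the actual 30-item assignment comes from Table 5; only the number of items per factor is taken from the Results), so this shows the modeling pattern rather than the authors’ exact model.

```r
library(lavaan)

# Placeholder item names; in practice, use the actual column names of the
# 30 items as assigned to the five factors in Table 5
factor_items <- list(
  F1 = paste0("f1_item", 1:9),   # Success and Recognition at work and from work
  F2 = paste0("f2_item", 1:7),   # Usefulness
  F3 = paste0("f3_item", 1:4),   # Respect
  F4 = paste0("f4_item", 1:4),   # Value from and through work
  F5 = paste0("f5_item", 1:6)    # Remuneration
)

first_order  <- paste(names(factor_items),
                      sapply(factor_items, paste, collapse = " + "),
                      sep = " =~ ", collapse = "\n")
second_order <- paste("MOW =~", paste(names(factor_items), collapse = " + "))
model_2nd    <- paste(first_order, second_order, sep = "\n")

fit_2nd <- cfa(model_2nd, data = cfa_data,
               ordered = names(cfa_data), estimator = "WLSMV")
fitMeasures(fit_2nd, c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))
```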

According to conventional rules of thumb (Hu & Bentler, 1999; Kline, 2011), although the RMSEA value for the five-factor model was somewhat too high, the CFI and TLI values were excellent (χ² = 864.72, df = 400, RMSEA = .080, CFI = .989, TLI = .988). Table 7 presents the fit indices of both solutions: a model with five first-order factors (as suggested by the EFA) and a model with five first-order factors and one second-order factor.

Table 7. Solutions of the confirmatory factor analyses

χ² chi-square, df degrees of freedom, CFI comparative fit index, TLI Tucker-Lewis index, RMSEA root mean square error of approximation, SRMR standardized root mean square residual

Figure 1 shows the model after the confirmatory test. This analysis confirmed the existence of a simple five-factor structure for the meaning of work scale, with a general second-order meaning of work factor, as suggested by the previous EFA.

Figure 1. Standardized solution of the structural model of the Meaning of Work Scale

The objective of this study was to verify the theoretical and psychometric structure of the meaning of work scale developed by Morin over recent years (Morin, 2003; Morin et al., 2001; Morin & Cherré, 2004). This scale has the advantages of being rather short, of proposing a multidimensional structure for the meaning of work, and of making it possible to assess the coherence between the aspects of work that are personally valued and the actual characteristics of the work environment. Thus, it can be used diagnostically or to guide individuals. To establish the structure of this scale, we analyzed the items in depth and implemented exploratory and confirmatory factor analyses, which we believe the scale’s authors had not carried out sufficiently. Moreover, we used a broad range of psychometric evaluation methods (traditional true score theory, item response theory, EFA, and structural equation modeling) to test the validity of the scale.

The item analyses confirmed results found in previous studies in which the meaning of work scale was administered: the majority of respondents agreed with the propositions of the questionnaire, so this lack of variability is not specific to the present research and its sample (e.g., Morin & Cherré, 2004). This finding could nevertheless be explained by several factors that future research could examine, such as social desirability, the importance of work norms in industrial societies, or a lack of control over response bias.

The various versions of the latent structure of the scale proposed by its authors were not confirmed by the statistical analyses reported here. It nevertheless appears that this tool for assessing the meaning of work can describe and measure five different dimensions, all attached to a general factor. The first factor (F1), composed of nine items, is a dimension of recognition and success (e.g., item 17: work where your skills are recognized; item 19: work where your results are recognized; item 24: work that enables you to achieve the goals that you set for yourself). It should thus be named Success and Recognition at work and from work and is comparable to dimensions from previous studies (personal success, Morin et al., 2001; social influence, Morin & Cherré, 2004). The second factor (F2), composed of seven items, represents the usefulness of work for an individual, whether that usefulness is social (e.g., item 22: work that gives you the opportunity to serve others) or personal (e.g., item 28: work that enables you to be fulfilled). It can be interpreted in terms of the Usefulness of work and generally corresponds to dimensions of the same name in earlier models (Morin, 2003; Morin & Cherré, 2004), although the definition used here is more precise. The third factor (F3), described by four items, refers to the Respect dimension of work (e.g., item 5: work that respects human values) and corresponds in part to factors highlighted in prior studies (respect and rationalization of work, Morin, 2003; Morin & Cherré, 2004). The fourth factor (F4), composed of four items, refers to the personal development dimension and Value from and through work (e.g., item 2: work that enables you to learn or to improve). It is in some ways similar to autonomy and effectiveness, described by the authors of the scale (Morin, 2003; Morin & Cherré, 2004). Finally, the fifth factor (F5), with six items, highlights the financial and, more importantly, personal benefits sought or received from work, including physical and material safety and the enjoyment of work (e.g., item 14: work you enjoy doing). This dimension of Remuneration partially converges with the aspects of personal values related to work described in previous research (Morin et al., 2001). Although the structure of the scale highlighted here differs from that of previous studies, some theoretical elements are nevertheless consistent with each other; Table 8 highlights the possible overlaps.

Table 8. Final structure of the items of the meaning of work scale by Morin and their theoretical dimensions

1 = Previous dimensions of Morin et al. ( 2001 ) and Morin ( 2003 ). 2 = Morin and Cherré ( 1999 )

A second important result of this study is the identification of a second-order factor by the statistical analyses carried out. This second-level latent factor refers to the existence of a general meaning of work dimension. This unitary conception of the meaning of work, subdivided into different linked facets, is not in contradiction with the various theories related to this construct. Thus, Ros et al. (1999) defined the meaning of work as a personal interpretation of experiences and interactions at work. This view can confer on the meaning of work a unitary function of maintaining psychological harmony despite the destabilizing events that are often a feature of work. It must be considered a permanent process of work adjustment or work adaptation. To be effective, this adjustment needs to remain consistent and to be globally oriented toward a cognitive balance between the reality of work and the meaning attributed to it; it thus has to keep a certain coherence, which would explain the unitary conception of the meaning of work.

In addition to the purely statistical results of this study, although some partial overlap was found between the structural model identified here and structural models from previous work, this paper provides a much-needed update and refinement of these dimensions, as we examined several theoretical meaning of work models in order to explain them psychologically. Indeed, the dimensions defined here as Success and Recognition, Usefulness, Respect, Value, and Remuneration from the meaning of work scale by Morin et al. (2001) have strong similarities to other theoretical models of the meaning of work, even though the authors of the scale referred to these models only briefly. For example, the dimensions work centrality as a life role, societal norms regarding work, valued work outcomes, importance of work goals, and work-role identification (MOW International Research Team, 1987) concur with the model described in the present study. In the same manner, the model by Rosso et al. (2010) has some similarities to the present structure, and there is a conceptual correspondence between the five dimensions found here and those from their study (individuation, contribution, self-connection, and unification). Finally, Baumeister’s (1991), Morin and Cherré’s (2004), and Sommer, Baumeister, and Stillman’s (2012) studies presented similar findings on the meaning of important life experiences for individuals; they described four essential needs that make such experiences coherent and reasonable (purpose, efficacy-control, rectitude, and self-worth). The parallels noted here were obviously fostered by the conceptual breadth of the dimensions as defined in these models, and much more precise definitions are needed in future research. To that end, it will be essential to continue running analyses that test construct validity by establishing convergent validity between the dimensions of the various existing meaning of work scales.

It is also interesting to note the proximity between the dimensions described here and those examined in studies on the dimensions that characterize the work context (Pignault & Houssemand, 2016) or in Karasek’s (1979) and Siegrist’s (1996) well-known models, for example, which established the impact of work on health, stress, and well-being. These studies clearly showed how dimensions related to autonomy, support, remuneration, and esteem either contribute to health or harm it. The dimensions that give meaning to work in a manner similar to those highlighted in the current study (Recognition, Value, and Remuneration in particular) are thus also involved in health, and it would be interesting to verify the relations between these dimensions and measures of health at work.

Thus, the conceptual dimensions of the meaning of work, as defined by Morin (2003) and Morin and Cherré (1999), remain of strong theoretical importance even if, at the empirical level, the scale created on this basis did not correspond to them exactly. The present study has had the modest merit of showing this interest and of proposing a new structure for the facets of this general dimension. One of the major contributions of this research lies in the better interpretations that the scale should now allow. As mentioned above, Morin’s scale is very frequently used in practice (e.g., in state employment agencies or by Human Resources departments), and the divergent models of previous studies could lead to diverging individual assessments of the meaning of work, depending on the interpretive framework chosen. By showing that a certain similarity in the structures of the meaning of work exists and that a general factor of the meaning of work can be considered, the results of the current research contribute to a more precise use of this tool.

At this stage, and in conclusion, it is worth considering the reasons for the variations between the structures of the scale highlighted by the different studies. There were obviously the changes applied to the different versions of the scale, but beyond that, three types of explanation emerge. At the level of methods, the statistics used in the studies varied greatly and could explain the variations observed. At the level of the respondents, work remains one of the most important elements of life in our societies; a certain temptation to overvalue its importance and purposes could explain the broad acceptance of all the propositions of the questionnaire and the strong interactions between the subdimensions. Finally, at the theoretical level, if, as our study showed, a general dimension of the meaning of work exists, then all the items, facets, and first-order factors of the scale are strongly interrelated at their respective levels, and small variations in the distribution of responses could lead to variations in the structure.

The principal contribution of this study is undoubtedly the use of confirmatory methods to test the descriptive models that were based on Morin’s scale (Morin, 2003, 2006; Morin & Cherré, 1999, 2004). The principal results confirm that the great interest in this scale is not without merit and support its validity for use in research, by practitioners (e.g., career counselors and Human Resources departments), and for diagnostic purposes. The results show a tool that assesses a general dimension and five subdimensions of the meaning of work with a 30-item questionnaire that has strong psychometric qualities. Conceptual differences from previous exploratory studies were brought to light, even though certain similarities remained. Thus, the objectives of this study were met.

Limitations

As with any research, this study has a certain number of limitations. The first is the sample size used for the statistical analyses: even if the research design respected the general criteria for these kinds of analyses (Soper, 2019), it will be necessary to repeat the study with larger samples. The second is the cultural and social character of the meaning of work, which was not addressed in this study because the sample comprised people working in France. They can be compared with those in Morin’s studies (2003) because of the linguistic proximity (French) of the samples, but differences in the structure of the scale could be due to cultural differences between America and Europe. Other international populations should therefore be questioned about their conception of the meaning of work in order to measure the impact of cultural and social aspects (England, 1991; England & Harpaz, 1990; Roe & Ester, 1999; Ruiz-Quintanilla & England, 1994; Topalova, 1994; Zanders, 1993). In the same vein, a third limitation involves the homogeneity of the respondents’ answers. Indeed, there was quasi-unanimous agreement with all of the items describing work (see Table 4 and previous results, Morin & Cherré, 2004). It is worth examining whether this lack of variance results from a work norm that is central and promoted in industrialized countries, as it might mask broader interindividual differences. Thus, this study’s protocol should be repeated with other samples from different cultures. Finally, a fourth limitation, mentioned previously, involves the validity of the scale. Concerning content validity, because some items loaded similarly on different factors, it could be interesting to verify the wording of the items and potentially modify or replace some of them; the purpose of the present study was not to change the content of the scale but to suggest how future studies could analyze this point. Concerning construct validity, this first phase of validation needs to be followed by other phases that involve tests of convergent validity between the existing meaning of work scales as well as tests of discriminant validity in order to confirm the existence of the meaning of work construct examined here. In such studies, the centrality of work (Warr, 2008; Warr, Cook, & Wall, 1979) should be used to confirm the validity of the meaning of work scale. Other differential, individual, and psychological variables related to work (e.g., performance, motivation, well-being) should also be introduced in order to better understand the relations between the set of psychological concepts involved in work and individuals’ jobs.

Acknowledgements

Not applicable.

Abbreviations

CFA: confirmatory factor analysis; CFI: comparative fit index; EFA: exploratory factor analysis; MINRES: minimum residuals extraction; RMSEA: root mean square error of approximation; SRMR: standardized root mean square residual; TLI: Tucker-Lewis index; WLSMV: weighted least squares estimator adjusted for means and variances

Authors’ contributions

Both authors are responsible for the study conceptualization, data collection, data preparation, data analysis, and report writing. The original questionnaire is public, and no permission was required to use it. Both authors read and approved the final manuscript.

Funding

No funding was received for this study.

Availability of data and materials

Ethics approval and consent to participate

Ethical review and approval were not required for this study on human participants in accordance with the local legislation and institutional requirements. The Luxembourg Agency for Research Integrity (LARI) specifies that, according to the Code de la santé publique—Article L1123-7, France does not require research ethics committee [Les Comités de Protection des Personnes (CPP)] approval if the research is non-biomedical, non-interventional, observational, and does not collect personal health information. Written informed consent for participation was not required for this study in accordance with the national legislation and institutional requirements. At the beginning of the questionnaire, participants had to consent to their data being used for research purposes and to the publication of the study results. Participation was voluntary and confidential. No potentially identifiable human images or data are presented in this study.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Anne Pignault, Email: [email protected] .

Claude Houssemand, Email: [email protected] .

  • Aronsson G, Bejerot E, Härenstam A. Healthy work: Ideal and reality among public and private employed academics in Sweden. Public Personnel Management. 1999; 28 (2):197–215. doi: 10.1177/009102609902800203. [ CrossRef ] [ Google Scholar ]
  • Barberá P, Zeitzoff T. The new public address system: why do world leaders adopt social media? International Studies Quarterly. 2018; 62 (1):121–130. doi: 10.1093/isq/sqx047. [ CrossRef ] [ Google Scholar ]
  • Baumeister RF. Meaning of life. New York: Guilford; 1991. [ Google Scholar ]
  • Bendassolli PF, Borges-Andrade JE. Meaningfulness in work in Brazilian and French creative industries. Spanish Journal of Psychology. 2013; 16 :1–15. doi: 10.1017/sjp.2013.107. [ CrossRef ] [ Google Scholar ]
  • Bendassolli PF, Borges-Andrade JE, Coelho Alves JS, de Lucena Torres T. Meaningful work scale in creative industries: A confirmatory factor analysis. Psico-USF. 2015; 20 (1):1–12. doi: 10.1590/1413-82712015200101. [ CrossRef ] [ Google Scholar ]
  • Brief AP. MOW revisited: A brief commentary. European Work and Organizational Psychology. 1991; 1 :176–182. doi: 10.1080/09602009108408523. [ CrossRef ] [ Google Scholar ]
  • Brief AP, Nord WR. Meaning of occupational work. Toronto: Lexington Books; 1990. [ Google Scholar ]
  • Carroll JB. The nature of the data, or how to choose a correlation coefficient. Psychometrika. 1961; 26 :247–272. doi: 10.1007/bf02289768. [ CrossRef ] [ Google Scholar ]
  • Dawis RV, Lofquist LH, Weiss DJ. A theory of work adjustment (a revision) Minnesota Studies in Vocational Rehabilitation, XXIII. 1968; 47 :1–14. doi: 10.1016/b978-0-08-013391-1.50030-4. [ CrossRef ] [ Google Scholar ]
  • England, G. W. (1991). The meaning of working in USA: Recent changes. The European Work and Organizational Psychologist , 1 , 111–124.  10.1111/j.1464-0597.1990.tb01036.x 
  • England GW, Harpaz I. How working is defined: National contexts and demographic and organizational role influences. Journal of Organizational Behavior. 1990; 11 :253–266. doi: 10.1002/job.4030110402. [ CrossRef ] [ Google Scholar ]
  • Hackman JR, Oldham GR. Motivation through the design of work: Test of a theory. Organizational Behavior and Human Performance. 1976; 16 :250–279. doi: 10.1016/0030-5073(76)90016-7. [ CrossRef ] [ Google Scholar ]
  • Harman HH. Modern Factor Analysis. Chicago: The University of Chicago Press; 1960. [ Google Scholar ]
  • Harpaz I. The factorial structure of the meaning of work. Human Relations. 1986; 39 :595–614. doi: 10.1177/001872678603900701. [ CrossRef ] [ Google Scholar ]
  • Harpaz I, Fu X. The structure of the meaning of work: A relative stability amidst change. Human Relations. 2002; 55 :639–668. doi: 10.1177/0018726702556002. [ CrossRef ] [ Google Scholar ]
  • Harpaz I, Meshoulam I. The meaning of work, employment relations, and strategic human resources management in Israel. Human Resource Management Review. 2010; 20 :212–223. doi: 10.1016/j.hrmr.2009.08.009. [ CrossRef ] [ Google Scholar ]
  • Heider F. Attitudes and cognitive organization. Journal of Psychology. 1946; 21 :107–112. doi: 10.1080/00223980.1946.9917275. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Hoblingre Klein H. Réseaux sociaux professionnels: instruments d’empowerment professionnel ?: Analyse de cas de consultants RH et de recruteurs sur LinkedIn. 2018. [ Google Scholar ]
  • Hu LT, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling. 1999; 6 :1–55. doi: 10.1080/10705519909540118. [ CrossRef ] [ Google Scholar ]
  • Jöreskog KG. Factor analysis by MINRES. 2003. [ Google Scholar ]
  • Karasek RA. Job demands, job decision latitude, and mental strain: Implications for job redesign. Administrative Science Quarterly. 1979; 24 :285–308. doi: 10.2307/2392498. [ CrossRef ] [ Google Scholar ]
  • Ketchum LD, Trist E. All teams are not created equal. How employee empowerment really works. Newbury Park: Sage; 1992. [ Google Scholar ]
  • Kline RB. Principles and practices of structural equation modeling. 3. New-York: Guilford; 2011. [ Google Scholar ]
  • Li, C. H. (2015). Confirmatory factor analysis with ordinal data: Comparing robust maximum likelihood diagonally weighted least squares. Behavior Research Method , 1–14. 10.3758/s13428-015-0619-7. [ PubMed ]
  • Lips-Wiersma M, Wright S. Measuring the meaning of meaningful work: Development and validation of the Comprehensive Meaningful Work Scale (CMWS) Group & Organization Management. 2012; 37 (5):655–685. doi: 10.1177/1059601112461578. [ CrossRef ] [ Google Scholar ]
  • Lubke GH, Muthén BO. Applying multigroup confirmatory factor models for continuous outcomes to Likert scale data complicates meaningful group comparisons. Structural Equation Modeling. 2004; 11 :514–534. doi: 10.1207/s15328007sem1104_2. [ CrossRef ] [ Google Scholar ]
  • McKee-Ryan F, Song Z, Wanberg CR, Kinicki AJ. Psychological and physical well-being during unemployment: A meta- analytic study. Journal of Applied Psychology. 2005; 90 :53–76. doi: 10.1037/0021-9010.90.1.53. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Morin E. Sens du travail. Définition, mesure et validation. In: Vandenberghe C, Delobbe N, Karnas G, editors. Dimensions individuelles et sociales de l’investissement professionnel. Louvain: Presses Universitaires de Louvain; 2003. pp. 11–20. [ Google Scholar ]
  • Morin E. Document—Centre de recherche et d’intervention pour le travail, l’efficacité organisationnelle et la santé (CRITEOS), HEC Montréal. 2006. Donner un sens au travail. [ Google Scholar ]
  • Morin E. Sens du travail, santé mentale et engagement organisationnel. Montréal: IRSST; 2008. [ Google Scholar ]
  • Morin E, Archambault M, Giroux H. Projet Qualité de Vie au Travail. Rapport Final. Montréal: HEC Montréal; 2001. [ Google Scholar ]
  • Morin E, Cherré B. Les cadres face au sens du travail. Revue Française de Gestion. 1999; 126 :83–93. doi: 10.3166/rfg.251.149-164. [ CrossRef ] [ Google Scholar ]
  • Morin E, Cherré B. Réorganiser le travail et lui donner du sens. In: Lancry A, Lemoine C, editors. La personne et ses rapports au travail. Paris: L’Harmattan; 2004. pp. 87–102. [ Google Scholar ]
  • Morse NC, Weiss RS. The function and meaning of work and the job. American Sociological Review. 1955; 20 :191–198. doi: 10.2307/2088325. [ CrossRef ] [ Google Scholar ]
  • MOW International Research team . The meaning of working. London: Academic Press; 1987. [ Google Scholar ]
  • Muthén B. A general structural equation model with dichotomous, ordered categorical, and continuous latent variable indicators. Psychometrika. 1984; 49 :115–132. doi: 10.1007/bf02294210. [ CrossRef ] [ Google Scholar ]
  • Muthén LK, Kaplan D. A comparison of some methodologies for the factor analysis of non-normal Likert variables. British Journal of Mathematical and Statistical Psychology. 1985; 38 :171–189. doi: 10.1111/j.2044-8317.1985.tb00832.x. [ CrossRef ] [ Google Scholar ]
  • Muthén LK, Muthén BO. Mplus user’s guide. 6. Los Angeles: Muthén & Muthén; 2010. [ Google Scholar ]
  • Osborne, J. W. & Fitzpatrick, D. C. (2012). Replication analysis in exploratory factor analysis: what it is and why it makes your analysis better. Practical Assessment, Research & Evaluation , 17 (15), 1–8. 10.7275/h0bd-4d11.
  • Osgood CE, Tannenbaum PH. The principle of congruity in the perception of attitude change. Psychological Review. 1955; 62 :42–55. doi: 10.1037/h0048153. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Pignault A, Houssemand C. Construction and initial validation of the Work Context Inventory. Journal of Vocational Behavior. 2016; 92 :1–11. doi: 10.1016/j.jvb.2015.11.006. [ CrossRef ] [ Google Scholar ]
  • Pratt MG, Ashforth BE. Fostering meaningfulness in working and at work. In: Cameron KS, Dutton JE, Quinn RE, editors. Positive organizational scholarship. San Francisco: Berrett-Koehler Publishers; 2003. pp. 309–327. [ Google Scholar ]
  • Rasch G. Probabilistic models for some intelligence and attainment tests. Chicago: The University of Chicago Press; 1980. [ Google Scholar ]
  • Roe RA, Ester P. Values and Work: Empirical findings and theoretical perspective. Applied Psychology: An international review. 1999; 48 (1):1–21. doi: 10.1111/j.1464-0597.1999.tb00046.x. [ CrossRef ] [ Google Scholar ]
  • Ros M, Schwartz SH, Surkiss S. Basic individual values, work values, and meaning of work. Applied Psychology: An international review. 1999; 48 (1):49–71. doi: 10.1111/j.1464-0597.1999.tb00048.x. [ CrossRef ] [ Google Scholar ]
  • Rosso BD, Dekas KH, Wrzesniewski A. On the meaning of work: A theoretical integration and review. Research in Organizational Behavior. 2010; 30 :91–127. doi: 10.1016/j.riob.2010.09.001. [ CrossRef ] [ Google Scholar ]
  • Ruiz-Quintanilla SA, England GW. How working is defined: Structure and stability. 1994. [ Google Scholar ]
  • Siegrist JA. Adverse health effects of high-effort/low-reward conditions. Journal of Occupational Health Psychology. 1996; 1 :27–41. doi: 10.1037/1076-8998.1.1.27. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sommer KL, Baumeister RF, Stillman TF. The construction of meaning from life events: Empirical studies of personal narratives. In: Wong PTP, editor. The Human Quest for Meaning. New York: Routledge; 2012. pp. 297–314. [ Google Scholar ]
  • Soper DS. A-priori sample size calculator for structural equation models [Software] 2019. [ Google Scholar ]
  • Steers RM, Porter L. Work and motivation: An evaluative summary. In: Steers RM, Porter L, editors. Motivation and work behaviour. New-York: McGraw-Hill; 1979. pp. 555–564. [ Google Scholar ]
  • Steger MF, Dik BJ, Duffy RD. Measuring meaningful work: The work and meaning inventory (WAMI) Journal of Career Assessment. 2012; 20 (3):322–337. doi: 10.1177/1069072711436160. [ CrossRef ] [ Google Scholar ]
  • Streiner, D. L., Norman, G. R., & Cairney, J. (2015). Health measurement scales. Oxford Medicine Online. 10.1093/med/9780199685219.001.0001.
  • Super DE, Sverko B. Life roles, values, and careers. San Francisco: Jossey-Bass Publishers; 1995. [ Google Scholar ]
  • Sverko B. Origin of individual differences in importance attached to work: A model and a contribution to its evaluation. Journal of Vocational Behavior. 1989; 34 :28–39. doi: 10.1016/0001-8791(89)90062-6. [ CrossRef ] [ Google Scholar ]
  • Sverko B, Vizek-Vidovic V. Studies of the meaning of work: Approaches, models, and some of the findings. In: Super DE, Sverko B, editors. Life roles, values, and Careers. San Francisco: Jossey-Bass Publishers; 1995. pp. 3–21. [ Google Scholar ]
  • Thurstone LL. Multiple-factor analysis. Chicago: University of Chicago Press; 1947. [ Google Scholar ]
  • Topalova V. Changes in the attitude to work and unemployment during the period of social transition. In: Roe RA, Russinova V, editors. Psychosocial aspects of employment: European perspectives. Tilburg: Tilburg University Press; 1994. pp. 21–28. [ Google Scholar ]
  • Valois P, Houssemand C, Germain S, Belkacem A. An open source tool to verify the psychometric properties of an evaluation instrument. Procedia Social and Behavioral Sciences. 2011; 15 :552–556. doi: 10.1016/j.sbspro.2011.03.140. [ CrossRef ] [ Google Scholar ]
  • Wanberg, C.R. (2012) The Individual Experience of Unemployment. Annual Review of Psychology, 63 (1), 369-396. 10.1146/annurev-psych-120710-100500 [ PubMed ]
  • Warr P. Work values: Some demographic and cultural correlates. Journal of Occupational and Organizational Psychology. 2008; 81 :751–775. doi: 10.1348/096317907x263638. [ CrossRef ] [ Google Scholar ]
  • Warr PB, Cook JD, Wall TD. Scales of measurement of some work attitudes and aspects of psychological well-being. Journal of Occupational Psychology. 1979; 52 :129–148. doi: 10.1111/j.2044-8325.1979.tb00448.x. [ CrossRef ] [ Google Scholar ]
  • Wrzesniewski A, Dutton JE, Debebe G. Interpersonal sensemaking and the meaning of work. Research in Organizational Behavior. 2003; 25 :93–135. doi: 10.1016/s0191-3085(03)25003-6. [ CrossRef ] [ Google Scholar ]
  • Wrzesniewski A, McCauley C, Rozin P, Schwartz B. Jobs, careers, and callings: people’s relations to their work. Journal of Research in Personality. 1997; 31 :21–33. doi: 10.1006/jrpe.1997.2162. [ CrossRef ] [ Google Scholar ]
  • Yang C, Nay S, Hoyle RH. Three approaches to using lengthy ordinal scales in structural equation models. Parceling, latent scoring, and shortening scales. Applied Psychological Measurement. 2010; 34 :122–142. doi: 10.1177/0146621609338592. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Yu C-Y. Evaluating cutoff criteria of model fit indices for latent variable models with binary and continuous outcomes. Dissertation. Los Angeles: University of California; 2002. [ Google Scholar ]
  • Zanders H. Changing work values. In: Ester P, Halman L, de Moor R, editors. The individualizing society. Value changes in Europe and North-America. Tilburg: Tilburg University Press; 1993. pp. 129–153. [ Google Scholar ]


FACT SHEET: President   Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence

Today, President Biden is issuing a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more. As part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation, the Executive Order builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI. The Executive Order directs the following actions:

New Standards for AI Safety and Security

As AI’s capabilities grow, so do its implications for Americans’ safety and security. With this Executive Order, the President directs the most sweeping actions ever taken to protect Americans from the potential risks of AI systems:

  • Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.  In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public. 
  • Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.  The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.
  • Protect against the risks of using AI to engineer dangerous biological materials  by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.
  • Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content . The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.
  • Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software,  building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure.
  • Order the development of a National Security Memorandum that directs further actions on AI and security,  to be developed by the National Security Council and White House Chief of Staff. This document will ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI.

Protecting Americans’ Privacy

Without safeguards, AI can put Americans’ privacy further at risk. AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems.  To better protect Americans’ privacy, including from the risks posed by AI, the President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids, and directs the following actions:

  • Protect Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques— including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data.  
  • Strengthen privacy-preserving research   and technologies,  such as cryptographic tools that preserve individuals’ privacy, by funding a Research Coordination Network to advance rapid breakthroughs and development. The National Science Foundation will also work with this network to promote the adoption of leading-edge privacy-preserving technologies by federal agencies.
  • Evaluate how agencies collect and use commercially available information —including information they procure from data brokers—and  strengthen privacy guidance for federal agencies  to account for AI risks. This work will focus in particular on commercially available information containing personally identifiable data.
  • Develop guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques,  including those used in AI systems. These guidelines will advance agency efforts to protect Americans’ data.

Advancing Equity and Civil Rights

Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing. The Biden-Harris Administration has already taken action by publishing the  Blueprint for an AI Bill of Rights  and issuing an  Executive Order directing agencies to combat algorithmic discrimination , while enforcing existing authorities to protect people’s rights and safety.  To ensure that AI advances equity and civil rights, the President directs the following additional actions:

  • Provide clear guidance to landlords, Federal benefits programs, and federal contractors  to keep AI algorithms from being used to exacerbate discrimination.
  • Address algorithmic discrimination  through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.
  • Ensure fairness throughout the criminal justice system  by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.

Standing Up for Consumers, Patients, and Students

AI can bring real benefits to consumers—for example, by making products better, cheaper, and more widely available. But AI also raises the risk of injuring, misleading, or otherwise harming Americans.  To protect consumers while ensuring that AI can make Americans better off, the President directs the following actions:

  • Advance the responsible use of AI  in healthcare and the development of affordable and life-saving drugs. The Department of Health and Human Services will also establish a safety program to receive reports of, and act to remedy, harms or unsafe healthcare practices involving AI.
  • Shape AI’s potential to transform education  by creating resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools.

Supporting Workers

AI is changing America’s jobs and workplaces, offering both the promise of improved productivity but also the dangers of increased workplace surveillance, bias, and job displacement.  To mitigate these risks, support workers’ ability to bargain collectively, and invest in workforce training and development that is accessible to all, the President directs the following actions:

  • Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers  by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection. These principles and best practices will benefit workers by providing guidance to prevent employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organize.
  • Produce a report on AI’s potential labor-market impacts , and  study and identify options for strengthening federal support for workers facing labor disruptions , including from AI.

Promoting Innovation and Competition

America already leads in AI innovation—more AI startups raised first-time capital in the United States last year than in the next seven countries combined.  The Executive Order ensures that we continue to lead the way in innovation and competition through the following actions:

  • Catalyze AI research across the United States  through a pilot of the National AI Research Resource—a tool that will provide AI researchers and students access to key AI resources and data—and expanded grants for AI research in vital areas like healthcare and climate change.
  • Promote a fair, open, and competitive AI ecosystem  by providing small developers and entrepreneurs access to technical assistance and resources, helping small businesses commercialize AI breakthroughs, and encouraging the Federal Trade Commission to exercise its authorities.
  • Use existing authorities to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States  by modernizing and streamlining visa criteria, interviews, and reviews.

Advancing American Leadership Abroad

AI’s challenges and opportunities are global.  The Biden-Harris Administration will continue working with other nations to support safe, secure, and trustworthy deployment and use of AI worldwide. To that end, the President directs the following actions:

  • Expand bilateral, multilateral, and multistakeholder engagements to collaborate on AI. The State Department, in collaboration with the Commerce Department, will lead an effort to establish robust international frameworks for harnessing AI’s benefits, managing its risks, and ensuring safety. In addition, this week, Vice President Harris will speak at the UK Summit on AI Safety, hosted by Prime Minister Rishi Sunak.
  • Accelerate development and implementation of vital AI standards  with international partners and in standards organizations, ensuring that the technology is safe, secure, trustworthy, and interoperable.
  • Promote the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges,  such as advancing sustainable development and mitigating dangers to critical infrastructure.

Ensuring Responsible and Effective Government Use of AI

AI can help government deliver better results for the American people. It can expand agencies’ capacity to regulate, govern, and disburse benefits, and it can cut costs and enhance the security of government systems. However, use of AI can pose risks, such as discrimination and unsafe decisions.  To ensure the responsible government deployment of AI and modernize federal AI infrastructure, the President directs the following actions:

  • Issue guidance for agencies’ use of AI,  including clear standards to protect rights and safety, improve AI procurement, and strengthen AI deployment.  
  • Help agencies acquire specified AI products and services  faster, more cheaply, and more effectively through more rapid and efficient contracting.
  • Accelerate the rapid hiring of AI professionals  as part of a government-wide AI talent surge led by the Office of Personnel Management, U.S. Digital Service, U.S. Digital Corps, and Presidential Innovation Fellowship. Agencies will provide AI training for employees at all levels in relevant fields.

As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI. The Administration has already consulted widely on AI governance frameworks over the past several months, engaging with Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. The actions taken today support and complement Japan's leadership of the G-7 Hiroshima Process, the UK Summit on AI Safety, India's leadership as Chair of the Global Partnership on AI, and ongoing discussions at the United Nations.

The actions that President Biden directed today are vital steps forward in the U.S.'s approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation. For more on the Biden-Harris Administration's work to advance AI, and for opportunities to join the Federal AI workforce, visit AI.gov.

What Is a VPN? Definition, How It Works, and More

By Luis Millares

A VPN (virtual private network) encrypts your internet traffic and protects your online privacy. Find out how it works and why you should use it.

A VPN, or a virtual private network, is a mechanism used to establish a secure connection between a device and a network — such as a remote employee’s computer and a company’s internal server. Organizations use VPNs to secure connections and prevent potential threats from accessing and taking advantage of sensitive information.

In this article, we define what a VPN is, how it works, and how it can benefit you and your organization.

What is a VPN?

A virtual private network (VPN) encrypts and hides online activity and sensitive information, such as browsing history and IP addresses, to keep your connection secure.

Think of online activity on unsecured public Wi-Fi as riding a bike on an open road: the rider and everything they do are visible, including which sites they visit, where they came from and when they traveled. Online activity secured through a VPN is more like a private tunnel, where the rider's route and activity stay hidden.

Whether you’re an individual or a business, using a VPN can help protect your online data from potential threats.

How do VPNs work?

When you want to visit a website using a secure connection, you must first connect to the VPN using the app or browser extension.

The VPN creates a private connection in which your internet activity is encrypted and made unreadable. Once encrypted, the data is routed to a VPN server, which masks your IP address and adds a further layer of anonymity to your connection.

Finally, the VPN server decrypts the data and forwards it to the site you're visiting, making it appear that the request came from the server's location rather than your actual one.

This process makes sure that any data or information that can be tied back to you is scrambled and untraceable before it reaches your internet service provider.
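
To make that flow concrete, here is a minimal Python sketch of the encrypt, relay, decrypt idea. It is purely illustrative: it assumes the third-party cryptography package, and real VPNs tunnel entire network packets with protocols such as WireGuard or OpenVPN rather than encrypting individual requests this way.

    # Conceptual sketch only -- not how any specific VPN product is implemented.
    # Requires: pip install cryptography
    from cryptography.fernet import Fernet

    # A shared key is negotiated between the client and the VPN server at connect time.
    tunnel_key = Fernet.generate_key()
    client_side = Fernet(tunnel_key)
    vpn_server = Fernet(tunnel_key)

    # 1. The client encrypts the request before it leaves the device.
    request = b"GET https://example.com/ HTTP/1.1"
    ciphertext = client_side.encrypt(request)

    # 2. Anyone on the local network or at the ISP sees only ciphertext.
    print("What the ISP sees:", ciphertext[:40], b"...")

    # 3. The VPN server decrypts the traffic and forwards it to the destination,
    #    which sees the server's IP address rather than the client's.
    forwarded = vpn_server.decrypt(ciphertext)
    print("What the destination receives:", forwarded)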

Types of VPNs

There are quite a few types of VPNs, but three of the most common are remote access, site-to-site and personal VPNs.

  • Remote-access VPNs allow users to connect to a remote network securely. Companies typically use this type of VPN to allow remote employees to safely access resources through a secure corporate network. Enterprise VPNs such as Perimeter 81 and NordLayer fall under this category.
  • Site-to-site VPNs are used by large organizations to connect multiple networks, enabling secure communication and resource sharing across different business headquarters. These networks, known as intranets, are common in big corporations with multiple locations, vast resources and data.
  • Personal VPNs are designed for individual users, offering access to a VPN provider’s servers to protect personal information and unblock georestricted content. Examples of personal VPNs are NordVPN, Surfshark, and ExpressVPN.

VPN benefits vs. VPN drawbacks

Depending on the service provider, VPNs can come with a range of potential benefits and limitations.

Let’s delve into three significant benefits of VPNs and how they can enhance online security and privacy.

Secures your data and internet traffic

A VPN’s ability to protect user data through encryption is one of its most important benefits. As many workforces heavily rely on the internet, having a secure way to access corporate resources has become a necessity. As more companies adopt hybrid and remote working environments, VPN protection has become increasingly vital.

Through VPNs, sensitive information such as browsing activity, IP addresses, and private communications within businesses and organizations can be protected — whether you’re in a remote workplace or not.

Prevents tracking and allows for anonymity

VPNs prevent tracking and allow for greater anonymity because marketing websites and malicious actors have a much harder time tracing activity back to a device's real IP address.

There are even VPNs that route and encrypt user traffic to two or more servers through a process called multihop — adding an even higher level of privacy.
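
The sketch below extends the earlier example to show the layering idea behind multihop, again assuming the third-party cryptography package: traffic is wrapped in one encryption layer per hop, and each server can remove only its own layer.

    # Illustrative only: one encryption layer per hop, in the spirit of multihop routing.
    # Requires: pip install cryptography
    from cryptography.fernet import Fernet

    hop1 = Fernet(Fernet.generate_key())   # first VPN server (outer layer)
    hop2 = Fernet(Fernet.generate_key())   # second/exit VPN server (inner layer)

    message = b"sensitive request"
    wrapped = hop1.encrypt(hop2.encrypt(message))  # inner layer first, outer layer last

    after_hop1 = hop1.decrypt(wrapped)    # hop 1 peels off only the outer layer
    plaintext = hop2.decrypt(after_hop1)  # hop 2 removes the final layer before forwarding
    assert plaintext == message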

Access to georestricted content

Because a VPN server allows you to use an alternate IP address and location, you can make it appear that you’re using the internet from a location of your choosing to access geographically restricted content.

This can be handy if you’re traveling in another country and need to access a site or an online resource that’s only available in a certain location. For example, if you set your location to a VPN server in Switzerland, all the websites you visit will perceive you as someone using the internet from that country.
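
One quick way to see this effect for yourself is to check which public IP address websites observe before and after connecting to a VPN. Here is a minimal sketch, assuming the third-party requests package and the public api.ipify.org echo service:

    # Prints the public IP address that remote sites currently see for this machine.
    # Run it once normally and once while connected to a VPN; the address (and the
    # apparent location derived from it) should change.
    # Requires: pip install requests
    import requests

    public_ip = requests.get("https://api.ipify.org", timeout=10).text
    print("Websites currently see you as:", public_ip)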

Drawbacks of VPNs

While VPNs can offer numerous benefits, it's important to consider the potential drawbacks.

Quality VPNs aren’t free

While there are free VPNs available, you will benefit the most from a paid solution. Quality VPNs come at a price because of the need for providers to obtain the right server hardware, have servers in multiple countries and ensure these servers operate securely.

Paid VPNs also include more robust security protocols that protect user data, as well as provide unlimited bandwidth for your internet activity.

Fortunately, VPN providers typically offer a range of subscription plans to accommodate diverse needs.

Slower internet speed

Another drawback of VPNs is a slower-than-usual internet connection. Because your traffic is routed through extra steps, some slowdown in real-world speed is inevitable.

Slower speeds can also be the result of busy VPN servers, since a single server may be handling thousands of connections from all over the world.

Not all VPNs are secure

It’s important to understand that while most VPNs are effective security solutions, not all VPN providers offer the same level of protection. Some VPNs have had a history of reportedly logging and selling user data. This is why looking into a VPN’s no-logs policy and third-party security testing are important steps before integrating a VPN into your organization.

VPN pricing

As most VPNs require a paid subscription, here are some factors that determine their cost.

Security and encryption

As a security product first and foremost, a VPN's cost is affected by the level of protection it provides to the end user. Proven tunneling protocols such as OpenVPN, WireGuard, and IKEv2 are usually offered in more expensive VPN services. AES-256 encryption, often marketed as "military grade," is another important feature that influences the eventual cost.
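
For readers who want to see what AES-256 looks like in code, here is a minimal, hedged sketch using the cryptography package's AES-GCM primitive. It illustrates only the symmetric encryption step; real VPN protocols additionally handle key exchange, authentication and rekeying, and this is not the implementation of any particular product.

    # Minimal AES-256-GCM sketch. Requires: pip install cryptography
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # a 256-bit key, hence "AES-256"
    nonce = os.urandom(12)                      # GCM needs a unique 96-bit nonce per message

    aesgcm = AESGCM(key)
    ciphertext = aesgcm.encrypt(nonce, b"packet payload", None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == b"packet payload"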

Server network

Generally, the more servers a VPN has, the more expensive it is to run. This is because having servers in multiple locations around the world costs money to maintain.

The quality of these servers also affects pricing, as access to more secure and faster servers can be more expensive.

Extra features also affect VPN pricing. Aside from providing an encrypted connection, some VPNs will include specialized features such as data breach monitors and ad blockers.

VPN providers can also offer their VPN on multiple platforms such as Windows, macOS, Android, iOS, browsers and smart TVs — all of which cost money on the side of the VPN provider.

Plan duration and length

Plan duration and length will also determine the final VPN cost. As a baseline, almost all VPN providers offer at least a one-month subscription, ranging from as low as $8 to as much as $20 per user, per month.

To lessen the cost, most VPNs offer longer plans that range from one to three years, often at a reduced monthly rate.
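
As a back-of-the-envelope illustration of how plan length changes the effective price, here is a small Python calculation using hypothetical figures inside the ranges quoted above; actual provider pricing varies.

    # Hypothetical numbers for illustration only.
    monthly_rate = 13.00      # month-to-month price, per user
    two_year_total = 95.00    # up-front price for a two-year plan

    effective_monthly = two_year_total / 24
    savings = 1 - effective_monthly / monthly_rate
    print(f"Month-to-month: ${monthly_rate:.2f}/mo")
    print(f"Two-year plan:  ${effective_monthly:.2f}/mo ({savings:.0%} cheaper)")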

VPN security

Outside of encryption, there are other aspects of VPNs that impact overall security.

  • No-logs policy: A no-logs policy is a VPN provider’s assurance to customers that it doesn’t log or keep track of any user data.
  • Independent security audit: VPNs can bolster their security claims through independent security audits. These third-party tests can show prospective users that a VPN’s security promises, such as a no-logs policy, are legitimate and backed by evidence and data.
  • Five Eyes Alliance (FVEY): The Five Eyes alliance is an intelligence-sharing pact comprising the United States, the United Kingdom, Australia, New Zealand and Canada. Because these countries have a history of surveilling their citizens, VPNs headquartered in them can pose privacy risks. A VPN based in an FVEY country can still provide quality security, but it is a valid reason to consider options that operate in more privacy-friendly jurisdictions.

Popular VPN providers

Some of the best VPN providers today are NordVPN, ExpressVPN, and CyberGhost VPN. NordVPN offers built-in protection against malware, ads and trackers and has a unique Meshnet encrypted file-sharing system that can be useful for those who regularly transfer files.

ExpressVPN comes with an easy-to-use and well-designed desktop application and provides great speed and performance. CyberGhost VPN offers an extensive server network that gives you access to 9,500+ servers spread across 100 countries.

Should your organization use a VPN?

Given the potential security benefits, using a VPN to protect your organization's data is a no-brainer, especially for businesses or organizations that heavily rely on the internet to access company resources, conduct internal communications and run their day-to-day operations.

VPNs can keep sensitive information safe from bad actors, prevent unwanted tracking and allow access to georestricted content. These advantages are even more crucial for companies that have shifted towards hybrid and remote work arrangements.

While cost can be a hurdle, most VPN providers offer a range of subscription options to accommodate different budgets. In this digital age, VPNs offer significant benefits that outweigh their drawbacks.

This teacher shortage solution has gone viral. But does it work?

From Hechinger Report

Kavitha Cardoza

School custodian Jenna Gros is teaching a group of fourth-graders how to convert fractions to decimals.

"How would you write 6/100 in decimal form?" she asks, and then waits patiently for them to come up with the correct answer.

Gros, pronounced "grow," has been a custodian at Wyandotte Elementary School in St. Mary Parish, La., for more than 18 years, and now she's also a teacher in training.

"Everything is about kids and relationships. We don't just do garbage," she says, laughing.

For Gros, helping children learn is a dream come true — and it wouldn't be possible if not for a Grow Your Own program, an alternative pathway to becoming an educator. She's working toward a bachelor's degree in education, and as part of her studies, she has to get 15 hours a week of in-class training, which can include observing a teacher, tutoring students or helping design lessons. Best of all, the fees for her schooling are minimal: $75 a month.

Gros' school principal, Celeste Pipes, is eager for Gros to complete her training. She thinks Gros will be a wonderful teacher, and Pipes has also been struggling to fill teacher positions.

"I remember when I started teaching 20 years ago. I didn't know if I was guaranteed a job," Pipes says. "And in just that short amount of time, we are pulling people literally off the streets to fill spots in a classroom."

Across the U.S., many principals face a similar challenge. There are an estimated 55,000 vacant teaching positions in U.S. schools, according to the tracker teachershortages.com. One possible solution has gone viral: Grow Your Own programs. According to researchers, as of the spring of 2022, an estimated 900 U.S. school districts were using these programs to try to ease their teacher shortages.

Grow Your Own programs aim to recruit future teachers from the local community, and state and federal governments have made hundreds of millions of dollars available to pay for them. Michigan has invested more than $175 million in recent years, Tennessee has invested more than $20 million, and Grow Your Own teacher apprenticeship programs now have access to millions of dollars in federal job-training funds through new U.S. Labor Department guidance.

There's just one problem, researchers say: It's unclear whether these programs actually work.

Grow Your Own programs have been celebrated as a catchall solution

"This term, 'Grow Your Own program,' has really caught on fire in the last five years," says Danielle Edwards, an assistant professor of education at Old Dominion University in Virginia.

These programs have been around for decades, but Edwards says they've "exploded" in number in recent years.

Some help people earn bachelor's degrees or complete their teacher certification, while others simply aim to increase interest in the teaching profession. One program may target school employees, like Gros, who don't have college degrees or degrees in education, while another may focus on military veterans, college students or even K-12 students, with some starting as young as middle school.

Grow Your Own programs have been celebrated as a way to ease teacher shortages, increase retention, make degrees more accessible and diversify an overwhelmingly white workforce. But researchers say there isn't much data to show that these programs consistently do any of that.

"We're seeing [Grow Your Own programs] as a silver bullet ... but we just don't know if the programs themselves induced people to become teachers," says Edwards.

"There's very, very little empirical evidence about the effectiveness of these pathways," says Roddy Theobald, deputy director of the Center for Analysis of Longitudinal Data in Education Research.

That hasn't stopped education agencies from going all in. Here's U.S. Education Secretary Miguel Cardona back in January, at an event outlining his priorities: "For the first time, we're putting millions into ensuring Grow Your Own programs [are] developed to bring the talent into the profession. ... We know those programs work, and we're putting money and support behind it."

Why it's hard to know whether these programs work

Part of the problem is Grow Your Own programs can vary widely. One program may involve just a short high school career day presentation, while another gives out scholarships to traditional teacher training schools and yet another offers apprenticeship programs that are completely free. Some are in person, while others are online or hybrid. Some are run by universities; others are run by independent nonprofits.

"States and districts use 'Grow Your Own' to mean wildly different things in wildly different settings," Theobald says. That makes it hard to measure their effectiveness.

Theobald says another challenge is that Grow Your Own programs rarely target the specific needs of schools. Some states, for example, have staffing shortages only in, say, special education or STEM fields, and local programs may not be graduating teachers in those areas, leading to a "misalignment."

"Sometimes they result in even more teachers to teach courses that the state doesn't actually need."

And finally, Edwards says we don't know whether Grow Your Own programs translate into more teacher diversity — a big priority given that public school students are mostly children of color, while teachers are mostly white.

Yet the U.S. Department of Education continues to invest in and promote these programs. When NPR asked the department to comment on the lack of evidence, the department cited research from New America, the Learning Policy Institute and the department's own Institute of Education Sciences that outlines examples of higher retention rates, improved teacher diversity and better student outcomes connected to Grow Your Own programs.

But Edwards says those studies don't provide direct evidence of the effectiveness of Grow Your Own programs. Some, for example, don't include information on dropout rates for teachers in training. Some programs in the studies recruit teachers more broadly (most recruit college graduates) and aren't focused just on members of the local community, the hallmark of Grow Your Own programs. And none of the studies cited provides a counterfactual, which means we don't know whether these individuals would have become teachers in the absence of a Grow Your Own program. The programs may be selecting individuals who would have become teachers anyway, Edwards says.

It also isn't clear whether these teachers are more effective. Edwards believes much more research is needed, given the high "interest and investment."

"We want to know whether teachers who participate in Grow Your Own programs have higher contributions to student test scores, whether they have higher contributions to the likelihood of kids graduating high school, whether [the students] graduate college and their income when they become adults."

Research always lags behind

David Donaldson says it's too soon to write off these programs. He founded the National Center for Grow Your Own nonprofit and worked on the issue while he was at the Tennessee Department of Education.

He agrees that there is no shared definition of Grow Your Own programs and that this makes it tricky to measure their effectiveness. "These are not apples-to-apples comparisons," he says.

But he says research always lags behind practice. "Any time you're trying something new, there isn't going to be research. There isn't going to be evaluation."

And Donaldson believes these programs can do a lot to increase teacher diversity, in terms of both race and class, by removing financial barriers and expanding the pool of potential educators who have long been overlooked. He cites his own mother as an example of the untapped potential within school communities: "My mom was my school cafeteria worker, but she also taught Sunday school and vacation Bible school for over two decades. She could not afford to go to college to become a teacher."

This was also true for Towanna Edwards, 47, who lives in rural eastern Arkansas. She has been trying for years to become a teacher, but she never managed to finish her training because life got in the way.

"I'm a single mother with three children, two grandchildren. And I have two jobs as well," she says. She works as a secretary for an education nonprofit and also at an after-school program.

Edwards was able to restart teacher training in 2021 when she found a Grow Your Own program that was low cost and offered online classes during the evenings and weekends. "That is the very first reason I joined, absolutely. [It was] affordable," she says. The other reason was that it worked well with her schedule.

These stories show how Grow Your Own programs can help get more people to consider becoming educators, Donaldson says. "It allows us to have a different conversation about who gets to become a teacher and how they are prepared. That's the power of Grow Your Own."

A school custodian feels the power of Grow Your Own

Efforts are underway to start gathering data that might answer the questions that Danielle Edwards and other researchers are raising. But in the meantime, schools have immediate needs.

And custodian Jenna Gros, at Wyandotte Elementary, is eager to help. As she walks the school hallways, sweeping, spraying and shelving, she stops constantly to wave at children who shout out, "Miss Jenna!"

Gros says she wouldn't have become a teacher if not for her Grow Your Own program. She makes $22,000 a year as a janitor. After she graduates, debt free, in 2024, her salary will more than double.

Best of all, she expects to get a teaching job at this same elementary school, which means she can keep her accrued benefits as a district employee.

Gros loves how a teacher can shape a child's future for the better. "That's what a teacher is — a nurturer trying to provide them with the resources that they are going to need for later on in life. I think I can be that person," she says, and then pauses. "I know I can."

This story was produced in collaboration with The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.

Edited by Nicole Cohen. Visual design and development by LA Johnson. Audio story produced by Lauren Migaki.

Elon Musk and Rishi Sunak Chat China, Killer Robots and the Meaning of Life

Reuters

Britain's Prime Minister Rishi Sunak attends an in-conversation event with Tesla and SpaceX CEO Elon Musk in London, Britain, Thursday, Nov. 2, 2023. (Kirsty Wigglesworth/Pool via Reuters)

By Alistair Smout

LONDON (Reuters) - Billionaire Elon Musk welcomed China's engagement on artificial intelligence safety and said he wanted to see Beijing aligned with Britain and the United States on the subject, speaking in London on Thursday alongside British Prime Minister Rishi Sunak.

Musk backed China's inclusion in the first AI Safety Summit, hosted at Bletchley Park, England, which has drawn leading companies and nations together to agree initial steps on how to manage the risks of cutting-edge AI models.

"If the United States and the UK and China are aligned on safety, then that's going to be a good thing, because that's where the leadership is generally," he said.

Musk, who was in May given royal-like treatment during a visit to China, welcomed that Beijing had participated in AI safety talks at the event.

"Having them here I think was essential, really. If they're not participants, it's pointless."

Sunak interviewed Musk - feted by Britain as a star guest at the two-day summit - on a small stage in a gilded room at London's Lancaster House, one of the government's most opulent venues, which is often used for diplomatic functions.

Self-confessed tech geek Sunak said he felt "privileged and excited" to host Musk and used the occasion to make a not-so-subtle investment pitch when the Tesla and SpaceX founder said startup firms needed high rewards to take risks in the sector.

"We very much have a tax system that supports that," Sunak said.

MAGIC GENIE

The eclectic discussion took place in front of an invited audience of dozens of other business leaders, the final event in a two-day summit seen as a big step forward in harnessing AI for good by starting to think about the serious risks it may pose.

Musk and Sunak agreed on the possible need for physical "off-switches" to prevent robots from running out of control in a dangerous way, making reference to "The Terminator" film franchise and other science-fiction films.

"All these movies with the same plot fundamentally all end with the person turning it off," Sunak said, adding that the importance of physical off switches had formed part of the discussions at the summit earlier in the day.

Musk told Sunak he thought AI was "the most disruptive force in history", speculating the technology would be able to "do everything" and make employment as we know it today a thing of the past.

"I don't know if that makes people comfortable or uncomfortable," he said.

"It's both good and bad. One of the challenges in the future will be, how do we find meaning in life if you have a magic genie that can do everything you want?"

(Reporting by Alistair Smout; Writing by William James; Editing by Paul Sandle and Jamie Freed)

Copyright 2023 Thomson Reuters.


COMMENTS

  1. Q: What does good research mean?

    Answer: Good quality research is one that provides robust and ethical evidence. A good research must revolve around a novel question and must be based on a feasible study plan. It must make a significant contribution to scientific development by addressing an unanswered question or by solving a problem or difficulty that existed in the real world.

  2. Top 10 Qualities of Good Academic Research

    1. Good research is anchored on a sound research question. A sound research question is one of the most important characteristics of good research. In fact, formulating one is embedded in the curricula of research-heavy programs like engineering and physics degrees and careers.

  3. The Top 5 Qualities of Every Good Researcher

    The truly good researcher perseveres. They accept this disappointment, learn from the failure, reevaluate their experiment, and keep moving forward. 4. Collaboration. Teamwork makes the dream work. Contrary to the common perception of the solitary genius in their lab, research is an extremely collaborative process.

  4. What Constitutes a Good Research?

    The Declining Art of Good Research (Enago Academy, Apr 2021): We seem to be compromising our commitment to good research in favor of publishable research, and a combination of trends is accountable for this.

  5. What is Research

    The search for knowledge is closely linked to the object of study; that is, to the reconstruction of the facts that will provide an explanation to an observed event and that at first sight can be considered as a problem. It is very human to seek answers and satisfy our curiosity.

  6. What is Good Qualitative Research?

    An extensive review of the literature, contributions from expert groups and practitioners themselves lead to the generation of two core principles of quality: transparency and systematicity, elaborated to summarize the range of techniques commonly used, mirroring the flow of the research process.

  7. Criteria for Good Qualitative Research: A Comprehensive Review

    Fundamental Criteria: General Research Quality. Various researchers have put forward criteria for evaluating qualitative research, which have been summarized in Table 3. Also, the criteria outlined in Table 4 effectively deliver the various approaches to evaluate and assess the quality of qualitative work. The entries in Table 4 are based on Tracy's "Eight big‐tent criteria for excellent ...

  8. What is good research practice?

    Research practice encompasses the generic methodologies that are common to all fields of research and scholarly endeavor. The term 'good research practice' describes the expected norms of professional behavior of researchers. As Royal Society Te Apārangi, we are legislated to "provide infrastructure and other support for the professional ...

  9. (PDF) The Criteria of a Good Research

    The Criteria of a Good Research. By Iman yassin JASSIM. ... you break it into its parts, then work on them one step at a time. ... The Definition of Research. Research is the means by which we ...

  10. Research

    Original research, also called primary research, is research that is not exclusively based on a summary, review, or synthesis of earlier publications on the subject of research.This material is of a primary-source character. The purpose of the original research is to produce new knowledge, rather than to present the existing knowledge in a new form (e.g., summarized or classified).

  11. (Pdf) Elements of A Good Research

    J M Jolley. Mitchell, M.L. and Jolley, J.M., 2012. Research design explained. Nelson Education. PDF | This paper is about explaining what a good researcher should include in his/her and how to do ...

  12. Research Skills: What They Are and Why They're Important

    Research skills in the workplace: Many employers value research skills in their employees, especially when it comes to research-oriented positions such as those in analysis and data management. Common research skills necessary for a variety of jobs include attention to detail, time management, and problem solving.

  13. What is meaningful research and how should we measure it?

    We discuss the trend towards using quantitative metrics for evaluating research. We claim that, rather than promoting meaningful research, purely metric-based research evaluation schemes potentially lead to a dystopian academic reality, leaving no space for creativity and intellectual initiative. After sketching what the future could look like if quantitative metrics are allowed to proliferate ...

  14. What is Scientific Research and How Can it be Done?

    Research conducted for the purpose of contributing towards science by the systematic collection, interpretation and evaluation of data and that, too, in a planned manner is called scientific research: a researcher is the one who conducts this research. The results obtained from a small group through scientific studies are socialised, and new ...

  15. A Beginner's Guide to Starting the Research Process

    Step 1: Choose your topic. Step 2: Identify a problem. Step 3: Formulate research questions. Step 4: Create a research design. Step 5: Write a research proposal. First you have to come up with some ideas; your thesis or dissertation topic can start out very broad.

  16. What is a research project?

    A research project is an academic, scientific, or professional undertaking to answer a research question. Research projects can take many forms, such as qualitative or quantitative, descriptive, longitudinal, experimental, or correlational. What kind of research approach you choose will depend on your topic.

  17. How to Write a Research Paper

    Understand the assignment. Choose a research paper topic. Conduct preliminary research. Develop a thesis statement. Create a research paper outline. Write a first draft of the research paper. Write the introduction. Write a compelling body of text. Write the conclusion.

  18. Research

    Research Definition. Research is a careful and detailed study into a specific problem, concern, or issue using the scientific method. It's the adult form of the science fair projects back in ...

  19. Research: Definition, Characteristics, Goals, Approaches

    Research is a systematic investigation undertaken to increase existing knowledge and understanding of the unknown to establish facts and principles.

  20. The Most Important Research Skills (With Examples)

    Effective research requires being able to understand and process the information you receive, either written or spoken. That means that you need strong reading comprehension and writing skills — two major aspects of communication — as well as excellent listening skills.

  21. Research questions, hypotheses and objectives

    Research question. Interest in a particular topic usually begins the research process, but it is the familiarity with the subject that helps define an appropriate research question for a study. 1 Questions then arise out of a perceived knowledge deficit within a subject area or field of study. 2 Indeed, Haynes suggests that it is important to know "where the boundary between current ...

  22. On the meaning of work: A theoretical integration and review

    Research in this tradition has tended to focus on how employees make or find positive meaning in their work, even, for example, in work that is typically considered undesirable (Wrzesniewski and Dutton, 2001, Wrzesniewski et al., 2003). 4 However, the use of the word "meaning" in the meaning of work literature primarily denotes positive ...

  23. What factors contribute to the meaning of work? A validation of Morin's

    The first theoretical model for the meaning of work was based on research in the MOW project (MOW International Research team, 1987), considered the "most empirically rigorous research ever undertaken to understand, both within and between countries, the meanings people attach to their work roles" (Brief, 1991, p. 176). This view suggests ...

  24. AI for good? 4 ways it speeds scientific discovery

    Artificial Intelligence. Artificial intelligence (AI) has a proven ability to speed the pace of scientific discovery and deliver new insights in a range of fields. At the World Economic Forum's recent Annual Meeting of the Global Future Councils, a panel of experts looked beyond AI's complex governance issues to how it can be a force for good.

  25. FACT SHEET: President

    Catalyze AI research across the United States through a pilot of the National AI Research Resource—a tool that will provide AI researchers and students access to key AI resources and data—and ...

  26. What Is a VPN? How Does It Work & Why Should You Use It?

    A VPN, or a virtual private network, is a mechanism used to establish a secure connection between a device and a network — such as a remote employee's computer and a company's internal ...

  27. This teacher shortage solution has gone viral. But does it work?

    One possible solution has gone viral: Grow Your Own programs. According to researchers, as of the spring of 2022, an estimated 900 U.S. school districts were using these programs to try to ease ...

  28. Elon Musk Says Good for UK, US and China to Align on AI Safety

    Elon Musk and Rishi Sunak Chat China, Killer Robots and the Meaning of Life More Britain's Prime Minister Rishi Sunak attends an in-conversation event with Tesla and SpaceX's CEO Elon Musk in ...