HJRS | HEC Journal Recognition System
HEC-Recognized Language & Linguistics (W, X, Y Category) Journals
List of HJRS-approved language and linguistics journals for 2023. This blog post provides the top 20 HEC-recognized language and linguistics journals under the W, X, and Y categories.
We will also describe how to find HEC-recognized language and linguistics journals on the official HJRS website. HJRS places each research journal into one of three categories, W, X, and Y, where W is the highest recognized category, followed by X and Y, within each subject area and its associated sub-categories.
HJRS says: “A collection of research journals that are categorized into three different categories—W, X, and Y—within their respective knowledge areas on the basis of a number of internationally benchmarked and recognized parameters that measure the quality of a journal.”
Top 20 HEC-Recognized Language & Linguistics Journals
Steps to find HEC-recognized language and linguistics journals on the official HJRS website
- Visit the official website of HJRS.
- Select the Area "Arts and Humanities".
- Select the Subject Area "Arts and Humanities".
- Finally, select the Subject Sub-Category "Language and Linguistics".
- You will be shown a list of HJRS-approved language and linguistics journals.
- You can also export the list to your local computer.
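Once the list is exported, it can be filtered locally. The sketch below is a minimal example, assuming a hypothetical CSV export with columns named "Journal Title" and "Category"; the actual column names in the HJRS export may differ.

```python
import csv

def journals_by_category(csv_path, category):
    """Filter an exported HJRS journal list by recognition category (W, X, or Y)."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = csv.DictReader(f)
        return [
            row["Journal Title"]
            for row in rows
            if row["Category"].strip().upper() == category.upper()
        ]

# Example: collect only the W-category journals from the exported file
# w_journals = journals_by_category("hjrs_language_linguistics.csv", "W")
```

Adjust the column names to match whatever headers appear in the file you download.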
Source & Credit: HJRS
HEC Foundation Rewards Outstanding Research in 2022
The 46th edition of the HEC Foundation Awards once again highlighted the cutting-edge excellence of the school’s research at all levels. Eleven prizes in nine categories were awarded to academics and students whose work over the past year stands out as a powerful vector of influence and impact on current and future leaders, both in business and in society at large.
© Estel Plagué, HEC Stories
“On behalf of the Foundation and the nine jury members, let me congratulate all those nominated and awarded for the pertinence of their work and the rigor of their analyses.” With these words, HEC Foundation Executive Director Delphine Colson opened proceedings at the March 27 award ceremony. As with last year, the prizes were split over two events. In early March, six students were rewarded for works ranging from anti-harassment initiatives (MBA candidate Anna Dragina) and programs to reduce prostate cancer mortality among minority men in the USA (EMBA graduate Quoc-Dien Trinh), to proposals to transform organizations’ value architecture to achieve decarbonization (SASI graduate Janice Klaiber) and the creation of a digital platform to provide drinking water for Nigeria’s megacity of Lagos (EM graduate Joseph Reymond).
A few weeks later, it was the turn of HEC’s top academics, assembled in the Paris HEC Alumni headquarters, to accept awards for years of intense research in their respective fields. “The bar is being set higher and higher each year,” admitted Andrea Masini, Dean of Faculty and Research at HEC. “It’s a real challenge for the juries to identify the laureates because the quality of the papers and pedagogical initiatives is improving all the time. So, I think our game is getting tougher by the year.”
Doctorate Prize for Social Cognition Processes
This particular evening, four of the annual awards went to the HEC academics who caught the eye of the nine-strong jury. In the presence of Foundation representatives, HEC deans, the jury members, individual donors, and corporate partners, the ceremony rewarded a wide breadth of academic excellence.
Linares’ award is the culmination of a long collaboration with her supervisor, HEC Marketing Professor Anne-Laure Sellier, who herself won Foundation awards in 2013 (Pedagogical Innovation) and 2020 (Research Article of the Year). Sellier was on hand to applaud her former student’s achievement and shared the words of the jury: “Claire is destined to have a great career in academia. Her work on the existence and consequences of facial stereotyping in marketing and management is part of an important breakthrough in facial detection technologies and access to facial data online.”
HEC’s Associate Dean for Research, Christophe Pérignon, also underlined the tremendously competitive fields the research professors operate in. Before handing the Article of the Year Award to Associate Professor Denisa Mindruta, Pérignon outlined the context: “Each year, HEC professors publish around 50 articles in the ‘A’ journals of their respective fields. These journals receive submissions from academics at the best universities in the world and typically accept only 5% of them. The publications represent two to three years of preparation, data collection, investigation, writing, presentation, revision... Tough waters to navigate! Which makes Denisa’s article on CEOs’ human capital and how it relates to their firms’ strategic approach to acquisition all the more praiseworthy.” (Find her awarded research article explained on Knowledge@HEC, in English and in French.)
The professor from the Strategy and Business Policy Department graciously accepted the award in the presence of jury president Laurent Inard, partner and Chief R&D Officer at Mazars. “I am deeply honored to be awarded this trophy from a jury that mixes professionals and academics,” Mindruta said. “It underlines one of the objectives I’ve set out for my research: to be relevant in the world of management and make an impact.”
Preparing Climate Leaders of Tomorrow
Impact is just one such objective behind the newly created Climate & Business Certificate created by HEC academics Daniel Halbheer and Igor Shishlov. The duo won the Bruno Roux de Bézieux Award for the originality, pertinence and efficacy of their new program. “Pedagogy has evolved in unheard-of ways these past years,” said jury member Anne Michaut, HEC’s Associate Dean (Education Track and Pedagogy). “These rewards recognize the importance of making our courses evolve, thus preparing students to navigate the complexities of the world they are entering.”
The certificate provides students with the tools to forge careers as climate leaders working on ways to limit the global temperature rise to 1.5°C. “We feel humbled by this award,” said Shishlov, who also works as a consultant on climate policy at Perspective Climate Group. “We began working on this program two years ago,” added Halbheer, who is the FII Institute chairholder on Business Models for the Circular Economy. “We started with a blank sheet of paper since it’s only a recent addition to the HEC curricula. Now the topic has become a strategic issue, reflecting the growing interest in firms who are tackling climate change. We believe this certificate is a stepping stone towards integrating the issues and challenges into several HEC programs.” Details of the academics’ approach to this crucial certificate have been detailed by HEC Stories.
Allying “A” Publications with Impact
It fell to Andrea Masini to wrap up the award ceremony by presenting the 2023 Researcher of the Year Award to Pepa Kraft for her work on rating agencies.
More broadly, the professor, who is also a Chartered Financial Analyst, has been studying the biases within credit rating agencies and their rating process. “Accounting rules are written in a way that allows firms to recognize debts that meet standards, and other types of debt can sort of be hidden. Rating analysts dig deep and get those numbers onto the balance sheet. So, it’s very, very prevalent; you have to dig into the footnotes or do a series of financial statement analyses to reveal them.” And how do the credit rating agencies respond to her work? Kraft laughed: “I’ve presented it to agencies and sometimes it was like stepping into the lion’s den. But in the end, we had very beneficial discussions.”
Meanwhile, the author of "Market Power and Credit Rating Standards: Global Evidence" said she was extremely grateful for the recognition by the Foundation Awards: “Research is a long, sometimes very lonely process. It can be frustrating to have your papers rejected and have to resubmit after years of work. So, it’s nice to have such an award on top of the acceptances.” (Find two of her research papers explained on Knowledge@HEC.)
Learn more about the impact of the HEC Foundation on the production of knowledge at HEC Paris on this page.
Download the Prizes Brochure
HealthCare Ethics Committee Forum: An Interprofessional Journal on Healthcare Institutions' Ethical and Legal Issues
HEC Forum is an international, peer-reviewed publication featuring original contributions of interest to practicing physicians, nurses, social workers, risk managers, attorneys, ethicists, and other HEC committee members. Contributions from all pertinent sources are included, written in a style appealing to HEC members and lay readers. HEC Forum publishes essays and research papers, and includes such sections as Essays on Substantive Bioethical/Health Law Issues; Analyses of Procedural and Operational Committee Issues; Document Exchange; Special Articles; International Perspectives; Mt./St. Anonymous: Cases and Institutional Policies; Point/Counterpoint Argumentation; Case Reviews, Analyses, and Resolutions; Chairperson's Section; `Tough Spot'; Critical Annotations; Health Law Alert; Network News and Letters to the Editors.
HEC Forum is an official partner journal of the American Society for Bioethics and Humanities:
- Mark J. Cherry
Volume 36, Issue 1
Can We Be Creative with Communication? Assessing Decision-Making Capacity in an Adult with Selective Mutism
- Nicholas R. Mercado
What is a High-Quality Moral Case Deliberation? Facilitators’ Perspectives in the Euro-MCD Project
- Lena M. Jakobsen
- Bert Molewijk
- Gøril Ursin
Non-Psychiatric Treatment Refusal in Patients with Depression: How Should Surrogate Decision-Makers Represent the Patient’s Authentic Wishes?
- Esther Berkowitz
- Stephen Trevick
Organizational Ethics in Healthcare: A National Survey
- Kelly Turner
- William A. Nelson
Medical-Legal Partnerships and Prevention: Caring for Unrepresented Patients Through Early Identification and Intervention
- Cathy L. Purvis Lively
COVID-19 and Impact on Peer Review
As a result of the significant disruption caused by the COVID-19 pandemic, we are very aware that many researchers will have difficulty meeting the timelines associated with our normal peer review process. Please do let us know if you need additional time. Our systems will continue to remind you of the original timelines, but we intend to be highly flexible at this time.
- Current Contents / Social & Behavioral Sciences
- Google Scholar
- Japanese Science and Technology Agency (JST)
- Journal Citation Reports/Social Sciences Edition
- OCLC WorldCat Discovery Service
- Social Science Citation Index
- TD Net Discovery Service
- The Philosopher’s Index
- UGC-CARE List (India)
Rights and permissions
© Springer Nature B.V.
Bring on the AI guardrails!
Ziad Obermeyer to Senate panel: Here's how AI in health care can do more good than harm
- By Sheila Kaplan
- Published February 14
Ziad Obermeyer believes that artificial intelligence can help doctors and others in the health care system make better decisions, improving health and reducing costs. He also thinks that without strong oversight, much could go wrong.
On February 8, Obermeyer, Blue Cross Distinguished Associate Professor of Health Policy and Management at Berkeley Public Health, warned the U.S. Senate Finance Committee about some of AI’s potential hazards within the healthcare field, and offered ways to ensure that AI systems are safe, unbiased and useful.
The hearing, “Artificial Intelligence and Health Care: Promise and Pitfalls,” explored the growing use of AI in medicine, and by federal health care agencies.
“Throughout my ten years of practicing medicine, I have agonized over missed diagnoses, futile treatments, unnecessary tests and more,” Obermeyer said. “The collective weight of these errors, in my view, is a major driver of the dual crisis in our healthcare system: suboptimal outcomes at very high cost. AI holds tremendous promise as a solution to both problems.”
Obermeyer, a physician and researcher, studies how machine learning can help doctors make better decisions (like whom to test for heart attack), and help researchers make new discoveries by “seeing” the world the way algorithms do (like finding new causes of pain that doctors miss, or linking individual body temperature set points to health outcomes). He has also shown how widely used algorithms affecting millions of patients automate and scale up racial bias. That work has impacted how many organizations build and use algorithms, and how lawmakers and regulators hold AI accountable.
Obermeyer is a co-PI of a joint Berkeley and University of Chicago lab that builds algorithmic tools to improve decision-making and deepen understanding in health. He is the co-founder of Nightingale Open Science, a non-profit that makes massive new medical imaging datasets available for research; and Dandelion, a data platform to jump-start AI innovation in health. He is also a Chan Zuckerberg Biohub Investigator, a Faculty Research Fellow at the National Bureau of Economic Research, and was named an Emerging Leader by the National Academy of Medicine.
Obermeyer told the panel that one area where AI is already used to improve patient care is in helping doctors predict which patients are at high risk for the potential arrhythmias that cause sudden death.
“In the U.S. alone, 300,000 people experience sudden cardiac death every year,” Obermeyer said. “What makes these events so tragic is that many of them are preventable: had we known a patient was at high risk, we would have implanted a defibrillator in her heart, to terminate the potential arrhythmias that cause sudden death, and save her life. Unfortunately, we are very bad at knowing who is at high risk.”
Obermeyer worked with a team of colleagues in the U.S. and Sweden to train an AI system to predict the risk of sudden cardiac death using just the waveform of a patient’s electrocardiogram.
“It performs far better than our current prediction technologies, based largely on human judgment,” he said. “This means we have the potential to both save more lives and reduce waste, by ensuring that precious defibrillators are implanted in the right patients. It’s rare to have an opportunity to both improve quality and reduce cost; normally we must choose. AI is a transformative new way for us to sidestep this dilemma entirely, and rebuild our health care system on a foundation of data-driven decision making.”
This principle—better human decisions through AI-driven predictions—can apply to many areas of medicine, Obermeyer said. But despite his optimism, Obermeyer worries that without concerted effort from researchers, the private sector, and government, “AI may be on a path to do more harm than good in health care.”
To make this case, Obermeyer walked the senators through a study he led five years ago that showed how poorly designed AI algorithms, built and used in both the public and private sectors, perpetuated large-scale racial bias.
The algorithm’s goal was to identify patients with high future health needs. But, Obermeyer said, AI is extremely literal. Since no dataset contains a variable called “future health needs,” the AI developers chose to predict a proxy variable that is present in health datasets: future healthcare costs.
It seemed reasonable. But because of discrimination and barriers to access, underserved patients who need health care often don’t get it, Obermeyer said.
“This means Black patients, and also poorer patients, rural patients, less-educated patients, and all those who face barriers to accessing health care when they need it—get less spent on their healthcare than their better-served counterparts, even though they have the same underlying health conditions. Low costs do not necessarily mean low needs.”
The AI ignored those facts, and predicted that Black patients would generate lower costs; and thus deprioritized them for access to help with their health.
“The result,” Obermeyer said, “was racial bias that affected important decisions for hundreds of millions of patients every year.”
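The mechanism can be illustrated with a toy calculation. The numbers and ranking function below are hypothetical, not from the study; they only show how ranking patients by a cost proxy deprioritizes a patient whose observed costs are suppressed by access barriers, even when the underlying need is identical.

```python
# Toy illustration of proxy-label bias (hypothetical numbers, not study data):
# two patients with identical underlying health needs, but access barriers
# suppress observed costs for the underserved patient.

def priority_by_observed_cost(patients):
    """Rank patients for care-management outreach by observed cost (the proxy)."""
    return sorted(patients, key=lambda p: p["observed_cost"], reverse=True)

patients = [
    # Well-served patient: need translates into healthcare spending.
    {"id": "A", "true_need": 8, "observed_cost": 9000},
    # Underserved patient: same true need, but barriers cut spending.
    {"id": "B", "true_need": 8, "observed_cost": 4000},
]

ranked = priority_by_observed_cost(patients)
# The cost proxy ranks A ahead of B despite identical true need,
# so B is deprioritized for extra help.
```

Labeling the training target with true health needs (e.g., number of active chronic conditions) rather than costs is the kind of fix the study's authors pursued with algorithm developers.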
“Many of the biased algorithms we studied remain in use today,” he said. When questioned by members of the panel, he added, “Unfortunately, as AI learns to basically replicate our current system, it’s going to replicate all of the inequalities in our current system.”
Fortunately, Obermeyer said, “there are a number of specific things that programs under this committee’s jurisdiction can do to ensure that AI produces the social value we all want.”
Obermeyer said that Medicare, Medicaid, and other programs under the finance committee’s jurisdiction can realize enormous benefits from well-designed AI products to improve quality of service and reduce costs.
“These programs should be willing to pay for AI—but they should not simply accept the flawed products that the market often produces,” Obermeyer said. “Rather, they should take advantage of their market power to articulate clear criteria for what they will pay for, and how much.”
He also called for transparency by AI businesses.
“We need more accountability in the form of evaluating those algorithms in new data sets and by third parties,” he said, “so that we don’t have to take an algorithm developer’s word that the AI is working well and equitably across groups.”
Claudia Williams, UC Berkeley School of Public Health’s inaugural chief social impact officer, said, “Dr. Obermeyer points out that AI is a policy unicorn. It has the potential to improve health and reduce costs. But it won’t achieve these outcomes without the policy guardrails he recommends.”
Other witnesses at the hearing were Michelle M. Mello, professor of health policy and of law at Stanford University; Peter Shen of Siemens Healthineers; Dr. Mark Sendak of Health AI Partnership; and Katherine Baicker, provost of the University of Chicago.