Ming Jiang, an assistant professor of Data Science in the Department of Human-Centered Computing, has been named a Google Research Scholar. She was selected based on her proposal, Unity in Diversity: Augmenting the Geo-Cultural Competence of Language Technologies for Intercultural Communication.
Google research proposals are evaluated on faculty and research merit, proposal quality, commitment to broadening participation, and alignment with Google's AI principles. The highly selective and prestigious Google Research Scholar Program provides "unrestricted gifts to support research at institutions around the world, and is focused on funding world-class research conducted by early-career professors."
Andrew Miller, chair of the Department of Human-Centered Computing, said, “Dr. Jiang embodies the unique strength of Luddy and HCC in human-centered AI. This high-profile award is a mark of excellence, particularly this early in her career. On behalf of the department, I am extremely proud of Dr. Jiang’s success!”
Making Large Language Models better at human communication
Recent developments in smart computer programs known as large language models, or LLMs, have made language technologies central to how people communicate with each other and with machines. For example, LLMs are at work when humans interact with chatbots, translation tools, and virtual assistants.
Even though these programs have become very good at understanding and responding to what people say, they focus mainly on content alone. They often overlook the social context, such as the backgrounds of the people talking, their cultures, and the situations they're in. This poses a risk because it can lead to misunderstandings, especially when people from different cultures are communicating. Something that seems normal in one culture might be confusing or even offensive in another.
To bridge these gaps, Jiang is studying how people from different cultures talk to and understand each other. She wants to teach these programs to recognize and respect cultural differences. To do this, she will endow language technologies with foundational mechanisms and design strategies that involve humans directly in the learning process (called "human-in-the-loop") and teach the programs to consider the context of a conversation through natural language prompts (called "in-context learning"). This will help LLMs better grasp cultural nuances, reducing their vulnerabilities and making them more effective at tasks such as machine translation and answering questions accurately.
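To give a flavor of what "in-context learning" means in practice, here is a minimal illustrative sketch: rather than retraining a model, culturally grounded examples are placed directly in the prompt so the model can pick up the pattern at inference time. The `build_cultural_prompt` helper and the example phrases are hypothetical illustrations, not part of Jiang's actual project.

```python
# Minimal sketch of in-context learning: a few culture-aware examples are
# embedded in the prompt text itself, guiding the model's next response.

def build_cultural_prompt(examples, query):
    """Assemble a few-shot prompt pairing each phrase with a
    culture-aware explanation, then append the new query."""
    lines = ["Explain the phrase for a reader from a different culture."]
    for phrase, explanation in examples:
        lines.append(f"Phrase: {phrase}")
        lines.append(f"Explanation: {explanation}")
    lines.append(f"Phrase: {query}")
    lines.append("Explanation:")
    return "\n".join(lines)

examples = [
    ("break a leg", "An English idiom wishing a performer good luck."),
    ("itadakimasu", "A Japanese phrase said before eating to express gratitude."),
]

prompt = build_cultural_prompt(examples, "touch wood")
print(prompt)
```

The prompt ends just before the model's turn, so an LLM completing it would be nudged to explain "touch wood" in the same culture-aware style as the in-context examples, with no weight updates involved.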
“This project will advance the fields of human-centered Natural Language Processing (NLP) by rethinking language technologies in intercultural circumstances, an underserved NLP scenario yet ubiquitous in practical communication practices. We hope to enhance machines’ mutual understanding of diverse cultural knowledge through this project, thereby promoting intercultural communication,” Jiang said.
Jiang will present her study in June at NAACL 2024. Her paper with co-author Mansi Joshi, “CPopQA: Ranking Cultural Concept Popularity by LLMs,” will be published in the Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics.
Media Contact
Joanne Lovrinic
jebehele@iu.edu
317-278-9208