Enhancing Trustworthiness and Accountability in Artificial Intelligence
Effective communication in natural language requires all parties to understand words in their context, trust that content is shared in good faith, reason about the information, and apply it to real-world situations. Four MIT PhD students interning with the MIT-IBM Watson AI Lab, Athul Paul Jacob SM ’22, Maohao Shen SM ’23, Victor Butoi, and Andi Peng SM ’23, are working to improve every step of this process in natural language models, making them more dependable and accurate for users.
Jacob’s research applies game theory to improve the output of existing natural language models. He is interested in understanding human behavior and using that understanding to build better AI systems. His research team first took on the board game “Diplomacy,” developing systems that could learn and predict human behavior and negotiate strategically. Along the way, they confronted research challenges such as modeling human behavior and accounting for when humans tend to act irrationally. That work led them to recast the problem of language generation itself as a two-player game.
Using a “generator” model that proposes answers to questions and a “discriminator” model that evaluates the correctness of those answers, Jacob’s team built a natural language system in which the two players learn to agree. This collaborative learning algorithm aims to make language models more truthful and reliable while staying close to the pre-trained model’s priors. Jacob suggests that combining this technique with a smaller language model can yield performance competitive with much larger models.
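The idea of ranking candidate answers by driving a generator and a discriminator toward agreement can be sketched in a few lines. This is a simplified stand-in, not the team’s exact algorithm: the candidate answers and all scores below are hypothetical placeholders for real model probabilities, and the update rule is a basic geometric-mean step toward consensus.

```python
# Minimal sketch of a two-player "consensus" scheme: a generator's
# answer distribution and a discriminator's correctness distribution
# are nudged toward each other, then candidates are ranked by agreement.

def consensus_rank(gen_scores, disc_scores, rounds=10, lr=0.5):
    """Iteratively move both players' distributions toward agreement
    (a simplified update, not the authors' exact algorithm)."""
    def normalize(ws):
        s = sum(ws.values())
        return {k: v / s for k, v in ws.items()}

    p = normalize(dict(gen_scores))   # generator's policy over answers
    q = normalize(dict(disc_scores))  # discriminator's policy
    for _ in range(rounds):
        # each player steps toward the other's current policy
        p = normalize({a: p[a] ** (1 - lr) * q[a] ** lr for a in p})
        q = normalize({a: q[a] ** (1 - lr) * p[a] ** lr for a in q})
    # rank candidates by the product of the two (now similar) policies
    return sorted(p, key=lambda a: p[a] * q[a], reverse=True)

# hypothetical scores for the question "What is the capital of France?"
candidates = {"Paris": 0.6, "Lyon": 0.3, "Berlin": 0.1}   # generator
judgments  = {"Paris": 0.7, "Lyon": 0.1, "Berlin": 0.2}   # discriminator
ranking = consensus_rank(candidates, judgments)
print(ranking[0])
```

Because both players already favor the same answer here, the consensus step reinforces it; in harder cases the two signals can disagree, and the iteration settles on answers both players can accept.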
Language models often generate results with high confidence that does not match their actual accuracy. Maohao Shen and his group address this issue with uncertainty quantification (UQ), aiming to recalibrate models that are poorly calibrated. Focusing on classification, they let a language model generate free text and then convert the task into multiple-choice classification, which makes it possible to measure whether the model is over- or under-confident.
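Once answers are in multiple-choice form, the gap between confidence and accuracy can be quantified. A standard way to do this is expected calibration error (ECE): bin predictions by confidence and compare each bin’s average confidence to its accuracy. The data below is illustrative, not from the researchers’ experiments.

```python
# Sketch of measuring calibration: bin predictions by confidence,
# then average the |confidence - accuracy| gap, weighted by bin size.

def expected_calibration_error(confidences, correct, n_bins=5):
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += len(b) / n * abs(avg_conf - accuracy)
    return ece

# an overconfident model: ~90% confidence but only 50% accuracy
confs   = [0.9, 0.95, 0.85, 0.9, 0.92, 0.88]
correct = [1, 0, 1, 0, 1, 0]
print(round(expected_calibration_error(confs, correct), 3))  # 0.4
```

A well-calibrated model would score near zero; the large gap here flags exactly the kind of overconfidence Shen’s group targets.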
Shen’s team developed a technique that fine-tunes the confidence output of a pre-trained language model. They trained an auxiliary model using ground-truth information to correct the language model’s confidence. The technique was evaluated on multiple benchmark datasets, demonstrating its ability to align the accuracy and confidence of language model predictions for unseen tasks.
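The general recipe of correcting confidence with held-out ground truth can be illustrated with a deliberately simplified stand-in: instead of training an auxiliary model as the team did, the sketch below fits a single temperature parameter by grid search to minimize negative log-likelihood on labeled examples. All data is hypothetical.

```python
# Hedged sketch of post-hoc confidence correction: fit one "temperature"
# on held-out labeled data, then use it to rescale future confidences.
# (The actual method trains an auxiliary model; this is a minimal stand-in.)
import math

def scale(conf, t):
    """Temperature-scale a binary confidence via its log-odds."""
    logit = math.log(conf / (1 - conf))
    return 1 / (1 + math.exp(-logit / t))

def fit_temperature(confidences, correct):
    """Pick the temperature minimizing negative log-likelihood."""
    def nll(t):
        total = 0.0
        for c, y in zip(confidences, correct):
            p = scale(c, t)
            total -= math.log(p if y else 1 - p)
        return total
    return min((0.5 + 0.1 * i for i in range(40)), key=nll)

# overconfident held-out predictions (~90% confidence, ~67% accuracy):
# the fitted temperature should exceed 1, softening the confidences
confs  = [0.9, 0.95, 0.85, 0.9, 0.92, 0.88]
labels = [1, 0, 1, 0, 1, 1]
t = fit_temperature(confs, labels)
print(t > 1.0)
```

With the temperature fixed, new predictions are rescaled at inference time, pulling stated confidence toward observed accuracy without retraining the underlying model.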
Vision-language models often struggle with compositional reasoning, which is essential for decision-making in real-world scenarios. Victor Butoi and his lab team are working on enhancing the capability of vision-language models to reason about what they see and understand key phrases. They aim to improve the model’s ability to solve subtasks and answer questions related to specific concepts like spatial relationships.
Butoi’s team fine-tunes a model using low-rank adaptation of large language models (LoRA), training it on the annotated dataset Visual Genome to guide it toward specific relationships, such as “left.” The adapted model’s caption output then prompts the vision-language model, making related tasks easier to perform.
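The core mechanism of LoRA is small enough to sketch: rather than updating a frozen weight matrix W directly, one learns a low-rank correction B @ A and applies W + (alpha / r) · B @ A at inference. The shapes and values below are toy placeholders, not the team’s actual training setup.

```python
# Minimal LoRA sketch: a frozen weight matrix plus a learned
# low-rank update, applied together in the forward pass.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_forward(x, W, A, B, alpha=1.0):
    """Compute x @ (W + (alpha / r) * B @ A)^T without modifying frozen W."""
    r = len(A)                      # rank of the adaptation
    delta = matmul(B, A)            # (out, r) @ (r, in) -> (out, in)
    W_eff = [[w + (alpha / r) * d for w, d in zip(wr, dr)]
             for wr, dr in zip(W, delta)]
    W_eff_T = [list(col) for col in zip(*W_eff)]
    return matmul(x, W_eff_T)       # x @ W_eff^T

# frozen 2x2 identity weight with rank-1 adapters
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[0.5], [0.5]]                  # shape (out=2, r=1)
A = [[1.0, 1.0]]                    # shape (r=1, in=2)
x = [[2.0, 2.0]]
print(lora_forward(x, W, A, B))     # [[4.0, 4.0]]
```

Because only A and B are trained, the update touches a tiny fraction of the parameters, which is what makes adapting a large pre-trained model to a narrow skill, like spatial relations, cheap.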
AI systems in the field of robotics interact with their surroundings using computer vision and language. Andi Peng and her mentors focus on assisting individuals with physical constraints using virtual worlds. They are developing embodied AI models in a simulated environment called ThreeDWorld. These models consist of a “human” agent that requires support and a helper agent. The team leverages semantic priors captured by large language models to help the helper AI understand the “human” agent’s abilities, motivations, and actions through natural language. Their goal is to improve the helper’s decision-making, bidirectional communication, understanding of the physical scene, and overall contribution.
According to Peng, it is crucial to build robots and systems for humans in a way that incorporates human knowledge. The emphasis is on enabling systems to operate in a human-like manner that people can understand, rather than autonomously completing tasks in ways that seem alien.