It is through research that scientific, technological, social, and cultural advances are generated, enriching society as a whole. Research also fosters critical thinking, intellectual curiosity, and the ability to solve complex problems, qualities that are essential for training highly competent professionals. Moreover, its integration with education allows students to engage with cutting-edge knowledge and participate actively in projects.
In the School of Engineering, most research activity takes place through centers and research groups, where projects with high technological and scientific content are developed, applied research is conducted, and innovative educational materials are produced.
On this occasion, we spoke with Dr. Sergio Yovine, professor and academic coordinator of Artificial Intelligence and Big Data, to learn about the work being done within the Artificial Intelligence (AI) research group.
“Within this field, there is an overarching concept that encompasses our research, known as Responsible Artificial Intelligence, which focuses on the use, application, and development of AI devices and algorithms to ensure they operate fairly,” explains Yovine.
The goal of Responsible AI is to ensure that artificial intelligence systems operate in a fair, transparent, and safe manner, respectful of individuals and society as a whole; this involves protecting data and verifying that the AI being developed is used appropriately.
“To get more specific about what we do within the group, we use artificial intelligence to verify and analyze artificial intelligence,” he explains, adding, “To draw a parallel, it’s similar to what’s done in software engineering, where software is developed to verify and test other software.”
To explain this better, Yovine gives the following example: “One of the common questions today is whether language models generate text that exhibits gender bias or any other type of bias. The goal is to characterize this property in a more formal way and then develop an algorithm that verifies whether the language model exhibits such bias—and if it does, to take appropriate action.”
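To make the idea concrete, the sketch below shows one way such a bias check could look in practice. It is a minimal illustration, not the group’s actual formalization: generate() is a hypothetical stand-in for the model under audit, and the paired prompts, pronoun lists, and tolerance threshold are placeholder choices.

```python
import re
from collections import Counter

# Hypothetical stand-in for the model under test. In practice this would
# wrap whatever model or API is being audited; here it returns canned
# text so the sketch runs on its own.
def generate(prompt: str) -> str:
    canned = {
        "The doctor said that": "he would call back tomorrow.",
        "The nurse said that": "she would call back tomorrow.",
    }
    return canned.get(prompt, "they would call back tomorrow.")

MALE = {"he", "him", "his"}
FEMALE = {"she", "her", "hers"}

def pronoun_skew(prompt: str, n_samples: int = 50) -> float:
    """Fraction of gendered pronouns in sampled completions that are male."""
    counts = Counter()
    for _ in range(n_samples):
        for token in re.findall(r"[a-z']+", generate(prompt).lower()):
            if token in MALE:
                counts["male"] += 1
            elif token in FEMALE:
                counts["female"] += 1
    total = counts["male"] + counts["female"]
    return counts["male"] / total if total else 0.5

# The property being verified: prompts that differ only in the profession
# mentioned should not yield sharply different pronoun distributions.
skew_doctor = pronoun_skew("The doctor said that")
skew_nurse = pronoun_skew("The nurse said that")
if abs(skew_doctor - skew_nurse) > 0.3:  # illustrative tolerance
    print(f"possible gender bias: doctor={skew_doctor:.2f}, nurse={skew_nurse:.2f}")
else:
    print("no large skew detected on this probe")
```

The same pattern, stating a property formally and then checking the model’s outputs against it, generalizes to other kinds of bias by swapping in different prompt pairs and word lists.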
Another area of their research focuses on privacy, specifically how to build privacy into machine learning algorithms. Yovine explains: “Medical records can be used for diagnostic analysis, disease prevention, and so on, but you don’t want private information to leak or be made public. The way to tackle the problem is to implement mechanisms so that all sensitive information is protected. The difficulty when you do that is that the data becomes less useful, so to speak, so you have to find a good balance: protecting the data without losing its utility.”
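Yovine does not name a specific mechanism here, but differential privacy is the standard way to formalize this privacy–utility balance. The self-contained sketch below applies the classic Laplace mechanism to a toy count query over synthetic records, showing how a smaller privacy budget (epsilon) buys stronger protection at the cost of a noisier, less useful answer.

```python
import math
import random

random.seed(0)

# Toy "medical records": 1 if a patient has a given diagnosis, else 0.
# Entirely synthetic data, generated only for this illustration.
records = [1 if random.random() < 0.2 else 0 for _ in range(1000)]
true_count = sum(records)

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(data: list[int], epsilon: float) -> float:
    """Release the count under epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record
    changes the result by at most 1), so Laplace noise with scale
    1/epsilon suffices.
    """
    return sum(data) + laplace_noise(1.0 / epsilon)

# Smaller epsilon = stronger privacy = noisier, less useful answers.
for eps in (0.01, 0.1, 1.0):
    noisy = private_count(records, eps)
    print(f"epsilon={eps:<5} true={true_count} released={noisy:.1f}")
```

Running it shows the released count drifting further from the true count as epsilon shrinks, which is exactly the tradeoff Yovine describes.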
Key Opportunities
“Specifically when it comes to verification, explainability, and privacy, the challenge lies in applying these concepts to a concrete use case because there are major players in this field (Google, Meta, Apple, OpenAI). I believe the opportunity will lie in developing external mechanisms that work with the output of these chatbots or language models,” says Yovine.
One example is using a language model but tailoring it to a specific application, such as accessing private documentation or building a chat focused on a particular topic.
To better explain how it works, Yovine gives the following example: “A university sets up a chat service to provide information about the various degree programs it offers, but some of the documentation used may contain private information that the university does not want to disclose. In that case, privacy settings can be applied because it involves integrating an external service.”
“There’s also the perspective of the person making the query, who provides personal information and doesn’t want their data to be shared. There’s an opportunity here to keep that input data private and ensure it isn’t disclosed. That doesn’t happen today if you use ChatGPT—there’s no guarantee it won’t be shared,” says Yovine.
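One plausible shape for such an external mechanism, combining both concerns Yovine raises (redacting the institution’s documents and the user’s own query before anything reaches the model provider), is sketched below. The PII_PATTERNS regexes and the external_model() endpoint are illustrative assumptions, not a description of any production system.

```python
import re

# Illustrative PII patterns; a real system would use a proper
# PII-detection model rather than a handful of regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ID": re.compile(r"\b\d{7,9}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a PII pattern with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Hypothetical stand-in for the external language-model service; only
# already-redacted text should ever reach it.
def external_model(prompt: str) -> str:
    return f"(model answer based on: {prompt!r})"

def answer(query: str, retrieved_docs: list[str]) -> str:
    # Redact both sides: the institution's documentation and the user's
    # own question, before anything leaves the system.
    safe_docs = "\n".join(redact(d) for d in retrieved_docs)
    safe_query = redact(query)
    return external_model(f"Context:\n{safe_docs}\n\nQuestion: {safe_query}")

if __name__ == "__main__":
    docs = ["Admissions contact: jane.doe@uni.edu, phone 555-123-4567."]
    print(answer("My student ID is 12345678, can I enroll?", docs))
```

The key design point is that the guardrail sits entirely outside the language model: it works on inputs and outputs, which is why it can be applied even when the model itself belongs to a third party.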