Ericsson has signed a five-year agreement to partner with the IIT Madras Centre for Responsible AI (CeRAI) as a ‘Platinum Consortium Member’, with the two organisations undertaking joint research in Responsible AI, a release stated.
To commemorate the occasion, a Symposium on Responsible AI for Networks of the Future was organized where leaders from Ericsson Research and IIT Madras participated to discuss the developments and advancements in the field of Responsible AI.
During the event held at the IIT Madras campus on September 23, Ericsson signed an agreement to partner with CeRAI as a ‘Platinum Consortium Member’ for five years. Under this MoU, Ericsson Research will support and participate in all research activities at CeRAI.
The Centre for Responsible AI is an interdisciplinary centre that aims to become a premier hub for both fundamental and applied research in Responsible AI, with immediate impact on the deployment of AI systems in the Indian ecosystem.
AI research is of high importance to Ericsson, as 6G networks will be driven autonomously by AI algorithms.
Addressing the symposium, Chief Guest, Prof. Manu Santhanam, Dean (Industrial Consultancy and Sponsored Research), IIT Madras, said, “Research on AI will produce the tools for operating tomorrow’s businesses. IIT Madras strongly believes in impactful translational work in collaboration with the industry, and we are very happy to collaborate with Ericsson to do cutting edge R&D in this subject.”
Elaborating on the partnership between CeRAI and Ericsson, Prof. B. Ravindran, Faculty Head, CeRAI, IIT Madras, and Robert Bosch Centre for Data Science and AI (RBCDSAI), IIT Madras, said, “Networks of the future will enable easier access to high performing AI systems. It is imperative that we embed responsible AI principles from the very beginning in such systems. Ericsson, being a leader in future networks is an ideal partner for CeRAI to drive the research and for facilitating adoption of responsible design of AI systems.”
With the advent of 5G and 6G networks, many critical applications are likely to be deployed on devices such as mobile phones. This requires new research to ensure that AI models and their predictions are explainable and to provide performance guarantees appropriate to the applications they are deployed in, according to Prof. Ravindran.
Some of the key projects presented during this Symposium include:
The project on large language models (LLMs) in healthcare focuses on detecting biases shown by the models, devising scoring methods for a model’s real-world applicability, and reducing those biases. Custom scoring methods are being designed based on the Risk Management Framework (RMF) put forth by the National Institute of Standards and Technology (NIST), the U.S. federal agency for advancing measurement science and standards.
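The release does not describe the bias-detection method itself; a common approach in this area is a counterfactual probe, where the same prompt is scored with different demographic groups substituted in and the spread of scores is read as a crude bias signal. A minimal sketch, with a toy stand-in for the model (all names here are illustrative, not the project's actual code):

```python
# Hypothetical sketch of a counterfactual bias probe for a healthcare LLM.
# `model` is a toy stand-in scoring function, not any real LLM API.

def model(prompt: str) -> float:
    """Stand-in for an LLM score, e.g. probability of recommending referral."""
    # Deliberately biased toy behaviour for demonstration purposes.
    return 0.8 if "male" in prompt and "female" not in prompt else 0.6

def counterfactual_gap(template: str, groups: list[str]) -> float:
    """Score the same clinical prompt with each group substituted in;
    the spread of scores is a crude bias measure (0.0 = no measured bias)."""
    scores = [model(template.format(group=g)) for g in groups]
    return max(scores) - min(scores)

gap = counterfactual_gap(
    "A {group} patient reports chest pain. Likelihood of referral:",
    ["male", "female"],
)
print(f"bias gap: {gap:.2f}")  # a nonzero gap flags differential treatment
```

A per-prompt gap like this could feed into an aggregate risk score of the kind the NIST RMF asks organisations to track.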
The project on participatory AI addresses the black-box nature of AI at various stages, including pre-development, design, development and training, deployment, post-deployment and audit. Taking inspiration from domains such as town planning and forest rights, the project studies governance mechanisms that enable stakeholders to provide constructive inputs for better customisation of AI, improve accuracy and reliability, and raise objections over potential negative impacts.
Generative AI models based on attention mechanisms have recently gained significant interest for their exceptional performance in tasks such as machine translation, image summarization and text generation, and in applications such as healthcare, but they are complex and difficult for users to interpret. The project on interpretability of attention-based models explores the conditions under which these models are accurate but fail to be interpretable, algorithms that can improve their interpretability, and which patterns in the data these models tend to learn.
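One reason attention weights can "fail to be interpretable" is that they are often diffuse: when attention is spread nearly uniformly across tokens, it points at nothing in particular, whereas a peaked distribution at least identifies specific inputs. A small illustrative sketch (not the project's method) using entropy of the attention distribution as a proxy:

```python
import math

# Illustrative sketch: entropy of an attention distribution as a crude
# readability signal. Low entropy = peaked (points at specific tokens);
# high entropy = diffuse (hard to read as an explanation).

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_entropy(scores):
    """Shannon entropy of the softmax-normalised attention weights."""
    weights = softmax(scores)
    return -sum(p * math.log(p) for p in weights if p > 0)

peaked = attention_entropy([5.0, 0.1, 0.1, 0.1])   # attends mostly to token 0
diffuse = attention_entropy([1.0, 1.0, 1.0, 1.0])  # uniform attention
print(peaked < diffuse)  # True
```

Even a peaked attention map is not a faithful explanation by itself, which is precisely the gap such interpretability research targets.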
Multi-agent reinforcement learning for trade-off and conflict resolution in intent-based networks: Intent-based management is gaining traction in telecom networks due to strict performance demands. Existing approaches often use traditional methods, treating each closed loop independently and lacking scalability. This project studies a multi-agent reinforcement learning (MARL) method to handle complex coordination, encouraging loops to cooperate automatically when intents conflict. Current efforts explore the generalisation abilities of the model by leveraging explainability and causality for the joint actions of agents.
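The conflict the project describes can be made concrete with a toy example: two closed loops sharing link capacity. If each loop optimises its own reward independently, both grab maximum bandwidth and violate the shared constraint; a joint objective of the kind MARL optimises rewards the pair of actions that satisfies both intents. A hypothetical sketch (the capacities, actions and rewards are invented for illustration):

```python
import itertools

# Toy illustration: two closed loops share CAPACITY units of bandwidth.
# Independent (selfish) optimisation violates the shared constraint;
# optimising a joint reward, as MARL does, yields a cooperative pair.

CAPACITY = 10
ACTIONS = [2, 4, 6, 8]  # bandwidth units a loop can request

def local_reward(own: int) -> int:
    return own  # a selfish loop simply wants more bandwidth

def joint_reward(a: int, b: int) -> int:
    # Shared objective: reward total throughput, penalise capacity violations.
    return a + b if a + b <= CAPACITY else -1

selfish = (max(ACTIONS, key=local_reward),) * 2  # both pick 8 -> violation
cooperative = max(itertools.product(ACTIONS, ACTIONS),
                  key=lambda ab: joint_reward(*ab))

print(selfish, joint_reward(*selfish))            # (8, 8) -1
print(cooperative, joint_reward(*cooperative))    # a pair summing to 10
```

In the actual project, the exhaustive search over joint actions would be replaced by learned agent policies, with explainability and causality analyses applied to the joint actions they select.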