Abstract: In the rapidly evolving field of machine learning, "Dataset Distillation and Pruning" has emerged as a key strategy for enhancing model efficiency. Dataset distillation extracts the essential information from extensive datasets to create a refined, smaller-scale dataset that maintains model robustness while reducing computational burden; it can be likened to distilling knowledge from vast amounts of data. Dataset pruning, by contrast, is akin to pruning unnecessary branches from a tree: it removes redundant or minimally impactful data points, resulting in more streamlined, faster, and resource-efficient machine learning. By eliminating extraneous information, dataset pruning helps build lean models that deliver strong performance without unnecessary computational overhead. Together, these two approaches address the challenges posed by the abundance of data in the digital age: dataset distillation and pruning complement each other in model compression research, reduce the energy consumption of the entire machine learning workflow, and ultimately facilitate the sustainable deployment of large-scale data and models on endpoint devices.
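As a rough illustration of the pruning idea described in the abstract (a minimal sketch, not the speaker's specific algorithm), the snippet below ranks training examples by a precomputed per-example score, such as loss or gradient norm from a reference model, and keeps only the most informative fraction. The function name, score source, and keep ratio are all hypothetical choices for illustration.

```python
# Minimal sketch of score-based dataset pruning (illustrative only):
# rank training examples by how informative a reference model finds them,
# then keep only the highest-scoring fraction.
import numpy as np

def prune_dataset(features, labels, example_scores, keep_fraction=0.5):
    """Keep the highest-scoring fraction of examples.

    example_scores is assumed to be a per-example signal (e.g., loss,
    margin, or gradient norm) computed elsewhere by a reference model.
    """
    n_keep = max(1, int(len(example_scores) * keep_fraction))
    keep_idx = np.argsort(example_scores)[-n_keep:]  # indices of top scores
    return features[keep_idx], labels[keep_idx]

# Toy usage: 1000 random examples with placeholder scores.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))
y = rng.integers(0, 10, size=1000)
scores = rng.random(1000)  # stand-in for real per-example scores
X_small, y_small = prune_dataset(X, y, scores, keep_fraction=0.3)
print(X_small.shape)  # (300, 32)
```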
Speaker Bio: Joey Tianyi Zhou is the Deputy Director and Principal Scientist at the A*STAR Centre for Frontier AI Research (CFAR), Singapore. Before joining CFAR, he was a senior research engineer with SONY US Research Centre in San Jose, USA. Dr. Zhou received a Ph.D. degree in computer science from Nanyang Technological University (NTU), Singapore. His current interests focus on improving the efficiency and robustness of machine learning algorithms. In these areas, he has published more than 150 papers and received Best Paper Nominations at the European Conference on Computer Vision (ECCV'16) and ACM Multimedia (MM'24), as well as Best Paper Awards at IEEE SmartCity 2022 and International Joint Conference on Artificial Intelligence (IJCAI) workshops. Dr. Zhou regularly organizes workshops and tutorials at top-tier international conferences such as CVPR, IJCAI, and ICDCS. He serves on the editorial boards of leading journals such as AIJ and IEEE Transactions, as an Area Chair for top machine learning conferences including ICLR, ICML, NeurIPS, KDD, and IJCAI, and as Associate Programme Chair for IJCAI 2025. He is listed among the Top 2% Scientists Worldwide by Stanford University. He is a Senior Member of IEEE, a member of INNS, and Technical Coordinator for the AICI Section.
|
Abstract: AI today can pass the Turing test and is in the process of transforming science, technology, humans, and society. Surprisingly, modern AI is built out of two very simple and old ideas, rebranded as deep learning: neural networks and gradient descent learning. The storage of information in neural networks by gradient descent is distributed or "holographic", and since Dennis Gabor invented holography, I am particularly honored to be a recipient of the prize that bears his name. I will describe several applications of AI to problems in biomedicine developed in my laboratory, from the molecular level to the patient level, using omic data, imaging data, and clinical data. Examples include the analysis of circadian rhythms in gene expression data, the identification of polyps in colonoscopies, and the prediction of post-operative outcomes. I will discuss the opportunities and challenges for developing, integrating, and deploying AI in the first AI-driven hospitals of the future and present two frameworks for addressing some of the most pressing societal issues related to AI research and safety.
Speaker Bio: Pierre Baldi earned MS degrees in Mathematics and Psychology from the University of Paris, and a PhD in Mathematics from the California Institute of Technology. He is currently Distinguished Professor in the Department of Computer Science, Founding Director of the AI in Science Institute, and Associate Director of the Center for Machine Learning and Intelligent Systems at the University of California, Irvine. The long-term focus of his research is on understanding intelligence in brains and machines. He has made several contributions to the theory of AI and deep learning, and has developed and applied AI and deep learning methods to problems in engineering and the natural sciences, for instance in physics (e.g., exotic particle detection), chemistry (e.g., reaction prediction), and biomedicine (e.g., protein structure prediction, biomedical imaging analysis). He has published five books, including Deep Learning in Science (Cambridge University Press, 2021), and ~500 scientific articles. His honors include the 1993 Lew Allen Award at JPL, the 2010 E. R. Caianiello Prize for research in machine learning, the 2023 Dennis Gabor Award of the International Neural Network Society, and election as a Fellow of the AAAS, AAAI, IEEE, ACM, and ISCB. He serves as Associate Editor for Artificial Intelligence, Neural Networks, and the IEEE/ACM Transactions on Computational Biology and Bioinformatics. He has mentored ~100 graduate students and postdoctoral fellows and co-founded several startup companies. At UCI, he has introduced several new courses, including Neural Networks and Deep Learning, and AI Frontiers: Technical, Ethical, and Societal.
|