AI Acceleration: Optimising Machine Learning Through Compression
15 Jun 2023
TinyML (that is, scaled-down machine learning) is paving the way for more capable ML at the edge, with compression techniques that let algorithms and models run efficiently in small footprints. Why do we need to run bigger models in smaller footprints? As models grow larger and more demanding at inference time, can we still achieve the performance we need? What about other techniques, such as federated learning, which addresses the security and privacy side? How far can we go in doing more with less hardware, and what does that mean for sustainability?
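To make the compression idea concrete, here is a minimal sketch of symmetric post-training 8-bit quantization, one common way TinyML shrinks models: weights are stored as small integers plus a scale factor instead of 32-bit floats. The function names and values are illustrative, not taken from any particular framework.

```python
# Illustrative sketch of symmetric post-training quantization.
# Weights are mapped to signed 8-bit integers plus one float scale,
# cutting storage roughly 4x (int8 vs float32) for a small rounding error.

def quantize(weights, num_bits=8):
    """Map float weights to signed integers with a shared scale factor."""
    qmax = 2 ** (num_bits - 1) - 1            # 127 for int8
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer representation."""
    return [qi * scale for qi in q]

# Hypothetical weights, just to show the round trip.
weights = [0.42, -1.27, 0.05, 0.91]
q, scale = quantize(weights)
recovered = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
print(q, scale, round(max_err, 6))
```

The trade-off this illustrates is exactly the one the questions above circle around: each quantized weight can be off by up to half the scale, so accuracy degrades slightly in exchange for a model that fits in far less memory and runs on cheaper hardware.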