LLMs in the Wild: A Practitioner's Guide to Navigating the AI Deployment Maze
11 Jun 2025
Data Excellence Stage
Deploying LLMs at scale requires a robust infrastructure, efficient resource allocation, and adaptive performance strategies. However, enterprises face significant challenges, from managing technical debt to addressing unpredictable model behaviour, making large-scale AI deployment both complex and resource-intensive.
This session will focus on GPU optimization, memory management, and hybrid cloud/on-prem deployment, tackling the latency, cost, and scalability trade-offs enterprises must navigate. Learn practical strategies to streamline model integration, mitigate technical debt, and optimize inference for real-time applications.
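To make the memory-management idea concrete, the sketch below shows one common technique for trimming GPU memory during LLM inference: loading weights in half precision and generating under inference mode. It assumes the Hugging Face transformers, accelerate, and PyTorch libraries and uses a placeholder checkpoint name; it is illustrative only and does not reflect any specific stack discussed in the session.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder checkpoint, not taken from the session

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token   # GPT-2 has no pad token by default
tokenizer.padding_side = "left"             # left-pad batches for decoder-only generation

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half-precision weights roughly halve GPU memory
    device_map="auto",          # requires accelerate; places layers on available devices
)

prompts = ["Summarise the deployment report:", "Draft a status update:"]
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)

# inference_mode() skips autograd bookkeeping, reducing activation memory at serve time.
with torch.inference_mode():
    outputs = model.generate(**inputs, max_new_tokens=64)

print(tokenizer.batch_decode(outputs, skip_special_tokens=True))

In practice, half precision (or further quantization) trades a small amount of accuracy for lower memory and latency, which is one of the cost/performance trade-offs the session addresses.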
Pass Type: VIP All Access Pass, Delegate Pass, Start-up & Investor Pass, Academic Pass, Press Pass
Content Focus: Use Case
Session Type: Keynote / Solo
Session Focus: Use Case
Session Keyword: AI Breakthroughs