Taxonomy Fuelling Interpretable Models and Data
14 Jun 2023
Practitioners Stage
We rely so heavily on the outputs of our ML models that we sometimes don't question the predictions they provide, especially when other teams are reading and relying on them. We look at the factors needed to move from creating a model for its accuracy to creating a model that all can understand. Is it a question of trust versus a lack of understanding? Do we need to select more understandable features, or produce more understandable reports on what those features mean to the final model? Can taxonomies improve transparency? Can we create human-centred explanations? And can we make algorithms understandable for decision makers?
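As a concrete illustration of how a taxonomy can feed an explanation, here is a minimal sketch in which raw per-feature importances are rolled up into human-readable taxonomy categories before being reported to decision makers. The feature names, the taxonomy mapping, and the importance values are all hypothetical, and summing importances per category is just one simple aggregation choice among many.

```python
from collections import defaultdict

# Hypothetical mapping from raw model features to taxonomy categories.
TAXONOMY = {
    "txn_freq_30d": "Spending behaviour",
    "avg_basket_value": "Spending behaviour",
    "days_since_signup": "Customer tenure",
    "support_tickets_90d": "Service interactions",
}

# Hypothetical per-feature importances (e.g. from permutation importance).
importances = {
    "txn_freq_30d": 0.31,
    "avg_basket_value": 0.22,
    "days_since_signup": 0.35,
    "support_tickets_90d": 0.12,
}

def explain_by_taxonomy(importances, taxonomy):
    """Aggregate raw feature importances into taxonomy categories."""
    grouped = defaultdict(float)
    for feature, weight in importances.items():
        grouped[taxonomy.get(feature, "Uncategorised")] += weight
    # Report the most influential category first.
    return sorted(grouped.items(), key=lambda kv: kv[1], reverse=True)

for category, weight in explain_by_taxonomy(importances, TAXONOMY):
    print(f"{category}: {weight:.2f}")
```

The point of the grouping step is that a decision maker never sees "txn_freq_30d"; they see "Spending behaviour", a term the taxonomy defines in their own vocabulary.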