There is no question that AI is making its way into every facet of our lives, whether it is businesses using it to improve customer experience and efficiency, or government agencies deploying it to save money and identify fraud.
Yet even supporters acknowledge that some major problems continue to plague AI, notably bias, which is exacerbated by poor explainability. It is for this reason, among others, that ethics has increasingly become a major consideration, with governments and large tech firms starting to roll out guidelines for ethical AI.
Moreover, ML models tend to be highly specialized, with no general understanding of the world. This has led some experts to assert that we might be reaching the limits of AI, facing diminishing returns and systems that lack genuine comprehension.
Researchers around the world are working apace on these concerns, however, and recent reports offer a glimmer of hope for continued progress.
Building explainability into AI
As noted in an article on MIT News, "explanation methods" for ML models typically describe how individual features in a model contribute to its final prediction. But existing techniques can be confusing and don't fully address the concerns of different groups of users, says a group of researchers behind a new paper.
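To make the idea of an "explanation method" concrete, here is a minimal sketch (my own illustration, not one from the paper) using scikit-learn's permutation importance, which scores how much each feature contributes to a model's predictions.

```python
# A minimal sketch of a typical "explanation method": permutation importance
# shuffles each feature in turn and measures the resulting drop in accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# A large drop in score means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: -pair[1])[:5]:
    print(f"{name}: {score:.3f}")
```

Note that such scores explain the model's behavior in terms of whatever features it was trained on, which is exactly why the researchers argue those features need to be interpretable in the first place.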
“The term ‘interpretable feature’ is not specific nor detailed enough to capture the full extent to which features impact the usefulness of ML explanations,” wrote the authors, who say that AI model builders should consider using interpretable features at the start of the development process, rather than working on explainability after the fact.
Domain experts often distrust AI models because they don't understand the features that influence their predictions. To allay those concerns, the researchers proposed paying more attention to features that are useful to experts taking real-world actions.
Based on the idea that one size doesn’t fit all when it comes to interpretability, the researchers drew on years of fieldwork to develop a taxonomy to help developers craft features that are easier for the target audience to understand. This was done by defining properties that make features interpretable for five distinct types of users, from AI experts to the people affected by a machine-learning model’s prediction.
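The snippet below is a purely illustrative sketch of the contrast the researchers are drawing between model-ready features and features a domain audience can act on; the feature names, domain, and audience descriptions are hypothetical, and the paper's actual taxonomy is far more detailed.

```python
# Illustrative only: hypothetical features and audiences, not taken from the paper.
import pandas as pd

# Raw, model-ready features: scaled and encoded, hard for a domain expert to act on.
raw = pd.DataFrame({
    "feat_03_scaled": [0.12, 0.87],   # z-scored sensor reading
    "cat_7_onehot_2": [1, 0],         # one fragment of a one-hot encoded code
})

# Interpretable features: real-world names and units an expert recognizes,
# engineered at the start of development rather than explained after the fact.
interpretable = pd.DataFrame({
    "water_pressure_psi": [42.0, 61.5],
    "pipe_material": ["cast iron", "PVC"],
})

# Different audiences need different framings of the same information.
audience_needs = {
    "ML engineer":     "raw or interpretable features plus attribution scores",
    "domain expert":   "features in real-world units they can act on",
    "affected person": "plain-language factors behind the decision",
}
for user, need in audience_needs.items():
    print(f"{user}: {need}")
```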
To be clear, there is a tradeoff between offering interpretable features and model accuracy, though lead author Alexandra Zytek says it is very small in “a lot” of domains. As for risks, one possibility is that a malicious developer could bury a race feature inside a broad, abstract feature such as “socioeconomic factors” to hide its effects.
You can read the full paper titled “The need for interpretable features: Motivation and taxonomy” here (pdf).
Learning physics through videos
As reported by New Scientist, an algorithm created by Google DeepMind, the lab that previously built an AI to beat the world champion at Go, can now distinguish between videos in which objects obey the laws of physics and ones in which they don't.
The DeepMind team was attempting to train AI in “intuitive physics”, which is the human ability to grasp the physical world. As noted by the researchers, existing AI systems pale in their understanding of intuitive physics, even when compared to very young children.
To bridge this gap between humans and machines, the researchers turned to concepts from developmental psychology. They created an AI called Physics Learning through Auto-encoding and Tracking Objects (PLATO) and trained it on simulated videos to identify objects and their interactions.
According to New Scientist, some of the videos “showed objects obeying the laws of physics, while others depicted nonsensical actions, such as a ball rolling behind a pillar, not emerging from the other side, but then reappearing from behind another pillar further along its route.”
When set to predict what would happen next, PLATO was “usually” correct, which suggests that the AI had developed an intuitive knowledge of physics.
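The sketch below illustrates the general violation-of-expectation idea in schematic form; it is not DeepMind's code, and the `next_frame_model` predictor is a hypothetical stand-in for PLATO's learned object-based predictions.

```python
# Schematic sketch of violation-of-expectation testing, not DeepMind's PLATO.
# `next_frame_model` is a hypothetical predictor that returns the expected
# next frame given the frames seen so far.
import numpy as np

def surprise_score(video_frames, next_frame_model):
    """Mean prediction error across a video: higher means more 'surprising'."""
    errors = []
    for t in range(1, len(video_frames)):
        predicted = next_frame_model(video_frames[:t])        # expected next frame
        observed = video_frames[t]
        errors.append(np.mean((predicted - observed) ** 2))   # per-frame error
    return float(np.mean(errors))

# A video where a ball vanishes behind one pillar and reappears behind another
# should score higher than one that obeys ordinary physics:
# surprise_score(impossible_video, model) > surprise_score(possible_video, model)
```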
It is important to note that the code for PLATO has not been released, with the researchers stating that their “implementation of PLATO is not externally viable”. They remain open to being contacted with clarifying questions or for implementation details, however.
The paper can be accessed here.
Paul Mah is the editor of DSAITrends. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose. You can reach him at [email protected].
Image credit: iStockphoto/Black_Kira