Abstract: Convolutional neural networks (CNNs) are known to learn an image representation that captures concepts relevant to the task, but they do so implicitly, which hampers model interpretability. However, one could argue that such a representation is hidden in the neurons and can be made explicit by teaching the model to recognize semantically interpretable attributes that are present in the scene. We call such an intermediate layer a semantic bottleneck. Once the attributes are learned, they can be re-combined to reach the final decision and provide...
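The abstract describes an architecture in which an intermediate layer is supervised to predict interpretable attributes, and the final decision is then computed from those attribute predictions. Below is a minimal sketch of that idea, assuming a PyTorch-style setup; all names (SemanticBottleneckNet, attribute_head, num_attributes, etc.) are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn


class SemanticBottleneckNet(nn.Module):
    """CNN backbone -> attribute layer (semantic bottleneck) -> classifier."""

    def __init__(self, backbone: nn.Module, feat_dim: int,
                 num_attributes: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                                     # any CNN feature extractor
        self.attribute_head = nn.Linear(feat_dim, num_attributes)    # the semantic bottleneck
        self.classifier = nn.Linear(num_attributes, num_classes)     # decision from attributes only

    def forward(self, x):
        feats = self.backbone(x)                   # image -> feature vector
        attr_logits = self.attribute_head(feats)   # explicit, interpretable attribute scores
        attrs = torch.sigmoid(attr_logits)         # each unit corresponds to one attribute
        class_logits = self.classifier(attrs)      # final decision re-combines the attributes
        return class_logits, attr_logits
```

Training such a model would typically supervise both heads: a standard task loss on `class_logits` plus an attribute loss (e.g. `nn.BCEWithLogitsLoss`) on `attr_logits` against attribute annotations, so that each bottleneck unit is tied to a named concept.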
Topics: 
Artificial intelligence
Machine learning
Natural language processing