Visualization Tool For Keras 🔭

Arjun Mohnot
4 min read · May 2, 2019

One of the most debated topics in deep learning is how to interpret and understand a trained model, particularly in high-risk industries like healthcare. The term “black box” has often been associated with deep learning algorithms: how can we trust the results of a model if we can’t explain how it works? It’s a legitimate question.

Take the example of a deep learning model trained to detect cancerous tumours. The model tells you that it is 99% sure it has detected cancer, but it does not tell you why or how it made that decision. Did it find an important clue in the MRI scan? Or was it just a smudge on the scan that was incorrectly detected as a tumour? This is a matter of life and death for the patient, and doctors cannot afford to be wrong.

As the tumour example shows, it is crucial that we know what our model is doing and how it arrives at its predictions. The points below are the most important reasons for a deep learning practitioner to keep in mind:

  1. Understanding how the model works
  2. Assistance in hyperparameter tuning
  3. Finding the model’s failures and building intuition for why they occur
  4. Explaining the decisions to a consumer/end-user or a business executive

👉🏻 Neural nets are black boxes. In recent years, several approaches for understanding and visualizing convolutional networks have been developed in the literature. They give us a way to peer into the black box, diagnose misclassifications, and assess whether the network is over- or underfitting. Guided backpropagation can also be used to create trippy art and neural/texture style transfer, among a growing list of other applications. The purpose of this project is to design a visualization tool for Keras that visualizes and debugs what CNNs are learning, showing heatmaps for a wide variety of models such as ResNet50, InceptionV3, VGG16, MobileNet, etc.

👉🏻 This project is an attempt to apply machine learning techniques to build an open-source tool that shows heat maps for a wide variety of models and gives users higher-resolution, class-discriminative visualizations.

⭐ You can find the GitHub link for this project here.

  • Visualization Tool For Keras
  • Keras implementation of Grad-CAM:
  1. Adapted and optimized code from Jacobgil
  2. Original Torch implementation: Ramprs
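The core Grad-CAM idea the tool builds on can be sketched in a few lines of tf.keras. The tiny CNN, its weights, and the layer name `last_conv` below are illustrative stand-ins, not the project's actual model; the gradient-weighting logic is the standard Grad-CAM recipe.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Tiny stand-in CNN (random weights) so the sketch is self-contained.
model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(8, 3, activation="relu"),
    layers.Conv2D(16, 3, activation="relu", name="last_conv"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation="softmax"),
])

def grad_cam(model, image, conv_layer_name, class_index=None):
    """Return a normalized heatmap of regions that drive class_index."""
    grad_model = models.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)         # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))   # global-average-pool the grads
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)[0]
    cam = tf.nn.relu(cam)                          # keep positive influence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

heatmap = grad_cam(model, np.random.rand(32, 32, 3).astype("float32"), "last_conv")
```

The returned heatmap has the spatial resolution of the chosen convolutional layer; to overlay it on the input image it is typically upsampled back to the input size.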

Grad-CAM

Installation 👨🏻‍🏫

  1. Clone the GitHub repository for this project (link above).
  2. Open a command prompt in the cloned directory and run `pip install -r requirements.txt`.

Steps to run the scripts 🏃‍♂️

  1. Go to the main folder, open cmd, and run `python main.py`
  2. To visualize the intermediate layers, go to the Intermediate Layer folder, open cmd, and run `python Intermediate.py`
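Visualizing intermediate layers boils down to building a second model whose outputs are the activations of the layers you care about. This is a minimal sketch of that idea; the tiny CNN and layer names `conv1`/`conv2` are illustrative stand-ins, not the script's actual model.

```python
import numpy as np
from tensorflow.keras import layers, models

# Stand-in CNN with random weights, just to demonstrate the technique.
model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(8, 3, activation="relu", name="conv1"),
    layers.MaxPooling2D(),
    layers.Conv2D(16, 3, activation="relu", name="conv2"),
])

# New model whose outputs are every conv layer's feature maps.
activation_model = models.Model(
    model.inputs,
    [l.output for l in model.layers if isinstance(l, layers.Conv2D)],
)

image = np.random.rand(1, 32, 32, 3).astype("float32")
activations = activation_model.predict(image, verbose=0)
for name, act in zip(["conv1", "conv2"], activations):
    print(name, act.shape)  # each channel can be plotted as a grayscale image
```

Each output tensor has shape (batch, height, width, channels); plotting individual channels side by side shows what patterns a layer responds to.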

Note: SMTP mail and meta tag scripts are present in the root folder.

Screenshots 📸

📑 Page 1

📑 Page 2

Use Case Diagram 🗺️

Definitions and Acronyms 🔠

Terms & Definitions

NASNet: Neural Architecture Search Network, a family of models whose architectures were designed automatically through neural architecture search.

MobileNet: A lightweight, general-purpose architecture that can be used for multiple use cases. It uses depth-wise separable convolutions, which perform a single convolution on each input channel and then combine the results with a point-wise (1×1) convolution.

GUI: A graphical user interface is a form of user interface that allows users to interact with electronic devices.

Keras: An open-source neural-network library written in Python. It can run on top of TensorFlow, Microsoft Cognitive Toolkit, Theano, or PlaidML.

Heatmap: A representation of data in the form of a map or diagram in which data values are represented as colours.
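In practice a Grad-CAM heatmap is blended onto the original image so that hot regions show through. A minimal sketch of such an overlay, assuming both inputs are numpy arrays normalized to [0, 1] (the simple red colour ramp is an illustrative choice; tools often use a full colormap such as jet):

```python
import numpy as np

def overlay(image, heatmap, alpha=0.4):
    """Blend a normalized heatmap (H, W) onto an RGB image (H, W, 3)."""
    colour = np.zeros(image.shape)
    colour[..., 0] = heatmap          # map heat intensity to the red channel
    return (1 - alpha) * image + alpha * colour

img = np.random.rand(8, 8, 3)   # stand-in image
hm = np.random.rand(8, 8)       # stand-in heatmap
out = overlay(img, hm)          # same shape as img, values stay in [0, 1]
```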

Thank You 🙏

Originally published at https://arjun009.github.io on May 2, 2019.
