Human Motion Recognition Using Smartphones and Smartwatches

Arjun Mohnot
5 min read · Oct 13, 2019
Source: Vicara

Ever wondered how your smartphone, smartwatch or wristband recognises whether you’re walking 🚶, running 🏃 or relaxing 💆‍♂️?
Well, your device carries a variety of sensors that each provide different data. GPS, audio (i.e. microphones), image (i.e. cameras), orientation (i.e. gyroscopes) and acceleration sensors are extremely prevalent nowadays.

A Multi-Sensor in Everyone’s Pocket 📱
With screens of all sizes occupying both work and recreation, many people find themselves sedentary for much of the day. But those same devices can encourage people to stay engaged and active. Smartphones include various sensors that can be used to analyse movement. Continuously polling several sensors at once, however, would drain a smartphone’s battery quickly. Hence, a useful algorithm should be able to classify activities from relatively sparse sensor polling. Some references report that smartphone gyroscopes draw considerably more power than accelerometers, so gyroscopes should only be used if strictly required for the problem at hand. We therefore rely mainly on accelerometer data. Practically every modern smartphone has a tri-axial accelerometer that measures acceleration in all three spatial dimensions.
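As a minimal illustration of what a tri-axial reading looks like (the sample values below are made up, not real sensor output), the overall magnitude of an accelerometer sample is just the Euclidean norm of its three axes:

```python
import math

def acceleration_magnitude(ax, ay, az):
    """Euclidean norm of a tri-axial accelerometer sample (in g or m/s^2)."""
    return math.sqrt(ax**2 + ay**2 + az**2)

# A phone lying flat at rest reads roughly 1 g along the gravity axis.
print(acceleration_magnitude(0.0, 0.0, 1.0))  # → 1.0
```

The magnitude is orientation-independent, which is one reason it is a popular derived feature for motion recognition.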

Source: Codelabs

Additionally, accelerometers can identify device orientation.
In this part of the series, we will analyse the time series and, using classifiers such as linear classifiers (Logistic Regression, Naive Bayes), Nearest Neighbour, Support Vector Machines, Decision Trees, Boosted Trees, Random Forest, the Perceptron and Neural Networks, try to predict which type of activity is currently happening.

The real-world🌍 data

We will use the UCI-HAR (Human Activity Recognition) dataset. This repository holds accelerometer and other sensor measurements from 30 subjects, labelled with six activities: WALKING, WALKING_UPSTAIRS, WALKING_DOWNSTAIRS, SITTING, STANDING or LAYING. The dataset was collected in a controlled, laboratory setting and contains a 561-feature vector with time- and frequency-domain variables.
The experiments were carried out with a group of 30 volunteers within an age bracket of 19–48 years.

⭐ The purpose of this project is to build a model from regular smartphone sensor data that can distinguish between active and inactive positions.

Investigation of Data

I grouped labels 1-WALKING, 2-WALKING_UPSTAIRS and 3-WALKING_DOWNSTAIRS as activity and labels 4-SITTING, 5-STANDING and 6-LAYING as no activity. Two separate data frames, activity and no activity, were created to visualize the trends.
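The grouping step can be sketched with pandas; the tiny frame and the column name `label` below are stand-ins, not the article's actual code:

```python
import pandas as pd

# Hypothetical frame carrying the UCI-HAR integer labels (1-6).
df = pd.DataFrame({"label": [1, 2, 3, 4, 5, 6, 1, 4]})

activity_labels = {1, 2, 3}  # WALKING, WALKING_UPSTAIRS, WALKING_DOWNSTAIRS
activity = df[df["label"].isin(activity_labels)]
no_activity = df[~df["label"].isin(activity_labels)]

print(len(activity), len(no_activity))  # → 4 4
```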

Line plots of the prominent features tBodyAcc-mean()-X,Y,Z, tGravityAcc-mean()-X,Y,Z and tBodyAccJerk-mean()-X,Y,Z were created for both the no-activity and activity dataframes.

The data was also smoothed with a rolling mean over windows of 3 rows, which reduces some of the noise generated by the devices.
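A rolling mean with a 3-row window is a one-liner in pandas; the short series below is an illustrative stand-in for a sensor channel:

```python
import pandas as pd

signal = pd.Series([1.0, 5.0, 3.0, 7.0, 2.0, 6.0])
smoothed = signal.rolling(window=3).mean()
print(smoothed.tolist())  # → [nan, nan, 3.0, 5.0, 4.0, 5.0]
```

The first two entries are NaN because a full 3-row window is not yet available there.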

Variation of features based on no activity (towards the left) and activity data-frame (towards the right).

What does the data look like?

I’ll start by grabbing a random selection of features and plotting them.

The bar charts below show that the values of each feature are normalized to [−1, 1] and that each feature is approximately normally distributed.

3D visualization for one of the features for the no-activity and activity datasets.
Heat map of input features

Is the range of values of each feature equally distributed?

Range of randomly selected features

Is the data well-behaved?

Is each activity represented about equally in the dataset?

  • To check whether each activity is represented about equally, we’ll look at the y_train class distribution.
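Counting class frequencies takes only a couple of lines; the short label list below is a stand-in for the real y_train:

```python
from collections import Counter

# Stand-in for y_train; the real dataset has six activity labels.
y_train = ["WALKING", "SITTING", "WALKING", "STANDING", "LAYING", "SITTING"]
counts = Counter(y_train)
print(counts)
```

If the counts are roughly equal, plain accuracy is a reasonable metric; a strong imbalance would call for per-class precision and recall instead.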

Classification

  • Types of classification algorithms in Machine Learning
  1. Linear Classifiers: Logistic Regression, Naive Bayes Classifier.
  2. Nearest Neighbor.
  3. Support Vector Machines.
  4. Decision Trees.
  5. Boosted Trees.
  6. Random Forest.
  7. Perceptron
  8. Neural Networks.

We will try to implement all the popular classification algorithms listed above.
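A common pattern for trying all of these is to loop over scikit-learn estimators with a shared fit/score interface. The sketch below uses synthetic data in place of the 561-feature HAR vectors (sizes and hyperparameters are illustrative assumptions, not the article's actual setup):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the six-class HAR data.
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=6, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "K-Nearest Neighbour": KNeighborsClassifier(),
    "SVM": SVC(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Gradient Boosting": GradientBoostingClassifier(n_estimators=50, random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Perceptron": Perceptron(),
    "Neural Network": MLPClassifier(max_iter=1000, random_state=0),
}

scores = {}
for name, model in models.items():
    # Each estimator exposes the same fit/score API, so one loop covers them all.
    scores[name] = model.fit(X_train, y_train).score(X_test, y_test)
    print(f"{name}: {scores[name]:.3f}")
```

On the real dataset the same loop applies; only the data-loading step changes.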

Logistic Regression

The accuracy achieved with this method was 96%.

Naive Bayes

Naive Bayes falls behind in accuracy, achieving 77.03%.

K- Nearest-Neighbour

K-Nearest Neighbours achieved an accuracy of 89.07%. This method takes little time to train but is slower at prediction time, since it computes the distance from each query point to every training point.

SVM

The accuracy achieved was 94.028%.

CART — Decision Tree

Using the CART decision tree, accuracy was 86.56%.

Gradient Boosting

For this algorithm, the computed MSE was 0.16, which shows that it classified quite reasonably.

Random Forest

Random forest achieved a weighted average of 0.92 for precision, recall and f1-score alike.

K- Fold Cross-Validation

Accuracy was in the interval 0.93 (+/- 0.08) or nearly 93%.

Expected vs Predicted in K-Fold Cross-Validation
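Cross-validated accuracy with a mean-and-spread report like the one above is what scikit-learn's `cross_val_score` produces; the sketch below runs it on synthetic stand-in data rather than the HAR features:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data; the article applies this to the HAR features.
X, y = make_classification(n_samples=300, n_features=15, random_state=0)

# 5-fold cross-validation: five accuracy scores, one per held-out fold.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"Accuracy: {scores.mean():.2f} (+/- {scores.std() * 2:.2f})")
```

Reporting the mean ± two standard deviations across folds gives a rough confidence interval for the accuracy, rather than a single train/test split's point estimate.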

Perceptron

The Perceptron also performs exceptionally well, with an accuracy of 94.84%.

Neural Network

🚩 Finally, the neural network outperformed everyone and was able to achieve an accuracy of 98.61%.

  • When the six labels were collapsed into two categories, activity (walking upstairs, walking downstairs, walking) and no activity (sitting, standing, laying), our neural network model achieved an accuracy of 99.9% despite the noise from the sensors.
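Collapsing six classes into a binary label and retraining can be sketched as follows; the synthetic data and the `MLPClassifier` hyperparameters are assumptions standing in for the article's actual network:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the six-class HAR labels (0-5 here).
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

# Collapse to binary: classes 0-2 -> activity (1), classes 3-5 -> no activity (0).
y_binary = (y < 3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y_binary, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
acc = clf.fit(X_train, y_train).score(X_test, y_test)
print(acc)
```

The binary task is easier because errors between similar classes (e.g. sitting vs. standing) no longer count against the model.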

Summary and What’s Next 🙂

In this article, you have seen how to store and transform complex accelerometer and other sensor data and run it through different classification techniques. In the future, we can export this model as a TensorFlow Lite model, which can then be used on Android and iOS phones to predict in real time which activity the user is currently performing.

The Jupyter notebook for this article is available on Github. You can also find the code for the same in Google Colab.

Thank you, and I am open to your suggestions.

If you like my article, please don’t forget to give a clap 👏.
