Deep Learning Models for Human Activity Recognition

Kick-start your project with my new book Deep Learning for Time Series Forecasting, including step-by-step tutorials and the Python source code files for all examples.

Photo by Simon Harrod, some rights reserved.

This post is divided into five parts; they are:

Supervised Learning Data Representation

Human activity recognition, or HAR for short, is a broad field of study concerned with identifying the specific movement or action of a person based on sensor data.

Movements are often typical activities performed indoors, such as walking, talking, standing, and sitting. They may also be more focused activities, such as those performed in a kitchen or on a factory floor.

The sensor data may be remotely recorded, such as video, radar, or other wireless methods. Alternately, data may be recorded directly on the subject, such as by carrying custom hardware or smart phones that have accelerometers and gyroscopes.

Sensor-based activity recognition seeks the profound high-level knowledge about human activities from multitudes of low-level sensor readings

— Deep Learning for Sensor-based Activity Recognition: A Survey, 2018.

Historically, sensor data for activity recognition was challenging and expensive to collect, requiring custom hardware. Now smart phones and other personal tracking devices used for fitness and health monitoring are cheap and ubiquitous. As such, sensor data from these devices is cheaper to collect, more common, and therefore a more commonly studied version of the general activity recognition problem.

The problem is to predict the activity given a snapshot of sensor data, typically from one or a small number of sensor types. Generally, this problem is framed as a univariate or multivariate time series classification task.

It is a challenging problem because there are no obvious or direct ways to relate the recorded sensor data to specific human activities, and because each subject may perform an activity with significant variation, resulting in variation in the recorded sensor data.

The intent is to record sensor data and the corresponding activities for specific subjects, fit a model from this data, and generalize the model to classify the activity of new, unseen subjects from their sensor data.

Traditionally, methods from the field of signal processing were used to analyze and distill the collected sensor data. Such methods were used for feature engineering: creating domain-specific, sensor-specific, or signal-processing-specific features and views of the original data. Statistical and machine learning models were then trained on this processed version of the data.

A limitation of this approach is the signal processing and domain expertise required to analyze the raw data and engineer the features needed to fit a model. This expertise would be required for each new dataset or sensor modality; in essence, it is expensive and does not scale.

Furthermore, in most daily HAR tasks, such methods rely heavily on heuristic, handcrafted feature extraction, which is usually limited by human domain knowledge, and they can learn only shallow features, leading to undermined performance on unsupervised and incremental tasks. Due to those limitations, the performance of conventional methods is restricted in terms of classification accuracy and model generalization.
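To make the framing above concrete, here is a minimal sketch of how raw multivariate sensor readings might be turned into a supervised classification dataset via fixed-length sliding windows, followed by the kind of handcrafted per-window statistics (means, standard deviations, magnitudes) that traditional pipelines engineered before fitting a model. The function names, window sizes, and random data are illustrative assumptions, not code from any particular dataset or paper.

```python
# Sketch (assumed names and parameters): framing sensor data as
# windows for time series classification, plus traditional-style
# handcrafted features computed per window.
import numpy as np

def sliding_windows(signal, labels, window=128, step=64):
    """Split a (timesteps, channels) signal into overlapping windows.

    Each window is paired with the label at its final timestep,
    yielding inputs of shape (samples, window, channels) and
    targets of shape (samples,).
    """
    X, y = [], []
    for start in range(0, len(signal) - window + 1, step):
        end = start + window
        X.append(signal[start:end])
        y.append(labels[end - 1])
    return np.array(X), np.array(y)

def handcrafted_features(X):
    """Traditional-style features: per-channel mean and standard
    deviation plus mean acceleration magnitude, flattening each
    window into a fixed-length feature vector."""
    mean = X.mean(axis=1)                                  # (samples, channels)
    std = X.std(axis=1)                                    # (samples, channels)
    mag = np.linalg.norm(X, axis=2).mean(axis=1, keepdims=True)
    return np.concatenate([mean, std, mag], axis=1)

# Example: 1,000 timesteps of synthetic 3-axis accelerometer data,
# with the first half labeled 0 (e.g. walking) and the rest 1 (e.g. sitting).
rng = np.random.default_rng(0)
signal = rng.standard_normal((1000, 3))
labels = np.repeat([0, 1], 500)

X, y = sliding_windows(signal, labels)
F = handcrafted_features(X)
print(X.shape, y.shape, F.shape)   # (14, 128, 3) (14,) (14, 7)
```

A classical model (e.g. a support vector machine) would be fit on the flattened feature matrix `F`, whereas a deep learning model would consume the raw windows `X` directly and learn its own features — which is precisely the shift this post motivates.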