- Title
- Using deep learning to assess eating behaviours with wrist-worn inertial sensors
- Creator
- Heydarian, Hamid
- Relation
- University of Newcastle Research Higher Degree Thesis
- Resource Type
- thesis
- Date
- 2022
- Description
- Research Doctorate - Doctor of Philosophy (PhD)
- Description
In today’s world, cardiovascular diseases such as heart attacks and stroke are the leading chronic diseases in terms of premature death. An unhealthy diet is among the habits that increase an individual’s metabolic risk factors (e.g., obesity, increased blood glucose, and raised blood pressure), which may lead to cardiovascular disease. Therefore, being able to accurately monitor the dietary intake activities of individuals could play an important part in promoting a healthier diet and preventing cardiovascular disease. Wearable motion tracking sensors, inertial sensors in particular, are now common in commercial-grade wearables such as smartwatches and fitness bands. These sensors can provide the biometric data required to monitor dietary intake. Since an intake activity mainly consists of a series of intake-associated gestures, automatic intake gesture detection forms the basis of automatic dietary intake monitoring. The overarching goal of this research is to design machine learning models that enhance the automatic intake gesture detection process and improve detection performance using data recorded from wrist-worn inertial motion tracking sensors. To lay a robust groundwork for this goal, we conducted a systematic literature review to synthesise research on assessing dietary intake using upper limb motion tracking sensors. Our literature review revealed that wrist-worn inertial sensors are the most widely used sensors in intake gesture detection. It also became evident that deep learning is gaining more attention and showing more promise than classical machine learning algorithms in this field. Therefore, we chose to utilise deep learning models trained on data recorded from wrist-worn inertial sensors.
To provide adequate training data for our deep learning models, and to facilitate research in this field for researchers around the globe, we conducted a series of data collection experiments in two phases to collect inertial and video intake gesture data. As a result, we made the OREBA (Objectively Recognizing Eating Behaviour and Associated Intake) multimodal datasets publicly available through a published paper. The OREBA datasets consist of data recorded in a discrete-dish data collection scenario in phase one (i.e., OREBA-DIS) and a shared-dish data collection scenario in phase two (i.e., OREBA-SHA). In our next study, we used the OREBA datasets to propose a new deep learning model that improved the performance of intake gesture detection compared to existing state-of-the-art models. We also clarified the effects of data preprocessing steps and inertial sensor combinations on intake gesture models. Our results showed that applying a consecutive combination of mirroring, gravity removal, and standardization preprocessing steps improved detection performance, while smoothing had a detrimental impact. In our last study, we explored the possibility of score-level and decision-level fusion of inertial and video data using the OREBA multimodal datasets. We benchmarked score-level and decision-level fusion approaches against no-fusion approaches (i.e., using the individual inertial model or the individual video model). Our results showed that the max score model, a score-level fusion approach, outperforms all other fusion approaches considered; however, it may not always outperform the no-fusion approaches. We introduced an assessment that can be performed to determine the potential of score-level and decision-level fusion (i.e., fusion of the outputs of the individual inertial and video models).
Our assessment showed that fusing the outputs of the individual models is more promising on the OREBA-DIS dataset than on the OREBA-SHA dataset.
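The max score model mentioned above can be illustrated with a minimal sketch. It assumes each individual model (inertial and video) outputs a per-frame probability that an intake gesture is occurring, and fuses them by taking the element-wise maximum; the function name and example values below are illustrative, not taken from the thesis.

```python
import numpy as np

def max_score_fusion(inertial_scores, video_scores):
    """Score-level fusion sketch: take the element-wise maximum of the
    per-frame intake-gesture probabilities produced by the individual
    inertial and video models. (Illustrative; the thesis's actual
    pipeline may differ in thresholding and post-processing.)"""
    inertial_scores = np.asarray(inertial_scores, dtype=float)
    video_scores = np.asarray(video_scores, dtype=float)
    return np.maximum(inertial_scores, video_scores)

# Hypothetical per-frame probabilities from each individual model
inertial = [0.2, 0.8, 0.4]
video = [0.6, 0.5, 0.3]
fused = max_score_fusion(inertial, video)
print(fused)  # [0.6 0.8 0.4]
```

Taking the maximum lets whichever modality is more confident at each frame dominate, which is one reason such a fusion can beat either individual model when their errors are complementary.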
- Subject
- eating activity detection; hand-to-mouth movement; inertial; video; camera; dietary monitoring; eating behaviour assessment; communal eating; 360-degree video camera; wrist-mounted motion tracking sensor; accelerometer; gyroscope; deep learning; intake gesture detection; wrist-worn; score-level fusion; decision-level fusion
- Identifier
- http://hdl.handle.net/1959.13/1439012
- Identifier
- uon:40792
- Rights
- Copyright 2022 Hamid Heydarian
- Language
- eng
- Full Text
| File | Description | Size | Format |
|---|---|---|---|
| ATTACHMENT01 | Thesis | 5 MB | Adobe Acrobat PDF |
| ATTACHMENT02 | Abstract | 324 KB | Adobe Acrobat PDF |