I am a Postdoctoral Researcher in translational neuroengineering in the Neuroprosthetics Lab at UC Davis and the BrainGate Consortium. I build neurotechnologies that help people with nervous system injury or disease (e.g., stroke, ALS, dementia, Parkinson’s disease) regain lost movement and speech abilities via brain-computer interfaces (BCIs), and that provide neurorehabilitation and assistive technologies. My research focuses on breaking down barriers between humans and technology by developing intuitive modes of interaction using brain signals, movement, and natural language. My work spans human-centric AI, machine learning, signal processing, time series analysis of neurophysiological, inertial sensor, and speech signals, neural decoding, natural language processing, and social robots, all in service of neurotechnologies for healthcare. I am primarily interested in understanding brain signals and other human physiological signals to build life-enabling technologies.
In my current postdoctoral research, I am developing an intracortical BCI to restore lost speech in people with severe brain injury or disease by decoding speech-related neural activity, recorded with Utah multielectrode arrays implanted in the brain, and translating it into continuously synthesised speech, thus enabling people to speak using their brain signals.
Previously, I was a Postdoctoral Researcher in affective robotics in the Biomechatronics Lab at Imperial College London and the UK Dementia Research Institute, where I developed affective social robots and conversational AI to support people with dementia by improving their engagement, providing personalised interventions, and interactively assessing their health and wellbeing. I was also a venture lead in the MedTech SuperConnector™ accelerator, where I led the development of BrainBot, a social robot platform for mental health and telemedicine.
I received my PhD in Cybernetics and my MEng in AI and Cybernetics from the University of Reading, UK. During my PhD research in the Brain Embodiment Lab, I studied changes in the temporal dynamics of broadband brain signals (EEG) during voluntary movement. I developed a novel approach to modelling broadband EEG (rather than narrowband brain waves) with a non-stationary time series model to predict movement intention for motor control BCIs.
I was a postgraduate research assistant at the University of Reading, where I developed interactive neurorehabilitation tools providing combined motor and language therapy for stroke and brain injury survivors in the home environment; this technology was transferred for commercialisation. I was also a postgraduate research assistant in the SPHERE project at the University of Reading and the University of Southampton, where I worked on modelling motion kinematics and classifying movements from wearable inertial sensors for people with Parkinson's disease.
I enjoy collaborating with multidisciplinary teams of medical practitioners, patients, designers, and industry experts to find technological solutions to real-world health challenges. I also love sharing my research with the general public through science outreach. I have presented live demos of my BCI and neurorehabilitation technology at the Science Museum London, the Royal Institution, hospitals, schools, and universities, and my BCI was featured in the Royal Institution Christmas Lecture.
The ability to speak is a key determinant of quality of life, but it is disrupted in people with brain injury and neurodegenerative diseases such as ALS. Brain-computer interfaces (BCIs) can potentially restore speech in individuals who have lost the ability to speak by interpreting their speech-related neural activity. Current intracortical BCIs can enable users to communicate via point-and-click and handwriting with high accuracy, but these modes of communication are slow and do not capture the full expressive range of speech. Intelligible speech synthesis from brain signals has not yet been demonstrated. In my current postdoctoral research in the Neuroprosthetics Lab at UC Davis, I am building a speech BCI that enables people to speak by decoding their neural activity and translating it into speech.
This research is part of the BrainGate clinical trial. High-resolution intracortical neural activity is recorded from the speech motor cortex of human participants using chronically implanted Utah multielectrode arrays. My research focuses on developing a neural decoder to synthesise speech directly from this intracortical neural activity. I am integrating deep learning models with signal processing to uncover the neural dynamics and correlates of speech production and translate these into synthesised speech. The neural decoder can instantaneously synthesise voice from intracortical neural activity and provides real-time closed-loop audio feedback. The results show improvement over prior state-of-the-art research in speech synthesis from brain activity. This approach is suitable for real-time applications in BCIs to restore lost speech, which is the focus of my ongoing work.
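As a rough illustration of what such a decoder can look like (this is a sketch, not the actual BrainGate decoder; the channel count, bin size, GRU architecture, and mel-spectrogram target are assumptions made for the example), a causal recurrent network can map binned neural features to acoustic frames that a vocoder then turns into audio:

```python
# Minimal sketch of a causal neural-to-speech decoder (illustrative only).
# Assumes neural features (e.g., threshold crossings or spike-band power)
# binned every 10-20 ms, and mel-spectrogram frames as the vocoder target.
import torch
import torch.nn as nn

class SpeechDecoder(nn.Module):
    def __init__(self, n_channels=256, n_mels=80, hidden=512, layers=3):
        super().__init__()
        # A unidirectional GRU keeps the decoder causal, so it can run in
        # closed loop with low latency (no future context needed).
        self.rnn = nn.GRU(n_channels, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, n_mels)

    def forward(self, neural_features, state=None):
        # neural_features: (batch, time_bins, n_channels)
        out, state = self.rnn(neural_features, state)
        mel_frames = self.head(out)      # (batch, time_bins, n_mels)
        return mel_frames, state         # returned state enables streaming

# Streaming use: feed one bin at a time and pass the mel frames to a vocoder.
decoder = SpeechDecoder()
bin_features = torch.randn(1, 1, 256)    # one time bin of neural features
mel, state = decoder(bin_features)        # carry `state` into the next bin
```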
[More info]
Brain-computer interfaces (BCIs) provide a direct mode of interaction with external devices using brain signals. Movement is our fundamental mode of interaction with the environment, and hence detecting movement intention reliably from brain signals is important for developing intuitive motor control BCIs. During my doctoral research at the University of Reading, I investigated the temporal dynamics of broadband EEG to identify robust markers of movement intention. Brain activity is composed of oscillatory and broadband arrhythmic components and undergoes complex changes during voluntary movement. Traditionally, characterisation of movement from EEG has focused mostly on narrowband oscillatory processes, such as event-related (de)synchronisation in sensorimotor rhythms, and on slow non-oscillatory event-related potentials, such as motor-related cortical potentials. However, the temporal dynamics of broadband arrhythmic EEG had remained largely unexplored, with broadband EEG often treated as background noise.
I discovered new neural correlates of movement intention in the long-range temporal correlations of broadband EEG. These correlates are complementary to the conventional correlates above and provide previously inaccessible motor information from EEG, enabling earlier prediction of movement intention before its onset and improved classification accuracies. I developed a novel approach to modelling these long- and short-range temporal correlations in broadband EEG using a non-stationary ARFIMA time series model and machine learning classifiers to predict movement intention for robust BCIs.
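As a simplified illustration of this modelling idea (the long-memory estimator, fixed AR order, epoch sizes, and classifier below are assumptions for the sketch rather than my exact pipeline), one can estimate a long-memory parameter for each EEG epoch, fractionally difference the signal, fit short-range AR terms, and feed the fitted parameters to a classifier:

```python
# Illustrative sketch: ARFIMA-style features from broadband EEG epochs for
# movement-intention classification (parameter choices are assumptions).
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_d(x, scales=(4, 8, 16, 32, 64)):
    """Crude long-memory estimate via the aggregated-variance method (d = H - 0.5)."""
    var = [np.var([x[i:i + s].mean() for i in range(0, len(x) - s, s)]) for s in scales]
    slope = np.polyfit(np.log(scales), np.log(var), 1)[0]   # slope = 2H - 2
    return (1.0 + slope / 2.0) - 0.5

def frac_diff(x, d, n_weights=100):
    """Apply (1 - B)^d to a signal using a truncated binomial expansion."""
    w = [1.0]
    for k in range(1, n_weights):
        w.append(-w[-1] * (d - k + 1) / k)
    return np.convolve(x, np.array(w), mode="valid")

def arfima_features(x, ar_order=2):
    """Long-memory parameter d plus short-range AR coefficients as features."""
    d = estimate_d(x)
    y = frac_diff(x - x.mean(), d)
    # Fit AR(p) by least squares on the fractionally differenced series.
    X = np.column_stack([y[ar_order - i - 1: len(y) - i - 1] for i in range(ar_order)])
    ar_coef, *_ = np.linalg.lstsq(X, y[ar_order:], rcond=None)
    return np.concatenate(([d], ar_coef))

# Toy usage: one feature vector per single-channel, single-trial EEG epoch.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 512))      # 40 epochs x 512 samples
labels = rng.integers(0, 2, size=40)         # movement intention vs rest
features = np.array([arfima_features(e) for e in epochs])
clf = LogisticRegression().fit(features, labels)
```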
[More info]
Social robots are anthropomorphised robots capable of using natural language and facial expressions for engaging interactions. Dementia is a neurodegenerative disorder that leads to progressive decline in cognitive abilities and requires continuous care and support. Social robots can promote independence, improve cognition and social interaction, provide assistance and interventions to maintain quality of life, and support telemedicine and remote care for people with early-stage dementia and mild cognitive impairment. In my postdoctoral research at Imperial College London and the UK Dementia Research Institute, I developed different types of conversational AI, affective social robots, and a framework for interactive robotic interventions for dementia care. I conducted longitudinal experiments with persons with dementia to investigate how such robots can monitor their health and wellbeing automatically and non-intrusively by analysing human-robot interactions with machine learning and natural language processing. Early results showed that interactions with the robots give insight into a person's health and wellbeing, which will help personalise the robot's functionality for adaptive support. I also studied physiological (EEG) responses to human-robot interaction.
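As a toy illustration of this kind of analysis (the features, labels, and classifier below are assumptions made for the example, not the actual study pipeline), simple conversational features can be extracted from each interaction session and related to wellbeing ratings:

```python
# Illustrative sketch: turning a human-robot interaction transcript into
# simple engagement features for wellbeing monitoring (hypothetical pipeline).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def interaction_features(turns):
    """turns: list of (speaker, utterance, response_latency_seconds)."""
    user_turns = [(u, lat) for spk, u, lat in turns if spk == "user"]
    words_per_turn = [len(u.split()) for u, _ in user_turns]
    latencies = [lat for _, lat in user_turns]
    return np.array([
        len(user_turns),           # how much the person engaged
        np.mean(words_per_turn),   # verbosity per response
        np.mean(latencies),        # how quickly they responded
    ])

# Toy usage: one feature vector per recorded session.
sessions = [
    [("robot", "How did you sleep?", 0.0), ("user", "Quite well thank you", 1.2)],
    [("robot", "Shall we play a memory game?", 0.0), ("user", "Yes", 4.5)],
]
X = np.array([interaction_features(s) for s in sessions])
y = np.array([1, 0])               # e.g. clinician-rated wellbeing labels
model = RandomForestClassifier(n_estimators=50).fit(X, y)
```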
I was a venture lead in the MedTech SuperConnector™ accelerator, where I led the development of BrainBot, an affective social robot that interacts with users through natural language and human-like facial expressions. It also provides an interactive robotic telemedicine platform that clinicians can use for remote therapy. We are testing the platform for remote cognitive engagement therapy in collaboration with SCARF India, to assess its use as an affordable tool for dementia care and remote therapy.
[More info]
(Featured in an editorial in the American Journal of Geriatric Psychiatry)
Neurorehabilitation is an essential component of recovery after stroke and brain injury. The functional connectivity and structural proximity of elements of the language and motor systems result in frequent co-morbidity after brain injury; however, treatment for language and motor functions often occurs in isolation. Because of the care burden in rehabilitation centres and hospitals, it is not possible to provide patients with the necessary amount of high-intensity therapy. Hence, as a research assistant at the University of Reading, I developed interactive combined motor and language therapy tools (MaLT) for long-term home-based rehabilitation, in collaboration with language therapists, assistive technology researchers, physiotherapists and clinicians from the NHS, and patient and carer groups. MaLT comprises a suite of Kinect-based interactive games targeting both language and upper-limb motor therapy, and it records patient performance so that therapists can assess progress. The games target four major language therapy tasks (single-word comprehension, initial phoneme identification, rhyme identification, and naming) across eight levels of difficulty, generating appropriate questions programmatically so that gameplay is unique every time. MaLT was tested with stroke survivors at home and in an NHS hospital, with a positive response.
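As a toy illustration of this kind of procedural question generation (the word bank and the difficulty-to-distractor mapping below are invented for the example, not MaLT's actual content), a rhyme-identification question can be assembled on the fly with difficulty controlling the number of on-screen options:

```python
# Illustrative sketch of programmatic question generation for a rhyme task;
# the word bank and the level-to-distractor mapping are assumptions.
import random

WORD_BANK = {
    "cat": ["hat", "bat", "mat"],     # rhyming answers per target word
    "dog": ["log", "fog", "frog"],
}
DISTRACTORS = ["ship", "tree", "book", "lamp", "fish", "door", "milk", "sand"]

def rhyme_question(level):
    """Higher level -> more distractor options on screen (capped at 8)."""
    target = random.choice(list(WORD_BANK))
    answer = random.choice(WORD_BANK[target])
    n_distractors = min(1 + level, 8)
    options = random.sample(DISTRACTORS, n_distractors) + [answer]
    random.shuffle(options)
    return {"prompt": f"Which word rhymes with '{target}'?",
            "options": options, "answer": answer}

print(rhyme_question(level=3))
```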
The MaLT technology has been transferred to Evolv for commercialisation and is now included in their suite of virtual rehabilitation tools. I also developed a mobile app for speech and language therapy (SpeLT).
Studying the kinematics of movement can provide insight into the assessment of Parkinson's disease, the monitoring of its progression, and the development of rehabilitation strategies. Wearable inertial sensors are a cost-effective way of assessing movement in both clinical settings and the home environment. As a research assistant in the SPHERE project at the University of Reading, I developed a new approach that integrates modelling and classification of sit-to-stand movement kinematics using an extended Kalman filter, logistic regression, and unsupervised machine learning, with only two inertial sensors placed on the shank and back. Sit-to-stand transitions are an important part of activities of daily living and play a key role in functional mobility; they are often affected in older adults due to frailty and in people with motor impairments. This model was successfully used to characterise and compare sit-to-stand angular kinematics in younger healthy adults, older healthy adults, and people with Parkinson's disease.
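The sketch below illustrates the general idea of extended-Kalman-filter angle estimation from a single IMU; the two-dimensional state, noise values, and synthetic data are simplifying assumptions for illustration, not the published sit-to-stand model:

```python
# Minimal sketch of an EKF estimating segment tilt angle from one IMU
# (gyroscope + accelerometer); noise values and the single-sensor setup
# are assumptions made purely for illustration.
import numpy as np

def ekf_tilt(gyro, accel, dt, q=1e-4, r=0.05, g=9.81):
    """gyro: angular rate (rad/s); accel: (N, 2) array of [a_x, a_z] (m/s^2)."""
    x = np.zeros(2)                          # state: [tilt angle, gyro bias]
    P = np.eye(2)
    F = np.array([[1.0, -dt], [0.0, 1.0]])
    Q, R = q * np.eye(2), r * np.eye(2)
    angles = []
    for w, a in zip(gyro, accel):
        # Predict: integrate the bias-corrected angular rate.
        x = np.array([x[0] + dt * (w - x[1]), x[1]])
        P = F @ P @ F.T + Q
        # Update: compare measured gravity components with those expected
        # for the predicted tilt angle (nonlinear measurement model).
        h = g * np.array([np.sin(x[0]), np.cos(x[0])])
        H = g * np.array([[np.cos(x[0]), 0.0], [-np.sin(x[0]), 0.0]])
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (a - h)
        P = (np.eye(2) - K @ H) @ P
        angles.append(x[0])
    return np.array(angles)

# Toy usage with a synthetic 2-second sit-to-stand-like tilt profile.
dt, t = 0.01, np.arange(0, 2, 0.01)
true_angle = 0.6 * np.sin(np.pi * t / 2)
gyro = np.gradient(true_angle, dt) + 0.02    # angular rate with a small bias
accel = 9.81 * np.column_stack([np.sin(true_angle), np.cos(true_angle)])
est = ekf_tilt(gyro, accel, dt)
```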
In the SPHERE project at the University of Southampton, I developed an algorithm that estimates motion intensity and energy expenditure from wearable inertial sensors to assess mobility in people with Parkinson's disease and stroke during different activities. We used it to monitor movement continuously over multiple days in free-living conditions in the home environment, to study how mobility is affected by different physiological conditions.
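As a simplified illustration of the kind of processing involved (the windowing, the signal magnitude area measure, and the linear calibration coefficients below are assumptions for the example, not the SPHERE algorithm), an accelerometer-based intensity proxy can be computed per window and mapped to an energy estimate:

```python
# Illustrative sketch of a motion-intensity proxy from a body-worn
# accelerometer; window length, SMA measure, and the calibration
# coefficients are hypothetical.
import numpy as np

def motion_intensity(acc, fs, window_s=10.0):
    """acc: (N, 3) accelerometer samples in g; returns one SMA value per window."""
    n = int(window_s * fs)
    sma = []
    for w in range(len(acc) // n):
        seg = acc[w * n:(w + 1) * n]
        dynamic = seg - seg.mean(axis=0)      # crude removal of the gravity component
        sma.append(np.mean(np.sum(np.abs(dynamic), axis=1)))
    return np.array(sma)

def energy_expenditure(sma, a=1.2, b=0.9):
    """Hypothetical linear calibration from intensity to energy (kcal/min)."""
    return a * sma + b

# Toy usage on 5 minutes of synthetic 50 Hz data (gravity on the z-axis).
rng = np.random.default_rng(1)
acc = rng.normal(0.0, 0.1, size=(5 * 60 * 50, 3)) + np.array([0.0, 0.0, 1.0])
print(energy_expenditure(motion_intensity(acc, fs=50)))
```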
[More info]