
Charles Nduka, Emteq Labs - An Open Platform to Measure & Analyze Raw Facial Data | LSI Europe '24

The world's first eyewear that combines wireless non-contact sensors with a machine learning platform, enabling you to effortlessly collect and analyse facial data and activities.
Speakers
Charles Nduka
Founder and Chief Scientist, Emteq Labs

Charles Nduka 00:02
So I'm Charles Nduka. I'm the founder and chief scientist at Emteq Labs, a company revolutionizing the way we understand human behavior. I'm a plastic surgeon by background, helping to restore patients' facial expressions. But there's a problem: when patients are outside the clinic, I have no way of knowing whether they are performing their rehabilitation exercises as they should. Patients know what they need to do, but sometimes they just don't do it, and that can affect their outcomes. And that problem also affects a much wider, larger market of patients: those with depression. According to the World Health Organization, depression is the biggest cause of disability worldwide, and that, combined with dietary disorders such as obesity and its complications, leads to tremendous numbers of premature deaths. Despite this, we have no real way of understanding behaviors objectively, in real time, in the real world.

And this is kind of bizarre. For the last 200 years, mental health care and psychiatry haven't really changed their way of assessing patients. We've gone from the left-hand side, where you have a face-to-face consultation: the patient gets asked some questions, they may fill out a survey or questionnaire, they may have some sort of assessment such as the Hamilton Depression Scale, and then the patient goes off. They ask you, "How have you felt over the last two weeks? The last six weeks?" You're given some medication and sent away. You come back and they ask, "How are you feeling now? How do you compare to how you felt six weeks ago?" I don't know about you, but I can't remember how I felt six weeks ago. We wouldn't manage diabetes by simply asking patients how they feel; we revolutionized diabetes care by moving to continuous, real-time assessment to personalize the intervention. Now, obviously we have moved forward a little bit in the use of smartphones for understanding behavior, but our proposition is that eyewear, which is worn by 50% of the population and by 80-90% of the elderly, will become the platform for understanding behaviors in real time, in the real world, as a transdiagnostic digital biomarker platform.

Apple has introduced self-assessments, the PHQ-9 and GAD-7, within HealthKit as a way of understanding behaviors, and that's fine, but the problem is that these measurements are taken episodically. If you take a measurement this week and again next week, or this month and again next month, you may get the same scores, yet without knowing what's going on in between, there could be tremendous fluctuations that you're totally blind to. And this cuts across anybody developing technologies, devices, or interventions; I suspect that in many cases, studies are failing to reach significance because of that lack of granularity. Across cardiovascular health, we now have many tools for measuring patients' biometric responses in the real world, but we have nothing to understand moods and emotions. And ultimately, patients go to the doctor because they're not feeling well, yet we have no way of measuring this.

So, introducing Sense Eyewear: this is the first platform to measure patients' behavior in real time via facial expressions, behaviors, and moods. It is based on our last nine years of development, initially with facial electromyography (EMG) incorporated into a VR headset, which we used to build our models, understanding what the person was seeing and hearing and mapping that to their behavioral responses.
We subsequently developed a brand-new category of technology called facial optomyography. This is an optical-based system, but without cameras, that samples the face between 2,000 and 6,000 times a second. It's not capturing images but capturing muscle vectors, which enables us to approach the sensitivity of EMG without the disadvantage of skin contact, and, unlike cameras, it works in darkness and in the real world. So why the face? Well, apart from the fact that, as represented in this room, at least half of people have corrected vision, it's also the way that we as people interact and learn about each other. With people you know well, you can infer their mood: "Are you tired? How are you feeling? What's up?" You can do this because you're acting as a sensor, measuring their behavior, their expressions, their posture, their movements, and the glasses do the same thing. If a human can do it, a machine can do it with higher sensitivity, so it goes beyond expressions to emotions, fatigue, confusion, frustration, and a whole category of other behaviors that were never previously measurable, such as dietary behaviors. We can measure individual chews. So this is the case where chews become the new steps, because we know that dietary behaviors are important not only in psychiatric conditions but also in metabolic diseases.
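A minimal sketch of how chew counting from such a signal could look, assuming a one-dimensional jaw-muscle activation trace sampled at 2 kHz; the function name, filter settings, and thresholds below are placeholders for illustration, not Emteq's actual pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def count_chews(activation, fs=2000.0):
    """Rough chew count from a jaw-muscle activation trace (illustrative only).

    activation : 1-D array of muscle-activation samples
    fs         : assumed sampling rate in Hz; real devices may differ
    """
    activation = np.asarray(activation, dtype=float)

    # Smooth the rectified signal into an envelope; chewing cycles sit
    # roughly below 3 Hz, so higher-frequency jitter is filtered out.
    b, a = butter(4, 3.0 / (fs / 2.0), btype="low")
    envelope = filtfilt(b, a, np.abs(activation))

    # Each chew shows up as one envelope peak; enforce ~0.3 s spacing so a
    # single chew is not double-counted. Both thresholds are placeholders.
    peaks, _ = find_peaks(envelope,
                          height=envelope.mean() + envelope.std(),
                          distance=int(0.3 * fs))
    return len(peaks)
```

Real detection would need per-sensor signal conditioning and validation against labeled eating episodes; the sketch only shows why a high-rate muscle signal makes "chews as the new steps" a tractable counting problem.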

Charles Nduka 05:15
Emteq Labs has developed facial optomyography, a new method to sense expressions and emotions. Unlike facial EMG, it provides contact-free measurements with high sensitivity. OCO Sense measures the tiny changes in facial muscle activations thousands of times a second. This data is interpreted by our proprietary AI algorithms for wireless transmission.
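A minimal sketch of the kind of on-device reduction that makes wireless transmission of a kilohertz-rate signal practical, assuming a raw one-dimensional activation stream; all names and parameters here are assumptions, not the proprietary algorithms mentioned above:

```python
import numpy as np

def window_features(samples, fs=4000, window_s=0.25):
    """Reduce a high-rate activation stream to per-window summary features
    (illustrative only).

    samples  : 1-D array of raw sensor readings
    fs       : assumed sampling rate in Hz ("thousands of times a second")
    window_s : window length in seconds
    """
    samples = np.asarray(samples, dtype=float)
    n = int(fs * window_s)

    # Trim to a whole number of windows and reshape to (num_windows, n).
    usable = len(samples) - len(samples) % n
    windows = samples[:usable].reshape(-1, n)

    # Mean, variance, and peak amplitude per window: a few values every
    # quarter-second instead of thousands of raw samples to transmit.
    return np.stack([windows.mean(axis=1),
                     windows.var(axis=1),
                     np.abs(windows).max(axis=1)], axis=1)
```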

Charles Nduka 05:42
So this is a whole new paradigm for measurement: high sensitivity, low power, and with applications across a range of industries. Beyond this, our platform comprises the device, the glasses; software in the form of a smartphone app for the patient; a back-end platform for the researcher or clinician to understand the behaviors; and our cloud-based analytics. As I mentioned, the metrics are broad. We can measure things that were previously just impossible to access and that you can't get from a ring, a watch, or other wearable devices: expressions and emotions, and dietary behaviors, which matter in oncological studies or in conditions that affect gut health. These are all affected by what a person is doing and how they behave, and if you don't understand those factors, how can you be sure that your results are meaningful? Attention we measure not through eye movement, not yet anyway, but through head movement, because an IMU on the head gives you a far better understanding of a person's behavior than measuring from the wrist. We can also measure a whole range of activities that you can't capture with any other wearable device, and without any calibration: sitting, standing, walking, stooping, exercises, all calibration-free. Optionally, we can also measure dietary behaviors with a camera that captures food or medications being taken, automatically detecting ingestion. Our North Star is depression, a condition that has a tremendous impact on quality of life, and this is our development pathway for it, with partners at a number of universities. This is benchmarked against the current gold-standard tools, such as the PHQ-9, which compress a lot of data into four response categories and therefore lose granularity. We've published data on our activity recognition showing high accuracy, and we've also published our work on depression, comparing depressed versus non-depressed people, where we can detect group differences with the same accuracy as current standard paper-based tools; that study was done by Professor James Stone at Sussex University.
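A minimal sketch of coarse activity and head-stability labeling from a single head-worn IMU window, assuming (N, 3) accelerometer and gyroscope arrays; the thresholds and function name are placeholders for illustration, not the published activity-recognition method:

```python
import numpy as np

def classify_imu_window(accel, gyro):
    """Coarse activity and head-stability labels from a head-worn IMU window
    (illustrative placeholder).

    accel : (N, 3) accelerometer samples in g
    gyro  : (N, 3) gyroscope samples in degrees/second
    """
    accel = np.asarray(accel, dtype=float)
    gyro = np.asarray(gyro, dtype=float)

    # Acceleration magnitude with gravity (~1 g) removed; rhythmic spikes
    # from steps raise the variance, stillness keeps it near zero.
    motion = np.linalg.norm(accel, axis=1) - 1.0
    moving = motion.std() > 0.05          # placeholder threshold

    # Head rotation rate as a crude attention proxy: a head that is mostly
    # still suggests the wearer is focused rather than scanning around.
    head_rate = np.linalg.norm(gyro, axis=1).mean()
    steady_head = head_rate < 20.0        # degrees/second, placeholder

    return {"activity": "moving" if moving else "stationary",
            "steady_head": bool(steady_head)}
```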

Charles Nduka 08:09
This new category of technology has wide applications. As I mentioned, we're going to focus on depression; dietary behaviors are an adjacent market, and there is a whole range of other opportunities for our technology via licensing. The current eyewear, which I can show you later today if you're interested, is rather bulky, and we're raising investment to miniaturize the device so it is usable in real-world settings. So we monitor the whole gamut of behaviors, activities, how a person is feeling, and dietary behaviors, and can do things that are, as I say, just impossible with existing wearables. We are currently selling to researchers at some leading universities, and the plan is to bring the platform to wider markets. We have an amazing team: Steen Strand was head of eyewear at Snapchat and launched the previous iteration of Spectacles, and Charlotte Roch is in the top 2% on Google Scholar for smart AI wearable technologies. We're raising $5 million next year to flesh out the technology platform, run validation studies, and pursue regulatory approvals.
