According to the researchers, who will present these findings on April 8 at the Association for Computing Machinery (ACM) Conference on Health, Inference, and Learning, this system is less likely to be tripped up by different cameras, lighting conditions or facial features, such as skin color.
“Every person is different,” said study author Xin Liu, a doctoral student at UW.
“So this system needs to be able to quickly adapt to each person’s unique physiological signature and separate it from other variations, such as what they look like and what environment they are in.”
The system then used spatial and temporal information from the videos to calculate both vital signs.
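The article does not give implementation details, but camera-based vital-sign methods in this area commonly combine a spatial step (averaging pixels over the face region in each frame) with a temporal step (finding the dominant frequency of that averaged signal over time). The sketch below is a hypothetical, simplified illustration of that idea for pulse rate only; the function name, shapes, and frequency band are assumptions, not the researchers' actual pipeline.

```python
import numpy as np

def estimate_heart_rate(frames, fps):
    """Illustrative pulse estimate (BPM) from face-region video frames.

    frames: array of shape (T, H, W), e.g. green-channel crops of the face.
    Spatial step: collapse each frame to one average brightness value.
    Temporal step: find the dominant frequency of that brightness signal.
    """
    signal = frames.reshape(len(frames), -1).mean(axis=1)  # spatial average
    signal = signal - signal.mean()                        # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Restrict to a plausible human heart-rate band (0.7-4 Hz, i.e. 42-240 BPM)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0

# Synthetic demo: frames whose brightness oscillates at 1.2 Hz (72 BPM)
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
frames = 100 + 2 * np.sin(2 * np.pi * 1.2 * t)[:, None, None] * np.ones((1, 8, 8))
print(round(estimate_heart_rate(frames, fps)))  # → 72
```

A real system would also need face tracking, motion compensation, and a learned model rather than a raw Fourier peak, but the spatial-then-temporal structure is the same.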
Although the system worked well on some datasets, it still struggled with others that contained different people, backgrounds and lighting. This is a common problem known as “overfitting,” the team said.
The researchers improved the system by creating a personalized machine learning model for each individual.
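One common way to personalize a shared model, which may or may not match the researchers' actual method, is to copy the general model's parameters and briefly fine-tune them on a few calibration samples from one person. The toy sketch below shows that pattern for a linear model; the function name, data, and hyperparameters are all hypothetical.

```python
import numpy as np

def personalize(shared_weights, x_person, y_person, lr=0.01, steps=200):
    """Fine-tune a copy of shared linear-model weights on one person's
    calibration samples (a toy stand-in for per-person adaptation)."""
    w = shared_weights.copy()
    for _ in range(steps):
        pred = x_person @ w
        grad = 2 * x_person.T @ (pred - y_person) / len(y_person)  # MSE gradient
        w -= lr * grad
    return w

# Synthetic demo: this person's true mapping differs from the shared model
rng = np.random.default_rng(0)
x = rng.standard_normal((50, 2))          # calibration inputs
true_w = np.array([1.0, 0.5])             # this person's actual weights
y = x @ true_w                            # calibration targets
shared_w = np.array([1.0, 0.0])           # generic, population-level weights

personal_w = personalize(shared_w, x, y)
mse = lambda w: np.mean((x @ w - y) ** 2)
print(mse(personal_w) < mse(shared_w))    # → True
```

The key design point is that the shared weights give a good starting point, so only a small amount of per-person data is needed to adapt.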