Smart speakers and home tech continue to raise serious security and privacy concerns. Many of us own smart devices because the convenience they bring outweighs the downsides. Even though the data collection they depend on poses complex ethical problems that many of us wrestle with in our day-to-day lives, it is worth remembering that these practices remain morally questionable.
Recently, however, researchers at the University of Washington found that the constant listening your smart speaker does could actually save your life.
Just imagine someone in an emergency asking Alexa for help, and the device using its various connections to summon assistance. You do not even have to imagine it, because it has already happened, even if unintentionally.
Regulatory hurdles, not technical ones, keep Amazon and its competitors from building emergency features in seamlessly. If you didn't know, Alexa can already place 911 calls, to the amazement of everyone involved (though this works through a Bluetooth-connected phone, not because Alexa has suddenly become sentient).
But what happens when you are unable to speak the command in the first place? Shouting for help during an emergency may be physically impossible, or doing so could put you in further danger. Smart devices can offer a different solution, starting with cardiac arrest and the way you breathe.
One of the first signs of cardiac arrest is irregular gasping, known as agonal respiration, which produces a distinctive sonic pattern that a machine can detect, as long as it is always listening.
Dr. Jacob Sunshine, an assistant professor of anesthesiology and pain medicine at the University of Washington, recently explained: "This kind of breathing happens when a patient experiences really low oxygen levels. It's sort of a guttural gasping noise, and its uniqueness makes it a good audio biomarker to use to identify if someone is experiencing a cardiac arrest."
The researchers trained their initial AI model on 7,316 2.5-second audio clips captured by smart devices and mobile phones over eight years.
They also used roughly 83 hours of ordinary sleeping and breathing sounds as negative examples, to reduce the model's misidentification of agonal respiration. The resulting system could accurately identify agonal breathing events from up to six meters away.
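To get an intuition for why agonal respiration makes a good audio biomarker, consider what separates it from normal breathing: sparse, loud gasps punctuating near-silence, versus a steady, periodic sound. The toy sketch below is not the researchers' model; it is a minimal, self-contained illustration (on synthetic audio, with an invented energy-variance feature and threshold) of how bursty gasping can be told apart from steady breathing in 2.5-second clips.

```python
import math
import random

SAMPLE_RATE = 8000   # Hz; illustrative assumption, not from the study
CLIP_SECONDS = 2.5   # mirrors the 2.5-second clips used by the researchers
WINDOW = 400         # 50 ms analysis windows (assumed)

def synth_regular_breathing(seed=0):
    """Synthetic stand-in for normal breathing: noise under a slow, smooth envelope."""
    rng = random.Random(seed)
    n = int(SAMPLE_RATE * CLIP_SECONDS)
    return [(0.5 + 0.5 * math.sin(2 * math.pi * 0.3 * t / SAMPLE_RATE))
            * rng.gauss(0, 0.1) for t in range(n)]

def synth_agonal_gasps(seed=0):
    """Synthetic stand-in for agonal respiration: loud bursts separated by near-silence."""
    rng = random.Random(seed)
    n = int(SAMPLE_RATE * CLIP_SECONDS)
    clip = [rng.gauss(0, 0.01) for _ in range(n)]   # quiet background
    for start in range(0, n, SAMPLE_RATE):          # one gasp per second
        for t in range(start, min(start + 2000, n)):
            clip[t] += rng.gauss(0, 0.8)            # 0.25 s loud burst
    return clip

def energy_variance(clip):
    """Variance of short-window energies: high for bursty audio, low for steady audio."""
    energies = [sum(x * x for x in clip[i:i + WINDOW]) / WINDOW
                for i in range(0, len(clip) - WINDOW + 1, WINDOW)]
    mean = sum(energies) / len(energies)
    return sum((e - mean) ** 2 for e in energies) / len(energies)

def looks_agonal(clip, threshold=0.01):
    """Classify a clip as gasping-like when its energy pattern is sufficiently bursty."""
    return energy_variance(clip) > threshold
```

A real detector would of course learn its decision boundary from thousands of labeled clips rather than use a hand-picked threshold, but the underlying idea is the same: agonal gasps leave a temporal energy signature that steady breathing does not.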
While the system can run on the hardware already found in smart speakers such as the Amazon Echo, and considerable work has gone into improving its accuracy, the researchers say the model still needs additional training before it can work dependably in real-world situations.
A wise move, I must say, because a malfunctioning system would intrude on your privacy while being useless at the same time. No one wants that.