How "good enough" AI could make voice assistants more ethical
In 2019 it was revealed that Amazon, Google and even Apple employed thousands of people to listen to voice recordings from their smart assistants. They defended the practice as necessary to improve the quality of their voice assistants. But is this really a valid argument?
To explore this issue I wrote an opinion piece on the Candle website. It points out that at a certain point, an algorithm will have learned enough. After 10,000 recordings of "Turn on the light", it won't get any better at understanding that command. By now, having your voice used to train an algorithm should be "opt in" - a situation Apple has since made its default.
I argue that we should think of it like going to school. After a while, an algorithm graduates, and from that point on it should be perfectly able to run locally, in the home, instead of in the cloud. That way we no longer have to send recordings of our voice to a data center, which limits the risk consumers are exposed to.
It's faster too. Running voice recognition in the home eliminates the bottleneck of sending audio back and forth over the internet. It also means the system keeps working when the internet connection is down.
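To make the idea concrete, here is a minimal sketch (not from the original piece) of what a "graduated" assistant could do entirely on-device: once speech has been transcribed locally, matching it against a fixed set of commands needs no cloud round trip at all. The command list and the `match_command` helper are hypothetical illustrations; a real system would pair this with an on-device speech-to-text model.

```python
from difflib import SequenceMatcher

# Hypothetical fixed command set a "graduated" assistant would support.
COMMANDS = {
    "turn on the light": "light_on",
    "turn off the light": "light_off",
    "what time is it": "tell_time",
}

def match_command(transcript: str, threshold: float = 0.6):
    """Map a locally transcribed phrase to an intent, or None if nothing fits.

    Uses simple string similarity purely for illustration; the point is
    that this lookup runs in the home, with no audio leaving the device.
    """
    best_intent, best_score = None, 0.0
    for phrase, intent in COMMANDS.items():
        score = SequenceMatcher(None, transcript.lower(), phrase).ratio()
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else None

print(match_command("turn on the lights"))  # close match -> "light_on"
print(match_command("order me a pizza"))    # no match -> None
```

Because the matching is trivial once the model has "learned enough", there is no quality argument left for shipping the raw audio to a server.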
If you ask me, fully local voice control is the future.