Read the full piece on QZ.com and then think about the implications of this … it boggles the mind. If this cool Moto X phone has a mic that is ‘always listening’ then I, personally, wouldn’t feel safe having it around – much like the web camera on my Samsung Smart TV, which I covered with a piece of tape the very minute I set it up. Given what we now know about the global data dragnets and mass surveillance activities in the Five Eyes countries (US, UK, CAN, NZ, AUS), and everywhere else in some modified form, would YOU trust a mobile phone that can always listen to what you say?
And if Google’s goal is to some day be BETTER than humans (whatever that means), e.g. with more and deeper context such as location information, real-time ‘likes’, search history etc., I wonder if that is really a good thing. Is the purpose of these technologies to make human lives richer or ‘better’, or is the primary purpose to create value for the parties that run these technologies and platforms (in other words, the oil companies of the future), i.e. to a) create more and deeper data that can be turned into marketing intelligence, in essence ‘instrumentalizing’ us, b) create a deep profile on pretty much any user that can be used for global, instant surveillance and ‘security’ purposes (and, as in the USA, without individual warrants), and c) get us so addicted to technologies that ‘augment’ our human lives that we are 100% hooked on using them…? You tell me:)
“Google is already moving rapidly to enable voice commands in all of its products. On mobile phones, Google Now for Android and Google’s search app on the iPhone allow users to search the web via voice, or carry out other basic functions like sending emails. Similarly, Google Glass would be almost unusable without voice interaction. At Google’s conference for developers, it unveiled voice control for its Chrome web browser. And Motorola’s new Moto X phone has a specialized microchip that allows the phone to listen at all times, even when it’s asleep, for the magic word that begins every voice conversation with a Google product: “OK…”
“What we’re really trying to do is enable a new kind of interaction with Google where it’s more like how you interact with a normal person,” says Huffman. To illustrate, he picks up his smartphone and says “How far is it from here to Hearst Castle?” Normally, getting an answer to such a seemingly simple question would require googling “Hearst Castle,” clicking on a map, and typing in your own address. But Huffman’s phone gets the answer right on the first try—a neat illustration of how voice commands can save time and effort. In a way, it’s part of the natural progression of convenience in computer interfaces: 10 years ago writing an email required walking over to a computer, five years ago we could whip out our phones, and in the near future we’ll simply start talking.
…this context doesn’t just make Google’s voice interfaces usable—some day, it could make them even better than humans. “Today, automatic speech recognition is not as good as people, but our ambition is, we should be able to be better than people,” says Huffman. In order to achieve that, Google will leverage the intimate knowledge it has of its users. “In some sense Google has a lot of context that [a human transcriptionist] doesn’t have,” says Huffman. “We know where you are based on your phone’s location and there is some context around what you’ve been talking about lately. Therefore that should help us understand what kinds of things you might be saying.” But commanding a computer by voice is more like the old model of interaction with a computer—the command line. It’s a potentially powerful interface—Huffman imagines a future in which we might even communicate with our computers via a verbal short-hand—but it would require that humans learn a whole new way to control computers, and learn anew the capabilities of all the software that might be used in this way…”
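To make Huffman’s point about context concrete: one common way context helps speech recognition is by re-ranking the recognizer’s candidate transcriptions so that hypotheses matching what the system knows about you (your location, recent topics) get a small score boost. The sketch below is a toy illustration of that idea only – it is not Google’s actual system, and every name, phrase, and score in it is invented.

```python
# Toy illustration of context-biased re-ranking of speech hypotheses.
# All hypotheses, context terms, and scores are invented for this example.

def rescore(hypotheses, context_terms, boost=0.3):
    """Re-rank n-best speech hypotheses: each (text, acoustic_score) pair
    gets a bonus for every context term (recent topic, nearby landmark)
    it contains, so context can flip a near-tie the right way."""
    def score(pair):
        text, acoustic = pair
        bonus = sum(boost for term in context_terms if term in text.lower())
        return acoustic + bonus

    return sorted(hypotheses, key=score, reverse=True)

# Two near-tied transcriptions of the same audio; location context
# ("hearst castle" is a nearby landmark) breaks the tie toward the
# hypothesis a human with the same context would pick.
nbest = [("how far is it to hurst council", 0.52),
         ("how far is it to hearst castle", 0.48)]
context = ["hearst castle", "san simeon"]
best, _ = rescore(nbest, context)[0]
print(best)  # → how far is it to hearst castle
```

The acoustically slightly-better but nonsensical hypothesis loses once context is factored in – which is exactly the advantage Huffman claims Google has over a human transcriptionist who knows nothing about you. Of course, that advantage only exists because the profile exists, which is the trade-off this whole post is about.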