Stacking up Apple HomePod, Amazon Echo and Google Home


Google Home supports:

  • Google Play Music
  • Spotify
  • YouTube Music (U.S. and Australia only)
  • Pandora (U.S. only)


To set the default music service on Google Home, you have to make the change in the app; with Amazon Echo, by comparison, you can do it by speaking directly to the speaker. One problem Google faces right now is that Amazon Echo is available in 89 countries, while Google Home is available in just seven.

Calling, messaging

Google, for its part, has an advantage over Amazon in owning Google Voice, which allows easy interfacing with the plain old telephone system. Google Home can call phone numbers in your address book out of the box. With a little setup, it can present your correct caller ID, but by default it sends Unknown when placing a call. It does not send text messages, responding, “Sorry, I can’t send texts yet.”

SmartHome

To set up smart home integrations with Google Home, you use the Google Assistant application. It isn’t obvious, but it’s only a few taps away once you launch the app: tap the icon in the upper right that looks something like an old inbox/outbox tray, then the ellipsis menu icon, then Settings, then Home Control, then “+” to add a home control integration. From there, select from the list of supported devices and you’re off and going.

How Siri / HomePod Stacks Up

Apple claims Siri has 500M users; the company did not provide specifics on what constitutes an active user, or which devices see the most activity, but the figure is up from 375M in June 2017. Siri works in 21 languages, localized for 36 countries. This is important, especially as Apple Music is available in 113 countries, including 59 countries where Spotify is completely unavailable.

Apple knows how people use Siri, which is one of the things Phil Schiller told Sound and Vision recently, saying, “Voice technologies like Siri are also gaining in popularity with Siri responding to over 2 billion requests each week. This helps us understand how people actually interact with their devices, what they ask, and helps us create a product for the home that makes sense.”

HomePod can do translation in French, German, Italian, Mandarin Chinese, and Spanish.

When it comes to accessing calendars, notes and lists, Schiller said, “In addition to Siri’s deep knowledge of music, Siri understands over a dozen categories, Home as we’ve discussed, News, Alarms & Timers, Weather, Sports, Messages, and more. We also opened up SiriKit for HomePod, which allows you to use Siri to access your favorite messaging apps or add reminders, notes, and lists to the apps you use on your iPhone. And what’s important for all of this is the reason we call these ‘domains.’ Siri understands these topics deeply and understands what you’re looking for even though we all might ask for things in different ways. Siri understands meaning and intention, so it enables a more natural interaction.”

That is, Siri on HomePod could potentially access any app that gets on board with SiriKit. Obviously, Apple’s own Notes, Reminders, and Calendar support it. SiriKit understands a number of different “domains.”

SiriKit’s domains:

  • VoIP Calling – Initiate calls and search the user’s call history.
  • Messaging – Send messages and search the user’s received messages.
  • Payments – Send payments between users or pay bills.
  • Lists and Notes – Create and manage lists and to-do items.
  • Visual Codes – Convey contact and payment information using Quick Response (QR) codes.
  • Photos – Search for and display photos.
  • Workouts – Start, end, and manage fitness routines.
  • Ride bookings – Book rides and report their status.
  • Car telematics – Manage vehicle door locks and get the vehicle’s status.
  • CarPlay – Interact with a vehicle’s CarPlay system.
  • Restaurant reservations – Create and manage restaurant reservations with help from the Maps app.

Missing is the obvious Music domain. All of these domains are available to third-party applications, so you can tell Siri to place a call to a contact through a specific app and it will place the call, for example, “Hey Siri, call Brian Humphris via WhatsApp.” Siri requests permission to access WhatsApp data the first time this is used, and then places the call.
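
Taking the Messaging domain as an example, a third-party app exposes itself to Siri through an Intents app extension that handles the relevant intent. The sketch below uses Apple’s Intents framework; MessagingService is a hypothetical stand-in for the app’s real delivery code, not anything Apple or this article names.

    import Intents

    // Hypothetical stand-in for the app's real message delivery code.
    enum MessagingService {
        static func send(_ text: String, to recipients: [String]) {
            print("Sending \"\(text)\" to \(recipients)")
        }
    }

    // Handles "Hey Siri, send a message to ... via <YourApp>" style requests.
    final class SendMessageIntentHandler: NSObject, INSendMessageIntentHandling {

        // Siri calls this to confirm the app can deliver before committing.
        func confirm(intent: INSendMessageIntent,
                     completion: @escaping (INSendMessageIntentResponse) -> Void) {
            completion(INSendMessageIntentResponse(code: .ready, userActivity: nil))
        }

        // Siri hands over the resolved recipients and message content here.
        func handle(intent: INSendMessageIntent,
                    completion: @escaping (INSendMessageIntentResponse) -> Void) {
            guard let recipients = intent.recipients, !recipients.isEmpty,
                  let content = intent.content else {
                completion(INSendMessageIntentResponse(code: .failure, userActivity: nil))
                return
            }
            MessagingService.send(content, to: recipients.map { $0.displayName })
            completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
        }
    }

The first time a user triggers this by voice, the system asks for permission before Siri shares the request with the app, which matches the WhatsApp permission prompt described above.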

Siri lets you know when a service isn’t using SiriKit by returning the message, “I wish I could, but [application name] hasn’t set that up with me yet.” The exception is music, where “Hey Siri, play the Ramones from iHeartRadio” returns “I can’t play from iHeartRadio” and prompts to search Apple Music.

We know from Schiller’s interview that Apple has been working to make sure Siri’s command of the music domain for Apple Music is stronger than it has been in the past.

“That’s why we’ve worked hard to improve Siri’s understanding of music to deliver a more personalized experience. This tight integration of Siri and Apple Music allows HomePod to understand your music tastes and preferences, and lets you tune them by simply saying, ‘I like this song’ or ‘play more like this.’ Using the latest advancements in machine learning and AI (artificial intelligence), we’re also able to play music based on a particular genre, mood or activity, or a combination of those, so HomePod knows what ‘dinner music’ sounds like — for you — or what you mean when you want to relax,” Schiller said.

Apple’s recent purchase of Shazam can only help. Shazam is the music recognition company and app that identifies music playing around you. It also may have an impact on Apple’s augmented reality plans — when it recognizes audio is playing, it can supplement that audio with visual cues in AR — but for HomePod, it’s possible that Siri will be able to recognize music not being played on HomePod and take a command to “play more of what I just heard.”

Apple Music streams at 256kbps AAC. Some people contend the human ear can’t discern differences above this bitrate anyway, and that the Mastered for iTunes program gets as much as you could ever need out of it, but improving audio means fixing each weak link in the chain: speaker placement, speakers, amplifiers, interconnects, source audio. HomePod takes care of speaker placement, speakers, amplifiers and interconnects, but the source audio stays the same. It’s not as if Apple doesn’t have the lossless source material; it lacks either the desire or the licensing to do something about it.

There are two ways to think about HomePod and the music domain. For one, it presumably wouldn’t hurt Apple any to interoperate with other music services. Apple has a history of making products that interoperated with competitors: Mail is an obvious one, and iChat AV integrated with AOL and Google chat. It was Steve Jobs who said, “We have to let go of this idea that for Apple to win, Microsoft has to lose…”

The result is that HomePod is a product that relies on the iPhone, whereas the original iPod wasn’t reliant on the Mac for very long and could be used by any customer with any computer. That allowed it to become a halo product, one that anyone could buy and use, and that might attract consumers to buy other Apple products.

The other way to think about HomePod is to start from the music service rather than the product. The customer already owns an iPhone. That same customer already has an Apple Music subscription, because it’s easy, because it comes with a trial on the phone, and because inertia is a powerful thing. And because that customer already has Apple Music, there’s really only one choice: HomePod. Apple spends more money on sound quality, which makes the purchase easier to stomach.

For users who have become accustomed to telling Alexa to play radio from TuneIn or a station from iHeartRadio, this poses a conflict. If HomePod is the best-sounding speaker, does it matter that our music (or radio) isn’t in Apple Music? If you’re focused on the best sound quality and your lossless audio is stored in iTunes, is it treated like a second-class citizen because you have to push it to the speaker via AirPlay rather than pull it via a Siri command?

Calling and Messaging

Siri understands calling and messaging on the phone, the Watch, and CarPlay, but not on Apple TV. It will support them on the HomePod, albeit through a connected iPhone. It does not currently distinguish multiple users by voice as Google Home does.

SmartHome integrations

Siri supports HomeKit and shared homes within HomeKit. The setup has traditionally been to open the Home app, add an accessory by tapping the plus symbol in the upper right, and scan a QR code or 8-digit numerical code; the accessory then identifies itself and asks to be added to a room within your Home app’s home. In the near future, these devices will also be added via NFC, so you’ll just hold the device up to your phone and it will be added much more easily. HomeKit isn’t controlled by Siri on the Mac, Siri on Apple TV, or Siri on CarPlay. It does work from the phone, iPad, and Watch. Siri on HomePod also understands HomeKit instructions.
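
For a sense of what those integrations look like under the hood, here is a minimal sketch using Apple’s HomeKit framework to enumerate a home’s accessories and toggle a light. The accessory name “Desk Lamp” is a placeholder of my own, not anything from the Home app.

    import HomeKit

    // Sketch only: lists the primary home's accessories and toggles the power
    // state of a placeholder accessory named "Desk Lamp".
    final class HomeController: NSObject, HMHomeManagerDelegate {
        private let manager = HMHomeManager()

        override init() {
            super.init()
            manager.delegate = self   // homes load asynchronously after this
        }

        // Called once HomeKit has loaded the user's homes.
        func homeManagerDidUpdateHomes(_ manager: HMHomeManager) {
            guard let home = manager.primaryHome else { return }

            for accessory in home.accessories {
                print("\(accessory.name) in \(accessory.room?.name ?? "no room")")
            }

            // Find a power-state characteristic on the placeholder lamp and flip it.
            if let lamp = home.accessories.first(where: { $0.name == "Desk Lamp" }),
               let power = lamp.services
                   .flatMap({ $0.characteristics })
                   .first(where: { $0.characteristicType == HMCharacteristicTypePowerState }) {
                let isOn = power.value as? Bool ?? false
                power.writeValue(!isOn) { error in
                    if let error = error { print("Write failed: \(error)") }
                }
            }
        }
    }

Conceptually, a Siri request like “turn off the desk lamp” resolves to the same kind of characteristic write, performed on your behalf.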

Privacy is a real concern for users with devices equipped with microphones. It’s not a new thing to say, “If you’re not paying for the product, you are the product.”

Apple’s customers buy products; Google’s customers buy AdWords. This is an oversimplification given Google’s adventures in phones and smart speakers, but it’s true enough that the largest part of Google’s income still comes from the ad business. Google does do some things on behalf of user privacy and security, including building an ad blocker into the latest version of the Chrome browser and encouraging the implementation of HTTPS everywhere. At the same time, it’s very clear that users of its products are used to build large data sets from which Google can learn.

Google warns you that if you’re going to have this device in a room where you have guests, you should inform them that they may be recorded. Google had an issue early on with the Google Home Mini recording all the time instead of just the audio after the wake word. That was limited to preview hardware sent to reviewers, but it highlights why people might reasonably be nervous about these devices.

Amazon Alexa’s privacy policy notes that they will store information about the calls and messages you make when interacting with Alexa, and that they will share information with third party providers in order to make services function as expected (weather requests go to a weather service, for example).

“Amazon processes and retains your Alexa Interactions and related information in the cloud in order to respond to your requests (e.g., ‘Send a message to Mom’), to provide additional functionality (e.g., speech to text transcription and vice versa), and to improve our services. We also store your messages in the cloud so that they’re available on your Alexa App and select Alexa Enabled Products. You or other call participants may be able to ask Alexa to help with certain functions during a call, such as ‘Alexa, volume up’ and ‘Alexa, hang up.’ Certain Alexa Calling and Messaging services are provided by our third party service providers, and we may provide them with information, such as telephone numbers, to provide those services.”

Amazon was asked for logs of what the Echo device heard in the room during a case called the Hot Tub Murder, in which an Arkansas man was accused of killing his friend, a former police officer. Amazon stonewalled for a while, until eventually the accused voluntarily agreed to hand over the data, which Amazon provided the same day. Amazon’s objection to providing the data was that the request was too broad. Echo works by listening for the wake word (“Alexa”) and then sending the recording of your voice command to Amazon’s servers for processing. Recordings are saved remotely; you can see and review the voice requests made to an Echo device in the Alexa app.

Siri, on the other hand, processes the wake word locally. This means that HomePod is not recording or sending information to Apple until after the “Hey Siri” wake word is positively recognized. Once it is, the request is sent to Apple’s servers using a random ID rather than an identifier tied to a user account, such as your iCloud account. If you turn Siri off, Apple deletes those requests and any user data associated with them, and Siri has to start learning again when you re-enable it.

Apple’s data path for differential privacy

Apple also uses “differential privacy.” The idea is that differential privacy limits how much usable individual data a company can collect in the first place, so that even if the company were a bad actor, the user could still trust that their information is private. Differential privacy prevents the correlation of data that could identify a user. An article on Apple’s Machine Learning Journal says, “It is rooted in the idea that carefully calibrated noise can mask a user’s data. When many people submit data, the noise that has been added averages out and meaningful information emerges.”
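
A classic way to see the principle is randomized response: each device flips its true answer with some probability before reporting, so no single report can be trusted, yet the aggregate still converges on the real proportion. The Swift sketch below illustrates that idea only; it is not Apple’s actual mechanism.

    import Foundation

    // Sketch only: randomized response as an illustration of calibrated noise.
    // Each device flips its true answer with probability p before reporting.
    func randomizedResponse(truth: Bool, flipProbability p: Double = 0.25) -> Bool {
        return Double.random(in: 0..<1) < p ? !truth : truth
    }

    // Recover an unbiased estimate of the true proportion from noisy reports:
    // observed = truth * (1 - p) + (1 - truth) * p, solved for truth.
    func estimateTrueProportion(reports: [Bool], flipProbability p: Double = 0.25) -> Double {
        let observed = Double(reports.filter { $0 }.count) / Double(reports.count)
        return (observed - p) / (1 - 2 * p)
    }

    // Simulate 100,000 devices where 30% truly have some property.
    let truths = (0..<100_000).map { _ in Double.random(in: 0..<1) < 0.3 }
    let reports = truths.map { randomizedResponse(truth: $0) }
    print(estimateTrueProportion(reports: reports))   // prints roughly 0.3

Because every report is noisy before it leaves the device, the server never holds a trustworthy record of any individual, which is the property the quote above describes.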

Apple employs differential privacy locally, on the device, rather than applying it to data stored on a server. The benefit to the user is that the data is randomized on the device before it ever hits a server. Google employs differential privacy in Chrome, but it isn’t implemented system-wide: not in Gboard, and not in Google Home or the Assistant.

The HomePod is not a multi-user device, yet it will let whoever is in the room read and send your messages by voice. If you’re not in the room and your family or roommate is, they can prank you by listening to your messages and then replying with rude responses. There are definitely some practical privacy considerations that the user needs to make and that Apple hasn’t addressed yet. The question remains: who do you trust, Apple, Google, or Amazon, and to what degree?

The HomePod is not a halo product. The iPod was a halo product. The iPhone is a halo product. If the HomePod were a halo product, it would be one you could purchase stand-alone and use out of the box without anything else. And that’s sort of true, but with a lot of caveats.

You need an iPhone to set it up. You need Apple Music to kick off music by voice command. You need an iTunes library, or another app with music in it, on an iOS device in order to stream to it over AirPlay, and AirPlay is a second-class citizen compared to voice control if you’re interested in a voice-first interaction paradigm. If you attempt to use it as an AirPlay speaker for Apple TV audio, that’s going to work until the moment you send AirPlay audio to it from another device or play Apple Music through it, and then you’ll need to re-establish the connection from the Apple TV.
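
For what the push model looks like in practice, here is a minimal sketch of streaming audio from an iOS app and offering the system AirPlay route picker so the user can send the audio to a speaker like HomePod. The stream URL is a placeholder; AVRoutePickerView and AVPlayer are Apple’s AVKit and AVFoundation APIs.

    import AVFoundation
    import AVKit
    import UIKit

    // Sketch only: play an audio stream and let the user route it to an
    // AirPlay speaker (such as HomePod) via the system route picker.
    final class AirPlayDemoViewController: UIViewController {
        private var player: AVPlayer?

        override func viewDidLoad() {
            super.viewDidLoad()

            // Configure the audio session for playback so AirPlay routing is available.
            try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [])
            try? AVAudioSession.sharedInstance().setActive(true)

            // AVRoutePickerView presents the system AirPlay picker (iOS 11+).
            let routePicker = AVRoutePickerView(frame: CGRect(x: 20, y: 80, width: 44, height: 44))
            view.addSubview(routePicker)

            // Placeholder URL; any AVPlayer-playable audio source would do.
            if let url = URL(string: "https://example.com/stream.m3u8") {
                player = AVPlayer(url: url)
                player?.play()
            }
        }
    }

This is the push model described above: the phone remains the source, and if another device takes over the speaker, the route has to be re-established from the sender.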

There’s a lot of discussion framing the HomePod as a speaker that happens to be smart (with assistants), versus Google Home and Echo products that are smart assistants that happen to be in the form of speakers. There’s some truth to this, although I firmly believe the future is one where these assistants do become a primary form of computing input.

The history of computing is one where adoption has gone up as the interface has gotten easier, from the command line to the graphical interface, to touch, and now to voice. While we’re thinking historically, we should address the iPod Hi-Fi. Apple’s retail stores do more dollar value in sales per square foot than anyone else’s, and the stores have always had a section devoted to speakers, selling Bose, Bowers & Wilkins, Harman Kardon/JBL, and even Bang & Olufsen, Libratone, and Devialet.

Apple launched the iPod Hi-Fi because it wanted to make a product with a design it could love, and to capture some of those sales. It didn’t work. But among the speakers in that space and price segment, only Libratone and Sonos have added smart assistants, so there’s still plenty of room for Apple to take a share of the market.

What the HomePod does have going for it is that it’s easy to play music to it from any iOS device, that it will, by all accounts, sound amazing when you do, and that it’s going to work in many languages and many countries where Alexa and Google Assistant aren’t available, or aren’t available in the primary language. That alone makes it well-positioned for success.




