Photo: Taking a photo on a smartphone at a concert. (Getty Images)

Google’s Pixel Gets Unorthodox Zoom From AI

Instead of adding multiple cameras with complicated optics to its phones, Google has opted for a single extra lens that relies on AI and processing to fill in the quality gap.

(Bloomberg) -- Google’s latest smartphone demonstrates how artificial intelligence and software can enhance a camera’s capabilities, one of the most important selling points of any mobile device.

The Pixel 4, the latest entrant in a phone line defined by its cameras, touts an improved ability to zoom in when shooting photos as its biggest upgrade. But the Alphabet Inc. company isn’t going about it the way Samsung Electronics Co., Huawei Technologies Co. or Apple Inc. have -- instead of adding multiple cameras with complicated optics, Google has opted for a single extra lens that relies on AI and processing to fill the quality gap.

In place of the usual spec barrage, Google prefers to talk about a “software-defined camera,” Isaac Reynolds, product manager on the company’s Pixel team, said in an interview. The device should be judged by the end-product, he argued, which Google claims is a 3x digital zoom that matches the quality of optical zoom from multi-lens arrays. The Pixel 4 has two lenses with a magnification factor between them that’s less than 2x, and the tech that extends that useful range is almost entirely software.

The success of the Pixel’s camera is instrumental to Google’s broader ambitions: it drives Google Photos adoption, provides more fodder for Google’s image libraries, and helps create better experiences with augmented-reality applications -- such as this year’s new on-screen walking directions in Google Maps.

Super Res Zoom, a feature Google launched last year, turns the slight hand movements of a photographer capturing a shot -- usually a hurdle to crisp images -- into an advantage, producing an image that’s sharper than it otherwise would be. The camera shoots a burst of quick takes, each one from a slightly different position because of the camera shake, then combines them into a single image. It’s an algorithmic trick that lets Google extract more information from the imaging hardware -- and potentially a moat against rivals trying to copy Google, because others can’t just buy the same imaging sensors and replicate the results.
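To make the multi-frame idea concrete, here is a minimal sketch in Python with NumPy -- not Google’s implementation. It places the samples from several slightly shifted burst frames onto a finer pixel grid and averages them; the `merge_burst` helper, the shift values and the synthetic test data are illustrative assumptions.

```python
# A minimal sketch (not Google's pipeline) of the multi-frame idea behind
# Super Res Zoom: small sub-pixel shifts between burst frames let their
# samples be placed onto a finer grid and averaged into one sharper image.
# The shifts here are assumed known; a real pipeline would estimate them.
import numpy as np

def merge_burst(frames, shifts, scale=2):
    """Merge aligned low-res frames onto a grid `scale` times finer.

    frames: list of HxW float arrays (the burst shots)
    shifts: list of (dy, dx) sub-pixel offsets of each frame vs. the first
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))   # accumulated intensity
    weight = np.zeros_like(acc)              # samples landing in each cell

    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each low-res sample to its position on the high-res grid,
        # offset by the camera-shake shift for that frame.
        hi_y = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hi_x = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hi_y, hi_x), frame)
        np.add.at(weight, (hi_y, hi_x), 1.0)

    # Cells hit by at least one sample get the average; empty cells stay zero
    # (a real pipeline would interpolate them from neighbors).
    return np.where(weight > 0, acc / np.maximum(weight, 1), 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.random((64, 64))
    # Simulated handheld burst: the same scene with known sub-pixel shifts and noise.
    shifts = [(0.0, 0.0), (0.25, 0.5), (0.5, 0.25), (0.75, 0.75)]
    frames = [scene + 0.05 * rng.standard_normal(scene.shape) for _ in shifts]
    merged = merge_burst(frames, shifts, scale=2)
    print(merged.shape)  # (128, 128): one frame's worth of pixels on a finer grid
```

The point of the sketch is the mechanics rather than photographic quality: because each frame samples the scene at a slightly different position, the combined result carries more spatial information than any single frame -- which is why camera shake becomes useful rather than harmful.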

To support that reliance on AI and machine learning, Google has designed its own Pixel Neural Core chip for the Pixel 4 lineup. The chip accelerates on-device machine learning and, again, is intended to differentiate Google’s offering from other Android smartphones built around a Qualcomm Snapdragon processor.

The other major tool in Google’s AI kit is called RAISR, or Rapid and Accurate Image Super Resolution, which trains AI on vast libraries of images so it can more effectively enhance their resolution. The system is taught to recognize particular patterns, edges and visual features, so that when it detects them in lower-quality shots, it knows how to improve them. That’s key to creating zoom with “a lot smoother quality degradation,” as Reynolds put it. With more than a billion Google Photos users, the U.S. company has a massive supply of images to train its software on.
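The sketch below shows the flavor of that training idea in miniature -- it is not Google’s RAISR code. It fits a single linear sharpening filter by least squares from pairs of blurry and sharp images, where RAISR learns many such filters bucketed by local edge patterns; the `train_filter` and `apply_filter` helpers and the synthetic training data are illustrative assumptions.

```python
# A minimal sketch of example-based super-resolution: learn, from pairs of
# low- and high-quality images, a filter that predicts each sharp pixel from
# its blurry neighborhood. RAISR learns many such filters keyed to local edge
# features; this toy version learns one global filter.
import numpy as np

PATCH = 5  # each output pixel is predicted from a 5x5 neighborhood

def extract_patches(img, patch=PATCH):
    """Return every patch x patch neighborhood as a row of a matrix."""
    h, w = img.shape
    rows = [img[y:y + patch, x:x + patch].ravel()
            for y in range(h - patch + 1) for x in range(w - patch + 1)]
    return np.array(rows)

def train_filter(blurry_imgs, sharp_imgs, patch=PATCH):
    """Least-squares filter mapping blurry neighborhoods to sharp center pixels."""
    half = patch // 2
    A, b = [], []
    for lo, hi in zip(blurry_imgs, sharp_imgs):
        A.append(extract_patches(lo, patch))
        b.append(hi[half:hi.shape[0] - half, half:hi.shape[1] - half].ravel())
    coef, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
    return coef.reshape(patch, patch)

def apply_filter(img, filt):
    """Apply the learned filter over the image's valid region."""
    patch = filt.shape[0]
    h, w = img.shape
    return (extract_patches(img, patch) @ filt.ravel()).reshape(h - patch + 1,
                                                                 w - patch + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    def blur(img):  # cheap box blur standing in for low-quality capture
        return (img + np.roll(img, 1, 0) + np.roll(img, 1, 1)
                + np.roll(img, (1, 1), (0, 1))) / 4
    sharp = [rng.random((32, 32)) for _ in range(4)]  # stand-in training library
    blurry = [blur(s) for s in sharp]
    filt = train_filter(blurry, sharp)
    restored = apply_filter(blur(rng.random((32, 32))), filt)
    print(filt.shape, restored.shape)  # (5, 5) (28, 28)
```

The larger and more varied the training library, the better such learned filters generalize to new shots -- which is the sense in which Google’s billion-user photo corpus matters.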

Among the other features that Google offers with the Pixel 4 is the ability to identify the faces of people that a user photographs most often and ensure that they’re prioritized when capturing new snapshots -- making sure the camera focuses on them and that their eyes aren’t closed, for instance. That use of software technology has defined Google’s devices to date and is also evident in the way Facebook Inc., Amazon.com Inc. and Apple aim to employ their own AI systems.
