It says something when you can deploy one lens on a phone, yet produce results that not only rival others with multiple lenses, but even surpass them.
That’s the Google Pixel 3 and Pixel 3 XL in a nutshell. Where the company boasted that it had made “the best camera” in last year’s Pixel duo, this year’s rhetoric was toned down in favour of visual proof.
The proof was often in the pudding when shooting with these phones. New features offer tangible benefits and make it easier than ever to capture excellent images.
Sticking with it
Staying consistent, Google used the same cameras and software in both phones, defying rumours by declining to add an extra rear lens.
Both phones use the same 12.2-megapixel image sensor with an f/1.8 lens and 1.4-micron pixels in the rear. Those numbers are identical to the Pixel 2 devices, though the sensor itself is newer. The extra lens instead went on the front, where dual 8-megapixel shooters sit: a wide-angle lens with a 97-degree field of view and f/2.2 aperture, and a standard lens with a 75-degree field of view and f/1.8 aperture.
Despite the physical improvements, it’s the software that mostly carries the load. Google’s camera has hardly been feature-laden, though adding Google Lens can have real benefits out in the world. It’s the newer features, like Night Sight, Top Shot and Super Res Zoom, that help build on top of what is among the best mobile photography choices out there.
Shooting it all
There is a subtlety to the Pixel cameras that I can’t deny. In most cases, I just framed, pointed and shot, without really worrying about whether I’d used the right mode. That kind of trust is huge for the average user who might otherwise be intimidated by more intricate modes.
The timing also couldn’t be better, given the iPhone XS/XS Max, Huawei Mate 20 Pro and OnePlus 6T all launched within a short period of time. I shot relentlessly with both Pixel smartphones, here and in places like London, U.K. The results, in some instances, were beyond my expectations, and that’s saying something.
The tech giant’s HDR+ is the mainstay of the AI that Google has bet on so heavily. If you make software smart enough to interpret a scene and treat it like data it can correct, would the results not be impressive? With what’s going on here, the answer is undoubtedly yes, albeit with some caveats.
Last year, HDR+ Enhanced processed images further to better interpret what was shot. This time around, it’s back and appears to be better than ever. Outside of exposure and white balance controls, manual input is limited in scope. Normally, I would be more harshly critical of Google’s barebones approach to manual control, but somehow, the arrangement here works far more often than not.
When using the Pixel 3/3 XL, I felt like a good spot-up three-point shooter in basketball, who catches a pass and pulls up for a shot with a good chance of making it. Shots usually looked good, sometimes bordering on superb or spectacular.
Google’s software skews more toward shadows and contrast, which is even more evident when viewing images on the OLED displays of either phone. The result usually comes off looking more dynamic, but sometimes, a softer touch is required. Post-processing is a little subtler compared to what others, like Samsung and Huawei, tend to do. Google doesn’t try to sharpen a photo after the fact because it is essentially already doing that while the photo is taken.
The software at work here feels like trickery, but it’s actually utilizing HDR in ways that make photos interesting. Still, I’d like to see Google offer a little more manual control for experienced shooters to decide when and how to utilize the available tools.
The Google Pixel 3’s Portrait Mode is on the left, while the iPhone XS’ Portrait Mode is on the right.
I was impressed with the previous Pixels’ capabilities in producing good portraits with only one lens. What Google needed to figure out was how to improve upon that in tangible ways, given no second lens was coming.
One thing that becomes important throughout is the rear camera’s longer focal length and narrower viewing angle compared to other competing mobile shooters. The Pixels basically use 28mm lenses (based on 35mm equivalent), compared to 26mm for the iPhone XS, Samsung Galaxy Note 9 (and Galaxy S9/S9+) and 27mm for the Huawei Mate 20 Pro. All of them have wider viewing angles to varying degrees.
It may not seem like much, but I often had to stand further back to fit the same subject into the frame. This became problematic when shooting larger structures or capturing a scene where a wider angle would’ve delivered more perspective.
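To put rough numbers on why a couple of millimetres matter, the standard 35mm-equivalent focal lengths can be converted to a diagonal field of view with a little trigonometry. This is just my own back-of-the-envelope sketch, not anything Google ships:

```python
import math

def diagonal_fov(focal_mm: float) -> float:
    """Diagonal field of view (degrees) for a 35mm-equivalent focal length.

    A full-frame (36mm x 24mm) sensor has a ~43.27mm diagonal, so
    FOV = 2 * atan(diagonal / (2 * focal length)).
    """
    diagonal = 43.27
    return math.degrees(2 * math.atan(diagonal / (2 * focal_mm)))

# 35mm-equivalent focal lengths mentioned in the review
for name, f in [("Pixel 3 (28mm)", 28), ("Mate 20 Pro (27mm)", 27), ("iPhone XS (26mm)", 26)]:
    print(f"{name}: {diagonal_fov(f):.1f} degrees")
```

The 28mm Pixel sees roughly four fewer degrees diagonally than a 26mm lens, which is exactly the difference that had me backing up to fit subjects in the frame.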
But these conditions are ideal for portraits. Without a telephoto lens to help create the bokeh effect, Google’s AI works to blur out the background, while also letting you adjust the effect afterwards at any time. Rather than overwrite the image, it saves your selection as another copy.
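Google hasn’t published its exact Portrait Mode pipeline, but the general idea is simple: a machine-learned mask separates the subject from everything else, the background gets blurred, and the result is written out as a separate copy. A toy grayscale sketch of that idea, with a hand-supplied mask standing in for the real segmentation model:

```python
def portrait_blur(image, fg_mask, radius=1):
    """Blur background pixels with a simple box blur; keep the subject sharp.

    image:   2D list of grayscale values.
    fg_mask: 2D list of bools, True where the subject is (in the real phone,
             this mask comes from a segmentation network, not a list).
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # work on a copy, like saving a second file
    for y in range(h):
        for x in range(w):
            if fg_mask[y][x]:
                continue  # subject stays sharp
            acc, n = 0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += image[yy][xx]
                        n += 1
            out[y][x] = acc / n  # averaged neighbourhood = blurred background
    return out

# A 3x3 "photo" with a bright subject pixel in the middle
image = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
mask = [[False, False, False], [False, True, False], [False, False, False]]
result = portrait_blur(image, mask)
```

Because the function returns a new image rather than modifying the input, it mirrors the way the Pixel saves your adjusted bokeh as another copy instead of overwriting the original.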
Portrait shots were great on the Pixel 2, except issues would arise when the foreground subject was at an angle. Hair and other artifacts would also blend into the blur. It’s not perfect in the Pixel 3, but it certainly is better. What stuck out more to me was the level of contrast and low-light portrait performance.
There’s a ‘pop’ to these shots that gives them character, though sometimes a little warmth is called for, which is what I would get with the iPhone XS. I’m not crazy about skin smoothing effects, though they don’t appear as pronounced here as they are with the latest iPhones — an issue Apple is finally going to address.
I’d like Google to experiment with studio lighting effects like those Apple has done; the camera’s contrast and vibrancy would make for interesting results with effects like that.
As I’ve stated repeatedly in camera reviews, I’m not a selfie guy. But plenty of people are, and the wide-angle lens on the front makes backdrops and group photos a lot easier to pull off. The same Portrait mode bokeh effect can apply here, and rather than making the second lens a dedicated option to tap, all I had to do to engage it was zoom out.
I mentioned this last year, too, but I’d like the squeeze feature to be something users can assign for shooting photos. Taking portrait shots or selfies with the superior rear camera would be easier than using a timer or angling to press the volume button as a shutter.
The Pixel 3’s Night Sight off on the left and the feature activated on the right.
This mode wasn’t officially released at the time of this review, but I got to test it in advance through a pre-release APK, and then a final version prior to its rollout. Other than faster rendering, the official release was essentially the same.
The results are almost mind-boggling. The mode can take a really dark scene that would otherwise be pointless to shoot with a phone and light it up. AI is the buzz term Google (and everyone else) uses to encapsulate what the software is doing, which is truly impressive for its simplicity. Point, shoot and in a few seconds, it processes the image and saves it.
This is not unlike the Night Mode in the Huawei P20 Pro and Mate 20 Pro, and Nightscape in the OnePlus 6T. Google’s HDR does a lot of math in the background to make this work, capturing a burst of images at various exposures and ISO levels with slow shutter speeds. This works handheld, which would be all but impossible to do with a regular camera without a tripod.
Google says machine learning algorithms and automatic white balancing fuse the images together, applying what is essentially a set of filters to reduce noise and bring out highlights and shadows. That’s the general overview, but the details are more complicated.
Unlike Huawei, Google doesn’t show the image being composed in real-time, so it’s not evident when shooting. Either way, the results are consistently impressive. I would just prefer a way to manage or control the effect the way Huawei allows through adjusting exposure, ISO, shutter speed and duration.
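Night Sight does far more than this (alignment, motion rejection, learned white balance), but the core win of fusing a burst is easy to demonstrate: averaging N noisy frames of the same scene cuts the noise by roughly the square root of N. A minimal simulation, assuming perfectly aligned frames:

```python
import random
import statistics

def fuse_burst(frames):
    """Average aligned frames pixel-wise; noise drops roughly with sqrt(N)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

random.seed(0)
true_scene = [20.0] * 1000  # a dim, flat patch of the scene
# Simulate a 15-frame burst, each frame corrupted by sensor noise (std dev 8)
burst = [[p + random.gauss(0, 8) for p in true_scene] for _ in range(15)]
fused = fuse_burst(burst)

print(f"single frame noise: {statistics.pstdev(burst[0]):.2f}")
print(f"fused noise:        {statistics.pstdev(fused):.2f}")
```

With 15 frames, the residual noise lands near 8 / √15 ≈ 2, which is why a scene too dark for one exposure becomes usable after a burst.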
Top Shot and Super Res Zoom
The Pixel 3’s Super Res Zoom is turned off in the image on the right and turned on in the image on the left.
These were the two other AI-assisted modes Google highlighted as useful breakthroughs for its phones. Top Shot captures multiple frames each time you snap a shot, letting you select one for the best timing. It’s a feature I wish I had on trips abroad over the last 12 months.
The beauty of it is that it’s always in play: you just take a photo, tap to preview and then swipe up to see the timeline. You do have to have Motion Photos turned on, and the catch is that HDR+ Enhanced can’t be on at the same time. Catching blinks is fine, but Top Shot is most useful for moving subjects, where freezing the perfect moment is otherwise hard.
Super Res Zoom is Google’s way of offsetting the fact the Pixel 3 phones have no optical zoom. At 2x digital zoom, the results make images appear like they were shot with actual glass zooming in, whereas going beyond that, image quality falls off a cliff.
Next to the flash, digital zoom has always been one of those tools I avoid in mobile photography. But it works surprisingly well and is something I would use in a pinch. There’s a fair bit of software working to do this, as outlined in this explainer.
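The trick behind multi-frame super-resolution is that your hand tremor shifts each frame of a burst by a fraction of a pixel, so different frames sample different parts of the scene. A deliberately simplified 1D sketch of that idea, where two half-shifted low-res captures are interleaved to recover detail no single frame contains (Google’s real pipeline aligns and merges far more robustly than this):

```python
def downsample(signal, factor, offset):
    """Simulate one low-res capture of a scene with a sub-sample shift."""
    return signal[offset::factor]

def merge_shifted(frames, factor):
    """Interleave shifted low-res frames back onto the fine grid."""
    out = []
    for i in range(len(frames[0])):
        for frame in frames:
            out.append(frame[i])
    return out

scene = [3, 7, 1, 9, 4, 8, 2, 6]          # the "true" high-res detail
frames = [downsample(scene, 2, k) for k in range(2)]  # two shifted captures
restored = merge_shifted(frames, 2)        # full detail recovered from the burst
```

Neither 4-sample frame alone can reproduce the 8-sample scene, but together, the shifted captures do, which is the intuition behind zooming “without glass.”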
Google didn’t make dramatic changes on the video side, but increased stabilization should make for smoother footage. The ‘video stabilizer’ mode is on by default, just as it was last year. Unfortunately, 4K is still set at a max of 30fps, and slow-motion video is still limited to 120fps and 240fps — without an option for super slow-motion clips at 960fps.
Strangely, Google still hasn’t addressed the ability to turn a Motion Photo into a GIF within the camera app. It still forces users to use third-party apps to do it. In an age where GIFs are common on social media, it seems an odd omission.
I would’ve liked to see more in the video settings. There’s little to adjust in frame rate, which is a shame when 24fps would look amazing with this kind of software. Don’t get me wrong, I would still use it extensively for capturing footage when necessary, but would usually defer to another phone offering greater flexibility.
I should note that Google finally started supporting RAW, giving users even more to work with in post-processing. Move some of those images to Adobe Lightroom, or something similar, and the results could be pretty astounding. Those images are saved to a separate folder within Google Photos, making them easier to access and share afterwards.
For me, the mark of an excellent phone camera is its consistency and versatility. How well does it shoot, and how often can it do it when conditions change? The Pixel 3 and 3 XL have already proven their mettle in both regards, with the camera app proving so popular that it’s being sideloaded onto other Android handsets.
The inclination, understandably, is to compare the Pixel 3’s camera performance to the iPhone’s, but Google’s handsets bring about confidence just about every time the shutter snaps.