The Verge Awards at CES 2018: Hey Google


The CES crowd seems to be dwindling every year as more tech companies wait until events like Mobile World Congress or their own developer shows to announce new products. But at CES 2018, even though we saw fewer things, they were also more interesting than in previous years. We also saw Google, for the first time, make a spectacle out on the convention floors, fighting back against Amazon’s Alexa takeover.

Cover CES enough times, and you’ll also see trends start and grow from the show floor booths. Fingerprint readers embedded directly into touchscreens, for example, might be the new standard — just as the Qi wireless standard became universal this year after Apple adopted it on its latest iPhone models.

While there’s no one breakout product that stole the show this year, these are the pieces of technology that will shape 2018 and beyond. —Natt Garun


Photo by James Bareham / The Verge

Best in Show

CES tends to have this weird duality where devices from the far-out future sit alongside iterative updates to gadgets from the here and now. There’s rarely a futuristic object from the near future. But this year, we got exactly that in the shape of a Vivo smartphone with the fingerprint reader built into the display. A Synaptics optical sensor sits just under the display and peers through the gaps between the OLED dots to recognize the unique pattern of your fingerprint. Using it feels just like any of the physical biometric sensors we’re used to — except it no longer requires dedicated space on the front of your phone, and thus allows for some very sleek, no-compromise designs. The best part is that the Synaptics optical fingerprint reader is already in mass production, and 2018 looks set to be full of new smartphones that use it. Vivo is just the beginning. —Vlad Savov


Photo by James Bareham / The Verge

Most in Show (Formerly Best Hype)

How do you beat a voice assistant that’s absolutely everywhere at CES? By being even more everywhere. After two years of Alexa being the dominant voice assistant at the show, Google came out in full force, making sure the Google Assistant was inside more gadgets — and more types of gadgets — than its opponent. Now, maybe that’s because everything already seems to have Alexa built in. (Many of this year’s gadgets supported both assistants, if not more.) Google also got its Assistant into a wide variety of products, ranging from speakers, screens, and headphones to pet feeders, infotainment systems, TVs, a light switch, and yeah, even more speakers. You really couldn’t go more than a few minutes this year without hearing about a new Google Assistant integration. While Alexa may be integrated in more things (roughly 4,000 products to Google’s 1,500 compatible devices), more people paid attention to Google Assistant this year than ever.

We certainly don’t know that all of these will be good, or even useful, but this year made it very clear that the battle over voice assistants is still getting started: Google is keeping pace with Amazon, third parties aren’t choosing sides, and we’re all going to end up with a lot of choice — even if that could lead to a lot of confusion when one of your smart gadgets accidentally gets set to Bixby. —Jake Kastrenakes


Photo by Chris Welch / The Verge

Best TV

In its quest to come up with a better TV than LG’s OLEDs, Samsung took a major leap forward at CES 2018 by introducing The Wall. The Wall is a modular TV that uses MicroLED technology — with many of the same perks as OLED, but fewer drawbacks — to create its incredibly bright, splashy picture. The Wall’s modularity allows it to be customized to any practical size. Is it going to be expensive when it launches this spring? Undoubtedly. Is it a bit weird to see seams running through a TV when you’re looking at it up close? Sure. But The Wall stood out as something new when most TV makers like LG, Sony, and TCL played it safe this year with modest improvements to their products from a year ago. —Chris Welch


Photo by Sam Byford / The Verge

Best Robot

While robots struggle to become useful beyond vacuum work, many manufacturers have been relying on charm to get by. Huge LCD faces, emotive eyes, endearing movements, and… not a lot of robotics. But this year, Sony showed how a truly charming robot should look and act, and resurrected a beloved product in the process: Aibo is back. Filled with sensors and servos, Aibo’s adorable movements, lifelike responses, and undeniable status as a good robot dog won our hearts this year. And after all, what could be more useful than love? —Paul Miller


Photo: Sennheiser

Best Headphones

The headphones industry is in a state of major flux, so it’s fitting that its best exemplar at CES was an unfinished but gorgeous pair of audiophile cans. Sennheiser brought only four demo units of its brand-new HD 820 closed-back headphones, each of them handmade especially for the big show. We won’t be seeing these $2,400 headphones on sale until the summer, but they still wowed CES visitors with their unique Gorilla Glass window on the sides and that characteristic Sennheiser flagship sound. Sennheiser has a supremely well-regarded set of audiophile headphones in the HD 800 S, and the HD 820 are simply a closed version: the difference is that the new headphones can be listened to without disturbing others around you. —Vlad Savov


Photo by Chaim Gartenberg / The Verge

Best CES

It might not be practical. It probably won’t ever ship. But Razer’s Project Linda prototype is undeniably the most “CES” product out on the show floor. Let Google throw around how useful and practical its Assistant is now. Across the hall, Razer is shoving a smartphone into a laptop, using it simultaneously as the brains and the trackpad, and forcing you to rethink where the line between a phone and a laptop sits. I mean, just look at this thing! It’s like a concept car for gadgets: aspirational and sensational.

Project Linda feels like a product that was pulled out of a future that might never be, yet somehow it’s here in the real world to see and touch and use today, years ahead of schedule. And even if that vision is shattered when everyone leaves Vegas and Linda ends up shelved alongside dozens of other CES prototypes, at least for one week we all could dream. —Chaim Gartenberg


Photo: Nvidia

Best Gaming

I’m not really a proponent of 4K gaming. It doesn’t make a profound difference on all but the biggest of TVs, and even then I think it’s often not worth the hit to performance and frame rate. But Nvidia’s Big Format Gaming Display — the BFGD — gets you the best of both worlds. It’s a display spec with manufacturers including Asus, HP, and Acer on board; all models use 65-inch 4K 120Hz panels with HDR capability. But the real breakthrough is that they work with Nvidia’s G-Sync technology, bringing the unbeatable smoothness of a high-end gaming monitor to a far bigger screen size. And although Nvidia is careful to point out that BFGDs aren’t TVs, the company is building in its excellent Shield streaming box anyway. —Sam Byford


Photo by Natt Garun / The Verge

Most Touching

Point blank, a lot of CES is just useless junk, and we fully expected something with Aflac branding all over it to be equally bad. Instead, we got a heartwarming robot toy that has no camera, no voice assistant, and no games. It does, however, comfort children battling cancer, helping them feel less alone in their fight and giving them some control over their emotions in a situation that’s very much out of their control. Sniff. The toy won’t be sold, but instead given to children at care centers nationally for free. Extra sniff. —Natt Garun


Photo by Sam Byford / The Verge

Best Monitor

LG’s 34WK95U is looking like an amazing monitor for desktop productivity. It’s a 34-inch 21:9 5120 x 2160 panel with 98 percent P3 color gamut coverage and Thunderbolt 3 connectivity. In other words, it’s ultrawide, ultra vibrant, and ultra convenient. It also supports HDR for times when you want to kick back with a movie after a hard day’s multitasking. It’s going to be expensive, but there’s nothing I want more on my desk this year. —Sam Byford


Photo by Jake Kastrenakes / The Verge

Best Laptop

It’s been a relatively quiet CES year for crazy laptops, but Dell has managed to stand out. The new XPS 15 2-in-1 combines the tablet features you’d expect with a new Maglev keyboard. Dell is using a brand-new mechanism in which magnets beneath each key provide additional feedback, creating a clickier keyboard than you’d normally expect from 0.7mm of travel. It feels a little odd at first, but I liked the clicky keyboard after using it a little more.

The XPS 15 also flips over like a Lenovo Yoga, and it’s one of the first laptops to pair Intel chips with discrete AMD Radeon RX Vega M graphics. Intel and AMD haven’t worked together since the ‘80s, and I’m hopeful this might mean we’ll see a lot more regular (not giant) laptops that are capable of playing games in the future. —Tom Warren


Photo: Eargo

Most Useful Health

I’m deeply skeptical about CES health gadgets because most of them are useless. You have headsets and trackers and connected everything to harvest data, but half the time the data is inaccurate and the gadget is overpriced. And even when it’s accurate, it’s usually more information than any of us know what to do with. Tracking every minute of your sleep or every beat of your heart doesn’t mean anything without useful context, which is exactly what gadgets are bad at providing.

The one category that shows promise this year is smart and stylish hearing aids from companies like ReSound, Oticon, and Eargo. About 80 percent of people with hearing loss don’t wear aids, in part because of the stigma. By making hearing aids “cool” — whether through design or through their ability to connect with doorbells — these companies are encouraging people to get the help they need. Instead of inventing a problem or saddling us with unrealistic performance expectations, these companies are improving tech that addresses an existing problem, one that more and more people will have as we age. —Angela Chen


Photo: DJI

Best Accessory

DJI’s Osmo smartphone stabilizer is fantastic for turning your smartphone videos into smooth, professional-looking shots, but the original $299 price tag was just too high for most consumers. The new Osmo Mobile 2 fixes that, with a $129 price tag that makes it way more affordable, along with nearly quadruple the battery life. Your new vlogging career awaits. —Chaim Gartenberg


Photo: Daimler AG

Best Car Tech

I was all set to mock Mercedes-Benz for using CES to, first, launch its own infotainment system for future cars (yawn) and, second, make everyone say “Hey, Mercedes” to issue voice commands. But the mocking stopped when I saw the new Mercedes-Benz User Experience in action. Automakers have been getting closer and closer to making their own in-car infotainment systems act more like operating systems from Apple or Google, and therefore be more natural to use. It’s not perfect (there’s no Google search, for example), but the fact that you can tell your car it’s too cold or too warm inside and it actually changes the temperature of the heating / ventilation system is cool. And Mercedes made it work, along with awesome ambient lights that glow either cool or warm, depending on what you tell it to do. Alexa who? —Zac Estrada


Photo by Sean O’Kane / The Verge

Best Rideable

It may not be the most thrilling or unique idea we saw at CES this year, but the Ford Ojo scooter is one of the best light electric vehicles I’ve ridden. California company Ojo has taken the best things about a number of other electric scooters and put them all here in one place. It’s fast, has an abundance of range (up to 50 miles), and it’s built like a tank. The $2,150 price tag for the fully loaded model might spook some people, but this is the rare case where it feels justified.

As in other years, there were many different forms of electric skateboards, scooters, and bikes at CES. Some of them are wild, but many of them aren’t very practical. Other designs might turn a few more heads on the street, but the Ford Ojo scooter is one of the rare rideables that can get you where you’re going even if it’s far away. —Sean O’Kane


Photo by Tom Warren / The Verge

Best VR / AR

HTC’s new Vive Pro comes with all the upgrades its virtual reality headset needed: a noticeable resolution bump, built-in headphones, and a new wireless adapter. That means Vive owners will be able to use full room-scale VR without any cumbersome cables. Of course, you still need to set up Valve’s obnoxious laser towers, and you still need a powerful gaming PC to use SteamVR software. But the Vive Pro is an interesting challenge to the Rift, which Oculus has been aggressively discounting for months to spur VR adoption. We’ll see how the VR platform wars heat up after this. —Nick Statt


Photo by Dami Lee / The Verge

Best Disaster

It’d be easy to just write this category off as a fun joke about how all of CES is a disaster. But here at The Verge, we take things more seriously, which is why the Best Disaster award goes to the torrential CES flood that drowned Las Vegas on the first day of the show. Drenching the city with its first precipitation in months, the rain threw all of CES for a loop, flooding streets, closing Google’s much-hyped playground, and leading to endless taxi lines.

But the biggest ramification of the rain wouldn’t hit until a day later when condensation from the downpour would blow a transformer, leading to a massive power outage in the now infamous #CESBlackout that shut down the convention center for nearly two hours, right at the peak of the conference. —Chaim Gartenberg


Photo by Alix Diaconis / The Verge

Least-Threatening AI

I’m not being sarcastic when I say that this massive AI ping-pong robot isn’t in the least bit threatening, but rather the opposite. It doesn’t have voice or facial recognition, so there’s no chance of it ever abusing its AI powers to spy on us. Rather, the only thing it recognizes is a good, clean game of table tennis. (Okay, technically, it recognizes the ball in a 3D space, and the player’s movements to determine their skill level, too.) As Forpheus and I played, a cutesy, high-pitched voice encouraged me with messages like “This rally is fun!” and “You’re getting better!” Listed in Guinness World Records as the world’s first table tennis tutor, Forpheus displays the player’s skill level on its LED net, constantly updating it as the robot learns more about you.

After starting out at a skill level of 55 out of 100, I’m proud to say that training with Forpheus brought me up to 77. It’s too bad that Forpheus isn’t for consumers (it was just created to showcase Omron’s technologies), because my experience was just like playing with a friend who’s really, really good at ping-pong. —Dami Lee


Photo: LG Display

Best Prototype

LG Display is a nominally separate company from LG Electronics, and its job is to advance the state of the art in display technology and wow CES visitors with outlandish prototypes. The 65-inch rollable display that the company brought to this year’s show is a perfect example. It’s OLED, it has a 4K resolution, and it looks absolutely stunning. But it can also roll down into its base, rather like a projector screen flipped on its head, and thereby adopt different aspect ratios. The TV goes from its native 16:9 to a wider 21:9 cinema mode at the press of a button, and it can also be hidden away entirely for a discreet home theater look. —Vlad Savov


Keeping Spectre secret


When Graz University of Technology researcher Michael Schwarz first reached out to Intel, he thought he was about to ruin the company’s day. His team had found a problem with their chips, a vulnerability that was both profound and immediately exploitable. His team finished the exploit on December 3rd, a Sunday afternoon. Realizing the gravity of what they’d found, they emailed Intel immediately.

It would be nine days until Schwarz heard back. But when he got on the phone with someone from Intel, Schwarz got a surprise: the company already knew about the CPU problems and was desperately figuring out how to fix them. Moreover, the company was doing its best to make sure no one else found out. They thanked Schwarz for his contribution, but told him what he had found was top secret, and gave him a precise day when the secret could be revealed.

The flaw Schwarz — and, he learned, many others — had discovered was potentially devastating: a design-level chip flaw whose mitigations could slow down every processor in the world, with no perfect fix short of a gut redesign. It affected almost every major tech company in the world, from Amazon’s server farms to chipmakers like Intel and ARM. But Schwarz had also come up against a secondary problem: how do you keep a flaw this big a secret long enough for everyone involved to fix it?

Disclosure is an old problem in the security world. Whenever a researcher finds a bug, the custom is to give vendors a few months to fix the problem before it goes public and bad guys have a chance to exploit it. But as those bugs affect more companies and more products, the dance becomes more complex. More people need to be told and kept in confidence as more software needs to be quietly developed and pushed out. With Meltdown and Spectre, that multi-party coordination broke down and the secret spilled out before anyone was ready.

That early breakdown had consequences. After the release, basic questions of fact became muddled, like whether AMD chips are vulnerable to Spectre attacks (they are), or whether Meltdown is specific to Intel. (ARM chips are also affected.) Antivirus systems were caught off guard, unintentionally blocking many of the crucial patches from being deployed. Other patches had to be stopped mid-deployment after crashing machines. One of the best tools available for dealing with the vulnerability has been Retpoline, developed by Google’s incident response team and initially planned for release alongside the bug itself. But while the Retpoline team says they weren’t caught off guard, the code for the tool wasn’t made public until the day after the official announcement of the flaw, in part because of the haphazard break in the embargo.

Perhaps most alarming, some crucial outside response groups were left out of the loop entirely. The most authoritative alert about the flaw came from Carnegie Mellon’s CERT division, which works with Homeland Security on vulnerability disclosures. But according to senior vulnerability analyst Will Dormann, CERT wasn’t aware of the issue until the Meltdown and Spectre websites went live, which led to even more chaos. The initial report recommended replacing the CPU as the only solution. For a processor design flaw, the advice was technically true, but only stoked panic as IT managers imagined prying out and replacing the central processor for every device in their care. A few days later, Dormann and his colleagues decided the advice wasn’t actionable and changed the recommendation to simply installing patches.

“I would have liked to have known,” Dormann says. “If we’d known about it earlier, we would have been able to produce a more accurate document, and people would have been more educated right off the bat, as opposed to the current state, where we’ve been testing patches and updating the document for the past week.”

Still, maybe that damage was inevitable? Even Dormann isn’t sure. “This happens to be the largest multi-party vulnerability we’ve ever been part of,” he told me. “With a vulnerability of this magnitude, there’s no way that it’s going to come out cleanly and everyone’s going to be happy.”


The first step in the Meltdown and Spectre disclosures came six months before Schwarz’s discovery, with a June 1st email from Google Project Zero’s Jann Horn. Sent to Intel, AMD and ARM, the message laid out the flaw that would become Spectre, with a demonstrated exploit against Intel and AMD processors and troubling implications for ARM. Horn was careful to give just enough information to get the vendors’ attention. He had reached out to the three chipmakers on purpose, calling on each company to figure out its own exposure and notify any other companies that might be affected. At the same time, Horn warned them not to spread the information too far or too fast.

“Please note that so far, we have not notified other parts of Google,” Horn wrote. “When you notify other parties about this issue, please don’t share information unnecessarily.”

Figuring out who was affected would prove difficult. There were chipmakers to start, but soon it became clear that operating systems would need to be patched, which meant looping in another round of researchers. Browsers would be implicated, too, along with the massive cloud platforms run by Google, Microsoft, and Amazon, arguably the most tempting targets for the new bug. By the end, dozens of companies from every corner of the industry would be compelled to issue a patch of some kind.

Project Zero’s official policy is to offer only 90 days before going public with the news, but as more companies joined, Zero seems to have backed down, more than doubling the patch window. As months ticked by, companies began deploying their own patches, doing their best to disguise what they were fixing. Google’s Incident Response Team was notified in July, a month after the initial warning from Project Zero. The Microsoft Insiders program sent out a quiet, early patch in November. (Intel CEO Brian Krzanich was making more controversial moves during the same period, arranging an automated stock sell-off in October to be executed on November 29th.) On December 14th, Amazon Web Services customers got a warning that a wave of reboots on January 5th might affect performance. Another Microsoft patch was compiled and deployed on New Year’s Eve, suggesting the security team was working through the night. In each case, the reasons for the change were vague, leaving users with little clue as to what was being fixed.

Still, you can’t rewrite the basic infrastructure of the internet without someone getting suspicious. The strongest clues came from Linux. Powering most of the cloud servers on the internet, Linux had to be a big part of any fix for Spectre and Meltdown. But because Linux is an open-source system, any changes had to be made in public. Every update was posted to a public Git repository, and all official communications took place on a publicly archived listserve. When kernel patches started to roll out for a mysterious “page table isolation” feature, close observers knew something was up.

The biggest hint came on December 18th, when Linus Torvalds merged a late-breaking patch that changed the way the Linux kernel interacts with x86 processors. “This, besides helping fix KASLR leaks (the pending Page Table Isolation (PTI) work), also robustifies the x86 entry code,” Torvalds explained. The most recent kernel release had come just one day earlier. Normally a patch would wait to be bundled into the next release, but for some reason, this one was too important. Why would the famously cranky Torvalds include an out-of-band update so casually, especially one that seemed likely to slow down the kernel?

It seemed even stranger when month-old emails turned up suggesting that the patch would be applied to old kernels retroactively. Taking stock of the rumors on December 20th, Linux veteran Jonathan Corbet said the page table issue “has all the markings of a security patch being readied under pressure from a deadline.”

Still, outside observers only knew half the story. Page Table Isolation is a way of separating kernel space from user space, so clearly the problem was some kind of leak in the kernel. But it still wasn’t clear how the kernel was breaking or how far the mysterious bug would reach.
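
(A minimal sketch, not from the original reporting: newer Linux kernels expose their own mitigation status through the /sys/devices/system/cpu/vulnerabilities/ files, an interface added around the time of the disclosure. The short C program below simply reads that file; the exact path and output format are assumptions based on that later interface, not something described in this article.)

```c
/* Minimal sketch: report the kernel's own Meltdown mitigation status.
 * Assumes a Linux kernel recent enough to expose the
 * /sys/devices/system/cpu/vulnerabilities/ files. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/devices/system/cpu/vulnerabilities/meltdown";
    char status[256];
    FILE *f = fopen(path, "r");

    if (f == NULL) {
        /* Older kernels and non-Linux systems do not have this file. */
        printf("%s not found: kernel predates the reporting interface\n", path);
        return 1;
    }
    if (fgets(status, sizeof(status), f) != NULL)
        printf("meltdown status: %s", status);   /* e.g. "Mitigation: PTI" */
    fclose(f);
    return 0;
}
```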

The next break came from the chipmakers themselves. Under the new patch, Linux listed all x86-compatible chips as vulnerable, including AMD processors. Since the patch tended to slow down the processor, AMD wasn’t thrilled about being included. The day after Christmas, AMD engineer Tom Lendacky sent an email to the public Linux kernel listserve explaining exactly why AMD chips didn’t need a patch.

“The AMD microarchitecture does not allow memory references, including speculative references, that access higher privileged data when running in a lesser privileged mode when that access would result in a page fault,” Lendacky wrote.

That might sound technical, but for anyone trying to suss out the nature of the bug, it rang out like a fire alarm. Here was an AMD engineer, who surely knew the vulnerability from the source, saying the kernel problem stemmed from something processors had been doing for nearly 20 years. If speculative references were the problem, it was everyone’s problem — and it would take much more than a kernel patch to fix.
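
(To make “speculative memory references” concrete: a speculative load can pull data into the CPU cache even when the access is later discarded, and a cached load is measurably faster than an uncached one. That timing gap is the side channel the eventual exploits relied on. The sketch below is illustrative only and is not an exploit; it assumes an x86-64 machine with GCC or Clang and just demonstrates the flush-and-reload timing primitive.)

```c
/* Illustrative flush+reload timing sketch (not an exploit): shows that a
 * cached load is much faster than an uncached one, which is the signal
 * Meltdown/Spectre-style attacks use to observe speculative accesses. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

static uint8_t probe[4096];

/* Time a single load of *addr in cycles; a fast load means the cache
 * line was already present. */
static uint64_t time_load(volatile uint8_t *addr)
{
    unsigned int aux;
    uint64_t start, end;

    _mm_mfence();
    start = __rdtscp(&aux);
    (void)*addr;                         /* the load being timed */
    end = __rdtscp(&aux);
    _mm_lfence();
    return end - start;
}

int main(void)
{
    _mm_clflush(probe);                  /* evict the probe line from the cache */
    _mm_mfence();
    uint64_t cold = time_load(probe);

    (void)*(volatile uint8_t *)probe;    /* touch the line so it is cached again */
    _mm_mfence();
    uint64_t warm = time_load(probe);

    printf("cold load: %llu cycles, warm load: %llu cycles\n",
           (unsigned long long)cold, (unsigned long long)warm);
    return 0;
}
```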

“That was the trigger,” says Chris Williams, US bureau chief for The Register. “No one had mentioned speculative memory references up to that point. It was only when that email came out that we realized it was something really serious.”

Once it was clear this was a speculative memory problem, public research papers could fill in the rest of the picture. For years, security researchers had looked for ways to crack the kernel through speculative execution, with Schwarz’s team from Graz publishing a public mitigation paper as recently as June. Anders Fogh had published an attempt at a similar attack in July, although he’d ultimately come away with a negative result. Just two days after the AMD email, a researcher who goes by “brainsmoke” presented related work at the Chaos Communication Congress in Leipzig, Germany. None of those resulted in an exploitable bug, but they made it clear what an exploitable bug would look like — and it looked very, very bad.

(Fogh said it was clear from the beginning that any workable bug would be disastrous. “When you start looking into something like this, you know already that it’s really bad if you succeed,” he told me. After the Meltdown and Spectre releases and the ensuing chaos, Fogh has decided not to publish any of his further research on the topic.)

In the week that followed, rumors of the bug started to filter downstream through Twitter, listserves, and message boards. A casual benchmark shared on the PostgreSQL listserve found a 17 percent decline in performance — a terrifying number for anyone waiting to patch. Other researchers wrote informal posts rounding up what they knew, careful to present everything they knew as just a rumor. “[This post] mostly represents guesswork until such times as the embargo is lifted,” one recap wrote. “Many fireworks and much drama is likely when that day arrives.”

By New Year’s Day, the rumors had become impossible to ignore. Williams decided it was time to write something. On January 2nd, The Register published its piece on what they called an “Intel processor design flaw.” The piece laid out what had happened on the Linux listserve, the ominous AMD email, and all the early research. “It appears, from what AMD software engineer Tom Lendacky was suggesting above, that Intel’s CPUs speculatively execute code potentially without performing security checks,” the piece read. “That would allow ring-3-level user code to read ring-0-level kernel data. And that is not good.”

Publishing the piece would prove to be a controversial decision. Everyone in the industry assumed there was an embargo to give companies time to patch. Spreading the news early cut into that time, giving criminals more of a chance to exploit the vulnerabilities before patches were in place. But Williams maintains that by the time The Register published, the secret was already out. “I thought we had to give people a heads up that, when the patches come out, these are patches you should really install,” Williams says. “If you’re smart enough to exploit this bug, you probably could have worked it out without us.”

In fact, the embargo would only hold for one more day. The official release had been planned for January 9th, in line with Microsoft’s patch Tuesday cycle and square in the middle of the Consumer Electronics Show, which might dampen the bad news. But the combination of wild rumors and available research made the news impossible to contain. Reporters flooded researchers’ inboxes, and anyone involved had to do their best to keep quiet as it seemed less and less likely that the secret would keep for another week.

The tipping point was brainsmoke himself. One of the few kernel researchers who wasn’t subject to the developer embargo, brainsmoke took the rumors as a roadmap and set out to find the bug. The morning after The Register’s story, he found it, tweeting out a screenshot of his terminal as proof of concept. “No page faults required,” he wrote in a follow-up tweet. “Massaging everything in/out-of the right cache seems to be the crux”

Once researchers saw that tweet, the jig was up. The Graz team was determined not to spill the beans before Google or Intel, but after the public proof of concept spread, word came from Google that the embargo would lift that day, January 3rd, at 2PM PT. At zero hour, the full research went live at two branded websites, complete with pre-arranged logos for each bug. Reports flooded in from ZDNet, Wired, and The New York Times, often with information that had been gathered only hours before. After more than seven months of planning, the secret was finally out.


It’s still hard to know how much that early breakdown cost. Patches are still being deployed, and benchmarks are still tallying up the ultimate damage from the fixes. Would things have gone more smoothly with an extra week to prepare? Or would it have only delayed the inevitable?

There are plenty of formal documents telling you how a vulnerability announcement like this should happen, whether from the International Organization for Standardization, the US Department of Commerce, or CERT itself, although they offer few hard answers for a case as sprawling as this one. Experts have been struggling with these questions for years, and the most experienced have given up looking for a perfect answer.

Katie Moussouris helped write Microsoft’s playbook for these events, along with the ISO standards and countless other guides through the multi-party disclosure mess. When I asked her to rate this week’s response, she was kinder than I expected.

“This is probably the best that could have been done,” Moussouris told me. “The ISO standards will tell you what to consider, but they won’t tell you what to do in the heat of that moment. It’s like reading the instructions and running a couple of fire drills. It’s good to have a plan, but when your building is on fire, the way you act will not be according to plan.”

The stranger thought is that, as technology becomes more centralized and interconnected, this kind of five-alarm fire may be harder to avoid. As shared libraries like OpenSSL spread, they raise the risk of a massively multi-party bug like Heartbleed, the internet version of a monocrop blight. This week showed the same effect in hardware. Speculative execution became an industry standard before we had time to secure it. With most of the web running on the same chips and the same cloud services, that risk multiplies even further. When a vulnerability finally surfaced, the result was an almost impossible disclosure task.

As messy as it is, that scramble has become hard to avoid whenever a core technology breaks. “In the ‘90s we used to think one-vulnerability, one-vendor, and that was the majority of the vulnerabilities you saw. Now, almost everything has some multi-party coordination element,” says Moussouris. “This is just what multi-party disclosure looks like.”


2017: A year in photographs on The Verge


As 2017 finally draws to a close, we can all look back on a year that in many ways seems to have lasted 10. And we thought a lot happened last year. 2017 has been very interesting and emotional. Again.

When it comes to the photography on The Verge, 2017 was the year of “more.” Not only have we taken and published more original photographs than ever before, more people at The Verge have been taking them. More of our writers and reporters are now regularly shooting their own photos to accompany their stories, and I hope even more will pick up their cameras and phones in 2018.

While the range of subjects and the sheer volume of gadgets we photographed increased exponentially, we also created a number of new photographic formats during 2017: We photographed cars as gadgets for ScreenDrive; arranged items neatly for What’s in Your Bag; shot artists and their art for Technographica; and created isometric patterns with gadgets for Guidebook and our Back-to-School and Holiday Gift Guides.

This past year also saw a dramatic increase in the amount of original content we created for our social media channels (particularly Instagram Stories) and a far greater use of movement in the imagery, particularly the use of stop motion animation.

So once again it is time to select our favorite photos from this past year, including many from a growing number of our regular freelancers. As we have so many photographs to choose from, making a final selection has been incredibly hard, and no doubt we have left out many that should have been included. But after all, there are only so many photos of smartphones, laptops, headphones, and iPhone cases you can reasonably fit into one single post, especially when you factor in the need to leave room for a photo of a cuddly pillow with a wagging tail.

The photographs below are arranged chronologically by the date they were first published. —James Bareham


JANUARY

CES LAS VEGAS: 2017

2017 was my first Consumer Electronics Show. I think it’s safe to say that it was a little different from what I had imagined. In amongst the usual collection of bizarre gadgets like connected underwear, I was lucky enough to photograph the most ridiculously over-the-top $40,000 turntable and a $100,000 pair of diamond-encrusted headphones. Who says technology is getting cheaper every year? —James Bareham


THE SECOND AVENUE SUBWAY (FINALLY!)

In New York City, the Second Avenue subway line was like a myth passed down through generations. First proposed almost a century ago, it was a line that I think few people ever expected to be completed. But on New Year’s Day 2017, I rode the first train to head south from 96th Street and Second Avenue with hundreds of other excited strangers. I caught this quiet moment after that initial ride. —Amelia Holowaty Krales


AIRPODS ARE SO HOT RIGHT NOW

Wireless earbuds seem like old news at this point, but a month after Apple’s AirPods were first released we were all pretty jazzed to try them out. This shot was the lede image for a Racked/Verge collaboration that asked the question “Are AirPods fashionable?” I photographed each of the contributors in a style inspired by Apple’s early iPod ads. —Amelia Holowaty Krales


WOMEN’S MARCH

The historic Women’s March brought millions of people onto the streets in cities across the world. The Verge had folks on the ground in a handful of US cities and we compiled their photos into a group photo essay. I took this shot while standing on the overpass that cuts across 42nd Street and over Grand Central. The crowd stretched east for as far as the eye could see. —Amelia Holowaty Krales


LEICA M10

The shoot with the Leica M10 was the first of a number of still-life shoots of cameras in 2017. All of them were shot on the same black seamless background with identical lighting. The intent was to give the reader a simple way to compare and contrast the different models, as well as give both Amelia and me a good excuse to shoot some moody pictures of cool cameras. —James Bareham


FEBRUARY


BUGATTI CHIRON UNDER CONSTRUCTION

London-based motorsports photographer Patrick Gosling travelled to the Bugatti factory in Molsheim, in the Alsace region of France, to shoot an exclusive portfolio of beautiful behind-the-scenes photographs of the $2.6 million Chiron. Even when it is in pieces, the Chiron still looks like a work of art. But then again, with that price tag, maybe it should be. —James Bareham


SAMSUNG CHROMEBOOK

Vjeran Pavic, who is based out of The Verge’s San Francisco office, has had a very busy 2017 working both as a video director and a photographer — often on the same shoot. I chose this series of Vjeran’s photos because of his subtle use of different color papers to create a graphic background for this Samsung Chromebook. So simple, and yet so effective. —James Bareham


LEXUS LIT IS

This one-of-a-kind car was quite a spectacle and fun to photograph, especially up close. Its 41,999 RGB LED lights on 2,460 strips were mesmerizing. —Amelia Holowaty Krales


INBOARD M1 SKATEBOARD

Reporter Sean O’Kane and I chose what seemed like the snowiest moment to shoot this electric skateboard. Though it was freezing cold, the swirling movement of the snow added some real movement to the image. —Amelia Holowaty Krales


DAYTONA 500

There’s nothing I love shooting more than events. I spent years shooting concerts before coming to The Verge, but I love anything with time and space constraints. Put me in a studio with endless lights and backgrounds and cameras and choices and I flounder. But drop me in the middle of a 2.5-mile race track on NASCAR’s biggest weekend of the year, when I also have to be doing reporting for the story I’m writing? I’ll take that kind of challenge every day.

Part of the fun in shooting something like the Daytona 500 is the sheer access you get as an accredited photographer. You can lean out over the wall on the pit road, stand shoulder-to-shoulder with the rest of the photojournalists in victory lane, or wander in search of the best spot to capture The Big Wreck. I tried to capture little bits of all of this to pepper in around the photos and words that served the story I wrote about NASCAR’s push into the future. What I wouldn’t give to go back. —Sean O’Kane


THE VERGECAST

This photo is, in my humble opinion, the Vergiest Vergecast photo ever. —James Bareham


Bowers & Wilkins P9

VLAD’S HEADPHONES

Senior Editor Vlad Savov is the man to speak to if you want to know about headphones. After all, he has reviewed most of them. Vlad has also taken some quite stunning product photos of a wide variety of headphones this year, as well as more than a few self-portraits wearing them. This self-portrait is by far my favorite: the lighting, styling, expression, and haircut lend this photo the look of a film still from George Lucas’s THX 1138. —James Bareham


MARCH


A DAY WITHOUT A WOMAN

Organized by the same group behind the Women’s March, A Day Without A Woman was held on International Women’s Day and encouraged women to abstain from work (if possible) and protest. This image was taken at the southeast corner of Central Park on Fifth Avenue and 59th Street in Manhattan where speakers addressed the crowd prior to the march setting off. The woman in red in the center of this image held her pose as long as I was standing there. This photo is one of my favorites from that day. —Amelia Holowaty Krales

GENEVA MOTOR SHOW

In addition to headphones, Vlad Savov’s other passion in life is exotic supercars — which is somewhat ironic, as he doesn’t drive. Vlad went to cover this year’s Geneva Motor Show and managed to come back with photos of a wide variety of motoring exotica, including the Pagani Zonda Roadster, Bentley EXP 12 Speed 6e, Renault Trezor, and McLaren 720S, amongst many others. —James Bareham


SNAP IPO

Being on the floor of the New York Stock Exchange for the first time was really interesting, especially as I was there to witness the IPO of SNAP, a tech company The Verge has covered closely since its earliest days. —Amelia Holowaty Krales

SAMSUNG GALAXY S8 LAUNCH in NYC

The Samsung Galaxy launch, or “Unbox,” in March was quite a spectacle. At the close of the presentation, Samsung employees, bathed in blue light, paraded through the audience holding the new S8 aloft. Later, during the hands-on portion of the event after the presentation, a very dapper gentleman in a sparkly suit tried on a set of VR glasses. I had to get a snap. —Amelia Holowaty Krales


APRIL


GALAXY S8

For our review of the S8, I wanted to find a way to shoot the Samsung phones in a way that emphasized the huge, almost edgeless screen. It struck me that the solution was to shoot the phones on an even bigger screen: an iMac 5K screen to be precise. Verge Art Director William Joel created the stunning wallpaper art work for both the iMac and S8 screens, and it turned out so well that Will created wallpapers for every subsequent major smartphone review we undertook this year. —James Bareham

PHONE CASES: THE GOOD, THE BAD, AND THE BUNNY

No one can track down a bizarre smartphone case quite like Reporter Ashley Carman. In 2017 she reviewed cases covered in fake Lego and pompoms, as well as a fluffy bunny and a rubber duck. For each of Ashley’s lighthearted reviews I found a different patterned background to shoot against. —Amelia Holowaty Krales


MAY


STAR WARS STORMTROOPER PIZZA PARTY

When Verge Weekend Editor Andrew Liptak paid a visit to the office, he brought along a Shore Trooper costume from Rogue One: A Star Wars Story that he made himself. So we ordered pizza. Yup, that happened. —Amelia Holowaty Krales


PREDATOR GAMING LAPTOP

This laptop is ridiculously enormous. —Amelia Holowaty Krales


NAVEEN ANDREWS, SENSE 8

When it came to lighting actor Naveen Andrews, star of The English Patient, LOST, and the Netflix series Sense8, I started with the setup I use for shooting The Verge staff portraits and then kept adding more lights and more color gels until it looked suitably dramatic. I felt Naveen deserved no less. —James Bareham


ANKER

Photographing purely functional technology in an interesting way is always a challenge. But Anker’s products were particularly tricky: they are basically black boxes with ports. Not a lot to work with. So shooting them with strong lighting to create long shadows and give them the appearance of floating seemed a little different from the norm. —James Bareham


HAWAII’S RARE PLANTS

When Deputy Science Editor Alessandra Potenza set off to Hawaii to visit a seed bank storing some of the rarest seeds on earth, she also took a camera with her and came back with this very impressive photo essay. —James Bareham


WELCOME TO PANDORA: THE WORLD OF AVATAR

My trip to Pandora: The World of Avatar was one of my highlights of 2017, not least because it was such an unexpected surprise. I confess that when I made the trip to Disney World to attend the opening of this new attraction, I went with more than a small dose of cynicism. But as soon as I entered the park, I was taken aback by the breadth of imagination and astonishing attention to detail that had gone into the creation of this attraction. Mind you, it certainly helped that I could ride the incredible Flight of Passage four times without queuing for hours on end. —James Bareham


JUNE


walt mossberg

WALT MOSSBERG’S GADGET COLLECTION

When the incomparable Walt Mossberg announced his retirement earlier this year, we knew that we needed to mark the occasion in a significant way. And what better way of celebrating Walt’s illustrious career than by letting him guide us through his remarkable collection of gadgets. —James Bareham


IPAD PRO REVIEW

I have chosen this picture purely because it is just one of the many photographs I have taken of Executive Editor Dieter Bohn typing with purpose. —James Bareham


SCREENDRIVE: ROLLS-ROYCE DAWN

It was a hard day at the office when I was tasked with photographing a ScreenDrive with Ashley Carman driving the beautiful $400,000 Rolls-Royce Dawn. This image didn’t make it in the original post but it’s one of my favorites from that afternoon. —Amelia Holowaty Krales


FUJI INSTAX CAMERA

Because of Polaroid nostalgia, instant cameras will always have a place in my heart. Sean O’Kane wrote the review of the Fuji Instax and I got to play with it for an afternoon. —Amelia Holowaty Krales


Eero CEO Nick Weaver

EERO CEO NICK WEAVER

Eero CEO Nick Weaver swung by The Verge office to tell Editor-in-chief Nilay Patel about his plans for WiFi and making our homes smarter. His visit also gave me the chance to shoot a very candid and simple portrait. —James Bareham


JULY


HOGWARTS AT THE NEW YORK ACADEMY OF MEDICINE

Verge producer Sarah Bishop became Hermione Granger for a day for our visit to The New York Academy of Medicine’s rare book collection. We were given the chance to see the original books that are part of the digital collection, From Basilisks to Bezoars: The Surprising History of Harry Potter’s Magical World, released to mark the 20th anniversary of JK Rowling’s original Harry Potter novels. —Amelia Holowaty Krales


FORMULA E IN NYC

There are two truly unbelievable things about Formula E’s first race in New York City. One is how amazing it is that the all-electric racing series, launched in 2014, is even still around; starting a racing series is hard, and starting one with new technologies that not everyone is on board with is a totally different kind of challenge. The second, though, is that the series pulled off a race in New York City. That’s something major motorsports like Formula One and IndyCar were never able to make happen over the last few decades, and yet here was the upstart EV racing series putting on a double-header race weekend on the streets of Brooklyn in just its third year.

So we had to be there. The field might not be full of Earnhardts or Hamiltons, and the races are far from the spectacle of something like the Daytona 500. But that makes it all the more interesting (and challenging) to shoot. Add in the cramped confines and scenic backdrop of the Manhattan skyline, and you wind up with an event I couldn’t stop myself from overshooting. —Sean O’Kane


CON OF THRONES

Reporter Kaitlyn Tiffany traveled to Nashville’s Gaylord Opryland Resort & Convention Center to cover Con of Thrones, the first-ever full-scale fan convention for HBO’s Game of Thrones, and came back with this wonderful set of candid photographs. —James Bareham


SWALE: GARDEN ON A BARGE

I sailed up the East River (and was momentarily stuck) in a garden floating on a barge with Alessandra Potenza for this story about Swale, part installation project, part community outreach. The organization aims to bring green spaces to urban communities to encourage foraging, picking, and snacking. —Amelia Holowaty Krales


LORD MARCH AND THE GOODWOOD FESTIVAL OF SPEED

I took this portrait of Charles Gordon-Lennox, Earl of March and Kinrara (Lord March for short) back in the summer of 2016. I had returned to the Goodwood Festival of Speed (FOS) in England for the first time in almost 15 years. This portrait was published as part of my written preview of this year’s FOS, which was celebrating its 25th anniversary. The feature also included photographs I took during the original press preview at Goodwood House way back in 1993. In the 25 years since, the FOS has grown into one of the biggest and most important motoring events in the world. —James Bareham


AUGUST


DELL MIXED REALITY GOGGLES

I picked this awesome photo of Circuit Breaker Editor Jake Kastrenakes taken by Amelia because it is just so good. And, like the earlier shot of Vlad wearing headphones, it too looks like a still from George Lucas’s film THX 1138. —James Bareham


ANDROID OREO

Yup, we correctly called it, and Amelia shot the photo to prove it. —James Bareham


V-MODA HEADPHONES

I will use any excuse to get out to the beach, even on a cold day. I liked the idea of shooting a dreamy, contemplative moment, almost like a still from a film. The copper accented V-Moda headphones were a perfect fit. —Amelia Holowaty Krales


ESSENTIAL PHONE

Vjeran Pavic’s impressive set of photos of Andy Rubin’s much-heralded Essential Phone looks as though it was taken in a Star Wars Imperial base, with lots of red and white light reflecting off shiny black surfaces. —James Bareham


BACK TO SCHOOL GIFT GUIDE

2017 was the year of The Verge isometric still life shoot. What started with the lead image for Guidebook was developed and improved upon over the course of the year, culminating in the image for our Holiday Gift Guide. This shot for our Back to School Guide was the first time I used Photoshop to drop the full image back onto the laptop screen to give the impression that it repeats endlessly. Very Interstellar. —James Bareham


LOREN CHASES THE ECLIPSE

Armed with a Canon 5D, an 80-200mm lens, a tripod, and the all-important solar filter, Reporter Loren Grush headed for Nashville to chase the eclipse and came back with this stunning photo. —James Bareham


ECLIPSEVILLE, USA

While Loren Grush was setting up in Nashville, freelance photographer Luke Sharett was heading to Hopkinsville, Kentucky. Hopkinsville was going to be the point of greatest eclipse on August 21st, 2017, and had been preparing for it for the past ten years. Luke’s photo essay perfectly captured this momentous day for the small rural town as it unfolded.


SEPTEMBER


GALAXY NOTE 8

This shoot was the sequel to the Samsung S8 Review shoot in April. Once again, it featured some wonderful custom artwork by William Joel, but on this occasion we ditched the iMac in favor of an OLED TV screen in search of brighter colors and richer blacks. We found them. —James Bareham

GROUPS

Ben Popper and I traveled to the greater Cincinnati area to report on Groups, a start-up that is opening small clinics in rural America to address the opioid addiction epidemic. I was honored to meet and photograph people who were willing to share stories about their communities and their struggles with addiction. These are some of my favorite pictures from that series.

The top two images were taken in Aurora, Indiana, at the Groups location during intake and a group session. “I have been here 90 days and I am ready to tell her today that I don’t need to come back,” Jan Karg told The Verge, “but I want to come back because I really enjoy this group. I really enjoy these people.”

Amanda Sampson (center), a founding member of Challenge to Change, leads a weekly substance abuse recovery group and allowed Ben and me to visit and listen in. Sampson has struggled with addiction herself and has since become a leader in the recovery community. —Amelia Holowaty Krales


APPLE iPHONE 8 & 8 PLUS

This year saw Apple release three new iPhones: the iPhone 8 and 8 Plus, and then, a month and a half later, the iPhone X. First up were the iPhone 8 and 8 Plus. As both of these phones were largely an iteration of the iPhone 7, I decided to continue with the theme I started for last year’s shoot: cameras and lenses. But for the iPhone 8, I wanted to make the setup look a little more “real world.” —James Bareham


OCTOBER


SUNDAR PICHAI

2017 has been a challenging year for Google, with both notable failures and real successes. Vjeran Pavic’s portrait of Google CEO Sundar Pichai perfectly captures the weight of responsibilities on this man’s shoulders heading into the new year. —James Bareham


GOOGLE PIXEL 2 & 2XL

Screen issues notwithstanding, one of the undoubted highlights for Google was the launch of the Pixel 2 and Pixel 2 XL. Not only are they nicely designed phones, but the camera is currently the one to beat. I shot the photos for our review in a photographic studio located across the East River in Industry City, Brooklyn. The studio was once a coffee roasting factory and had wonderful wooden floors, complete with inset iron doors bearing decades of wear, which I thought was the perfect backdrop for two phones made of aluminum and glass. —James Bareham


BEATS STUDIO 3 HEADPHONES

Though most people will not use these headphones in a studio, I took the name literally and shot these cans in one of Vox Media’s podcast studios. According to Vlad, the wireless connection can be problematic when used with Android devices. But even though it seems that there are better headphones out there, I still thought the success of the shoot warranted the photo’s inclusion here. —Amelia Holowaty Krales


SAMIA

This young singer-songwriter’s recent success in the algorithmic reality of Spotify’s playlists made her an ideal subject for The Verge. Walking around the Lower East Side neighborhood of Manhattan, chatting with and photographing Samia for Kaitlyn Tiffany’s piece on her, was a delight. —Amelia Holowaty Krales


WHY’D YOU PUSH THAT BUTTON PODCAST

Tackling the tough questions of modern life, Kaitlyn Tiffany and Ashley Carman discuss the implications of turning on read receipts; why and when to super like something; and admit it, you stalk people on Venmo, right? These brave women discussed all this and more in the new podcast, Why’d You Push That Button. —Amelia Holowaty Krales


CIRCUIT BREAKER LIVE SHOW

I have spent the last few months describing the Circuit Breaker Live show (which has been airing weekly on Twitter) as the “Wayne’s World” of gadget shows. The studio set has the perfect “down in the basement” vibe and is the ideal location for (left to right) Chaim Gartenberg, Nilay Patel, Paul Miller, Ashley Carman, and Jake Kastrenakes to go deep into the nerdy weeds of the latest gadgetry.


VISITING ANDY WEIR’S LUNAR CITY ARTEMIS AT NEW YORK COMIC CON

One of the more interesting attractions at this year’s New York Comic Con was a pop-up museum devoted to Andy Weir’s latest novel Artemis. The book is a crime thriller set on the moon, and his audiobook publisher set up an extensive exhibit about the fictional world. But the centerpiece was an installation by Luke Jerram, a 1:500,000 scale replica of the Moon. It’s an astonishing piece of art, and it’s probably the best look that I’ll ever get of our closest natural satellite.

While I took pictures of the installations, I happened to snap a picture of two guests silhouetted against the bright lunar surface. It made for a particularly breathtaking shot. — Andrew Liptak


iPHONE X

I don’t think I have ever shot so many photographs for a single review during all my time at The Verge as I did for the iPhone X. Yet the best photo by far in this review is actually an infrared video still shot by Senior Video Director Phil Esposito. Phil perfectly captured the iPhone X’s facial recognition system lighting up Nilay’s face. It’s a remarkable shot. —James Bareham


NOVEMBER


ASTON MARTIN FACTORY, GAYDON, ENGLAND

Photographer Patrick Gosling joined The Verge Transportation Editor Tamara Warren on a visit to the Aston Martin factory in Gaydon, Warwickshire, England. While Tamara interviewed Aston Martin CEO Andy Palmer, Patrick went onto the factory floor to photograph just what goes into the making of a modern Aston Martin, a process that even in 2017 is still largely done by hand. —James Bareham


ASTON MARTIN VANTAGE

A few weeks after Patrick Gosling’s Aston Martin factory visit, his colleague Mike Dodd flew to Valencia in the south of Spain to spend some time shooting the utterly wonderful brand new Aston Martin Vantage. While the lime green color may not be to everyone’s taste, you can’t deny that it certainly makes the Vantage stand out from the white plaster walls of the open-air car studio. —James Bareham


Petcon

PETCON

I didn’t know much about the world of Instagram pet influencers until I went to Petcon along with News Editor Lizzie Plaugic, where we met some of these furry celebrities and their humans. —Amelia Holowaty Krales


PERRY CHEN AND KICKSTARTER

Kickstarter’s offices in Greenpoint, Brooklyn, are beautiful. I photographed rooms with vaulted ceilings and common areas made with reclaimed wood, and documented people typing in quiet nooks that they had made into their own alternative workspaces. Finally, I took some portraits of founder and chairman Perry Chen in the library, a favorite spot in the building. —Amelia Holowaty Krales


Sony 1000XM2

SONY 1000XM2 HEADPHONES

Vox Media’s offices are located in the downtown Financial District at the tip of Manhattan, just a few short blocks from both the East and the Hudson Rivers. One late afternoon, I convinced my coworker, Social Media Manager for Video Mariya Abdulkaf, to model these Sony headphones for me. We went out just in time to catch the sunset over the Hudson. The late daylight was beautiful and perfect for this shot. —Amelia Holowaty Krales


HOLIDAY GIFT GUIDE

2017 has been the year we fell in love with stop-motion animation at The Verge, and that is largely down to the work of Post Production Specialist Michele Doying, who joined us in June as a retoucher. Her work on this year’s Holiday Gift Guide included not only animating the lede image above but also a series of stop-motion animations that ran as Instagram Stories. —James Bareham


ROBOT CAT PILLOW

Yes, this is the tail-wagging pillow that you never knew you wanted. For some inexplicable reason, this shot of Dami Lee cuddling the Qoobo by Yukai Engineering is one of my favorite photographs that I have taken this year. It’s The Verge at its Vergiest. —James Bareham


DECEMBER

WHAT’S IN YOUR BAG

Rummaging through my coworkers’ bags is now a thing I do, and I have to say, it’s pretty fascinating. You learn a lot about people from what they carry around with them every day. With this year’s revamp of the What’s in Your Bag? series, we got to rummage a lot. For each WIYB shoot, we shot a main image, a series of shots breaking down the contents into groups (including a separate set for Instagram Stories), and finally a stop-motion animation. The bags and their contents above were kindly provided by Dani Deahl, Dieter Bohn, Natt Garun, and Chaim Gartenberg. —Amelia Holowaty Krales


VERIZON NET NEUTRALITY PROTESTS, NEW YORK

People gathered in front of a 42nd Street Verizon store to protest the proposal to kill net neutrality. “I think it really is an attack on our freedom of speech,” Diane Hoffman told The Verge. “I just [am] really afraid that if we lose net neutrality that’s gonna be another step down a very dark road.” —Amelia Holowaty Krales


#HASHTAGS

This was a true Verge photo team collaboration. Shooting this lede image for Ben Popper’s Instagram hashtag feature followed a full day of prepping that included crafting our very own hashtags out of popsicle sticks, cardboard, felt and fur; borrowing accessories from our colleagues at Racked; and a surprisingly difficult search for sheet cake. —Amelia Holowaty Krales


NEXT LEVEL WITH LAUREN GOODE: EXOSKELETON

This second season of The Verge’s video series Next Level has seen Lauren Goode meet one company distilling drinking water from the air and another creating hologram time capsules. But the finale of the season was, for me, perhaps the most interesting: how some companies see exoskeleton suits as the future of physical labor. This shot of Lauren Goode wearing one of the exoskeleton suits, taken by Vjeran Pavic, not only perfectly sums up this specific episode, it almost sums up the entire series. —James Bareham


NEW ADVENTURES LISTS AND REVIEWS

Earlier this year, I approached James Bareham with a thought: I wanted to showcase our books coverage in a different way. While science fiction and fantasy novels often come with great covers, it’s hard to snip out a segment of the artwork for an online post. We came up with a new solution: showcase the entire book as an object.

The result was a couple of different types of pictures. The monthly book list is topped with a selection of books stacked on the counter at one of my favorite bookstores, Bear Pond Books of Montpelier. Our book reviews often feature the book sitting on my notebook, along with some sort of knickknack that fits thematically with the story and a cup of tea. (Many people have asked me about the robot in the teacup — it’s from Kikkerland.) Other pictures have been specific shots of a book, or in other instances, a nice, thematic background for excerpts or interviews or longer lists of recommendations. The results are always fun to put together and shoot. — Andrew Liptak


TECHNOGRAPHICA: DANIEL CANOGAR

This photo essay featuring Daniel Canogar’s series Echo, with words by Lizzie Plaugic, was many months in the making. It is the first “episode” of Technographica, a new series that looks at the intersection of technology and art. Daniel Canogar’s Echo installation is made up of five individual sculptures: warped steel frames with flexible, magnetic panels of LED lights attached to them. Those lights respond to algorithmic interpretations of environmental data. Lizzie and I visited the installed pieces at the Bitforms gallery in New York this fall. —Amelia Holowaty Krales


SCREENDRIVE: MCLAREN 570S SPIDER

The McLaren 570S Spider is the most intoxicating, exhilarating, and frustrating car I have ever driven. But if you have to spend a weekend driving a $244,000 British supercar, then I suggest you take it to Connecticut in the fall. It was perfect. Oh, and by the way, the stereo is awesome. —James Bareham


2017: My year in cars


At times this year, it felt like our transportation section at The Verge already lives in the autonomous future. It’s not entirely clear whether the momentum that drives us will be dystopian or delightful. But like our readers, we must get by in the present, where human-driven cars that we own, lease, buy, or ride in via our ridesharing drivers are still by far the dominant form of mobility. How can we write knowledgeably about what’s coming in cars if we don’t know where we are now? We launched our series ScreenDrive this year to show that many elements of cars are just like the gadgets we cover in our sister tech section — perfectly flawed.

In order to keep our feet on the ground, or at least close to the pulse of the present-day pedal, I try, as transportation editor, to drive as many cars as I possibly can. That can be a challenge, considering I live in a town where public transportation (when it’s actually working) and walking are options I enjoy. But I managed to squeeze in seat time in these 62 new cars this year, sometimes on racetracks, sometimes on a Sunday drive, and sometimes in the real-world task of schlepping my kid to day camp. Modern cars are accused of looking and feeling very much the same — kind of like smartphones — tactile, three-dimensional rectangular objects loaded with sensors. What I see is an industry in transition, scrambling to find the most attractive, functional path toward connectivity and convenience, but unclear on how to keep up with the pace of our more expendable gadgets. Here’s how I spent my year test-driving cars.

January


Chrysler Pacifica
Photo by Tyler Pina / The Verge

In 2017, the Chrysler Pacifica was in the spotlight as the go-to car for Waymo’s public-road self-driving testing. What I admired most about the Pacifica, as a family minivan solution, was the obsessive attention to detail. Our staff drove two Pacificas at the North American International Auto Show in January, and while some would say their favorite element was the in-car checkers game, what struck me as clever were the second-row seats that fold flat into the floor, turning a minivan into a truly mobile living room.


Audi S3

The Audi S3 was our first experiment with how to ScreenDrive a car. Much of the experience focused on how Audi has expanded Virtual Cockpit across its vehicle lineup. In the S3, Audi built an attractive, modern-looking interior into a car that’s stupid fun to drive. It has responsive ride and handling, even for a small car on bumpy city streets. Though some tech functions are not intuitive, like the scrolling wheel, its connected features are still among the best approaches in the industry.

February

Back in February, I took the Toyota Camry Hybrid for a spin. The ’17 model year added the Entune Audio Plus entertainment system, automatic emergency braking, and wireless smartphone charging. The Camry transitions smoothly from electric to gasoline power, but it faces stiff competition in this growing segment of mid-sized hybrids.

The Lexus IS200T is an entry-level luxury car that isn’t afraid to make a statement. It has a polarizing but memorable grille. Unlike many luxury automakers, Lexus opts to go its own way rather than mimic German luxury design. It’s not always successful, but on the IS200T, that’s a good thing. What it lacks is space in the rear — even kids’ legs were cramped. It also comes up short against the performance numbers of its competitors.

The Mazda 6 is what I call the ultimate sleeper car. Mazda lacks the big, overstated presence of larger brands, but its handsome design coupled with peppy performance makes it a solid choice for consumers to consider. What contributes to the 6’s savvy is a driver’s seat position that borrows from the sports car DNA of the Miata.

I didn’t read the fine print on the offer to drive the BMW 330e, and was pleasantly surprised to see the e-for-quasi-electric when the final paperwork crossed my desk. The 330e doesn’t scream “look at me, I’m driving a plug-in!” so much as “look at me, I’m driving a BMW!” It has the essence of performance that makes everyone want to drive this present-day icon.

March

The Lexus GS350 F Sport is a bit of a metal mouth. Its grille takes familiar proportions and stretches them into a bulbous form. But it’s been around so long that this observation is no longer a revelation. The Lexus Remote Touch interface requires a light touch and can be frustrating. Its interior is spacious and, like other Lexus models, uses rich materials.

Sure, strong, snow-ready, and steady, the Subaru Forester didn’t receive a major refresh in 2017. It remains a true sport utilitarian. That’s why people keep buying it, as I was reminded when driving it through slippery, wet spring conditions. Subaru added better cameras, steering-responsive headlights, and new features to the EyeSight safety system on its 2017 model.

The cockatoo comes to mind if you gaze long enough at the front end of the Lexus RX350. It’s a look that’s been working for Lexus as it continues to dominate as the luxury standard bearer. Once you’re inside, the high-quality materials and comfortable seats make it a pleasant environment if you’re stuck in traffic, which is how most of us spend our time in the car.


Genesis G90 at the Detroit Auto Show

Genesis G90
Photo by Sean O’Kane / The Verge

For the backseat driver in all of us, the Genesis G90 lets you live the limousine fantasy with ample legroom. It’s the $70,000 flagship of the gussied-up Hyundai luxury brand, and it packs in the accoutrements and standard features like leather, heated, cooled, and adjustable seats, a 17-speaker sound system, and a 12.3-inch infotainment screen.


Uconnect system in the Jeep Compass
Photo by David Bush for The Verge

I have a soft spot for the aging Chrysler 300 sedan. Originally designed by Ralph Gilles, now design chief for the FCA group, it reintroduced attitude to the banal sedan back in the day. This year it added the new Uconnect system, which Lauren Goode also assessed in her ScreenDrive.

Americans continue to bromance big trucks like the Ram 2500 Power Wagon. As a former Dodge pickup truck owner, I know it’s part function, part psychology to sit up above everyone else. Even if you don’t need to drive a truck, your friendships will improve if you do, because everyone will ask for your help moving. The Ram Power Wagon drives home a message with its strong accents. What’s changed since I last owned a truck is the advancement of parking technology, which is a game changer for pulling off the truck driver look without sideswiping small cars and mailboxes in your wake.

April

The Audi A6 Competition is like the A6 amped up — a stellar performer wedged between Audi’s sleek sport-division S6 and the base model. One tiny detail stands out: wicked-looking blacked-out mirrors, part of what an extra $6,000 will buy you, along with sport suspension and torque vectoring.

If you don’t want your compact sport utility vehicle to look like a grocery getter, the Jeep Renegade presents a brawnier option. It looks rugged and has the handling characteristics to back it up. My favorite feature was the removable MySky roof.

The functional Chevy Equinox might not be cause for excitement, but it’s a key product for GM, as the hunger for value-driven, family-friendly crossovers is palpable. It comes ready with standard features like three 12-volt power outlets, Apple CarPlay and Android Auto, and a Wi-Fi hotspot. What gives it a slight edge in my book is its fuel economy, which, at 39 mpg on the highway, is pretty good for a gasoline-powered engine.


Rolls-Royce Dawn

I wrote this ode to the suicide doors on the Rolls-Royce Dawn. It’s not the brand’s flagship Phantom, but its $412,430 price reflects its super-luxury pedigree. “To drive this coupe isn’t about the ride, but more the glide.”

I call the Toyota RAV4 a wake-up call to the practical desires of Americans. It’s chock-full of safety features and was the top-selling car in 2017. Still, there are rumors Toyota may go back to the drawing board and roughen up the RAV4’s image.

The classic Jeep Wrangler Rubicon never really goes out of style, so even if you opt to buy this soon-to-be-phased-out generation, you’re still steeped in Jeepdom culture. Its militaristic design dates back to 1941. If you’re willing to deal with a noisy, rough ride in favor of winching your way up a trail, its staying power is timeless.

May

I traveled to a warm climate in late spring, and the Mercedes-Benz C300 cabriolet there to greet me was a welcome reprieve, with a soft top that takes only 20 seconds to drop.

Those annoying Buick commercials don’t lie. At least not the ones that capture would-be Buick LaCrosse customers in mock shock as they behold a brand that’s gone through a spiffy upgrade. And after spending an afternoon at the GM Proving Grounds learning about Buick’s ridiculously intense commitment to a quiet interior, with engineers galore dedicated to the effort, I can say the quiet cabin really is the thing that speaks volumes.

My favorite aspect of the Cadillac XT5 is the “UltraView Sunroof,” fancy branded language for a sweeping panoramic view of the sky. It sounds trivial, but studies show that exposure to natural light during a daily commute improves mood, and it may also make you spend more money on your next car.


Aston Martin DB11

The steering wheel is the point of orientation in cars driven by humans. Aston Martin has cracked the code on how to make a fancier steering wheel shape in the Aston Martin DB11. It’s consistent with what makes Aston Martin distinct: it’s not about function, but about the beautiful form.

Toyota Sienna: Say hi to kids and car seats. It’s a minivan that looks and behaves like one, a familiar form that’s been in production for seven years. Its engine in 2017 is a bit more efficient, earning 27 MPG on the highway.

In 2017, the Cadillac Escalade turned the camera toward the inside view: it added a teen-driver monitoring system, automated parking to accommodate its super-size proportions, and a rear passenger reminder so you don’t forget your baby on board.

June

The brilliance of the BMW M240i — not to be confused with the M2 — is in its slight proportions. It’s the definition of how small walks tall in a nimble design performance package. It has verve, in the sense that it’s fast and responsive, but you’ll feel the bumps along the way, due to its stiff suspension.


Jeep Compass
Photo by David Bush for The Verge

The Jeep Compass has often seemed off its mark — an underwhelming version of its brawnier Jeep siblings. It redeemed itself with a 2017 redesign that makes it more handsome in form.

The Fiat 500L retains much of its throwback design. It’s a rough-and-tumble ride, a budget statement car for those who desire a bit of Italian flair.


Jaguar F-Pace

The decadent grille on the Jaguar F-Pace is part of its allure, and one that I paid homage to in this piece about the vehicle that took home the title of World Car of the Year.


Mini Countryman
Photo by Thomas Ricker / The Verge

The Mini Countryman is a compromise between roomy and mini, built on the BMW X1 platform. I never thought of it as a ‘90s gadget until I read Thomas Ricker’s Countryman ScreenDrive.


Lamborghini Aventador S
Photo by Tamara Warren / The Verge

I described the Lamborghini Aventador S as “a sharky-cobra-rocket-jet hybrid that runs on gluttonous petrol” after a day spent whizzing around Pocono Raceway. From the push of a button at launch, driving a Lamborghini on the track is like living in a real-world video game, only better, with a V12 engine that makes 740 horsepower and 508 pound-feet of torque.


Lexus LC500

The Lexus LC500 is a worthy flagship vehicle for the Lexus brand. It’s a performer, but also a looker that presents traditional Lexus aesthetics in strong proportions.

July

The Ford Mustang GT has continued to trot along since it was refreshed in 2015. One area that’s added serious wow factor: its sound. You can turn the snarl on and off if you don’t want to wake the neighbors before you stunt.


Mercedes-Benz E400


The redesigned Mercedes-Benz E400 is loaded with every piece of contemporary automotive tech imaginable, including two screens in the dash, a head-up display, steering assist, and automatic braking. At times, the myriad options available feel overwhelming. The touchscreen and wheel feel at odds. I prefer it in wagon form.

The straight-line performance of the Dodge Charger Scat Pack seemed like a big deal until I drove the Dodge Demon, which took street car speed to another level. But nailing the gas and listening to the Hemi engine rev to the 4,000 rpm limit does induce feelings of power.


Dodge Demon
Photo by Tamara Warren / The Verge

Who says toy cars are for kids? The drag racing capabilities that come stock in the Dodge Demon are what give it its street-racing cred. On my first outing I was rained out, but eventually I was able to practice my start on the quarter mile at a New Jersey racetrack and experience 0-to-60 glee.


Mercedes-Benz G Wagon

The Mercedes-Benz G-Class is due for a redo next year, but we can’t help but get amped up about this big, boxy design. Climbing into the awkward cabin feels like a blast from the past — and that’s part of its appeal, until you toggle with Comand, the Mercedes-Benz infotainment system.

The Jeep Cherokee Overland is the crossover variation of the better-known Grand Cherokee. It was once an SUV, but as tastes have shifted, it has gotten smaller. The Overland is a higher-end trim and boasts an Alpine nine-speaker audio system.

The thing that stood out on the test drive of the Audi TTS was my experience with Audi’s subscription service, Audi on Demand, curbside outside of The Verge’s San Francisco offices. A low-mileage Audi greeted me after I used the app to order it. It’s a way for everyday customers to conduct extended test drives of new models.


Tesla Model 3
Illustration by Alex Castro / The Verge

Tesla Model 3 hype hit a threshold this year, making this the most memorable ride of the year, because, in true Tesla mystique, no one knew what to expect. Little did I know I’d drive it long before owners who have money down on the car, many of whom are still anxiously awaiting delivery as the company grapples with manufacturing delays.

August

The Grand Cherokee has entered modern-classic territory, and it looks even better with a little mud on the bumper. While the 707-horsepower Trackhawk has been causing a commotion, the SRT model includes a Hemi engine. For thrills, I drove it at Indianapolis Motor Speedway in monsoon rains.

In Los Angeles, I drove a 2018 model of the Nissan Rogue that comes standard with Android Auto and Apple CarPlay. Nissan is big on features with nifty names: Divide-N-Hide, marketing speak for a thoughtful, discreet storage area.


Tesla Model S P100D
Photo by Tamara Warren / The Verge

The whip-fast performance and smooth handling were the key takeaways from my test drive of the Tesla Model S P100D. But driving the Model S to the Pebble Beach Concours d’Elegance was also a study in the psychology of the California motorist.

September

The L stands for long wheelbase on the Infiniti Q70L. While 414 horsepower makes for plentiful performance, its shape makes it hard to distinguish from the pack.

Verge staffer Dani Deahl and I had a lovefest over the Audi A5 Sportback and the contours of its cool cockpit.


The ridiculously perfect paint job on the BMW M760i was the definition of decadence in this loaded-up full-size sedan that can be yours for $154,795. For all that, you get 601 horsepower and a lot of looks on the streets.

The Hyundai Ioniq PHEV has a rather sedate form that belies plug-in hybrid capabilities. It has a range of 630 miles on a full charge and a tank of gas, impressive for a compact car. While its all-electric range is on the low side at 29 miles, its integrated electric motor produces a decent amount of power.


Porsche Panamera

The design of the Porsche Panamera is not for everyone, but I happen to favor its unique take on the oversized sedan, with a slightly longer wheelbase, larger wheels, aluminum door panels, and a new hood found on the 2018 model.

October


Chevy Bolt

The Chevy Bolt drives and performs well and, even after zipping around long stretches of Detroit freeways, caused no range-induced anxiety. But now that Chevy’s proven it can EV, it’s time to dial back its overstated eco-interior, and make something more attractive.

I learned to appreciate the role of the backseat driver in a Mercedes-Benz Maybach. It’s the dazzling version of the S560 4Matic. The plush, pillowy seats are straight from the finest first-class cabin you can imagine. Two engines are available, and it’s priced just under $200,000 for the more powerful V12 version. Mercedes is keeping alive the iconic German nameplate founded in 1909 by Wilhelm Maybach.

In my everyday life of zipping around town, I didn’t want to give up the keys to the Audi SQ5. In the sea of luxury crossovers, it’s among the standouts. It has thoughtful, attractive design, a long list of features, and responsive performance.

The Mazda CX-5 is my longstanding go-to recommendation for real-world shoppers who want flair for around $25,000. It’s the complete package of style, panache, and performance.

The Jaguar XF is a big, bad cat when you see it approaching, but inside I found the materials lackluster, which isn’t enough in this upper-crust category of competitors like the BMW 5 Series, Audi A6, and Mercedes-Benz E-Class.

November


BMW 5 Series


BMW 5 Series wireless charging
Photo by James Bareham / The Verge

It’s the first car in which we were able to test the iPhone X’s wireless Apple CarPlay capabilities this fall, but what I’ll remember most about the BMW 5 Series from our winter ScreenDrive is its sleek design and its tech that’s still a work in progress. From a performance perspective, driving the 5 Series is an exercise in satisfaction.

I still have trouble keeping the Infiniti nomenclature straight, so in case you’re wondering, the Infiniti Q50 picks up where the G sport sedan left off. It has responsive steering and invigorating acceleration. What it’s missing for 2018: Apple CarPlay and Android Auto.



Lincoln Navigator

A lot is riding on the success of the new-generation, full-size Navigator for Lincoln. From a gadgety perspective, Lincoln has packed as much as it possibly can into its long list of standard features and options. The result is a bit of everything and the kitchen sink, and I wonder how many of these handy features will be integrated into everyday usage. I drove it for two days in NYC, where a big vehicle like the Navigator is slightly out of its element (unless you’re a limo driver). What I did mess around with was the new head-up display (which contains 400,000 mirrors), the new touchscreen, and the SYNC 3 system. There are multiple screens, streaming capabilities, and Wi-Fi. The 20-speaker Revel II audio system was incredibly boss. Where it stands apart from Navigators past: it feels like far more vehicle than a chromed-up Expedition.


Range Rover Velar

The Range Rover Velar is a supersonic take on what it means to range in the rover. It’s the first Land Rover vehicle to use the InControl Touch Pro Duo system, a serious departure from buttons and knobs in favor of two 10-inch touchscreens. Its sparse, clean design wins kudos.

The Nissan Maxima now offers features that were once the preserve of luxury buyers, such as Android Auto and Apple CarPlay, forward-collision warning, and automated emergency braking.

December

The Alfa Romeo Giulia sedan revives the name of an Italian marque from the 1960s. It’s not lacking in character. But in many ways, the Giulia is like a trip to a Zara store — cool on the surface, but its functionality, durability, and comfort leave something to be desired. It’s loads of fun to drive if you’re into zippy performance in your daily commute.

I wanted to love the Alfa Romeo Stelvio. Instead, I liked it. What I like about it is the experience from the driver’s perspective. What’s lacking is roominess for passengers and logic in how the interior functions are placed. The competition in this increasingly crowded luxury crossover segment is stiff.

Many curious people I ran into while parking paused to inquire, but no one’s first guess was the 2018 Toyota Camry. It has come to life in a much-improved exterior form, and it also handles with grace.


GMC Sierra 2500 HD Denali | Photo by Tamara Warren / The Verge

I ended the year on a high note. Or at least in a truck that had me riding high: the loud, proud, and over-the-top GMC Sierra 2500 HD Denali, a massive pickup that runs on diesel fuel. After a 600-mile road trip through wind, sleet, and snow, I can say that it can pretty much conquer anything. One thing you sacrifice in exchange for that massive footprint is a tight turning radius. Think big, wide turns.


One day soon, perhaps the self-driving cars will be picking us up for work, but until then this is the reality of how most of us are getting by, as we spend an average of 17,600 minutes driving each year, according to AAA. It’s the space where safe, user-friendly tech matters most. Of course, my seat time spent test-driving only scratches the surface of the long list of cars we drove across the section, and of the new cars available on the market, which you’ll find in the Verge Transportation archives. So many cars, so little time!


Driverless minivans, electric race cars, and luxury coupes: our favorite rides of 2017


So here’s the thing, and I swear we’re not bragging when we say this: we got to drive a ton of cars this year. Ultra-luxury coupes, family-friendly minivans, electric taxis, impossible-to-park SUVs, battery-electrics, plug-in hybrids, compact city cars, race cars, and (of course) a crop of Teslas. Our butts graced a variety of driver’s seats. The Verge’s transportation editor Tamara Warren drove over 60 cars in 2017 by herself!

Car companies traditionally loan “press cars” to reporters so we can experience new features, check out enhanced performance, and generally get a sense of what they are like to drive to better inform our coverage. It’s a perk, for sure, but one we take very seriously.

This past year was one of rapid change and escalating stakes for the auto industry. Electrification, autonomy, and mobility services like ride-hailing and car-sharing provided legacy car companies an opportunity to posture like tech startups. The car-buying public, though, remained blissfully unaware of most of these trends, snatching up gas-guzzling SUVs, crossovers, and pickup trucks in large quantities. But analysts predict that when all the numbers are added up, 2017 will be the first year since the Great Recession that auto sales slumped.

With all that in mind, here is our list of cars we drove in 2017 that were among our favorites.


Waymo’s driverless minivan

I drove an interesting range of vehicles this year — big gas-guzzlers, compact battery-electrics, and sexy convertibles — but the most interesting car of all was the one I didn’t drive at all: the one that drove me.

See what I did there?

My ride in Waymo’s fully driverless minivan lasted all of 15 minutes, took place on a closed-to-the-public decommissioned Air Force base in Central California, and only encountered Waymo employees disguised as drivers, cyclists, and pedestrians along the road. Not exactly a recipe for fireworks. And yet it was the most thrilling 15 minutes in a car I had probably ever experienced in my 37 years on this planet. My fellow passengers included another reporter and a Waymo employee named Diondra, who was unflappable throughout. I managed to conceal most of my giddiness beneath a layer of journalistic cynicism, but when the car expertly threaded a complicated intersection, my veneer slipped slightly and I think I said “wee!”

The decision to remove the driver from the equation wasn’t a rash one, but rather one that has been eight years in the making for the Google spinoff. It’s one thing to ride in the backseat of an autonomous (or highly automated) car with someone behind the steering wheel; it’s quite another when the driver’s seat is empty.

Riding in a Level 4 driverless vehicle was as close to a glimpse of the future of mobility as I’ve ever seen. We are still decades away from a reality in which these types of vehicles are able to roam freely through our cities and communities without restrictions. The transition from manual to automated driving will be slow and complicated, and probably messy. But when it does come, it will completely transform how we get around.

But it will never come if there isn’t trust in the technology, and that will only follow an overabundance of testing, both physical miles driven and in simulation. There will be accidents and injuries, and maybe even deaths. (There’s already been one fatality.) There’s convincing research that suggests people have an extremely high tolerance for human error, and an extremely low one for robotic error. What happens when a Waymo minivan gets in its first fatal accident? It could set the race to autonomy back years, and grind a lot of this momentum to a halt.

Waymo is gearing up to allow its first passengers into its driverless vehicles in a small Phoenix suburb. The scope is extremely limited: just a handful of people in a 100-square-mile area using the minivans for boring errands and other daily trips. You probably won’t even notice when it happens.

Andrew J. Hawkins


Greenwheels car sharing

2017 was the year I decided not to buy an automobile because my car-sharing service is so utterly convenient and economical. I subscribe to Greenwheels in my home city of Amsterdam. It’s pretty much the same service offered by ZipCar in the US, UK, and a few other countries. For a relatively low monthly rate (€25 for the Frequent plan) I have access to hundreds of dedicated cars, which I then pay to use by the kilometer (usually €0.27) and hour (usually €3). In my neighborhood alone there are 20 cars to choose from within a 10-minute walk of my house, the nearest just three steps from my door.

Unless you’re super wealthy, owning a car in a major city is a huge pain in the ass. But with Greenwheels, I never have to hunt for parking, I don’t worry about oil changes or someone dinging my doors and bumpers, and I can always pick exactly the right size car to serve my immediate needs. Trip to Ikea? Reserve a van. Weekend at the beach? Better get the station wagon to fit the three kids. Time for gymnastics? Grab the subcompact to take my daughter to practice.

According to the latest AAA study, owning and operating a new car costs an average of $8,469 annually, or $706 each month. In 2017 I paid just over $2,000. Your mileage will vary, literally, especially since you probably don’t live in a city as bicycle-friendly as mine.
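If you want a rough feel for how that math shakes out, here’s a minimal back-of-the-envelope sketch in Python using the Greenwheels rates quoted above (€25 a month on the Frequent plan, €0.27 per kilometer, €3 per hour) against AAA’s $8,469-a-year ownership average. The trip pattern and the euro-to-dollar conversion are made-up placeholders for illustration, not my actual 2017 usage.

```python
# Back-of-the-envelope comparison of car sharing vs. ownership.
# Rates come from the article; the usage numbers and exchange rate
# below are hypothetical placeholders, not actual 2017 figures.

MONTHLY_FEE_EUR = 25.0      # Greenwheels "Frequent" plan
PER_KM_EUR = 0.27           # typical per-kilometer rate
PER_HOUR_EUR = 3.0          # typical hourly rate
EUR_TO_USD = 1.13           # assumed exchange rate

AAA_OWNERSHIP_USD_PER_YEAR = 8469  # AAA average for a new car


def car_share_annual_cost_usd(trips_per_month, km_per_trip, hours_per_trip):
    """Estimate a year of car sharing in USD for a simple usage pattern."""
    monthly_usage = trips_per_month * (
        km_per_trip * PER_KM_EUR + hours_per_trip * PER_HOUR_EUR
    )
    annual_eur = 12 * (MONTHLY_FEE_EUR + monthly_usage)
    return annual_eur * EUR_TO_USD


if __name__ == "__main__":
    # Hypothetical: six trips a month, 30 km and 3 hours each.
    sharing = car_share_annual_cost_usd(trips_per_month=6, km_per_trip=30, hours_per_trip=3)
    print(f"Car sharing:  ~${sharing:,.0f} per year")
    print(f"Ownership:    ~${AAA_OWNERSHIP_USD_PER_YEAR:,} per year (AAA average)")
    print(f"Difference:   ~${AAA_OWNERSHIP_USD_PER_YEAR - sharing:,.0f} per year")
```

With those invented numbers, the sharing total lands in the same ballpark as what I actually paid, and either way it comes out far below the ownership average.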

Yes, there are a few downsides. The radio presets are never as I left them. And my car — usually a VW Up! — would not be my first choice to buy… especially with that big dumb Greenwheels logo on the door. I also have to deal with the occasional lost umbrella or other detritus from the driver before me. But these are minor annoyances compared to storing, insuring, and maintaining a car over its lifetime.

I find car sharing so incredibly convenient that I can’t imagine ever owning a car again.

Thomas Ricker



Photo: Tesla

Tesla Model 3

It wasn’t the fastest car I drove this year, or the most beautiful. And I didn’t even drive it very far — only a quick lap around the Fremont, California factory perimeter. But the 15-minute test drive I had in the Tesla Model 3 was the most talked-about ride I took in 2017, and if Tesla can actually scale the Model 3 operation, it may also prove to be the most significant.

Until that point, no one was sure that the Model 3 really existed in tactile form, and the interior was a bit of a mystery. It turns out that the interior, with its minimalist approach, was the most intriguing part of the car’s presentation and performance.

Part of what made the ride special was the fact that I wasn’t expecting to try it out at the company’s summer event. Between the spectacle of the Model 3 drive and the Roadster unveiling, Tesla proved that it has mastered the art of surprise, and it wins the award for the company that will always keep us guessing.

Tamara Warren


London Electric Vehicle Company (or LEVC)
Photo by James Vincent / The Verge

London’s new electric taxi

This year I only drove one car for The Verge — and it was also the first time I drove a taxi. I’m Australian, I don’t have a car here in London, and I haven’t really driven since moving here about seven months ago. I’m not one for cars; I’m usually happy with a decent vehicle that gets me from A to B easily. Like Thomas, I also use ride-sharing services and a lot of public transport.

London Electric Vehicle Company’s new electric taxi is one of the smoothest vehicles I’ve ever driven. It’s bulky, that’s for sure — being a six seater, it has ample room inside — but the ease of driving the vehicle makes you forget how big it is.

Though technically you wouldn’t be driving this thing unless you’re a taxi driver, being a passenger is also a really great experience. London’s taxis are notoriously loud and shaky, but riding in this car you don’t notice any noise at all.

It handles very well and is super quiet, allowing you to relax easily and talk to your fellow passengers. The sunroof is a big plus, especially for people new to London. You get to see all the old buildings and skyline you’d otherwise miss out on.

This is something all taxi rides should be — comfortable, relaxing, and environmentally sustainable.

Thuy Ong


Photo by Amelia Holowaty Krales / The Verge

2018 Honda Odyssey

This was not my favorite thing to ride or drive in by any means, but it sure was the most memorable. After all, no other car has caused a reaction quite like seeing the entire back row of passengers scream out in horror when I turned on the 2018 Honda Odyssey’s new “CabinWatch” feature. It’s essentially a camera that shows the driver a live feed of what’s happening behind them, so drivers can keep watch over backseat shenanigans. In theory this sounds great, and I do think it has a lot of potential for parents of young children. But in practice, it mostly made passengers feel uncomfortable that they were being watched — and the purple hue overlaying the camera feed, which I couldn’t get rid of, didn’t help.

Sitting in a car, especially for a long period of time, is supposed to be about maximizing comfort. And even though I found the Odyssey incredibly spacious and the customizable seat configurations useful, I couldn’t get past the idea that a car with an optional cellular internet service and a camera capable of live streaming might somehow be abused in the future. Mental uneasiness is by far the worst type of discomfort.

I understand I am not the target market for a minivan. But I am also skeptical of the promise of digital security in a year when we’re still taping over our laptop webcams and most of our Social Security numbers have been stolen. Perhaps I shouldn’t be paranoid, but to me, the car has been the only space where I am truly detached from the internet (aside from streaming music). While adding an internet connection, several TV programming channels, and a local livestream of the backseat are technologically forward features, when it’s time for me to choose my next car, I’ll be looking for one that has none of the above but can warm up my driver’s seat in under 60 seconds during a New York winter.

Natt Garun


Photo by Amelia Holowaty Krales / The Verge

Tesla Model S P100D

For one very brief moment in time this year, my car fit right in with all the other high-end cars in Silicon Valley. It wasn’t “my” car, though. Call it an occupational perk (or hazard, if you get distracted by such things), but as part of my job I sometimes get to review cars, under the premise that cars are essentially giant gadgets. Our ScreenDrive series approaches cars like this: it’s less about gear shifting and suspension and torque, and more about the way we interface with the vehicle when we’re in it.

Okay, it’s about torque too.

Especially in the case of the Tesla Model S P100D. I had briefly driven a Model S before the review, and my initial impressions were what you’d probably expect: wow this thing is fast; it’s so quiet; look at this giant fucking tablet; I could get used to this. But after driving a loaner Model S P100D for a week I felt like I knew the car much more intimately. (I also started to notice its small, irritating quirks, like its cup holders.) We went through charge cycles together. We were connected figuratively and literally. Like through an app.

From a ScreenDrive perspective, the giant touchscreen distracted me much less than I thought it would. I didn’t find the media options to be as impressive as, say, the maps. Not surprisingly, it doesn’t support Android Auto or Apple CarPlay. And the built-in voice control is really just that — voice control for the car’s local functions, and not a virtual assistant. But by the end of the week, I had grown fully accustomed to tapping, swiping, and pinching my way around a 17-inch LCD display while I was driving. I appreciated the simplicity of the instrumentation.

But that wasn’t the highlight of reviewing the car. It was a ridiculously fun car to drive, which is what you might expect for a vehicle with a base price of $134,500. Few cars I’ve driven have me fantasizing about ditching my 10-year-old gas guzzler, something I justify because it’s paid off, still runs well, and fits all my sporting gear. The Model S had me suddenly in love with an all-electric sedan.

Lauren Goode


Formula E car

It wasn’t until I was strapped into the cockpit with the track’s fire marshal shouting potentially life-saving instructions at me that I realized I was actually about to drive a bona fide race car. And not just any race car: an all-electric one.

In April, I just happened to be in Mexico City starting a two-week vacation when Formula E, the first global all-electric racing series, was there, too. I knew that they had a few demo cars, and I knew that they occasionally allow dopes like me who cover the series to get a firsthand taste of what those cars are capable of. Sometimes things just work out.

Formula E cars look kind of like an F1 car or an IndyCar, are built to go over 150 miles per hour and make it from 0 to 60 in under three seconds, and they do so using one (giant) 28 kWh battery. The one I drove was slightly limited, but still plenty powerful enough to both terrify and thrill me as I tried to survive the 17-turn track the series had set up at Autodromo Hermanos Rodriguez.

The terror peaked two laps into my time on the track when — what else? — I got cocky. You see, the inside wall that runs the length of the frontstretch doesn’t end until what feels like almost the middle of turn one. It makes the corner completely blind. You have to start moving the wheel before you see the exit, which is doubly terrifying when you’re also slowing down from hitting around 120 miles per hour on the straightaway. And it wasn’t just my lack of experience that made this turn difficult — more than one driver had skidded off into the runoff section of turn one during the race the day before.

I tiptoed through turn one the first few times out, but I foolishly trusted myself a bit more on the start of that third lap. I eased off the brakes, aimed for the correct line, and coasted through the turn just inches away from the end of that wall. And I nailed it.

Too happy with myself, I gassed (zapped?) it on my way out of the turn. The rear tires spun, and before I knew it the back end of the car was skidding into the left side of my peripheral vision, as my eyes kept looking forward while the car did a 180.

The rest of my laps were less fraught, and I even clicked off a 1:18 — a personal victory considering the series’ drivers were turning in laps around 1:03. But while that 15 seconds of difference sounds flattering in a bar conversation, it represents an entire career’s worth of difference.

I’ve done some other ludicrous things in fast cars this year — I drove a race-ready Tesla in France, whipped a Chevy Bolt around a parking lot in Detroit, and even got to pilot the new Ford GT. But the Formula E car has stuck with me most vividly. Nothing else offered such a raw experience. There’s no power steering, you stop and go with slabs of bent metal instead of proper pedals, and you’re exposed in an open cockpit while your knees are practically above your chest as you hurtle around a track mere inches from the asphalt.

The drivers have to manage all this while also being careful not to drain the battery too fast (or too slow). That’s harder than it sounds — try being careful while also fighting for position or, hell, the win. Sure, Formula E isn’t as fast or as popular as Formula One. No three-year-old racing series could be. I’m constantly impressed that this series ever got off the ground, and that it’s now attracting some of the biggest manufacturers in the world. After surviving a few laps in one of its cars, I can see why.

Sean O’Kane


Photo by Amelia Holowaty Krales / The Verge

Rolls-Royce Dawn

This year, I drove a $400,000 car, and nothing bad happened. I took the Rolls-Royce Dawn through Midtown Manhattan, around Astoria, Queens, and onto the BQE into Brooklyn. I survived the journey, and the car made it through unscathed.

This drive was my favorite of 2017, not only because I don’t foresee myself cruising around in a custom, hand-crafted Rolls-Royce again in the near future, but also because it taught me that, more than anything else, the difference between a $40,000 car and a $400,000 car is the amount of attention you get. Yes, you can appreciate the craftsmanship, the roar of the engine, and the smoothness of the drive itself, and those factors vary from vehicle to vehicle, but fundamentally, a car is a car. It should get you where you want to go with little hassle, especially if it’s a new model.

I panicked when I first got into the Dawn. What if I scratched it? What if I crashed it? What if someone stole it when I wasn’t looking? How would I explain the incident to my editor? I’m also a highly neurotic person.

But you know, the drive was completely fine. It actually turned out great. I constantly reminded myself that the Dawn was only a $400,000 thing, and in reality, it was replaceable. Still, construction workers gawked at me; pedestrians took my photo at stoplights; and my neighbors questioned who lived next door when I parked it in my driveway. I flexed for my boyfriend when I picked him up outside his apartment, and he flexed for his friends by snapping that he was in the Dawn on the BQE. I’d been on that same expressway countless times, only this time, I was sitting in traffic in a Rolls. The Dawn brought me a lot of joy and made me feel extra cool. I love attention.

Maybe I’ll drive another Rolls-Royce someday. I hope I do. But this stands as my favorite drive of 2017, and maybe of all time, because I witnessed firsthand how much everyone loves an expensive thing.

Ashley Carman



Mitsubishi Outlander PHEV

Okay, the best thing I drove this year was not a Mitsubishi. But I put this here because it’s important that the 2018 Outlander PHEV be thrown into the conversation. Despite its deeply anonymous styling, I see the point of the Outlander plug-in, and a drive around Catalina Island confirmed it’s the car that people with 1.5 kids, from San Diego to Somerville, would actually appreciate.

Turn it on with a full battery and the Outlander PHEV operates in hybrid mode. Drive slowly and you’ll likely rely on the battery charge until more power is needed and the gasoline engine kicks on. Or you can use the buttons around the gear lever to force the car into EV mode, use a Charge mode to regenerate the most energy back to the battery, or save the charge for when you can best use it, such as in heavy traffic or at low residential speeds.
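As a very loose illustration of the behavior described above (and nothing like Mitsubishi’s actual, proprietary control strategy), the mode logic boils down to something like this toy sketch, with every threshold invented purely for the example.

```python
from enum import Enum


class DriveMode(Enum):
    AUTO = "auto"      # default hybrid behavior
    EV = "ev"          # force electric driving while the charge lasts
    CHARGE = "charge"  # run the engine to put energy back into the battery
    SAVE = "save"      # hold the current charge for later (traffic, low speeds)


def power_source(mode, battery_pct, power_demand_kw):
    """Toy decision logic for which source moves the car.

    The thresholds here are invented for illustration; the real control
    strategy is Mitsubishi's own and far more sophisticated.
    """
    HIGH_DEMAND_KW = 60   # hypothetical point where the engine must help
    MIN_BATTERY_PCT = 20  # hypothetical floor before the engine kicks on

    if mode is DriveMode.CHARGE:
        return "engine (also charging the battery)"
    if mode is DriveMode.SAVE:
        return "engine (preserving battery charge)"
    if mode is DriveMode.EV and battery_pct > MIN_BATTERY_PCT:
        return "battery"
    # AUTO: gentle driving on a charged battery stays electric;
    # hard acceleration or a low battery brings in the engine.
    if battery_pct > MIN_BATTERY_PCT and power_demand_kw < HIGH_DEMAND_KW:
        return "battery"
    return "engine + battery"


print(power_source(DriveMode.AUTO, battery_pct=80, power_demand_kw=20))  # battery
print(power_source(DriveMode.AUTO, battery_pct=80, power_demand_kw=90))  # engine + battery
```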

The Outlander PHEV isn’t going to rival a Tesla for range, but it strikes a better balance between electric range and fuel economy than luxury PHEVs from BMW or Volvo can offer. People may scoff at you for picking a car from a company best known for making big-screen TVs, but you’ll have a new plug-in hybrid for less than $30,000 after tax credits (which were mercifully saved), and you won’t have to wonder if the dog gate will fit into it over a holiday weekend.

And a bonus: the dirt roads on the island revealed that the all-wheel drive system meant we were never unsure about making it up a hill. It may not scare Jeeps and Land Rovers, but buyers who have steep, snow-covered driveways in the winter might be relieved.

Despite some ridiculous chromed accoutrements, the Outlander PHEV is an honest car. It’s not supposed to be luxurious or quick. But in trying to be honest, Mitsubishi stumbled upon a segment of the market that has been completely underserved. There are a host of people, possibly scorned by diesel scandals, who just want a fuel-efficient car to move them and their family around without significant compromise. And finally, this is an answer.

Zac Estrada


This is the best smartphone camera of 2017


Smartphone cameras get better every year, but this year more than ever truly felt like a leap. For once, we can talk about more than just Apple and Samsung (and Google) when referring to the “top tier” of smartphones with good cameras. And what’s fascinating is that, after the race to cram megapixels and ultra-wide apertures into these phones made them all kind of the same for a year or two, now there are loads of differences. Multiple companies have armed their phones with dual cameras; Google released a followup to the excellent, software-driven Pixel; and names like HTC are back in the mix after years of building subpar shooters.

So, of course, we wanted to see how the best stack up. For our comparison we picked the iPhone X, the Google Pixel 2 XL, the Samsung Galaxy Note 8, the HTC U11, and the LG V30. These are the best of the best, the phones that have the most cutting edge cameras you can buy. Keep in mind everything said about the Pixel 2 XL will also apply to the Pixel 2, as they have the same camera. And the Note 8 has the same main wide camera as the Galaxy S8 and S8 Plus, meaning you can translate those findings to those phones, too. The iPhone X, however, has a slightly wider aperture on the telephoto camera than on the 8 Plus, so bear that in mind.

Our methodology was pretty simple. I had all five phones with me and tried to recreate the same framing of each shot with each phone. I let the phones do their work — no tapping to focus or expose, everything on full auto using the stock camera apps. This comparison is more of an attempt to gauge how these phones are reading and adjusting for a scene than a scientific analysis of what their sensors and hardware capture. (There are plenty of great technical tests worth pixel-peeping, like at GSMArena.)

I’m also more interested in the choices these phones (and the engineers who designed the software) make with regard to how they take that information from what is largely similar hardware and turn it into a final JPEG image you can view, print, or share. The information captured by the sensor is whittled and shaped into something more manageable in size, but in that whittling and shaping is a ton of room for what amounts to editorial choices by the designers of the camera systems. By the end of this piece, we’ll hopefully have a better understanding of what those choices are.

Low light

Let’s start with one of the most difficult comparisons: low light situations. Not only is this a very common and challenging lighting situation to shoot in, it’s also where the differences in these smartphone cameras are most noticeable.

This first set of photos reveals something that surprised me (and our creative director James Bareham, who helped me shoot and sort through some of these photos) during this whole process: the HTC U11’s camera is good. We knew our own Vlad Savov had fallen pretty hard for the U11’s camera this year, but we really had to see it to believe it for ourselves.

The HTC U11 is reliably great in low light because it doesn’t do as much noise reduction as the other smartphones. Yes, that means the images can look a little noisier from time to time. But that also means the phone captures and maintains lots of detail. Look at the metalwork at the bottom of this photo and you’ll see that none of the other smartphones produced an image with as much preserved detail as the HTC did. Even better, look at the stone underneath the bright window in the middle of the photo — there’s detail there, too, that didn’t make it into the other photos.

The Pixel 2 XL captured a decent amount of detail for such a low light scene, but I think Google’s HDR+ over-cranks a bit here, which I really don’t like. Google’s HDR+ is like a computationally-enhanced version of the typical HDR process. It takes many photos when you press the shutter button and quickly merges them together, and it’s usually fantastic, as you’re about to see in further comparisons. But this is an example where it’s doing too much.
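Google hasn’t published HDR+ as a library, but the burst-merge idea described above can be sketched crudely: take several short exposures of the same scene, average them to knock down noise, then apply a tone curve to lift the shadows. The snippet below is a toy illustration of that general idea, not Google’s pipeline, which also aligns frames, merges per tile, and tone-maps far more carefully.

```python
import numpy as np


def merge_burst(frames):
    """Very rough sketch of burst merging: average several short exposures
    to cut noise, then apply a simple global tone curve to brighten shadows.

    `frames` is a list of HxWx3 float arrays in [0, 1]. Real HDR+ also
    aligns frames, merges per tile, and uses far better tone mapping.
    """
    stack = np.stack(frames, axis=0)
    merged = np.clip(stack.mean(axis=0), 0.0, 1.0)  # averaging N frames cuts noise roughly by sqrt(N)
    return merged ** (1 / 2.2)                      # simple gamma curve lifts the shadows


# Hypothetical usage: eight noisy, underexposed frames of the same dark scene.
rng = np.random.default_rng(0)
scene = np.full((4, 4, 3), 0.05)                    # a dark "scene"
burst = [scene + rng.normal(0, 0.02, scene.shape) for _ in range(8)]
print(merge_burst(burst).mean())                    # brighter, less noisy result
```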

This comparison sets up a few other trends we saw in testing out these phones. The LG V30 produces very muddy photos in low light situations, and worse, the viewfinder in the app is really dark, no matter your screen’s brightness. It’s like you’re shooting blind sometimes.

And while I think the HTC’s photo is the best of the bunch here, it achieved that by doing something I noticed in other photos, too: overexposing really dark photos. In one sense, that’s fine, since the camera obviously captured an image that isn’t abhorrently noisy. But it did lose detail in the highlights in the process, something the other smartphones didn’t sacrifice. Choices!

This is a slightly less challenging scene, in that there’s at least a lot more light. But it’s still a dark setting, and I expected the light bulbs to skew at least one of the exposures. I expected wrong! Almost all of the smartphones handled themselves nicely, with the LG V30 turning in another muddy, overly-smoothed result.

To spot the differences, you really have to dig deep into the full-resolution file here. I think a closer examination shows the Pixel 2 XL produced the best photo. It captured more details in the shadows than the other phones, save for maybe the HTC U11, and it did this without blowing out the brightest parts of the photo, such as the lights, or the tent in the back.

The iPhone X’s photo is a close runner-up, or third, depending on how you rank it against the one from the HTC U11. There’s good detail in the trees and in the wood to the right side, but it still lost some of the highest highlights. Look at the sidewalk and you’ll notice another trend we discovered while testing these smartphones, which is that the way the iPhone processes its JPEGs often gives things a sort of painterly look.

I don’t think it’s necessarily a bad thing, especially when you’re not pixel-peeping, but it means the iPhone sometimes has trouble with straight lines (check out the power lines connecting the lights in this photo). And in later examples, you’ll see that the iPhone’s photos often come away with less fine detail than the HTC U11 or the Pixel 2 XL.

Here’s what I think is an absolute home run for the Pixel 2 XL, even though it might not look that way stacked next to the slightly brighter and more orange HTC photo. The Pixel 2 XL’s photo is somehow neutral and crisp, even though this photo was taken in a dark bar with, obviously, a candle burning in the background. The HTC U11’s photo is maybe more pleasing at the outset thanks to its overall brighter appearance, but it doesn’t have the same level of detail in the orange peel, or in the wood grain in the bar.

Funny how much we’re not talking about the iPhone X yet, huh? In low light situations, I found that at best it could keep up with the Pixel or the U11, but oftentimes it was clearly second or third. At least with the Pixel phones there’s the excuse that Google has HDR+, which was a clear advantage in challenging lighting last year and has only gotten better with the Pixel 2. It’s harder to explain how the U11 keeps matching or beating the iPhone in low light, though.

In the previous set of photos, I think the iPhone X’s and the HTC U11’s results are close. Bump the brightness a touch on the iPhone’s image and they would more or less be the same. But this comparison above shows a world of difference between the iPhone X and the Pixel 2 XL and HTC U11 in very low light.

Not only does the iPhone put a reddish cast on the image (something it tends to do in low and in good light), but it loses tons of fine detail in the trees at the top of the image. Look at the difference between that part of the image and the ones captured by the Pixel 2 XL and the HTC U11. On the iPhone X’s photo, the smallest branches smudge together. With the Pixel 2 XL, there’s still some separation. With the HTC U11, they’re almost all still visible. I still think the Pixel 2 XL nabbed a more pleasant and accurate image, but the HTC’s straightforward, minimal processing again helped preserve more fine detail. (A stray car headlight seems to have wound up in the U11’s image, though this didn’t impact the branches way up top.)

As for the Note 8, which is the other phone that I haven’t mentioned much, I found it usually fell at or behind the quality of the iPhone X. In this set of photos, for example, look at the leaves behind the yellow turn sign. It absolutely butchered them. The Note 8’s image processing tends to smudge things, which means it’s not as reliable for recreating fine detail.


Here’s an example where all the smartphones handled themselves pretty well. This is a challenging scene not only because there’s not a ton of light, but there’s a lot of competing light as well. One place where you really see the differences is in the concrete columns in the middle of the train tracks. The Pixel 2 XL was able to hold onto the detail of the grime underneath the red-and-white striped line, where the iPhone X and others tended to lose that in the post-processing. James and I talked about this one in the video comparison above, so be sure to check out the full breakdown there.

Daylight

Okay, let’s finally move out of the dark and take a look at some brighter scenes. And here’s an extremely muted one to start:

The Pixel 2 XL captured the finest details, but the iPhone X walked away with the best exposure and color. Snow scenes are hard — cameras tend to underexpose them and turn whites into a neutral grey — and this one fooled most of the phones. The Note 8 skewed very blue, which is something we’ve noticed it tends to do when there are lots of cool tones in a photo. And once again the LG V30 lagged behind the rest, with rough detail reproduction and grimy colors. The V30 comes from a line of LG phones that’s all about creation (especially video, which we’ll get to), but while it has different lenses and some manual controls, it was consistently disappointing.

Now for something a little less gray, here’s the night scene we saw before, on an overcast and snowy day. Funny enough, the HTC U11 struggled more at this location than it did in that first comparison, and the iPhone stepped up with a better exposure and better detail reproduction (check the snow on the trees and the branches below).

The Pixel 2 XL image is a bit dark this time around, and here’s why I think that is. Like in the previous comparison, these are very muted scenes. Without lots of different light in a scene (real dark darks or very bright brights), Google’s HDR+ has less to work with. And so the result is that the Pixel’s “magic” gets kneecapped. It’s still a sharp photo that’s a great starting point if you want to edit things like color or contrast, but I’m happier with the iPhone X’s image here.

In broad daylight, things are also close. The iPhone X and the Pixel 2 XL capture about the same amount of fine detail, with the U11 at their heels, while the Note 8 and the V30 are a bit behind again.

The differences here are more about color reproduction. The HTC U11 spat out the most vibrant image with a beautiful blue sky. The Note 8 and the iPhone X processed more muted blues, with the former skewing a little blue overall and the latter featuring a reddish cast, as both tend to do. The Pixel’s is the most muted, having kept the most detail in that bright white church tower without losing any in the shadows of the buildings.

This is one of those examples where it really comes down to taste. You may prefer the U11 photo because it’s the most visually pleasing at first glance. Or you may prefer the Pixel 2 XL’s photo, which has enough detail and dynamic range information that it could be easily edited to look like the U11’s photo, maybe even without losing as much detail in that bright white church tower. Only the V30 and its washed out, greenish color seem to be in a lower class of performance here.

Another daylight shot, but this time with some challenging light considering the sky in the background is much brighter than the light on this car. The Pixel 2 XL reproduced the best blue sky of the bunch, but the iPhone X and the HTC U11 got the color of the car and the snow on it more accurately. The iPhone’s photo is a little warm, again, but has more detail in the sky and clouds than the U11’s photo, where that part is a bit blown out.

The Note 8’s photo is honestly a bit closer to what I would probably edit this photo to on Instagram, but that doesn’t mean I like it as a starting point. The more choices the camera (or camera app) is making for me, the less leeway I have when making my own edits. All told, though, every phone except for the LG V30 produced a photo that a two- or three-year-old smartphone might have struggled mightily with.

Here’s another comparison we looked at in the video above, so be sure to check it out for the full breakdown. In short, though, the LG V30 gets credit for getting the color pretty accurate for once, though the Pixel 2 XL (surprise!) handled it the best. The iPhone X skewed the photo a bit warm, as it does from time to time, but it and the U11 captured good detail in the wooden beams and of the Statue of Liberty in the background.

And it wouldn’t be a camera comparison without a photo of flowers, right? Here’s the set James and I pored over in the video. The Note 8 and the LG V30 overexposed the image a touch, with the Pixel 2 XL and the HTC U11 offering more muted versions. The iPhone X’s image has far more contrast — though to be fair, it might have caught a touch of sunlight through the clouds.

Portrait mode

Only three of these smartphones have a feature that is commonly referred to as “Portrait Mode” (Samsung calls it “Live Focus”): the iPhone X, the Pixel 2 XL, and the Galaxy Note 8.

With portrait mode, these companies are trying to simulate the shallow depth of field effect that happens when you have a long lens with a big aperture on a more traditional camera. There are two approaches: Apple and Samsung use their dual lenses (and the space between them) to calculate depth information, and then simulate the effect based on that data. And the iPhone X repeatedly beat out the Note 8 in this mode thanks to better detail and color reproduction. The Note 8 tends to overly smooth out the detail on people’s faces, giving everything an unpleasant, waxy look.

Google, however, says it uses the depth information from adjacent pixels on the image sensor to figure out where and how much to blur the image. This makes the Pixel 2 XL’s portrait mode photos (of people, at least) look more like cardboard cutouts than the iPhone X’s. With a portrait taken by a big camera and lens, the blur usually steadily increases from the person’s face to the back of their head. Google just hasn’t found a way to recreate that same effect using the method it chose.

The iPhone, on the other hand, theoretically has better depth information because it’s calculating it from the two cameras, which are farther apart than literal pixels. (Think about how your eyes resolve the three-dimensional information of the world around you, and how that goes away if you cover one eye.) This means it typically reproduces that smooth transitional blur better than the Pixel. It also has the advantage of using the telephoto lens for portraits, which helps separate a subject from the background. But! I’ve found that the Pixel is better at detecting the edges around someone’s face and hair than the iPhone, which tends to have more obvious errors.
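
To put rough numbers on that baseline argument, here’s a quick back-of-the-envelope sketch in Python using the textbook rectified-stereo relation (depth = focal length × baseline ÷ disparity). This isn’t Apple’s or Google’s actual pipeline — the focal length, the two baselines, and the quarter-pixel disparity error below are all assumed values, chosen purely to illustrate why a wider camera-to-camera gap is more forgiving of measurement error than the tiny gap between split pixels.

```python
def depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Textbook rectified-stereo relation: depth = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_error_m(focal_px: float, baseline_m: float,
                  true_depth_m: float, disparity_err_px: float = 0.25) -> float:
    """How far a small disparity error shifts the depth estimate at a given true depth."""
    true_disparity = focal_px * baseline_m / true_depth_m
    return abs(depth_m(focal_px, baseline_m, true_disparity - disparity_err_px) - true_depth_m)

F_PX = 2800.0  # assumed focal length in pixels, for illustration only

# A ~10 mm gap between two rear cameras vs. a sub-millimeter effective gap between split pixels:
for baseline_m in (0.010, 0.0005):
    print(f"baseline {baseline_m * 1000:.1f} mm -> "
          f"~{depth_error_m(F_PX, baseline_m, true_depth_m=2.0):.2f} m error at 2 m")
```

With these made-up numbers, the same quarter-pixel error moves the depth estimate by a few centimeters at the dual-camera baseline but by more than a meter at the dual-pixel one — which is the intuition behind the iPhone’s smoother blur falloff.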

All this said, the Pixel 2 XL still captures the best detail and color out of the three phones, with the iPhone X coming in second and the Note 8 behind both. One thing the Note 8 has that I wish the iPhone X and Pixel 2 XL had is the ability to adjust the amount of blur, both while you’re composing the shot and after you’ve taken it. The Pixel and the iPhone let you toggle the blur on or off after you’ve taken a portrait mode photo, and the iPhone lets you change the different “portrait lighting” effects, but they don’t let you adjust the severity of the effect.

Of course, these modes have been optimized for people, and things sometimes get wonky when you try to use portrait mode to shoot things that aren’t human faces. But other times, the results are pretty great:

Selfies



Now for maybe the most important camera on these phones (seriously!), the front-facing camera.

I’ll start with the top image, a selfie shot in somewhat strong daylight, which once again features examples that speak to the tendencies of these phones. The Pixel 2 XL repeatedly produced the most crisp, most well-balanced photos from the selfie camera, with impressive dynamic range and accurate color reproduction.

The iPhone X’s front-facing camera, like the ones on the rear, can skew warm, but it’s very good in more muted light, and especially accurate in challenging light, like you see in the second comparison. In that series of photos, my face was lit by daylight, while the background was warm interior lighting. The Pixel 2 XL kept the background from looking overexposed, but the iPhone X produced slightly more detail and better color on my face.

The HTC U11’s front-facing camera is admirable in daylight, though it doesn’t handle strong light so well, and in low light you can see it struggles with colors. The Note 8 does about the same, though it also has face-smoothing beauty features built into the app if that’s your thing. Our own Ashley Carman spent some more time with the selfie cameras of these smartphones, so here are a few of her thoughts:

Dual cameras


Taken with the LG V30’s super wide camera.


Taken with the LG V30’s super wide camera.

I’ve ragged pretty hard on the V30 throughout this piece because it consistently turned in the worst result. But for stills, at least, the V30’s secondary super-wide angle camera can be really fun in a pinch.

The V30’s second camera is so wide that it’s not always going to look great, and there is something about the quality of it that feels a little gross sometimes — as if a GoPro had gotten knocked around a little too much and the lens was ever so slightly misaligned.

But that super-wide camera can capture some really wild and sometimes impressive photos that none of these other smartphones can. The question is whether you would want to suffer the tradeoff of having an inferior main camera just to snap the occasional jaw-dropper. The answer is probably no.

On the other hand, the secondary telephoto cameras on the iPhone X and the Note 8 are genuinely useful. I find myself using the iPhone X’s telephoto camera way more than I thought I would, so I really appreciate the added versatility. And just to make sure I wasn’t fooling myself that the iPhone X’s (or the Note 8’s) telephoto camera is that much better than just zooming in on, say, a Pixel 2 XL photo, we recently tested this on the Circuit Breaker Live show. The result? The iPhone X and Note 8 produced nearly identical detail, with slightly different colors, but both were better than the Pixel’s.


Video

Of course, cameras don’t just shoot still photos. All of these smartphones are capable of shooting 4K video, slow-motion video, and crisp, steady 1080p video, too.

I could write a whole other post about comparing the video that comes off these smartphones, but for the sake of everyone’s sanity, I’m just going to refer you to this video we made about it and throw down some bullet points below.

  • The Note 8 is not my favorite smartphone for still photography, but I think it’s the most versatile for video. It has the two lenses on the back, and the ability to shoot quad HD (2K) footage on the front-facing camera, which none of the other phones can do. It does 4K well, it does slow-mo well enough, and it captures decent sound.
  • The Pixel 2 XL’s combination of optical and digital stabilization gives it the smoothest footage of all five phones, but the poor sound recording quality is a real problem.
  • All the smartphones except the Pixel 2 XL and U11 will let you shoot 240 frames per second slow motion footage at 1080p (Google and HTC cap it at 120 frames per second at 1080p). But! The iPhone will let you shoot 240 at 1080 with the telephoto camera, too, which the Note 8 won’t let you do.
  • The LG V30 is a straight-up disappointment when it comes to video. With stills, there are times where it hangs with the competition. But the video quality on this phone is frankly embarrassing, especially when you consider that video was basically its main selling point.

Everything else

Samsung’s camera app is still the fastest overall. It is the fastest to autofocus, and it’s the fastest to launch. You can double tap the power button to launch the camera app on the Note 8, the Pixel 2 XL, and the HTC U11, but I found Samsung’s quick launch feature is still just a hair quicker. And while Apple made it possible to 3D Touch the camera icon on the lock screen to launch the camera with this latest round of phones, I still find myself missing the ability to press a physical button to do so.

The iPhone X, Pixel 2 XL, and Note 8 all have a version of what Apple calls “Live Photos,” which means they capture a few seconds of video footage around the moment that you take a picture. These make for fun GIFs, though it can sometimes be tricky to convert them.

Regardless, I think Apple is getting the most use out of them. With iOS 11, you can not only use Live Photos to reselect the main picture, meaning you’re less likely to catch someone blinking or moving, but you can also simulate long exposure photos. It’s become one of my favorite things to shoot — just look at these examples:

Conclusion

Alright, that was a lot. So what does it all mean?

Based on our tests, the Pixel 2 XL has the best camera of these five smartphones. It repeatedly and consistently captures the most accurate colors and the most detail, and it tackles challenging lighting scenarios that make the other smartphone cameras weep. Google’s computational photography approach pushed it into the lead last year, and I think it’s gained even more ground this year. I couldn’t be more excited to see where it goes over the next few years.

As for the runner-up, I’m torn. The HTC U11 has an amazing rear camera, one that sometimes rivals (or even beats) the Pixel 2 XL. But when you’re talking about the whole package, I think the iPhone X still edges it out. The iPhone X has a second telephoto lens, which offers more versatility. It has portrait mode, live photos, and a better selfie camera. It also blows the U11’s video out of the water. I think you could probably even make the case that the iPhone X is a better buy than the Pixel 2 XL if you want the best of all these worlds. Either way, HTC deserves a ton of credit for finally getting back in the ring with the U11’s main camera, but it still has a lot of work to do with the rest of the experience.

The Note 8 is the most versatile when it comes to video, but its still photos (front and back) are just too much of a step behind the iPhone X and the Pixel 2 XL to push it ahead of either of those phones. Samsung was the first to match and beat Apple a few years ago, but not only has it lost that lead, it’s let Google swoop in and knock both companies off the image quality pedestal.

And then there’s LG. You might have spent this entire article screaming the name of the smartphone you think we should have tested instead of the V30. And you might be right! But LG bills itself as a company that makes smartphones with great cameras, and it’s had success living up to those claims in the past. That the V30 is obviously in a lower class than the other four we tested is as illuminating as it is disappointing. As they say in sports, there’s always next year.


We have abandoned every principle of the free and open internet


“In a few years, men will be able to communicate more effectively through a machine than face to face.”

It was 1968, and J.C.R. Licklider, a director at ARPA, had become convinced that humanity was on the cusp of a computing revolution. In a landmark paper called “The Computer as a Communication Device,” he described “a radically new organization of hardware and software, designed to support many more simultaneous users than the current systems, and to offer them… the fast, smooth interaction required for truly effective man-computer partnership.” For Licklider, this wasn’t just a new technology, but a new way for human beings to exist in the world.

You’re reading this on a website, so you know what happened next: the internet. What initially seemed like a new way to transfer information turned into a revolution that rewrote the basic assumptions of society. Entirely new kinds of economic and social organization evolved on these networks, taking root faster than anyone would have thought possible. For an entire generation — my generation — that process is all we’ve ever known.

Now, that vision is fraying. The social fabric of the internet is built on very specific assumptions, many of which are giving way. Licklider envisioned the internet as a patchwork of decentralized networks, with no sense of how it would work when a handful of companies wrote most of its software and managed most of its traffic. Licklider conceived a level playing field for different networks and protocols, with no sense that the same openness could enable a new kind of monopoly power. Most painfully, this new network was imagined as a forum for the free exchange of ideas, with no sense of how predatory and oppressive that exchange would become.

These failures are connected, and they leave us in a difficult place. It’s easy to say this was a bad year for Google or Facebook (it was), but the news is actually worse than that. Companies are falling into crisis because the basic social compact of the internet has reached its limit — and begun to break.

FREE SPEECH MINIMALISM

In March 1989, a researcher named Tim Berners-Lee laid out a new system for connecting computers at CERN, a proposal that would ultimately lay the groundwork for the World Wide Web. Information was being lost as CERN grew and projects turned over, so Berners-Lee envisioned a computer system that could accommodate that kind of constant change, a network built on hypertext links that were indifferent to the content they were transmitting.

“The hope would be to allow a pool of information to develop which could grow and evolve with the organisation and the projects it describes,” Berners-Lee wrote. “For this to be possible, the method of storage must not place its own restraints on the information.”

Berners-Lee was thinking of technical restraints — a hyperlink works just as well for a webpage as it does for a JavaScript application — but the lack of restraints had political implications, too, building on a more fundamental content-neutrality built into the network itself. ARPA’s network had been built in the wake of the Free Speech Movement and Vietnam, giving it a deep connection to free speech libertarianism that only deepened when Berners-Lee added the hyperlink. On this network, there were few mechanisms to stop objectionable emails from being delivered or retaliate against an unruly network node. The flow of information over this system would be largely uncontrolled, with no distinction between true or false, good or evil.

That ideology grew into a set of business practices, codified by Section 230 of the Communications Decency Act. There were still crimes you could commit with just information (particularly content piracy), but 230 meant you could only blame the source of the information, not the networks that delivered it. At the same time, operators developed authentication and filtering methods to deal with basic problems like spam, but it was always an uphill fight, and fighting speech with speech was always the preferred option.

Persistent, targeted harassment has made that logic harder to defend, and the move to closed platforms like Facebook has scrambled the conversation even further. Abuse is everywhere, and left to their own devices, malicious users can easily make platforms unusable. Even committed speech advocates like Jillian C. York see the end goal as consistent principles and accountable systems on platforms, rather than a lack of moderation itself. And while there are lots of complaints about moderation on Facebook and Twitter, almost no one seems to think the companies should be taking a lighter touch.

The internet is still catching up to that logic. After white nationalists rallied in Charlottesville this August, web providers realized they, too, were in the moderation business, dropping neo-Nazi sites in response to widespread public pressure. But outside easy victories (which are largely Nazi-related), there are still very few moderation principles everyone agrees on, and there’s no higher authority to appeal to when disagreements happen. There’s no law telling platforms how to moderate (such a law would violate the First Amendment), and no mechanisms for consensus or due process to take the law’s place. More practically, nobody’s good at it, and everyone is taking heat for it more or less continuously. With new legislation poised to chip away even more at Section 230, the problem is only getting more complex.

ANONYMITY

In the early days, it seemed like online anonymity had opened the door to a new kind of identity. Not only could you be a different person online, but you could be more than one person at once, exploring your own personhood from multiple angles. In a 2011 TED Talk, 4chan founder Christopher Poole said the key was to think of identity as a diamond, not a mirror.

“You can look at people from any angle and see something totally different,” he told the crowd, “and yet they’re still the same.” It’s a beautiful idea, although the fact that it came from the founder of 4chan should give you some sense of how it worked out in practice.

For a long time, hardly anyone knew who you were online. Handles replaced real names, and though your service provider certainly knew who you were, massive swaths of the internet (Facebook, e-commerce, etc.) hadn’t developed enough to make the information widely available. Prosecutions for online crime were still relatively rare, stymied by inexperience and jurisdictional issues. There was simply nothing tying you to a single, persistent identity.

Now, nearly everything you do online happens under your name. It started with Facebook, the most popular single product on the internet, which has enforced its real-name policy since the beginning. Today, your Google searches, Netflix history, and any cloud-stored photos and text messages are all only a single link removed from your legal identity. As those services cover more of what we do on the web, it’s become much harder to create a space where anonymity can be maintained. As I type this, my browser is carrying auto-login tokens for at least five web services, each registered under my real name. If I were trying to maintain a secret identity online, any one of those tokens could give me away.

That’s not all bad news. Real names have helped close the gap between online and offline space, clearing space for new kinds of personal branding and online commerce that would have been impossible before. At the same time, you can see the old system withering. Anonymity still exists in certain places, but it’s grown fragile and taken on a different meaning. It’s easy to break through in most cases — an FBI director can’t even keep his Twitter account secret — so it only thrives in mobs where no individual member can be singled out. Using web anonymity for any sustained purpose, like criticizing government officials or organizing political dissent, has become a losing bet.

DECENTRALIZED OWNERSHIP

Four days after the rally in Charlottesville, the content distribution network Cloudflare publicly discontinued service to the neo-Nazi site Daily Stormer. The move came after months of escalating pressure from anti-racist activists, and after finally giving in, CEO Matthew Prince wrote a post explaining what made him so reluctant to drop the site. It wasn’t sympathy for neo-Nazis, Prince wrote, but a fear of how powerful networks like Cloudflare were becoming.

“In a not-so-distant future,” he wrote, “it may be that if you’re going to put content on the Internet you’ll need to use a company with a giant network like Cloudflare, Google, Microsoft, Facebook, Amazon, or Alibaba.” The implication was clear: if those six companies don’t like what you’re doing, they can keep you off the internet.

It wasn’t always like this. An online presence has always required lots of partners (a host, a domain registrar, a caching network), but for most of the history of the internet, no single player was powerful enough to pose a threat. Even if they did, most functions could be brought in-house without any significant reduction in service. The shaggy, decentralized network had given rise to a shaggy, decentralized infrastructure, with no single choke point where a business could be shut down.

Now, the internet is full of choke points. Part of the reason is the shift to the mobile web (which tends to be owned by a handful of carriers per country), but another part is the growing centralization of how we reach things on the web in the first place. After a decade of laughing off AOL as a walled garden, we’ve ended up with a handful of services that have a similar level of power over everything we see online. Google is where the world finds information: if you’re a listing service competing with Google, your days are numbered. Facebook is how people share things: if you can’t share it on Facebook, whatever you’re talking about just won’t travel. Uber is a billion-dollar company, but if iOS and Android decided to delist its software, the product would be inaccessible in a matter of hours.

That centralization causes problems beyond outright blocking. Web users were throwing off just as much personal data 20 years ago, but the data was spread between dozens of different companies and there was no clear infrastructure for coordinating them. Now, it’s entirely plausible for Facebook or Google to collect every website you visit, following logged-in users from site to site. Data collection has become a pivotal part of the internet, used either to target ads or to build products, but there are only a handful of players with the scale to meaningfully pull it off. The result is a series of competing walled gardens that look almost nothing like the idealized internet we started out with.

OPEN INFRASTRUCTURE

The first spark of the internet was the open connection. Hosting a website meant anyone with a modem could dial up and stop by — and anyone with a server could set up a website. All the servers ran the same set of protocols, and no provider was favored over any other. In short, everyone connected to the same internet, even if some hosts and connections were better than others.

Those principles have come under immediate threat this month, after the FCC’s official vote to roll back Title II protections. The order is still being challenged in court, but we now face the very real prospect of a tiered internet, as companies aligned with Comcast or Verizon navigate a completely different network than independent competitors. The network can also segment according to types of content, with high-traffic services like Netflix facing throttling and interconnection standoffs that services like Twitter will never have to deal with. There’s no longer one single network, and managing those asymmetric frictions is now just part of running a business online.

In fact, the open network has been closing for far longer than Ajit Pai has been in charge. Today’s technology runs on a string of closed networks — app stores, social networks, and algorithmic feeds. Those networks have become far more powerful than the web, in large part by limiting what you can see and what you can distribute. Services like Fire TV and YouTube are built on the internet, but they’re playing by different rules. As long as Google can block Fire TV’s YouTube access by fiat, we are not dealing with an open network. The basic promise of the internet — the scale, the possibility — is no longer possible without closed corporate networks. To thrive on today’s internet, you need much more than a server and a dream.

PERMISSIONLESS INNOVATION

The internet also made a lot of people very, very rich in ways that were difficult to predict or even comprehend. In a 2012 post, Y Combinator co-founder Paul Graham made it sound as if a startup idea could come from almost anywhere. “Pay particular attention to things that chafe you,” Graham wrote. “Live in the future and build what seems interesting. Strange as it sounds, that’s the real recipe.”

In economic terms, this was about tearing down barriers to entry. If you wanted to sell glasses frames or mattresses, now all you needed was a product and a website. You could cut out the intermediaries that had defined the industry pre-internet. Legacy businesses were slow to catch on to the possibilities of the internet, which created a power vacuum and lots of opportunities for entrepreneurs.

The result was a flood of startups, which have attacked incumbent industries more or less indiscriminately for the past 20 years. Not all of the resulting businesses were successful or good (RIP Pets.com), but it’s hard to name a section of the economy that hasn’t been reshaped by them in some way. Internet-fueled disintermediation resulted in profound and lasting shifts in the global economy, and minted a new generation of tech billionaires. When folks like Marc Andreessen get excited about the internet-like properties of the blockchain, this is what they’re talking about, and it’s independent from issues of free speech, or even net neutrality.

But by now, the disintermediating magic of the internet is mostly used up. There’s still plenty of VC money out there, but the easy disruptions have already happened. Any new entrants with real promise are most likely to be acquired or Sherlocked by one of the major tech companies. In either case, they’re plugged up before they can do too much damage to the incumbent order of things.

Occasionally, a startup will make it through the gauntlet to become an independent public company — Snapchat and Uber being the most recent examples — but it’s much harder than it was even five years ago. For those that make it, the now-centralized internet means you’ll have a new set of intermediaries to deal with, relying on Apple’s App Store, Google’s search rankings, and Amazon’s server farms. The power vacuum is over. If you’re fighting to save the internet for entrepreneurs, there’s simply nothing left to save.


It feels sad writing all of this down. These were important, world-shaping ideas. They gave us a specific vision of how networks could make society better — a vision I still believe did more good than harm. With no argument for an open web, how do you tell a country not to shut down networks in the run-up to an election, or not to block apps used to organize opposition? We’ve shunned the tech world for hiding behind content neutrality, or using the gospel of disruption to entrench their power. How will the same companies act when they believe in nothing at all?

Maybe they never did. The last year has toppled over many of the old assumptions, but they had been weakening for a long time. The sooner we acknowledge that the old ideas have failed, the sooner we can start building new ones. As technologists look for a way forward, those new ideas are sorely needed. The scary thought is that we may be starting from scratch.


How technology helped a blind athlete run free at the New York Marathon


It’s a literal road to nowhere. Stretching out from a roundabout outside the Robin Hood Airport in Doncaster, a small village in Northern England, it’s a wholly unremarkable stretch of slowly cracking pavement, bushes, and weeds, an idle strip of asphalt near long-term parking and a bland business park.

For 35-year-old runner Simon Wheatcroft, however, this stretch of unused roadway may as well be his gym, training center, and proving grounds, his own private version of the 72 stone steps that make up a Rocky montage. Wheatcroft knows every inch of this one-third-mile strip of asphalt — from the contours of the roadway to the feeling of its double yellow lines of paint under his sneakers. Despite the mind-numbing bore of jogging such a short length in endless loops, Wheatcroft had to memorize it. He’s blind.

Imagine getting up from your desk or couch, closing your eyes, and walking to the other end of the room, or perhaps crossing the street in midday traffic. Most people wouldn’t have the audacity to do that without guidance or aid. Meanwhile, Wheatcroft has run the New York and Boston marathons, covered 100 miles in the Sahara Desert, and — perhaps most impressive — sprinted solo alongside the curving roads and streets of his small corner of rural England, sometimes alongside oncoming traffic, all without the benefit of actually seeing where he was going. Instead, he used the twin yellow lines on the side of the road, feeling them through his sneakers, to avoid stepping into the road. (Cars usually make it a point to avoid hitting people, he says, and honestly, they hate cyclists more.)

For the last few months, Wheatcroft has been training along these roads with renewed intensity. Though he’s finished countless races and even ultramarathons, he’s now focused on the New York Marathon, the premier event of its kind. He’s completed the race twice before, but this year carries another challenge. Thanks to the technology of a Brooklyn-based startup called WearWorks, and their prototype wearable navigation device, Wheatcroft aims to be the first blind runner to cover the course unaided and unassisted.

When it comes to technology developed for the visually impaired, “the biggest thing is accessibility and affordability,” says Wheatcroft. “How do we make visually impaired people more mobile? If these technologies exist, eventually they trickle down to people, and everybody uses them.”

The New York Marathon represents an edge case, a stress test, an extreme. Wheatcroft believes that by finding a way to navigate the route amid thousands of runners, he can help test technology that could assist the quarter of a billion people around the world today who are blind or suffer from vision impairment. Many of the visually impaired don’t have a job — 70–80 percent in the US are unemployed — and suffer varying degrees of mobility and navigational challenges.

“It upsets me that so many blind people don’t work, and a lot of that is due to mobility,” he says. “We should be at a point where we should be able to solve these things. I want to make better technology for the community as a whole.”


Photo by Abbie Trayler-Smith for The Verge

Walking through Doncaster with Wheatcroft, on the route where he takes his sons, Grayson and Franklin, to school, it’s difficult to tell he’s visually impaired. Even when he’s out walking with his guide dog, Ascot, Wheatcroft’s mental map of the surrounding roadways is so acute that he often gives precise directions to people too dependent on their smartphones to find their way without one.

“People would see me running and ask what I was doing, and eventually, I’d end up telling them where to go,” he says. “‘To your left, there’s a building, about 0.9 miles down the road, then you can turn right.’”

Wheatcroft often looks people in the eye when he talks, a force of habit from when he could see. He started losing his vision at 17 due to a degenerative eye disease called retinitis pigmentosa, a genetic condition that also blinded his uncle. (Only 1 percent of Americans who are blind are blind from birth.) At this point, Wheatcroft can only vaguely make out changes in light, or what he calls the “fog of dull color.” (He could tell when I stood in front of him and blocked the afternoon sun.)

“When I was young, I thought, ‘Oh, this’ll never really happen,’” he says of going fully blind. “I was always a little bit concerned about the things that I’d miss out on, like I wouldn’t be able to see my kids. That used to plague me. But at the same time, I thought medical science might solve the problem.”


Photo by Abbie Trayler-Smith for The Verge

Wheatcroft grew up near Doncaster and dreamed of being a fighter jet pilot, but his diagnosis ended that dream. During high school, he rarely talked about his situation. After graduating, he went to college in Sheffield, where he received an undergraduate degree in psychology. He milled around a bit, and eventually worked at a friend’s video game store for a few years before finding a job in IT. At 26, his vision rapidly deteriorated. The shift initially devastated him; he says his situation became “depressing as hell.” Without work, he felt like he had lost purpose. Over time, Wheatcroft found ways to acclimate to his condition — he recalls memorizing the route between local pubs — all part of what he says was a constant adjustment.

But during a three-week vacation traveling across the United States in 2009, Wheatcroft was reminded of his limits. He had planned to propose to his girlfriend Sian at the summit of Half Dome in Yosemite, California, a romantic vista accessible via an arduous hike. But Wheatcroft had trouble navigating the ascent, and as they crossed the tree line, Wheatcroft, with a ring in his pocket, became exhausted. The loose ground and steep incline were proving difficult. Light rain started falling, making the route even more treacherous. Sian asked him to stop and rest, and when he sat down at the halfway point, he realized he had to turn back. In the end, Wheatcroft proposed to Sian at the base of the mountain during a snack break. A few weeks later, still crisscrossing the US, they wed in Las Vegas.

Wheatcroft came back to the UK struggling with what had happened in Yosemite. He decided to take a “voluntary redundancy” and quit working. His failure to propose at the summit ate at him for weeks, then months. What if he kept giving up on his aspirations because he was blind?

It was around that time that Wheatcroft picked up a book given to him by an old university teacher: Ultramarathon Man by Dean Karnazes, a famed ultramarathon runner. Wheatcroft, who wasn’t very involved in sports as a teenager, thought that if Karnazes could endure long distances, and find significance and self-confidence in running, maybe he could, too. The idea marinated in his head for a few months. Maybe running could be his way to overcome obstacles, like the one that had forced him down the mountain.


Photo by Abbie Trayler-Smith for The Verge

In 2010, Wheatcroft started practicing in what he thought was a safe space: a soccer field in the back of an elementary school in Doncaster. He had done some weight lifting and CrossFit in high school and into his 20s, but this was different. Wheatcroft barely had the money to afford serious training: he ate candy bars from the corner store as a cheap source of calories, and wrote to Brooks running shoes, explaining his cash-strapped situation, and got a free pair of shoes in the mail. Sprinting between posts in endless loops, he’d feel out the paint on the grass to help himself navigate, but it was far from foolproof. Occasionally he’d run into a dog walker, a post, or someone who just assumed he could see and swerve around them. Eventually he moved to the empty airport road and, after gaining confidence, ventured out onto surrounding streets and roadways.

“Had I not lived here, I don’t think I’d have even been able to start training,” he says. “Right location, right time, more than anything.”

In 2011, inspired by Karnazes and feeling confident after six months of training, Wheatcroft attempted his first ultramarathon: a 100-mile race in the Cotswolds, a rural area of rolling hills in south-central England. At mile 83, he was pulled from the race when he could no longer stand. But he didn’t stop running.


Photo by Abbie Trayler-Smith for The Verge


Photo by Abbie Trayler-Smith for The Verge

Over the next six years, he would go on to finish numerous marathons and ultramarathons: he ran the Boston Marathon in 2016 (which he finished in four hours and 45 minutes), the New York Marathon twice (in 2014, he finished in five hours and 14 minutes), and even ran the 220-mile route from New York to Boston over the course of nine days in 2014.

For most of these races, Wheatcroft ran with a guide, his friend Neil Bacon, who’s been running with him for four years. But increasingly, he’s been turning to technology to wean himself off of human guides. He attempted the Four Deserts Marathon in Namibia last May — a 155-mile-long, multi-day race through scorching, shade-free desert where temperatures climbed to 104 degrees Fahrenheit — using corrective navigation technology he helped develop with IBM engineers. The device used a series of audio cues to keep him on track; beeps would steer him and keep him within a virtual corridor mapped out by the program. They named the device “eAscot” after Wheatcroft’s dog.

Wheatcroft says the device functioned well as a proof of concept for corrective navigation, but it was a rush job and had too many functional constraints. The navigational corridor wasn’t tight enough, and the device assumed that the desert would be free of obstacles. On day two, Wheatcroft ran without Bacon trailing him; he hit an unmapped flagpole 10 miles in.

Competitive running is a notoriously injury-prone pastime, even for those with full sight. Long-distance runners face twisted ankles, runner’s knee, and shin splints. Wheatcroft says the most significant issue he and other blind runners face is drifting off their paths. He’s clipped countless lampposts and traffic lights during training, and tripped over ditches, piles of dirt, and even garbage left on the road. A few years ago, Wheatcroft was running down a roadway near his home when he unknowingly came upon a battered car, abandoned on the shoulder the day before. Wheatcroft hit the damaged vehicle running at full speed, cutting his shins. Disoriented, he tried to right himself and in the process cut his arms. He got up, dazed, covered in what he thought was sweat. When he realized it was blood, he panicked, unable to see himself, identify his injuries, or find landmarks that could help someone locate him. He located his phone amid the wreckage and called his wife, frantically telling her to come find him. Luckily, she was able to locate him by driving up and down his normal route.

“If I’d have smashed my phone,” Wheatcroft says, “I would have been fucked.”


Photo by Abbie Trayler-Smith for The Verge

Wheatcroft’s running career coincided with an advance that made his life as a blind person better: the 2009 release of Apple’s iPhone 3GS, the first smartphone with a built-in screen reader, VoiceOver.

“It was night and day,” he says. “It wasn’t just about training. Now I could read newspapers. I could cue up a song on Spotify. I can do it now, thanks to that phone.”

More important for Wheatcroft is the issue of mobility. Despite a massive market, one that’s forecast to grow as baby boomers age, there has been no truly affordable or readily attainable breakthrough navigation technology for the visually impaired. Meanwhile, the established everyday aids are imperfect: canes require environmental cues to work, and can’t provide directions to the store; guide dogs can master an area or a series of tasks, but can’t immediately learn a new neighborhood, or help navigate through an unfamiliar city.

“The basic skills we need to navigate aren’t the challenge,” says Karl Bélanger, a technology expert at the National Federation of the Blind. Canes and guide dogs work, he says, for general, day-to-day navigation. But it’s important to have supplements to basic mobility, especially in specialized circumstances.

Some new technologies have offered steps forward: Google Glass, in conjunction with a subscription service called Aira, can “see” for the blind. Aira gives the visually impaired immediate access to a remote, sighted assistant who can tell them what’s in their field of vision. (Erich Manser used Aira to run the Boston Marathon earlier this year.) It’s incredible technology, but it’s also expensive — the unlimited plan for Aira costs $329 a month — which may explain why Aira has fewer than a thousand subscribers. Other programs and devices, such as Microsoft’s Seeing AI, tap phone cameras and visual recognition software to help navigate certain scenarios, but they don’t offer wider navigation cues. Not to mention, with a constant need for power and a Wi-Fi connection, they’re limiting.

“That’s why the dog and cane still reign supreme,” says Wheatcroft. “The only input a dog needs is food.”

The first technology Wheatcroft experimented with was a relatively basic app called Runkeeper, which simply told him how far he had gone with regular audio reminders. Those reminders helped jog his memory and maintain focus, as well as create detailed mental maps of his surroundings.


Photo by Abbie Trayler-Smith for The Verge

“It was just a data point, but that data point was like a comfort blanket,” he says. “That voice helped tell me what to do, and that almost becomes your internal voice. If I didn’t have that technology, I wouldn’t have had the extra confidence to go out.”

Now, Wheatcroft trains with Runkeeper and uses a treadmill at home; it’s a Nordic model that’s hooked up to a program called iFit to run preprogrammed routes, practice pacing, and get used to inclines and markers on his upcoming routes.

During races and long runs, Wheatcroft, like many other blind runners, relies on a much more low-tech way of getting around: human guides. Professional blind runners rely on volunteers and practice partners who are literally tethered to them by ropes in order to help them avoid hitting anything or anyone on the course. It’s both a liberating and a limiting factor.

“When you ask people why they run, it’s normally about freedom and independence, to go out and push yourself,” Wheatcroft says. “But you can only push yourself as much as the person you’re connected to.”

New Yorker Charles-Edouard Catherine, also a blind runner, is a member of Achilles International, an organization that helps pair volunteers and athletes with a variety of disabilities, including vision impairment, autism, and amputations. With chapters in more than 60 countries, the group fields a large team at marathons and other running events; at the New York Marathon, the group can field over 300 athletes with nearly 700 accompanying guides. (Many racers have multiple guide runners.) Catherine, who also has retinitis pigmentosa, says his first time running with Achilles in 2012 was life-changing.

“When you become blind, you get in a phase of denial where you do not want to accept the new condition you’re in, the new requirements that it implies. You don’t like to ask for help. I didn’t know what to do,” he says. “It was awkward. But I paired up with people depending on speed and level, and right away, it felt like a new community.”

Catherine started running regularly with Achilles, and he quickly realized the advantages and limits of running with a guide. He felt camaraderie with fellow runners, who would share the experience of a long race with him, and having someone with him to warn other runners and pedestrians to get out of the way felt like having a presidential escort. But the more Catherine trained, the more dependent he felt.

“I always need someone,” he says. “And that’s limiting. In New York in February, if it’s snowing and frozen, and you want to do hill repeats, you’re not going to find lots of volunteers.”


Photo by Andrew White for The Verge

Most of the technology Wheatcroft has used to date relies on audio cues. But audio is a constricting form of communication. Imagine a Siri or Alexa-like interface describing every single object in your field of vision. Consider the cognitive overload that it would create on an already loud street crowded with obstacles.

“When I’m walking down the street to my house, hearing that there’s a bush or a lamppost doesn’t really help me,” Wheatcroft says. “Just help me avoid it.”

That’s why Wheatcroft has become increasingly focused on the sense of touch. Haptic technology, Wheatcroft believes, can steer a visually impaired person without overloading their senses. A haptic device could be called up by a voice command to access existing GPS data for directions, then “steer” someone via gentle taps on their skin. (The system could be combined with additional sensor systems, or even a service animal or cane, to help avoid obstacles, grade changes, and immediate impediments.)

Earlier this year, Wheatcroft went searching for a company working on a haptic solution. That’s how he came across WearWorks.

Co-founded by a trio of graduate students at New York’s Pratt Institute, WearWorks traces its origins, at least in part, to visions of a kung fu suit and an “iTunes for movement.”

Keith Kirkland, a dreadlocked designer and engineer born in Camden, New Jersey, knew his way around clothing. A graduate of the Fashion Institute of Technology, a freelancer for Calvin Klein, and a one-time handbag engineer for Coach (“every bag has to be stress-tested to hold 150 pounds,” he says), Kirkland was inspired to explore haptic design while working on 3D modeling. An ex-girlfriend saw him hunched over a computer from across the room, noticed his poor posture, then walked over and shifted his shoulders.

“What if you could read my body posture and compare it to what’s right, all without being there?” he remembers thinking at the time. “What does it look like to have movement fully digitized?”

He spent months trying to fashion a crude prototype, which was the foundation for his thesis at Pratt. Imagine Neo uploading his martial arts mastery into The Matrix as a file. The end result was a crude punching meter, a sleeve that would measure the strength of a strike. The project fell apart due to the difficulty of connecting wires and motors to the elastic sleeve, but it got Kirkland thinking about haptics and feedback: how can we communicate movement instruction via touch?

Kirkland partnered with two classmates: Yangyang Wang and Kevin Yoo, a sculptor and painter turned industrial designer who had worked with Wang on a 2015 Intel competition called America’s Greatest Makers. The million-dollar contest, focused on wearable technology, was a perfect place to pool their design skills and work on a better haptic interface.

The team’s original idea was to create a general market notification device, but then Yoo remembered the story of Marcus Engel, a famous blind author and consultant, who Yoo once heard speak. (Engel would later become a friend and adviser for the group.) The team began discussing how they could create a device that could help the visually impaired navigate, “offloading” the communication of directions from verbal to tactile.

WearWorks’ early Wayband prototype didn’t win at the Intel competition, but a few weeks later, it did help them become fellows at the Next Top Makers incubator, an event sponsored by the New York Business Development Corporation. The recognition helped the team take the device to SXSW last year, and landed them a spot in the Urban-X incubator in Brooklyn’s Greenpoint neighborhood, where they recently finished a year-long residency. That’s where Wheatcroft came upon the group, and began working with them to develop and refine the technology.

“What they’ve understood is that it’s not about the maps. It’s about how you communicate with a person,” says Wheatcroft. “With verbal systems, you need to lose one of your senses for directions; hearing becomes dedicated to navigation. By using touch, which isn’t often used, you still leave audio free.”

The system developed by WearWorks uses GPS to create a map and route

The core technology behind the Wayband is relatively simple: users pair the Wayband with their phone, and it utilizes GPS to create and map a route. The path is surrounded by virtual “fencing,” and any time a user steps in the wrong direction, or approaches a mapped object or obstacle, the band buzzes in a sort of Morse code. (Four quick taps on the bracelet signal a turn left, for example, while two long taps signal a right turn.) It’s corrective navigation. Testing out an early version of the device at the Urban-X accelerator earlier this summer, I found myself slowly spinning in circles, eventually righting myself after getting the hang of the haptic cues. Kirkland compares it to creating an alphabet and vocabulary from scratch.
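
Here’s a minimal sketch of that corrective-navigation loop, written in Python purely for illustration — it is not WearWorks’ code. The flat-earth projection, the helper names, the exact tap timings, the assumption that the band signals the turn back toward the route, and the 2.5-meter corridor figure (borrowed from Wheatcroft’s description further down) are all my own; the only things taken from the article are the idea of a virtual fence around the route and the four-quick-taps/two-long-taps vocabulary.

```python
import math
from dataclasses import dataclass

EARTH_RADIUS_M = 6_371_000
CORRIDOR_HALF_WIDTH_M = 1.25  # half of the ~2.5 m corridor mentioned later in the piece (assumption)

# Haptic "vocabulary" from the article: four quick taps = turn left, two long taps = turn right.
TURN_LEFT = [("tap", 0.1)] * 4   # (cue, duration in seconds) — timings are invented
TURN_RIGHT = [("tap", 0.4)] * 2

@dataclass
class Point:
    lat: float
    lon: float

def to_local_xy(origin: Point, p: Point) -> tuple[float, float]:
    """Project a lat/lon point to flat x/y meters around an origin (fine for short route segments)."""
    x = math.radians(p.lon - origin.lon) * EARTH_RADIUS_M * math.cos(math.radians(origin.lat))
    y = math.radians(p.lat - origin.lat) * EARTH_RADIUS_M
    return x, y

def signed_cross_track_m(seg_start: Point, seg_end: Point, user: Point) -> float:
    """Signed perpendicular distance from the user to the current route segment.
    Positive = user is left of the direction of travel, negative = right."""
    bx, by = to_local_xy(seg_start, seg_end)
    px, py = to_local_xy(seg_start, user)
    seg_len = math.hypot(bx, by) or 1e-9
    # The 2D cross product gives a signed area; dividing by segment length yields the offset.
    return (bx * py - by * px) / seg_len

def corrective_cue(seg_start: Point, seg_end: Point, user: Point):
    """Return a tap pattern only when the user leaves the virtual corridor."""
    offset = signed_cross_track_m(seg_start, seg_end, user)
    if offset > CORRIDOR_HALF_WIDTH_M:    # drifted left of the route -> steer right
        return TURN_RIGHT
    if offset < -CORRIDOR_HALF_WIDTH_M:   # drifted right of the route -> steer left
        return TURN_LEFT
    return None  # inside the corridor: stay silent, which is the point of corrective navigation
```

A real device would run this check against live GPS fixes, advance to the next route segment as waypoints are passed, and also buzz for mapped obstacles; the sketch only shows the steering decision.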

“Keep it functional and simple,” says Yoo. “We actually went to the National Federation of the Blind, and they told us high-tech canes and proximity sensors are great, but what really would help us is wayfinding.”

Instead of reinventing navigation, or relying on new computer models, the device simply creates a more easy-to-understand, universal system of directions, which connects to a GPS mapping system. The team is quick to note this doesn’t entirely solve the problem of navigation; though the Wayband can steer a blind person to the post office, it can’t help them avoid a pothole or cross a street. For that, Wheatcroft will be pairing the Wayband with an ultrasonic device the team devised to help with micro-scale navigation. Called the Tortoise, the green plastic device, roughly two inches square and strapped to Wheatcroft’s chest, broadcasts and receives ultrasonic vibrations. (The antenna looks like the small camera bump on a smartphone.) The Tortoise’s constant, low-level vibration will speed up when the reflected waves indicate another runner or object is close.
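
As a rough illustration of that distance-to-pulse behavior — and only that; the real Tortoise’s ranges and timings aren’t described in this piece — a mapping might look something like the sketch below, with every constant invented for the example:

```python
def pulse_interval_s(obstacle_distance_m: float,
                     min_range_m: float = 0.5,
                     max_range_m: float = 5.0,
                     fastest_s: float = 0.1,
                     slowest_s: float = 1.0) -> float:
    """Map an ultrasonic distance estimate to the time between vibration pulses.
    Closer obstacle -> shorter interval -> faster buzzing. All constants are illustrative."""
    # Clamp to the sensor's assumed useful range.
    d = max(min_range_m, min(obstacle_distance_m, max_range_m))
    # Linear interpolation between the fastest and slowest pulse rates.
    t = (d - min_range_m) / (max_range_m - min_range_m)
    return fastest_s + t * (slowest_s - fastest_s)

# Example: a runner 4.5 m away produces a lazy ~0.9 s pulse; at 1 m it tightens to ~0.2 s.
print(round(pulse_interval_s(4.5), 2), round(pulse_interval_s(1.0), 2))
```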

Catherine, who became one of a number of blind consultants for the WearWorks team after they reached out to him, loves the concept behind the technology.

“You have this bittersweet feeling. Why haven’t we figured this out five years ago?” he says. “I think this technology has been there for a long time.”

Throughout the last year, WearWorks and Wheatcroft have refined the technology. He tested an early prototype in April, and it was impressive enough that he was almost ready to use it for the actual race. During a visit to New York City in September, Wheatcroft briefly ran around Central Park with the updated device.

Wheatcroft loves the Wayband system because it’s what he calls a “safe sandbox.” Instead of running within a wide digital corridor of 10 to 50 meters (the system he developed with IBM), WearWorks’ Wayband works within a 2.5-meter corridor, which offers more accuracy and safety, especially in a race environment.

For the marathon, he’ll wear a larger armband-sized version of the device in addition to the Tortoise. Neil Bacon, Wheatcroft’s longtime guide runner, will be at the race as a precaution, but won’t be helping Wheatcroft along on this record-breaking attempt.

“My main concern is running into somebody,” Wheatcroft says. “If this is their first marathon, and they’ve been training for years, I don’t want to be the bloody idiot who runs into them and takes them out.”

After the race, WearWorks plans to begin selling early versions of the Wayband, including an armband-sized version for athletes, similar to what Wheatcroft will be wearing, starting at $300.

Catherine says the potential independence this device promises would be like going from a child to an adult, a graduation. It would be a different race. But he knows exactly what he’d like to do first.

“I would really love to guide someone else,” he says. “I would like to be on the other side.”


WearWorks co-founder Kevin Yoo adjusting the equipment prior to the race
Photo by Amelia Holowaty Krales / The Verge

Wheatcroft’s bet on a haptic, rather than audio, navigation system was a smart one: the New York Marathon engulfs runners in noise.

Started in 1970 as a race that took place entirely within Central Park and had roughly 100 spectators, the New York City Marathon has become the largest and most important race of its kind. Last year, a record-setting 51,394 runners, representing every state in the US and 124 countries, completed a course that winds through each of New York City’s five boroughs. More than a million cheering and screaming fans, along with bands, DJs, and announcers, line the 26.2-mile course.

This year’s race took place on Sunday, November 5th. At 7AM, runners started to gather in corrals on Staten Island. They were itchy with nervous energy, ready to shed blankets and jackets, and — after long mornings of commuting on boats, buses, and trains to the edge of Staten Island — eager to just run.

Wheatcroft’s day started at 5AM with coffee, oatmeal, and so many press calls to UK media that he didn’t even have time to talk to his family. By 9:15, he was at the starting line, part of a group of athletes with disabilities that included other blind runners (and guides from Achilles International) as well as those using handcycles.

The 24 hours before the marathon were full of last-minute preparations. Wheatcroft and the WearWorks team ran final trials in Central Park on the eve of the race, and discovered that the ultrasonic sensor wasn’t sensing objects in Wheatcroft’s vicinity. That night, the WearWorks team huddled at a Thai restaurant in Manhattan to hack together a solution, and Yoo fabricated a new module overnight. Yoo, who was going to run with Wheatcroft to observe the Wayband and Tortoise in action, made last-minute adjustments to the devices.

Press swarmed over Wheatcroft with questions and photographers snapped photos. New York Times reporter Jeré Longman was there, and would shadow Wheatcroft for the first few miles. Runners in front of Wheatcroft started asking members of the entourage if they should know who he was.

Minutes before the start, a stoic Wheatcroft, more serious and slightly more rigid than he was back in England, slipped out of his black tracksuit. At 9:57, as a slight drizzle fell on the crowd, the start gun was fired and the pack of hundreds began to move. Wheatcroft hung at the rear, and was one of the last of his group to begin crossing the Verrazano-Narrows Bridge.

Wheatcroft was running as the forward point in an invisible triangle. Though he was navigating independently, his guide runners, who previously guided him at the Boston Marathon last year, ran 10 feet behind. As Wheatcroft cleared the bridge with a smooth, steady gait, Bacon and Croak hung back, giving him a wide lead. A water station appeared in Wheatcroft’s path and both guides bit their tongues to avoid tipping him off. This was the Tortoise’s first test. Wheatcroft felt the device vibrate faster, so he slowed down and weaved around the obstacle.

“Then it became a totally different race,” says Bacon. “I’d never seen him dodge things like that on his own. The hard thing was standing back and letting him go.”

From there, Wheatcroft continued through Brooklyn and Queens, picking up the pace, enjoying the freedom provided by the twin devices. Bacon and Croak, accustomed to chatting with Wheatcroft, hung back. They watched him avoid large groups of runners, the Tortoise functioning like it was meant to.

“At the beginning, it was like, ‘Oh my god, we’re doing it,’” says Wheatcroft. “It was exactly how I imagined we’d avoid people in the crowd. I was running faster because I was enjoying it working.”

But the team didn’t count on rain. Around mile 15, the Tortoise, whose performance had been steadily deteriorating as the rainfall picked up, stopped working altogether. At the same time, the Wayband was having difficulty picking up signals. The sheer volume of data and cellular traffic along the route didn’t help, says Yoo.

Photographs by Amelia Holowaty Krales / The Verge

“We had every single problem possible,” Yoo would later say, during a post-race stretch near the finish. “There were lots of high-rises causing signal issues, issues with navigation while crossing bridges. We did the hardest thing we could do: testing the Wayband during the marathon.”

As the navigation aids faltered, Wheatcroft found himself working more, forced to concentrate harder to move ahead. Combined with his early surge, he began to feel drained. By the time they crossed the East River and headed through Manhattan, Bacon, Croak, and Yoo assumed typical guide duties. As the group passed through Manhattan’s Upper East Side, Wheatcroft and his guides ran side by side.

Wheatcroft crossed the finish line at 3:15PM, five hours and 17 minutes after the start, with Yoo and Bacon flanking him. Over the last leg of the marathon, he showed the same steady gait he had at the start, but it was clear he was spent. Huddled under two blankets and clutching a cup of sugary, milky tea in the finish area, he said the sheer amount of mental energy required to navigate with the system added to the physical exhaustion of the race. He had expended too much energy at the beginning and hadn’t anticipated how much it would take to navigate.

After Wheatcroft crossed the finish line, he put his arm around Bacon and flashed a grin. He appeared excited and relieved to have met the physical challenges of the race. But the unproven technology, which showed promise under the harshest of conditions, ultimately didn’t last the entire marathon, and Wheatcroft was unable to finish unaided.

I asked Bacon what he thought of the entire thing: he felt it was a great success. Wheatcroft, exhausted, couldn’t offer a verdict. “Right now, I really don’t know. I’m too tired to think,” he said.


Photo by Abbie Trayler-Smith for The Verge

In the hours after the race, Yoo cataloged improvements for next time: the software algorithm needs to sort out data discrepancies better, the hardware needs to stand up to more duress, and they need a better GPS system. WearWorks clearly doesn’t have the budget to launch a fleet of satellites, but Yoo believes a mass-market GPS chip coming to the smartphone market next year will allow accuracy to within roughly a foot, and significantly improve the performance of their system.
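
WearWorks hasn’t published how the Wayband reconciles conflicting position data, but the class of fix Yoo describes is familiar from any GPS-driven product: throw out fixes that imply impossible movement, then smooth what remains before it drives a haptic cue. Here is a minimal sketch of that idea, with thresholds chosen purely for illustration:

```python
import math
from dataclasses import dataclass

@dataclass
class Fix:
    lat: float         # degrees
    lon: float         # degrees
    accuracy_m: float  # receiver-reported horizontal accuracy, in meters

def haversine_m(a: Fix, b: Fix) -> float:
    """Great-circle distance between two fixes, in meters."""
    r = 6371000.0
    p1, p2 = math.radians(a.lat), math.radians(b.lat)
    dp, dl = math.radians(b.lat - a.lat), math.radians(b.lon - a.lon)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(h))

def smooth_fixes(fixes, max_jump_m=30.0, alpha=0.3):
    """Drop implausible jumps, then exponentially smooth the accepted fixes.

    max_jump_m: a runner can't teleport between one-second updates, so any fix
                further than this from the last accepted one is discarded.
    alpha: smoothing factor; lower values trust history more than the new fix.
    """
    accepted = []
    for fix in fixes:
        if not accepted:
            accepted.append(fix)
            continue
        last = accepted[-1]
        if fix.accuracy_m > 50 or haversine_m(last, fix) > max_jump_m:
            continue  # reject low-confidence fixes and sudden position jumps
        accepted.append(Fix(
            lat=last.lat + alpha * (fix.lat - last.lat),
            lon=last.lon + alpha * (fix.lon - last.lon),
            accuracy_m=fix.accuracy_m,
        ))
    return accepted
```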

Despite being exhausted, Wheatcroft lit up a little when asked about the future of the Wayband after the marathon.

“We took something we always knew was going to be an intense test,” he says. “We tested so many worst-case scenarios. Let’s take the lessons learned, and see how we can improve it.”

Wheatcroft is already looking toward the future, and even more strenuous challenges. Already an advocate and occasional speaker, he’d like to pursue triathlons next. He has also consulted with tech companies about inclusivity and designing for the visually impaired, and he’s continuing his studies, including computer coding. (He’s currently working at home with a braille reader and pursuing a master’s in computer science.) Wheatcroft wants to be more than a runner; eventually, he doesn’t just want to test the technology, he wants to help develop and build it.

“As a blind person, you always strive for independence,” says Wheatcroft. “But it’s a bit of a contradiction, because oftentimes, you’re using somebody with sight to become independent. What we’re trying to do is use this technology to really achieve true independence. This race isn’t about time, it’s proving that something is possible.”


Google Pixel 2 review: plainly great


Without fail, every person who has picked up the Pixel 2 XL has said virtually the same thing: “It feels like it’s made out of plastic.” I said it myself when I first held it. Of course, neither the Pixel 2 nor the Pixel 2 XL is made out of plastic. They’re made out of Gorilla Glass and aluminum, just like every other high-end phone these days.

But Google coated all that aluminum with a textured finish that hides most of the antenna lines and also makes the phones easier to grip. Google took what could have been a visually impressive design and covered it up in the name of ergonomics. It literally made a metal phone feel like a plastic one. It chose function over form.

At nearly every turn, with both the hardware and the software, Google made that design decision again and again. There have been a few times when I wish the company had risked a little more razzamatazz, but mostly I’ve been appreciating the focus on improving the basics.

“It’s not just what it looks like and feels like,” Steve Jobs once said, “design is how it works.”

The Pixel 2 works really well.

Update, 10/31/17: After the original review was published on October 17th, we saw reports and directly experienced “image retention” on the Pixel 2 XL screen. Since then, Google has responded. It says that burn-in shouldn’t be an issue, but software updates are coming. It also extended the warranty to a full two years. We have updated the review’s screen section and score in light of this new information.



Hardware

The Pixel 2 comes in two sizes: a very humdrum 5-inch phone with a squared-off screen and big, chunky bezels, and a slightly more impressive 6-inch version with curved corners and smaller bezels. You’ll need to spend $649 for the smaller one or $849 for the larger one, with a $100 premium for expanded storage.

As it did last year, Google has done its level best to make these two phones identical except for their size. You’ll get the same power, performance, and (most importantly) camera with either device. The only differences are supposed to be the screen and the battery. You could endlessly debate whether these are the “same phone” in two different sizes. If you replace the keel on a ship, does it make it a different boat? If you replace the screen, body, and battery on a phone, does it make it a different phone? To me, there’s more that’s similar than different, so let’s not go full Ship of Theseus on them. (Note: when I refer to “Pixel 2” below, I’m referring to both. I’ll call out the “smaller” Pixel 2 or the 2 XL specifically where applicable.)

It is true that the Pixel 2 and Pixel 2 XL are more divergent than they were last year — perhaps because they’re manufactured by different companies. I prefer the XL because I prefer big screens, but obviously the smaller Pixel 2 is nicer and easier to hold. The XL is just a little too big, bigger even than last year’s Pixel XL, but it does have a larger screen than last year’s, too.

The smaller one comes in three colors, the bigger one in two. Each color has a slightly different texture and finish, and a few have jaunty little accent colors on their power buttons. All have what has become the signature design for Pixel devices: a glass “shade” on the back of the phone for improved wireless signals. The shade is smaller this year, stopping above the fingerprint sensor, but we’ll have to see if it’s any less prone to scratching compared to the first Pixels.

Google is sticking with the fingerprint sensor on the back, in an easy-to-reach spot. It’s fast and accurate. I wish the power button, which sits above the volume buttons, was as easy to reach on the XL. I should also note that both models do have a small camera bump this year, but it’s not quite as pronounced as what you’ll see on the latest iPhones.

The screen, especially on the 2 XL, has been polarizing. Google opted to tune the display to sRGB (the Galaxy S8, by comparison, offers four gamut options), so it looks a little more like the iPhone’s screen. But more than that, on the 2 XL the colors look muted in a way that many Android users I’ve shown it to found distasteful (even with the “vivid colors” setting turned on). I think many Android phones, especially from Samsung, are so vivid as to be phantasmagoric, so Google’s choice was to make this more “naturalistic.”

Part of the issue, Google says, is that Oreo is the first version of Android to have proper color space control. So until now, Android developers really didn’t have a way to control precisely how their colors would look on screens. The Pixel 2 is part of an effort to fix that, but even so, the more “naturalistic” color tuning on the Pixel 2 XL (and, to a lesser extent, the smaller Pixel 2) just looks a little off. The problem gets much worse when you look at the screen off-axis: the color shifts, simply because that’s what pOLED does.

We spent a lot of time staring at different photos on the Pixel 2, the 2 XL, the iPhone 8 Plus, the Note 8, and the original Pixel XL. When you look at all the phones side by side, it’s undeniable that the Samsung phone is wildly oversaturated, the iPhone 8 looks the most natural, and the Pixel 2 XL is the most muted. Reds on the 2 XL tend to be more brownish, and skin tones look a little greener than they ought. The smaller Pixel 2 seems to have better color balance; it’s closest to the iPhone 8 of the bunch.

The charitable way to put it is that Google opted for something practical when it could have gone bolder. The less charitable way to put it is that the Pixel 2 XL has a bad screen with bad color tuning. For me, at least, I found that it doesn’t bother me unless I am actively comparing screens to another phone. When I just use the 2 XL day to day, it’s fine. In fact, I appreciate that it’s not as oversaturated as your average OLED Android phone.

There are some who fear that the Pixel 2 XL will suffer from the screen issues that plague a closely related phone, the LG V30. That’s not the case for me; my screen doesn’t have any blotches or dead pixels.

Update 10/31/17:

After we posted our original review, reviewers noticed an “Image Retention” issue on the Pixel 2 XL. You could see ghostly versions of the buttons on top of the actual image in certain cases, e.g. looking at a gray image in full screen. We’ve seen this behavior on multiple review units ourselves, including mere minutes after unboxing a new phone.

Image retention isn’t surprising on OLED screens, but this seems much worse than usual. It’s also possibly a sign of burn-in, where the screen is permanently damaged by images sitting on it for too long. Google has posted an update contending that neither of these things is true. It argues that image retention and burn-in on this screen are both in line with industry norms.

Nevertheless, Google has extended the warranty on the Pixel 2 XL to two years. It has also promised a software update that should do a couple of things: reduce the potential for permanent burn-in over time and give users the option for more vibrant colors.

Those promises are good, but they don’t address other core issues on the screen: graininess and image retention that very clearly is worse than normal. Is it liveable, given that you won’t notice image retention except in particular situations? Probably, but it’s also something you should be aware of going in.

It’s also not strictly true to say that the Pixel 2 XL is a “bezel-less” phone. Although the bezels are much smaller than those on many other phones, the Samsung Galaxy S8, Note 8, LG V30, Essential Phone, and (I think) iPhone X all have smaller bezels. The glass is curved on the edges of the XL, though the screen itself is not, and I vacillate between thinking it looks elegant and thinking it looks kind of plain. I don’t have that problem with the smaller Pixel 2; I always think it looks plain.

On both phones, there are front-facing stereo speakers, as opposed to the mix of front-facing and bottom-firing ones on the iPhone. It’s one of those design decisions Google made preferring function over beauty, because even though the speakers make both phones taller and less elegant, they sound great. They get plenty loud, but even at max volume there’s no distortion. There’s even a hint of bass. Not as much as you’ll get from a proper Bluetooth speaker, but more than you’d expect from a phone.

And as long as we’re talking about audio, let’s talk about the lack of a headphone jack. For a phone that so clearly puts an emphasis on practicality, it’s a stupid and annoying change. There is a USB-C dongle in the box, but no headphones are included. I’m well aware that the desire for a traditional headphone jack is viewed by many as backwards-looking — if not quixotic — but not having one is still a near-daily hassle for many.

If there’s any bright spot about Google taking away the standard headphone port, it’s that the Pixel 2 also has greatly improved Bluetooth performance. On the original Pixel XL, I was getting no end of stutters and drops with a few different types of headphones, but nary a one on the Pixel 2. I don’t know whether to credit better antennas, better silicon, or better software — maybe all of the above — but I’m glad it’s fixed.

Both phones are rated IP67 against dust and water, and we dunked ’em both a few times without any problems. Because of the aluminum unibody design, they don’t support wireless charging. I find that disappointing, as both Samsung and Apple do support it. At least Google’s quick charging works well with the included AC adapter, offering several hours of charge with a quick top-up.

The Pixel 2 phones have the same processor as every other flagship Android phone this year: the Qualcomm Snapdragon 835. They have 4GB of RAM, which is plenty. Beyond preferring it aesthetically, I’ve found that Google’s version of Android just tends to run better overall than Samsung’s or LG’s. I hope that continues to be the case over time with the Pixel 2 and Oreo. So far, it’s been snappy.

Battery life also seems good. It takes some serious work to drain the battery on the Pixel 2 XL in a single day; usually it lasts until bedtime just fine. For those who pay attention to such things, I’ll say that I’ve been getting around six hours of screen time with the brightness at around 75 percent. On the smaller Pixel 2, my results haven’t been that impressive, but still quite good. My colleague Vlad Savov has spent more time with the smaller one, and he tells me he’s getting through a full day without issue.



Camera

Last year’s Pixel had the best camera you could get on a smartphone, and not just in DxO benchmarks, but in real-world testing. Of course, since then the Galaxy Note 8, HTC U11, and the iPhone 8 all came along. And I haven’t done enough testing to say whether or not the Pixel 2 can beat the pack again. But after about a week of using the camera, I will say this: it has a real shot at being the best again.

I’ve already described the multitude of technologies that are crammed into the Pixel 2’s camera stack and image processing workflow, so I’ll just stick with the short version here. Google is using its greatest strength, machine learning, to make the camera much better.

There is a 12-megapixel dual-pixel autofocus sensor on the back and an 8-megapixel sensor on the front. On the rear, Google is using a slightly brighter lens than before with the added upgrade of optical image stabilization. But the technical details are less important than how Google approaches photography: it is treating photography like a data problem instead of just a light problem.

For regular shots in full auto, the Pixel 2 is excellent. It handles challenging lighting situations without blinking: low light, backlit subjects, and my own shaky hands are not a problem for this camera. The selfie camera is 8 megapixels, and it’s probably the best front-facing camera I’ve ever used. It has a “face retouching” feature, which, like most I’ve tried, is a little over-aggressive in smoothing your pores away.

Take low light, for example: Google tells me that even though it could keep the shutter open longer to bring in more light, it’s not bothering. It doesn’t need to because every photo you take on the Pixel 2 is an algorithmically combined set of up to 10 images.
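
Google’s actual pipeline is far more sophisticated than this (tile-based alignment, robust merging, careful tone mapping), but the core intuition is easy to sketch: averaging several aligned short exposures cuts random noise roughly with the square root of the frame count, standing in for a long shutter without the motion blur. The sketch below assumes the per-frame alignment offsets come from some upstream step:

```python
import numpy as np

def merge_burst(frames, shifts):
    """Shift each short exposure onto the reference frame, then average.

    frames: list of HxW float arrays (single channel for simplicity).
    shifts: per-frame (dy, dx) integer offsets relative to frames[0].
    """
    accum = frames[0].astype(np.float64)
    for frame, (dy, dx) in zip(frames[1:], shifts[1:]):
        # np.roll wraps at the borders; a real pipeline would crop or mask edges.
        accum += np.roll(frame.astype(np.float64), shift=(-dy, -dx), axis=(0, 1))
    return accum / len(frames)

# Toy check: ten noisy, perfectly aligned copies of the same scene.
rng = np.random.default_rng(0)
scene = rng.random((120, 160))
burst = [np.clip(scene + rng.normal(0.0, 0.1, scene.shape), 0.0, 1.0) for _ in range(10)]
merged = merge_burst(burst, shifts=[(0, 0)] * 10)  # noise drops by roughly sqrt(10)
```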

I find the Pixel 2’s photos to be way sharper than those from the iPhone 8 and the Note 8 — almost to a fault in a couple of cases. HDR shots are equally impressive. I prefer the Pixel 2’s images overall, even though occasionally it goes a little overboard. Colors are a little bit more subjective: Google seems to lean toward the iPhone’s more naturalistic look rather than Samsung’s vivid colors. But it’s worth pointing out that the primary screen you look at your photos on is likely going to be your phone’s, so the Pixel 2’s photos are going to look a little less vibrant, especially on the Pixel 2 XL.

Where things get more interesting — and a little more mixed in the results — is in portrait mode. Google attempts to do the same thing with a single lens that other cameras do with two: detect depth data and blur the background. Most phones do this by combining computer recognition with a little bit of depth data — and Google is no different in that regard.

What is different is that Google is much better at computer recognition, and it’s gathering depth data from the dual-pixel system, where the half-pixels are literally less than a micron apart on a single image sensor. Google’s proficiency at machine learning means portrait images from the Pixel 2 do a better job of cropping around hair than either the iPhone 8 or the Note 8. The dual-pixel depth sensing makes it possible to get portrait mode working on non-human subjects, but there it’s a bit more of a wash. Sometimes the Pixel 2 can’t quite tell what to blur.
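
Google hasn’t published the specifics of its portrait pipeline, but the general recipe described above (a segmentation mask that protects the subject, plus coarse depth that decides how hard to blur everything else) can be sketched in a simplified form. Every function name and threshold here is illustrative, not Google’s:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_bokeh(image, subject_mask, depth, focus_depth, blur_sigmas=(2.0, 4.0, 8.0)):
    """Blur the background in proportion to its distance from the focal plane.

    image:        HxWx3 float array in [0, 1]
    subject_mask: HxW float array, 1.0 where the segmenter says "subject"
    depth:        HxW float array, normalized 0 (near) to 1 (far)
    focus_depth:  depth value to keep sharp, e.g. the subject's median depth
    """
    out = image.copy()
    # Precompute a few blur strengths instead of blurring per pixel.
    levels = [gaussian_filter(image, sigma=(s, s, 0)) for s in blur_sigmas]
    defocus = np.clip(np.abs(depth - focus_depth) * 3.0, 0.0, 1.0)  # 0 = in focus
    defocus *= 1.0 - subject_mask  # never blur pixels the mask claims as subject
    idx = np.minimum((defocus * len(levels)).astype(int), len(levels) - 1)
    for i, blurred in enumerate(levels):
        sel = (idx == i) & (defocus > 0.05)  # leave nearly in-focus pixels alone
        out[sel] = blurred[sel]
    return out
```

The hard parts, and the places where the Pixel 2 shines or stumbles, sit upstream of a function like this: getting a clean mask around hair and estimating believable depth for non-human subjects.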

Not only can both sizes of the Pixel 2 do portrait mode, both cameras on both of those phones can do it, too. On the selfie camera, however, there aren’t those dual pixels to rely on, so it needs to see your face. Portrait mode is fun, and sometimes you can get really amazing results, but on all three of these phones, I still feel like this mode is ham-handed. You can almost always see the crop if you look for it.

Where the Pixel 2 can’t quite keep up is with the extra fancy effects that both Samsung and Apple layer on top of portraits. With the Pixel 2, the only option is portrait mode on or off. You have to set it before you take the photo, and you can’t adjust it after the shot.

Google is also throwing a few other tricks at the Pixel 2 camera. My favorite is Motion Photos, which is Google’s take on embedding little movable images inside your photo. Like Apple’s Live Photos, they’re cute and fun and not well-supported on social networks. Unlike Apple’s Live Photos, you can’t really do much with Google’s Motion Photos. If there’s a way to set a different part of the moving image as your key frame or a simple way to export to GIF (without resorting to a third-party app), I couldn’t find it.

I hope that Google iterates quickly on both portrait mode and Motion Photos. The basics are here and they’re fun, but without the ability to do more with them, they feel super limited. I was not able to test the other big camera feature, augmented reality stickers, as they’re getting released later.

One last thing about the camera: when you tap the little thumbnail for your last shot, it jumps directly into Google Photos instead of that weird limbo-zone camera roll you used to have to deal with.



AI & Software

If you still think that the version of Android on Google’s phones is “pure Android” and everything else is Google’s Android with extra crap layered on, I’m here to tell you that’s not really accurate anymore. More and more, “pure Android” is just a shell of functions and a few key apps and everybody builds on that. So Google’s version of Android is distinctly Google.

The first and most important piece of that puzzle is the Google Assistant. It is, of course, available on other phones, but on the Pixel 2 it feels subtly more central to the experience. For me, that became clear when I said “OK Google” and the Pixel 2 actually heard me from across the room and woke up.

Android phones have been promising that voice experience for years now, but the truth is that it never really worked all that well. The Pixel 2, with its loud speakers and clearly improved microphones, practically feels like a Google Home smart speaker to me.

Google isn’t really touting that feature much. Instead it’s talking about Active Edge, the feature that lets you squeeze the phone to launch the Assistant. There’s a short setup workflow, which I strongly recommend you don’t skip because it explains that to use Active Edge, you should give the thing a quick squeeze, not a hard grip.

You can alter how hard you have to squeeze it to make it go; I found over time that harder is better, otherwise you might launch the Assistant just by picking your phone up. You can also set the squeeze to silence an incoming call, but those are really the only options that matter. You can’t set the squeeze to do anything else, just like the Bixby button on Samsung’s phones. Annoying.

Anyway, Active Edge works, and it’s convenient, but I have so many years of “long press the home button for Siri or Assistant” muscle memory built up that I didn’t really use it at first. But over time I started to appreciate it more, if only because it was slightly faster than holding the home button down. I’m still not a talk-to-my-phone person, but I am doing it a little bit more than before.

And “doing a little bit more than before” is kind of a theme with the Google Assistant. Google will happily tell you it can provide “100 million” more answers than it could a year ago, but that feels like an awfully inflated number to me. Even so, the Assistant doesn’t quite get enough credit for the improvements it’s made in the past year, because they’ve been so individually iterative and small.

There is one whiz-bang feature getting added to the Pixel 2: Google Lens. It’s part of Google’s plan to make your camera a new kind of input, alongside typing and talking. That’s a nice vision, but the Pixel 2 is very far from realizing it at launch.

For now, Lens is just a button inside the Google Photos app. You tap it on a photo you’ve already taken and it will attempt to identify the object. Google is only supporting kind of obvious stuff at first: book covers, album covers, popular artwork, landmarks, etc. It can identify other things, but how often do you need your phone to tell you that a coffee mug is a coffee mug? It did successfully identify about half the cars I pointed it at, which is neat, and as a “find more cute dogs like the one I just saw at the park” tool, it’s unparalleled.

But the real point of Lens is for it to be built into the Assistant, working in real time in a more conversational way, and then after that, in Google Keep and the camera app and who knows what else. Everywhere you type, Google wants to be able to use images. Until it can get further toward that goal, Lens is a sideshow in the Photos app. A good show, to be sure, but not a really important one.

I’m more impressed with the feature that ambiently listens for whatever music is playing around you and displays it on the always-on lock screen. It sounds like a creepy feature, but the Pixel 2 is already always listening for you to say “OK Google,” so this is just one more thing for it to listen to. All of that music recognition happens locally on the phone, and the database of songs is stored locally, too, which should help alleviate some of the obvious privacy concerns that come along with these kinds of features.
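
Google hasn’t detailed how that on-device matcher works, but the broad shape of any fingerprint-and-lookup system is to reduce audio to compact hashes and intersect them with a stored table, which is exactly the kind of work that can stay on the phone. A toy sketch of that shape, with made-up parameters:

```python
import numpy as np

def fingerprint(samples, window=2048, peaks_per_window=3):
    """Reduce raw audio to a set of coarse spectral-peak hashes.

    A toy stand-in for a real audio fingerprint: keep the strongest frequency
    bins of each short window, quantized so noisy near-misses still collide.
    """
    hashes = set()
    for start in range(0, len(samples) - window, window):
        spectrum = np.abs(np.fft.rfft(samples[start:start + window]))
        for freq_bin in np.argsort(spectrum)[-peaks_per_window:]:
            hashes.add(int(freq_bin) // 4)
    return hashes

def best_match(clip_hashes, local_db, min_overlap=10):
    """local_db maps song title -> precomputed hash set, stored on the device.

    Matching is just set intersection against the local table, which is why
    nothing needs to leave the phone.
    """
    if not local_db:
        return None
    scores = {title: len(clip_hashes & hashes) for title, hashes in local_db.items()}
    title = max(scores, key=scores.get)
    return title if scores[title] >= min_overlap else None
```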

Speaking of that always-on lock screen, it’s nice but more limited than what you can get on a Samsung phone, and there’s no way to set a theme on it.

Every year, Google moves some stuff around on the home screen, and this year is no different. The Google search button has been integrated into the dock, and it’s also been combined with on-device search. In theory it’s great, and it’s definitely easier to tap than when it was at the top, but I often ended up tapping something I didn’t want to when I just wanted to do a quick web search. Also, when you combine it with the new Weather / Calendar widget at the top of the screen, you end up with less space for your icons than you had before.

There are some nice little moving wallpapers, but there aren’t enough of them and there’s no way to make your own. One nice bit: when you choose a dark wallpaper the notification shade and app drawer automatically switch over to a dark theme.



A lot of phones are designed to razzle dazzle you with their first impressions. The screens on Samsung phones wrap around the edges. The iPhone X and Essential phones have screens that go so far to the edge that they have notches cut out of their screens.

The Pixel 2 and Pixel 2 XL do not razzle dazzle. It’s not just the somewhat disappointing screen on the Pixel 2 XL, it’s that Google has gone out of its way to do things that are functional instead of flashy. Instead of going bezel-less, it added front-facing speakers. Instead of a million camera effects, it focused on one or two, while making the core camera experience much better with machine learning. The list goes on.

The Pixel 2 has many, many things going for it. Were it not for a few problems — the screen, the slightly inelegant design, and (yes) the lack of a headphone jack — it might have received the highest score we’ve ever given a phone. As it is, it’s a great phone, but not quite a home run.

Still, there are just a lot of little things that are better on the Pixel 2. You find yourself using the Assistant more because it’s giving you better answers over time. You are able to triage your notifications a little faster. The camera makes it much easier to get great shots, even in low light.

The Pixel 2 isn’t the nice dining room table with the fancy silverware. It’s the kitchen counter where you actually eat. It’s not as impressive, but it’s much more comfortable. That’s what makes the design of this year’s Google Phones great. They’re meant to be of use, and they are.

Update, 10/31/17: As noted above, after the original review we noticed the image retention issue with these screens. We’ve updated our score below to reflect that concern with the display.

8

Verge Score: Pixel 2 XL


Good Stuff

  • Incredible camera
  • Great speakers
  • Best Android experience

Bad Stuff

  • Screen shows image retention immediately
  • Colors are muted, even compared to other sRGB screens
  • No headphone jack

8.5

Verge Score: Pixel 2


Good Stuff

  • Incredible camera
  • Great speakers
  • Best Android experience

Bad Stuff

  • Huge bezels around screen
  • No headphone jack
  • Lacks some customization in camera features

Video by:

Director: Phil Esposito
Host: Dieter Bohn
Assistant Director: Felicia Shivakumar
Camera: Vjeran Pavic
Producer: Will Joel
Graphics: Garret Beard
Audio: Andrew Marino
Supervising Director: Tom Connors


iPhone X review: face the future


After months of hype, endless speculation, and a wave of last-minute rumors about production delays, the iPhone X is finally here. Apple says it’s a complete reimagining of what the iPhone should be, 10 years after the original revolutionized the world. That means some fundamental aspects of the iPhone are totally different here — most notably, the home button and fingerprint sensor are gone, replaced by a new system of navigation gestures and Apple’s new FaceID unlocking system. These are major changes.

New iPhones and major changes usually command a ton of hype, and Apple’s pushing the hype level around the iPhone X even higher than usual, especially given the new thousand-dollar starting price point. For the past few years, we’ve said some variation of “it’s a new iPhone” when we’ve reviewed these devices. But Apple wants this to be the beginning of the next ten years. It wants the iPhone X to be more than just the new iPhone — it wants it to be the beginning of a new generation of iPhones. That’s a lot to live up to.

This review is going to be a little different, at least initially: Apple gave most reviewers less than 24 hours with the iPhone X before allowing us to talk about it. So consider this a working draft: these are my opening thoughts after a long, intense day of testing the phone, but I’ll be updating everything in a few days after we’re able to test performance and battery life, do an in-depth camera comparison, and generally live with the iPhone X in a more realistic way. Most importantly: please ask questions in the comments! I’ll try to answer as many of them as I can in the final, updated review.

But for now — here goes.


Design

At a glance, the iPhone X looks so good one of our video editors kept saying it looked fake. It’s polished and tight and clean — my new favorite Apple thing is that the company managed to move all the regulatory text to software, leaving just the word “iPhone” on the back. The screen is bright and colorful and appears to be laminated tighter than previous iPhones, so it looks like the pixels are right on top. Honestly, it does kind of look like a live 3D render instead of an actual working phone.

But it is a real phone, and it’s clear it was just as challenging to actually build as all the rumors suggested. It’s gorgeous, but it’s not flawless. There’s a tiny sharp ridge between the glass back and the chrome frame that I feel every time I pick up the phone. That chrome frame seems destined to get scratched and dinged, as every chrome Apple product tends to do. The camera bump on the back is huge: a larger housing than the iPhone 8 Plus’s, fitted onto a much smaller body and seemingly designed to draw attention to itself, especially on my white review unit. There are definitely going to be people who think it’s ugly. But it’s growing on me.

There’s no headphone jack, which continues to suck on every phone that omits it, but that’s the price you pay for a bezel-less screen with a notch at the top. Around the sides, you’ll find the volume buttons, the mute switch, and the sleep / wake button. The removal of the home button means there are a few new button combinations to remember: pressing the top volume button and the sleep / wake button together takes a screenshot. Holding the sleep button opens Siri. And you turn the phone off by holding either of the volume buttons and the sleep button for several seconds and then sliding to power down.

And, of course, there’s the notch in the display — what Apple calls the “sensor housing.” It’s ugly, but it tends to fade away after a while in portrait mode. It’s definitely intrusive in landscape, though — it makes landscape in general pretty messy. Less ignorable are the bezels around the sides and bottom of the screen, which are actually quite large. Getting rid of almost everything tends to draw attention to what remains, and what remains here is basically a thick black border all the way around the screen, with that notch set into the top.

I personally think the iPhone 4 is the most beautiful phone of all time, and I’d say the iPhone X is in third place in the iPhone rankings after that phone and the original model. It’s a huge step up from the surfboard design we’ve been living with since the iPhone 6, but it definitely lacks the character of Apple’s finest work. And… it has that notch.


Display

The iPhone X is Apple’s first phone to use an OLED display, after years of Apple LCDs setting the standard for the industry. OLED displays allow for thinner phones, but getting them to be accurate is a challenge: Samsung phones tend to be oversaturated to the point of neon, Google’s Pixel 2 XL has a raft of issues with viewing angles and muted colors, and the new LG V30 has problems with uneven backlighting.

Apple’s using a Samsung-manufactured OLED panel with a PenTile pixel layout on the iPhone X, but it’s insistent that it was custom engineered and designed in-house. Whatever the case, the results are excellent: the iPhone X OLED is bright, sharp, vibrant without verging into parody, and generally a constant pleasure to look at. Apple’s True Tone system automatically adjusts color temperature to ambient light, photos are displayed in a wider color gamut, and there’s even Dolby Vision HDR support, so iTunes movies mastered in HDR play with higher brightness and dynamic range.

I did notice some slight color shifting off-axis, but never so much that it bothered me — I generally had to go looking for it. And compared to the iPhone 8 Plus LCD, it seems like a slightly cooler display on the whole, but only when I held the two side by side. Overall, it’s just a terrific display.

Unfortunately, the top of the display is marred by that notch, and until a lot of developers do a lot of work to design around it, it’s going to be hard to get the most out of this screen. I mean that literally: a lot of apps don’t use most of the screen right now.


Apps that haven’t been updated for the iPhone X run in what you might call “software bezel” mode: huge black borders at the top and bottom that basically mimic the iPhone 8. And a lot of apps aren’t updated yet: Google Maps and Calendar, Slack, the Delta app, Spotify, and more all run with software bezels. Games like CSR Racing and Sonic The Hedgehog looked particularly silly. It’s fine, but it’s ugly, especially since the home bar at the bottom of the screen glows white in this mode.

Apps that haven’t been specifically updated for the iPhone X but use Apple’s iOS autolayout system will fill the screen, but wacky things happen: Dark Sky blocks out half the status bar with a hardcoded black bar of its own, Uber puts your account icon over the battery indicator, and the settings in the Halide camera app get obscured by the notch and partially tucked into the display’s bunny ears. It almost looks right, but then you realize it’s actually just broken.

Apps that have been updated for the iPhone X all have different ways of dealing with the notch that sometimes lead to strange results, especially in apps that play video. Instagram Stories don’t fill the screen; they have large gray borders on the top and bottom. YouTube only has two fullscreen zoom options, so playing the Last Jedi trailer resulted in either a small video window surrounded by letter- and pillar-boxing or a fullscreen view with the notch obscuring the left side of the video. Netflix is slightly better but you’re still stuck choosing between giant black borders around your video or the notch.

Landscape mode on the iPhone X is generally pretty messy: the notch goes from being a somewhat forgettable element in the top status bar to a giant interruption on the side of the screen, and I haven’t seen any apps really solve for it yet. And the home bar at the bottom of the screen often sits over the top of content, forever reminding you that you can swipe to go home and exit the chaos of landscape mode forever.

I’m sure all of this will get solved over time, but recent history suggests it might take longer than Apple or anyone would like; I still encounter apps that aren’t updated for the larger iPhone 6 screen sizes. 3D Touch has been around for years, but I can’t think of any app that makes particularly good use of it. Apple’s rolled out a lot of screen design changes over the years, and they take a while to settle in. We’ll just have to see how it goes with the iPhone X.


Cameras

I haven’t had a lot of time to play with the cameras on the iPhone X, but the short answer is that they look almost exactly like the cameras on the iPhone 8. Both the telephoto and wide-angle lenses have optical image stabilization, compared to just the wide angle on the 8 Plus, and the TrueDepth system on the front means the front camera can take portrait mode selfies. It’s nice.

iPhone X rear camera (left) / Pixel 2 XL rear camera (right)

Of course, the main thing the front camera can do is take Animoji, which are Apple’s animated emoji characters. It’s basically built-in machinima, and probably the single best feature on the iPhone X. Most importantly, they just work, and they work incredibly well, tracking your eyes and expressions and capturing your voice in perfect sync with the animation. Apple’s rolled out a lot of weird additions to iMessage over the years, but Animoji feel much stickier than sending a note with lasers or adding stickers or whatever other gimmicks have been layered on. And while iMessage remains a golden palace of platform lock-in, Animoji are notably cross-platform: they work in iMessage, send as videos over MMS, and can be exported as MOV files. Nice.


FaceID: it works, mostly

The single most important feature of the iPhone X is FaceID, the system that unlocks the phone by recognizing your face. Even that’s an understatement: the entire design and user experience of the iPhone X is built around FaceID. FaceID is what let Apple ditch the home button and TouchID fingerprint sensor. The FaceID sensor system is housed in the notch. The Apple Pay user flow has been reworked around FaceID. Apple’s Animoji animated emojis work using the FaceID sensors.

If FaceID doesn’t work, the entire promise of the iPhone X falls apart.

The good news is that FaceID mostly works great. The bad news is that sometimes it doesn’t, and you will definitely have to adjust the way you think about using your phone to get it to a place where it mostly works great.

Setting up FaceID is ridiculously simple — much simpler than setting up TouchID on previous iPhones. The phone displays a circular border around your face, and you simply move around until a series of lines around that circle turn green. (Apple suggests you move your nose around in a circle, which is adorable.) Do that twice, and you’re done: FaceID will theoretically get better and better at recognizing you over time, and track slow changes like growing a beard so you don’t have to re-enroll. Drastic changes, like shaving that beard off, might require you to enter your passcode, however.

FaceID should also work through most sunglasses that pass infrared light, although some don’t. And you can definitely make it fail if you put on disguises, but I’d rather have it fail out than let someone else through.

In my early tests, FaceID worked well indoors: sitting at my desk, standing in our video studio, and waiting to get coffee. You have to look at it head-on, though: if it’s sitting on your desk you have to pick up the phone and look at it, which is a little annoying if you’re used to just putting your finger on the TouchID sensor to check a notification.

You also can’t be too casual about it: I had a lot of problems pulling the iPhone X out of my pocket and having it fail to unlock until Apple clarified that FaceID works best at a distance of 25 to 50 centimeters away from your face, or about 10 to 20 inches. That’s closer than I usually hold my phone when I pull it out of my pocket to check something, which means I had to actively think about holding the iPhone X closer to my face than every other phone I’ve ever used. “You’re holding it wrong” is a joke until it isn’t, and you can definitely hold the iPhone X wrong.

That’s a small problem, though, and I think it’ll be easy to get used to. The other problem is actually much more interesting: almost all of the early questions about FaceID centered on how it would work in the dark, but it turns out that was exactly backwards. FaceID works great in the dark, because the IR projector is basically a flashlight, and flashlights are easy to see in the dark. But go outside in bright sunlight, which contains a lot of infrared light, or under crappy fluorescent lights, which interfere with IR, and FaceID starts to get a little inconsistent.

I took a walk outside our NYC office in bright sunlight, and FaceID definitely had issues recognizing my face consistently while I was moving until I went into shade or brought the phone much closer to my face than usual. I also went to the deli across the street, which has a wide variety of lights inside, including a bunch of overhead fluorescent strips, and FaceID got significantly more inconsistent there, too.

I’ve asked Apple about this, and I’ll update this review with their answers along with more detailed test results, but for now I’d say FaceID definitely works well enough to replace TouchID, but not so well that you won’t run into the occasional need to try again.

Recent Apple products have tended to demand people adapt to them instead of being adapted to people, and it was hard not to think about that as I stood in the sunlight, waving a thousand-dollar phone ever closer to my face.


Software

There’s a lot of new hardware in the iPhone X, but it’s still running iOS 11 — albeit with some tweaks to navigation to accommodate the lack of a home button. You swipe up from the bottom to go home, swipe down from the right to bring up (down?) Control Center, and swipe down from the left to open the notifications pane. That pane also has buttons for the flashlight and camera; in a twist, they require 3D Touch to work, so they feel like real buttons. It’s neat, but it also breaks the 3D Touch paradigm — it’s the only place in the entire system where 3D Touch acts like a left click instead of a right click. It’s emblematic of how generally fuzzy iOS has become with basic interface concepts, I think.

Switching apps is fun and simple: you can either swipe up and hold to bring up all your apps in a card-like deck, or just quickly swipe left and right on the home bar to bounce through them one at a time.

And… those are basically the changes to iOS 11 on the iPhone X, apart from the various notch-related kerfuffles. If you’ve been using iOS for a while and iOS 11 for the past month, nothing here will surprise you. Apple might have completely rethought how you unlock the iPhone X, but it’s still not giving up on that grid of app icons or making notifications more powerful or even allowing the weather app icon to display a live temperature. Siri is still Siri. If you’re buying an iPhone X expecting a radical change to your iPhone experience, well, you probably won’t get it. Unless you really hate unlocking your phone.


The iPhone X is clearly the best iPhone ever made. It’s thin, it’s powerful, it has ambitious ideas about what cameras on phones can be used for, and it pushes the design language of phones into a strange new place. It is a huge step forward in terms of phone hardware, and it has the notch to show for it. If you’re one of the many people who preordered this thing, I think you’ll be happy, although you’ll be going on the journey of figuring out when and how FaceID works best with everyone else.

But if you didn’t preorder, I suspect you might not feel that left out for a while. The iPhone X might be a huge step forward in terms of hardware, but it runs iOS 11 just the same as other recent iPhones, and you won’t really be missing out on anything except Animoji. FaceID seems like it’s off to a good start, but it’s definitely inconsistent in certain lighting conditions. And until your favorite apps are updated, you won’t be able to make use of that entire beautiful display.

All that adds up to the thing you already know: the iPhone X is a very expensive iPhone. For a lot of people, it’ll be worth it. For a lot of people, it’ll seem ridiculous. But fundamentally, it’s a new iPhone, and that means you probably already know if you want to spend a thousand dollars on one.

Because this review isn’t final, we’re not scoring the iPhone X yet. Leave us your questions and comments below, and we’ll try to address as many of them in our final review as we can. We’ll add the score at that time as well.
