Jaguar I-Pace review: A luxury EV that can tackle anything


A gentleman in a field peeks into the car and asks me to raise the height of the red First Edition I-Pace I’m driving, then gestures toward a few feet of water ahead of me. “Don’t go too fast, there are sharp rocks down there,” he says. I’ve been driving the car for a few hours and am already a fan of its capabilities on paved roads and its luxury interior. Now I’m about to drive the crossover on a “surprise” off-road course and sure, why not. Let’s do this.


Engadget Score

from $69,500.00


  • Pure EV with 240-mile range
  • A true luxury vehicle
  • Overflowing with tech without feeling overwhelming
  • Over-the-air updates
  • Fast
  • Handles well for a heavy car
  • You can drive through a giant puddle
  • True off-road capabilities
  • Infotainment system still has some latency issues


The I-Pace is an exceptional pure electric luxury SUV from Jaguar that handles real off-roading and a day at the track just as well as a night out on the town. It’s chock-full of tech that, for the most part, delivers. We just wish the infotainment system suffered from less latency.

The Jaguar I-Pace is an electric vehicle four years in the making, the result of CEO Dr. Ralf Speth’s unilateral decision that the company build a pure electric luxury SUV. Throughout the I-Pace’s conception, design, engineering and, finally, production, the Tesla Model X has owned the segment, not just because it’s the only real luxury SUV that runs entirely on batteries, but because it’s a very good vehicle. After two days behind the wheel of the I-Pace, though, the segment hasn’t just gotten a little more crowded; it might have a new leader.

Back on the off-road course in southern Portugal, the I-Pace made short work of the water obstacle. A steep entrance into the silty liquid, a quick left turn and we drove out of it after about 30 feet. All the while, inside the crossover, the vehicle was pure Jaguar, with all the luxury, refinement and comfort you’d expect. This portion of the two-day drive was clearly the result of the automaker borrowing off-road capabilities from its sister company, Land Rover, and sticking them in the cat.

With that in mind, it wasn’t surprising when the I-Pace’s all-wheel-drive system was able to conquer a rutted, steep path up a mountain. Even when it seemed to falter (ever so slightly), the traction system corrected itself and we forged ahead. About halfway up the “road,” another gentleman had us put the car in Adaptive Dynamics, Adaptive Surface Response (AdASR) mode. It’s a bit like cruise control, but for tackling challenging terrain. I set it to 12 km/h (about 7.5 mph) and all I had to do was steer.

Of course, most I-Pace owners won’t be taking it through creeks (it can wade into water up to 19.7 inches deep), or up steep dirt roads riddled with gravel, potholes and deep ruts with cruise control (AdASR mode is available on the top-end First Edition trim, which starts at $85,900). Instead, they’ll drive it around town, on the highway, on twisty roads and occasionally on dirt and gravel, and the S model (starting at $69,500) handled all of those environments without an issue. It even took on a race track, where its handling and acceleration surprised me when pushed to the limit, especially for a large car with a heavy battery pack sitting at the bottom of the frame.

That 90kWh battery pack supplies all the power behind the crossover’s 4.5-second 0-to-60 time. In conjunction with two electric motors (one in the front, one in the back), the vehicle puts out 394 horsepower and 512 pound-feet of torque. The result is a large car that can leap off the line at stoplights and easily get you up to speed from the freeway onramp.

When you do get onto that freeway, the adaptive cruise control and steering assist do a fine job of keeping the vehicle in its lane and tracking the cars in front of it. It’s not as robust as the Autopilot found on the Model X, though: there’s no auto lane change. But the I-Pace’s ability to track the road is nearly on par with Tesla’s and Nissan’s offerings.

Behind the wheel, you get the same Touch Pro Duo infotainment system found in the Range Rover Velar. The two-screen setup (the top display is 10 inches; the bottom, 5.5 inches) has been updated for the I-Pace with EV-centric features, like tracking power distribution and a helpful navigation feature that shows what your battery level should be at a route’s destination and at points along the way. It makes these determinations using information about the terrain, traffic and how you drive. That last one was a bit trickier to test, because the car’s AI learns your driving style, which can take up to two weeks. But even without the benefit of days of data, the battery levels at the allocated waypoints were pretty close to what the system predicted at the beginning of the route.

The infotainment system has less latency than the Velar we tested back in January, but there were still a few times when a tap didn’t produce an instantaneous response. Quicker reaction time is always good; I just wish Jaguar had fine-tuned the system further to eliminate any perceptible delays.

Another improvement over the Velar is the pair of dials adjacent to the bottom screen, which take on different functions depending on what’s on that display. Their push-button and pull-up action feels more solid, and I had zero instances of accidentally making changes via the controls.

For fans of plugging their smartphones into their cars, the I-Pace supports both Android Auto and Apple’s CarPlay.

The rest of the vehicle’s interior is easy to navigate, from the light-up controls on the steering wheel (also borrowed from the Velar) to the six (that’s right, six) USB ports that keep all your devices charged up.

That shouldn’t be a surprise: this is Jaguar’s flagship technology vehicle, and the automaker made sure to cram as much tech into it as possible. That includes Alexa support, connected-home support, a companion app that recognizes drivers as they enter the car and adjusts settings accordingly, an alert for when you leave your smartphone behind and a whole host of other features.

But probably the most important addition is support for over-the-air updates. While the majority of the features in the I-Pace are cribbed from other Jags and Land Rovers, this one is clearly borrowed from Tesla. Musk’s company set a precedent with its OTA updates, and it’s good to see other automakers following suit. Consumers are used to seeing their other tech devices update without a visit to a service center; there’s no reason the biggest computer they own shouldn’t do the same.

Jaguar was also able to get 240 miles of range out of its 90kWh battery pack, joining Tesla and Chevy above the 200-mile mark. It’s more than enough for 95 percent of the drives most people will take with the car. It does support 100kW DC fast charging and can go from zero to 80 percent charged in 40 minutes if you’re out in the real world and need to top up the car. With a typical 50kW DC fast charger, you double the time spent connected to the grid. At-home charging via a 230-volt outlet takes 10 hours to reach 80 percent and 13 hours to completely fill the battery with electrons.
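Those figures line up with simple energy arithmetic. As a rough sanity check (the function below is mine, not anything from Jaguar, and it ignores the charge-rate taper near full, so treat its results as idealized lower bounds):

```python
def charge_hours(battery_kwh, charger_kw, from_pct=0, to_pct=80):
    """Hours to move a battery between two states of charge at constant power."""
    energy_needed_kwh = battery_kwh * (to_pct - from_pct) / 100
    return energy_needed_kwh / charger_kw

# 90kWh pack on a 100kW DC fast charger, zero to 80 percent:
minutes = charge_hours(90, 100) * 60
```

At a constant 100kW, zero to 80 percent on a 90kWh pack works out to about 43 minutes, close to Jaguar’s quoted 40; the real-world taper near full is why quoted times never match the naive math exactly.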

Sadly, I didn’t have the chance to do a full battery test, but the advertised range seemed in line with what I saw during my time behind the wheel in the various driving modes. Instead of trying to suck all the electricity out of the car, I was given an I-Pace and told to drive around a racetrack a few times. So, in addition to going off-road (something very few owners will do), I got to push the SUV to the limit on a race track (something absolutely no one will do).

What I learned is that the I-Pace performs far better than I expected. New owners might not have a very nice instructor barking orders at them to “go go go, now hit the brakes. OK, turn. Now give it all the accelerator,” but they will appreciate how well the EV does in the real world. Because it drives really well. Exceptionally well for its size.

In fact, except for a weird software issue my co-driver encountered going downhill on the off-road course (something the Jaguar representative told me was being fixed via a software update before the car goes on sale), it’s hard to find much fault with the I-Pace. It’s fast, it’s fun to drive, it’s comfortable, and if it weren’t for the infotainment latency, I’d be a big fan of the InControl Touch Pro Duo.

I prefer the I-Pace over the Model X, and I really like the Model X. The I-Pace is less expensive (comparing the baseline S model to the Model X 75D, there’s a $10,000 price difference) and feels more like a luxury car. But Jaguar and Tesla won’t be the only worthy contenders for the title of best luxury EV SUV for long: the Audi E-Tron and the BMW iX3 are on the way. For now, though, the I-Pace is the big cat in town, and if you have a chance to get behind the wheel, do it.

from Engadget

Google’s Tenor slips GIFs into your command line interface


Tenor/20th Century Fox

If you live in the command line, you probably like to give that otherwise plain interface your own distinctive touch, like ASCII art. But wouldn’t it be nice if you could spice it up with a GIF? You can now. Google’s Tenor team has released a GIFs for CLI tool that, as the name implies, turns short videos and GIFs (including those sourced from Tenor’s search toolkit) into animated ASCII art you can use as a greeting when you open your terminal. The Deadpool 2 skydive you see above is in black and white, but you can include GIFs in glorious color.

The code starts by using ffmpeg to chop the clip into individual JPEG images. It then turns those into ASCII art printed one frame at a time to your console, using ANSI escape sequences to clear the screen and show the GIFs as you’d expect.
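The pipeline described above is simple enough to sketch. Here’s a rough, hypothetical Python outline (the function names are mine, not from Tenor’s actual code; it assumes ffmpeg is installed and that frames have already been decoded into grayscale pixel values):

```python
import subprocess
import time

# Darkest-to-lightest character ramp; denser glyphs stand in for darker pixels.
RAMP = "@%#*+=-:. "

def pixel_to_char(luma):
    """Map a 0-255 luminance value to an ASCII character."""
    return RAMP[min(luma * len(RAMP) // 256, len(RAMP) - 1)]

def frame_to_ascii(pixels):
    """pixels: 2D list of 0-255 luminance rows -> multi-line ASCII string."""
    return "\n".join("".join(pixel_to_char(p) for p in row) for row in pixels)

def extract_frames(clip, out_pattern="frame_%04d.jpg"):
    """Chop a clip into individual JPEG frames with ffmpeg."""
    subprocess.run(["ffmpeg", "-i", clip, out_pattern], check=True)

def play(ascii_frames, fps=10):
    """Redraw each frame in place: ANSI escapes clear the screen and home the cursor."""
    for frame in ascii_frames:
        print("\x1b[2J\x1b[H" + frame, end="", flush=True)
        time.sleep(1 / fps)
```

Color output would work the same way, just with ANSI color codes wrapped around each character instead of a plain luminance ramp.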

Is this extremely nerdy and limited? You bet. The code is sitting in GitHub if you want to tinker with it, though. And look at it this way: when command lines are almost relentlessly drab, flair like this is bound to help.

from Engadget

Minds, machines, and centralization: AI and music


Far from the liberated playground the Internet once promised, online connectivity now threatens to give us mainly pre-programmed culture. As we continue reflections on AI from CTM Festival in Berlin, here’s an essay from this year’s program.

If you attended Berlin’s festival this year, you got this essay I wrote – along with a lot of compelling writing from other thinkers – in the printed book in the catalog. I asked CTM Festival for permission to reprint it here for those who didn’t get to join us earlier this year. I’m going to resist the temptation to edit it (apart from bringing it back to CDM-style American English spellings), even though a lot has happened in this field since I wrote it at the end of December. But I’m curious to get your thoughts.

I also was lucky enough to get to program a series of talks for CTM Festival, which we made available in video form with commentary earlier this week, also with CTM’s help:
A look at AI’s strange and dystopian future for art, music, and society

The complete set of talks from CTM 2018 is now available on SoundCloud. It’s a pleasure to get to work with a festival that not only has a rich and challenging program of music and art, but serves as a platform for ideas, debate, and discourse, too. (Speaking of which, greetings from another European festival that commits to that – SONAR, in Barcelona.)

The image used for this article is an artwork by Memo Akten, used with permission, as suggested by curator and CTM 2018 guest speaker Estela Oliva. It’s called “Inception,” and I think is a perfect example of how artists can make these technologies expressive and transcendent, amplifying their flaws into something uniquely human.

Minds, Machines, and Centralization: Why Musicians Need to Hack AI Now


It’s now a defunct entity, but “Muzak,” the company that provided background music, was once everywhere. Its management saw to it that its sonic product was ubiquitous, intrusive, and even engineered to impact behavior — and so the word Muzak became synonymous with all that was hated and insipid in manufactured culture.

Anachronistic as it may seem now, Muzak was a sign of how telecommunications technology would shape cultural consumption. Muzak may be known for its sound, but its delivery method is telling. Nearly a hundred years before Spotify, founder Major General George Owen Squier originated the idea of sending music over wires — phone wires, to be fair, but still not far off from where we’re at today. The patent he got for electrical signaling doesn’t mention music, or indeed even sound content. But the Major General was the first successful business founder to prove in practice that electronic distribution of music was the future, one that would take power out of the hands of radio broadcasters and give the delivery company additional power over content. (He also came up with the now-loathed Muzak brand name.)

What we now know as the conventional music industry has its roots in pianola rolls, then in jukeboxes, and finally in radio stations and physical media. Muzak was something different, as it sidestepped the whole structure: playlists were selected by an unseen, centralized corporation, then piped everywhere. You’d hear Muzak in your elevator ride in a department store (hence the phrase, elevator music). There were speakers tucked into potted plants. The White House and NASA at some points subscribed. Anywhere there was silence, it might be replaced with pre-programmed music.

Muzak added to its notoriety by marketing the notion of using its product to boost worker productivity, through a pseudo-scientific regimen it called the “stimulus progression.” And in that, we see a notion that presages today’s app behavior loops and motivators, meant to drive consumption and engagement, ad clicks and app swipes.

Muzak for its part didn’t last forever, with stimulus progression long since debunked, customers preferring licensed music to this mix of original sounds, and newer competitors getting further ahead in the marketplace.

But what about the idea of homogenized, pre-programmed culture delivered by wire, designed for behavior modification? That basic concept seems to be making a comeback.

Automation and Power

“AI” or machine intelligence has been tilted in the present moment to focus on one specific area: the use of self-training algorithms to process large amounts of data. This is a necessity of our times, and it has special value to some of the big technical players who just happen to have competencies in the areas machine learning prefers — lots of servers, top mathematical analysts, and big data sets.

That shift in scale is more or less inescapable, though, in its impact. Radio implies limited channels; limited channels imply human selectors: meet the DJ. The nature of the internet as wide open to any kind of culture means wide-open scale. And it will necessarily involve machines doing some of the sifting, because it’s simply too large to operate otherwise.

There’s danger inherent in this shift. One, users may be lazy, willing to let their preferences be tipped for them rather than face the tyranny of choice alone. Two, the entities that select for them may have agendas of their own. Taken as an aggregate, the upshot could be greater normalization and homogenization, plus the marginalization of anyone whose expression is different, unviable commercially, or out of sync with the classes of people with money and influence. If the dream of the internet as global music community seems in practice to lack real diversity, here’s a clue as to why.

At the same time, this should all sound familiar — the advent of recording and broadcast media brought with it some of the same forces, and that led to the worst bubblegum pop and the most egregious cultural appropriation. Now, we have algorithms and corporate channel editors instead of charts and label execs — and the worries about payola and the eradication of anything radical or different are just as well-placed.

What’s new is that there’s now also a real-time feedback loop between user actions and automated cultural selection (or perhaps even soon, production). Squier’s stimulus progression couldn’t monitor metrics representing the listener. Today’s online tools can. That could blow apart past biases, or it could reinforce them — or it could do a combination of the two.

In any case, it definitely has power. At last year’s CTM hacklab, Cambridge University’s Jason Rentfrow looked at how musical tastes can be predictive of personality and even political thought. The connection was timely, as the talk came the same week that Trump assumed the U.S. presidency, his campaign having employed social media analytics to determine how to target and influence voters.

We can no longer separate musical consumption — or other consumption of information and culture — from the data it generates, or from the way that data can be used. We need to be wary of centralized monopolies on that data and its application, and we need to be aware of how these sorts of algorithms reshape choice and remake media. And we might well look for chances to regain our own personal control.

Even if passive consumption may seem to be valuable to corporate players, those players may discover that passivity suffers diminishing returns. Activities like shopping on Amazon, finding dates on Tinder, watching television on Netflix, and, increasingly, music listening, are all experiences that push algorithmic recommendations. But if users begin to follow only those automated recommendations, the suggestions fold back in on themselves, and those tools lose their value. We’re left with a colorless growing detritus of our own histories and the larger world’s. (Just ask someone who gave up on those Tinder dates or went to friends because they couldn’t work out the next TV show to binge-watch.)

There’s also clearly a social value to human recommendations — expert and friend alike. But there’s a third way: use machines to augment humans, rather than diminish them, and open the tools to creative use, not only automation.

Music is already reaping benefits of data training’s power in new contexts. By applying machine learning to identifying human gestures, Rebecca Fiebrink has found a new way to make gestural interfaces for music smarter and more accessible. Audio software companies are now using machine learning as a new approach to manipulating sound material in cases where traditional DSP tools are limited. What’s significant about this work is that it makes these tools meaningful in active creation rather than passive consumption.

AI, back in user hands

Machine learning techniques will continue to expand as tools by which the companies mining big data make sense of their resources — from ore into product. It’s in turn how they’ll see us, and how we’ll see ourselves.

We can’t simply opt out, because those tools will shape the world around us with or without our personal participation, and because the breadth of available data demands their use. What we can do is to better understand how they work and reassert our own agency.

When people are literate in what these technologies are and how they work, they can make more informed decisions in their own lives and in the larger society. They can also use and abuse these tools themselves, without relying on magical corporate products to do it for them.

Abuse itself has special value. Music and art are fields in which these machine techniques can and do bring new discoveries. There’s a reason Google has invested in these areas — because artists very often can speculate on possibilities and find creative potential. Artists lead.

The public seems to respond to rough edges and flaws, too. In the ’60s, when researcher Joseph Weizenbaum attempted to parody a psychotherapist with crude language pattern matching in his program ELIZA, he was surprised when users started telling the program their darkest secrets and imagining understanding that wasn’t there. The crudeness of Markov chains as a predictive text tool (they were developed for analyzing the statistics of Pushkin’s verse, after all, not for generating language) has given rise to breeds of poetry based on their very weirdness. When Google’s style transfer technique was applied using a database of dog images, the bizarre, unnatural results that warped photos into dogs went viral online. Since then, Google has developed vastly more sophisticated techniques that apply realistic painterly effects and… well, it seems that’s attracted only a fraction of the interest the dog images did.
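For the curious, the kind of crude Markov-chain text model the essay alludes to fits in a few lines of Python. This is a word-level toy sketch of my own, not any particular poetry generator:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it in the corpus."""
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10, seed=None):
    """Walk the chain: pick each next word at random from observed successors."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:  # dead end: no word ever followed this one
            break
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)
```

Fed a small corpus, a model this simple produces exactly the charming near-sense the essay describes: locally plausible word pairs with no global coherence at all.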

Maybe there’s something even more fundamental at work. Corporate culture dictates predictability and centralized value. The artist does just the opposite, capitalizing on surprise. It’s in the interest of artists if these technologies can be broken. Muzak represents what happens to aesthetics when centralized control and corporate values win out — but it’s as much the widespread public hatred that’s the major cautionary tale. The values of surprise and choice win out, not just as abstract concepts but also as real personal preferences.

We once feared that robotics would eliminate jobs; the very word is derived (by Czech writer Karel Čapek’s brother Josef) from the word for slave. Yet in the end, robotic technology has extended human capability. It has taken us as far as space, and through Logo and its Turtle it has taught generations of kids math, geometry, logic, and creative thinking through code.

We seem to be at a similar fork in the road with machine learning. These tools can serve the interests of corporate control and passive consumption, optimized only for lazy consumption that extracts value from its human users. Or we can abuse and misuse the tools, take them apart and put them back together again, applying them not in the sense that “everything looks like a nail” when all you have is a hammer, but as a precise set of techniques to solve specific problems. Muzak, in its final days, was nothing more than a pipe dream. What people wanted was music — and choice. Those choices won’t come automatically. We may well have to hack them.

PETER KIRN is an audiovisual artist, composer/musician, technologist, and journalist. He is the editor of CDM and co-creator of the open source MeeBlip hardware synthesizer. For six consecutive years, he has directed the MusicMaker’s Hacklab at CTM Festival, most recently together with new media artist Ioann Maria.

from Create Digital Music