Researchers have announced a new battery breakthrough that focuses on the negative. Rather than using lithium, the most electro-positive element on the periodic table, they used fluoride, the most electro-negative. It can store more energy than its lithium doppelgänger, but until now, fluoride batteries needed to run hot at 150 degrees Celsius or more. Honda, Caltech and NASA scientists discovered a way to make it work at room temperature, which could eventually yield more energy-dense and environmentally safe batteries for EVs and other devices.
Fluoride-ion batteries essentially work in the opposite direction of lithium-ion cells, attracting electrons instead of shedding them. Fluoride (the ionized version of fluorine) is an interesting battery material because it has a low atomic weight and a very high capacity to store electrons. However, to do that, you have to dissolve the fluoride ions into an electrolyte, and researchers have found that it only works with solid electrolytes heated to high temperatures.
To get around that, the Honda/NASA/Caltech team created a liquid electrolyte called BTFE that allows fluoride to dissolve at room temperature. With two positively charged regions, it exploits the “opposites attract” principle, reacting strongly to negatively charged fluoride.
The scientists paired the electrolyte with an electrode made of copper, lanthanum and fluorine to create a prototype battery capable of reversible chemical reactions (aka recharging) at room temperature. All told, the batteries have the potential for ten times the energy density of lithium-ion batteries, and would have a “more favorable environmental footprint,” according to Honda.
However, we’ve heard this sort of thing many times before, so the usual caution and caveats apply. For instance, the team still has to figure out how to stabilize the anodes and cathodes, which tend to dissolve completely into the electrolyte. They’re making some headway, though, and further testing is currently underway — so hopefully we won’t be disappointed yet again by batteries that work great in labs but not cars.
Kick drum can be the defining element of a song—so if it’s not working, your tracks will lack that special crack.
Picking the right kick sample is crucial for giving your tracks impact. We just released “The 50 Best Free Kick Samples” sample pack, which got us thinking…
What makes a kick drum good? And how do you pick the right kicks for your tracks?
There are so many great kick samples out there… finding the perfect kick is a challenge.
The key is knowing what to listen for.
In this article, I’ll go over 7 tips that will help you choose the best possible kick drum sample for your track.
1. Listen in context
Audition potential kicks in context with your track. A kick might sound great on its own. But remember, your samples have to work with the rest of your mix above everything else.
Even if you haven’t recorded everything yet, use whatever you do have to help you audition.
Your kick is such a major element. It needs as much context as possible to get a sense of what works.
There are lots of ways to audition samples in your DAW. The simplest way is to add them on separate tracks and mute and un-mute to compare.
Ableton has a built-in feature to “hotswap” samples on your timeline or within Drum Racks and Simpler. It’s indicated by the circle and arrows beside files in the browser.
2. Be aware of the envelope
The attack and release of your kick sounds are crucial in your mix. Kick samples can have all kinds of envelopes.
From staccato clicks to round bass-like tones, the specific attack and decay qualities of your kick sample have to be in line with the rest of your track.
If the kick has too long of a tail or too slow of an attack, you’ll have to use your sampler’s ADSR to make sure it doesn’t conflict with other elements of your track.
Of course, you may not be able to get it perfect—there’s a limit to what can be done by manipulating a sample’s envelope.
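As a rough illustration of what envelope shaping does, here’s a minimal sketch (assuming NumPy, and using a synthesized stand-in kick rather than a real sample) that imposes a faster exponential decay to tame a tail that rings too long:

```python
import numpy as np

SR = 44100  # sample rate in Hz

# Stand-in kick: a decaying 60 Hz sine. A real kick would be loaded from disk.
t = np.arange(SR) / SR  # one second of audio
kick = np.sin(2 * np.pi * 60 * t) * np.exp(-3 * t)

def shorten_tail(sample, decay_s, sr=SR):
    """Multiply by a faster exponential decay so the tail stops sooner."""
    t = np.arange(len(sample)) / sr
    return sample * np.exp(-t / decay_s)

tight_kick = shorten_tail(kick, decay_s=0.05)
```

A sampler’s ADSR does effectively the same thing; the limit mentioned above still applies, since no envelope can add attack or body that the sample never had.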
3. Pay attention to the spectrum
You need to match the overall harmonic content of your kick sample with the rest of your track.
Don’t try to shoehorn a sample with a strongly conflicting frequency balance into your track. It can be more trouble than it’s worth and EQing can only help so much.
Try to match things like the distribution of frequencies and overall amount of saturation.
A busy mix might need a more hyped kick to cut through, but that same sample could be distracting in a minimal composition.
The interaction between your kick and bassline is also crucial. Mixing them well is that much more challenging if the kick and bass occupy exactly the same space.
Keep the frequency spectrum in mind as you choose your kick samples, and start thinking about your mix before mixing even begins.
4. Expect to layer
In many cases you won’t be able to get the perfect kick for your track with just a single kick sample.
Don’t be afraid to enhance your original sample with other sounds. Layering samples is a powerful technique.
If you find yourself using radical EQ curves just to get more of a certain sonic quality into your kick, try layering another sample that has the character you’re looking for.
In this example I’ve layered several kick samples together in an Ableton Live Drum Rack.
I like the attack of the first kick but it doesn’t have quite the low end I need. It’s also a bit dry for the track. To fix it I’ve layered a beefier sub bass kick with just the ambience from another kick sample.
With all three together, I’m getting exactly what I want for the kick drum on this track.
Chances are layering will sound more transparent than invasive EQ.
Here are some other sounds to consider layering with your kicks:
A cracking snare to add some initial attack
A clap to add some initial smack to your kick
An 808 style sine wave bass to give your kicks a nice booming tail
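In code terms, layering comes down to summing aligned sample buffers with a gain per layer and guarding against clipping. Here’s a minimal sketch (assuming NumPy; the “layers” are synthesized stand-ins for an attack click and an 808-style sub, not real samples):

```python
import numpy as np

SR = 44100

def layer(samples, gains):
    """Sum one-shot samples (zero-padded to the longest) with per-layer gains,
    then normalize the peak to avoid clipping."""
    n = max(len(s) for s in samples)
    mix = np.zeros(n)
    for s, g in zip(samples, gains):
        mix[:len(s)] += g * np.asarray(s, dtype=float)
    peak = np.abs(mix).max()
    return mix / peak if peak > 0 else mix

# Stand-ins: a clicky attack layer and a boomy 808-style sine tail.
t_short = np.arange(2000) / SR
attack = np.random.default_rng(0).normal(size=2000) * np.exp(-t_short * 800)
t_long = np.arange(SR // 2) / SR
sub = np.sin(2 * np.pi * 50 * t_long) * np.exp(-t_long * 6)

mix = layer([attack, sub], gains=[0.4, 1.0])
```

A Drum Rack does the same summing for you in real time, with each pad’s volume fader acting as the layer gain.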
5. Tune first
Before you make a decision on the right kick sample, make sure you’ve taken the time to tune it for your song.
If the fundamental frequency of the kick is at odds with the rest of your song, you’ll have trouble knowing whether it really works.
Use your sampler plugin’s transpose function to make sure your kick sample is in tune.
You don’t have to hard-tune the kick’s fundamental to the song’s tonic, but try to explore options that enhance their harmonic relationship.
Listening in context of the rest of your mix is really important here—tweak until it sounds right!
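If you want a starting point before tweaking by ear, the math behind tuning is simple: estimate the kick’s fundamental, then compute how many semitones separate it from a target pitch. Here’s a rough sketch (assuming NumPy, with a synthesized stand-in kick; a real sample’s fundamental is messier, so treat the result as a first guess):

```python
import numpy as np

SR = 44100

def fundamental_hz(sample, sr=SR):
    """Estimate the fundamental as the strongest FFT bin below 200 Hz."""
    spectrum = np.abs(np.fft.rfft(sample))
    freqs = np.fft.rfftfreq(len(sample), 1 / sr)
    low = freqs < 200  # kick fundamentals sit well below 200 Hz
    return float(freqs[low][np.argmax(spectrum[low])])

def semitones_to(target_hz, current_hz):
    """Transpose amount in semitones to move current_hz onto target_hz."""
    return 12 * np.log2(target_hz / current_hz)

# Stand-in kick ringing at ~55 Hz (A1); how far to transpose it down to G1 (~49 Hz)?
t = np.arange(SR) / SR
kick = np.sin(2 * np.pi * 55 * t) * np.exp(-4 * t)
shift = semitones_to(49.0, fundamental_hz(kick))  # about -2 semitones
```

Feed that value into your sampler’s transpose control, then fine-tune by ear in context.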
6. Level match
As always when comparing two audio files, make sure you match the levels before you decide.
Small differences in level have a surprisingly strong effect on how we perceive the strengths of one sound over another.
You don’t want to accidentally miss the right kick just because a slightly louder one sounded more lively when you auditioned it.
Watch your meters carefully as you level match so you can be sure you’re making a fair comparison between two kick samples.
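Level matching itself is just a gain calculation. This sketch (assuming NumPy, with synthesized stand-in kicks) scales one sample so its RMS level equals another’s before you A/B them:

```python
import numpy as np

def rms(x):
    """Root-mean-square level of a signal."""
    return float(np.sqrt(np.mean(np.square(x))))

def match_level(candidate, reference):
    """Scale candidate so its RMS level equals the reference's."""
    return candidate * (rms(reference) / rms(candidate))

# Two stand-in kicks at noticeably different levels:
t = np.arange(44100) / 44100
kick_a = np.sin(2 * np.pi * 60 * t) * np.exp(-5 * t) * 0.8
kick_b = np.sin(2 * np.pi * 55 * t) * np.exp(-4 * t) * 0.3

kick_b_matched = match_level(kick_b, kick_a)
```

In a DAW you’d do the same thing with channel faders while watching an RMS or LUFS meter rather than peak levels.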
7. Don’t be afraid to start over
Don’t keep struggling with a sample that’s not quite right just because you’ve already spent an hour on it.
It’s easy to rationalize your choices when you’ve gotten attached to them over time. But sometimes the only way to move forward is to go back to the beginning.
Starting from scratch can actually give you a fresh perspective on the whole song. So even if it hurts, try not to get too dedicated to a kick that’s just not working.
Hot Tip: Instead of starting with just one kick sample, set aside a small batch of 5-10 that might work. This way, you’ll have a better backup plan if your first option isn’t working.
“Get it right at the source” is a common phrase in mixing for a good reason. If you don’t have the right raw materials, no amount of fancy mixing techniques will give you the results you want.
That’s why it’s important to choose the right kick sample as you build your tracks. It doesn’t have to be difficult if you keep these tips in mind.
Now that you know how to choose a great kick, get back to your samples folder and find the perfect kick!
Saturday, 12 pm. The light burns. Your head throbs. And you have no recollection of how you got back home. Don’t worry, you’re not alone. More than half of college students experience blackouts, according to several studies. And let’s be clear. Blacking out doesn’t mean passing out. You were probably awake and aware the entire night. So then, where did all those memories go?
Let’s rewind to Friday night. Normally, whenever you have an experience — like a conversation — a part of your brain called the prefrontal lobe stores that information in short-term memory. Then, another part of your brain called the hippocampus weaves those experiences together so they can be stored away as long-term memories. So the next day you remember "the party" as a whole instead of "smell of sweat," "house music," "Jen was there."
But here’s the key part: storing these episodes in long-term memory requires special neurotransmitters, and your liquor shots prevent those neurotransmitters from working properly. So, instead of remembering the party, all you have is an incomplete or even empty file.
And the amount of alcohol in your system at the time influences how much you remember. Let’s say you’re a 73 kg adult man. And you’ve done eight shots in one hour. Your blood alcohol content is probably around 0.2% by this point — more than twice the legal limit for driving a car. And your brain may still be able to store some memories. So you end up with "islands" of memories separated by missing sections. That’s called a fragmentary blackout, aka a "greyout" or "brownout". But if you keep pounding those shots, it gets worse. Within the next half hour, you pound back another four shots. Now your blood alcohol content hits around 0.3%, and your hippocampus goes dark. And full amnesia sets in. This is called an en bloc blackout. And once you wake up, that entire night could be blank. Push your BAC much higher than that and…you might die.
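The numbers above can be roughly reproduced with the classic Widmark formula. This is a back-of-the-envelope sketch, not medical guidance; the per-shot alcohol mass (about 14 g), the distribution ratio of 0.68 for men, and the 0.015%-per-hour elimination rate are textbook assumptions, not values from this article:

```python
def widmark_bac(shots, body_weight_kg, hours,
                grams_per_shot=14.0, r=0.68, elimination_per_hour=0.015):
    """Estimate blood alcohol content (as a percentage) via the Widmark formula."""
    alcohol_g = shots * grams_per_shot
    # Ethanol grams over body mass adjusted by the distribution ratio, as a percent:
    bac = alcohol_g / (body_weight_kg * 1000 * r) * 100
    # Subtract what the liver has already cleared.
    return max(0.0, bac - elimination_per_hour * hours)

# The article's 73 kg man: eight shots in one hour, then four more by 90 minutes.
bac_greyout = widmark_bac(shots=8, body_weight_kg=73, hours=1)     # roughly 0.21%
bac_en_bloc = widmark_bac(shots=12, body_weight_kg=73, hours=1.5)  # roughly 0.3%
```

Both estimates land close to the 0.2% and 0.3% figures in the text, which is about all a population-average formula like this can promise.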
And yet…your friends might not even realize you’re in the middle of a blackout, since the alcohol didn’t "delete" your long-term memories already safe in storage before the night began. So you can still carry on conversations and behave more or less like a typical person. To an extent. Blackouts aside, alcohol can still interfere with other regions of your brain including those responsible for reasoning and decision-making.
So during blackouts, people have crashed cars, gotten into fights and committed — or been the victims of — sexual assaults. They just might not remember it.
That being said, not everyone gets blackouts. Your sex, body weight, and family history all play a role. So that could explain why your friends recall the entire night despite downing just as much tequila. But it won’t save them from a wicked hangover the next morning.
As we hurtle towards 2019, Qualcomm has been busy introducing the world to its latest products that are likely to drive next year’s biggest trends. At the first keynote of its three-day Tech Summit in Hawaii yesterday, the company already previewed some of the features of its next premium mobile processor — the Snapdragon 855. Today, we’re getting a deeper dive into the nitty gritty details of the new chipset.
To be clear, there are a lot of highlights here. This is the first mobile processor to support multi-gigabit 5G, and is one of the first chips built on 7-nanometer architecture. The Snapdragon 855 also features (among other things) advances in AI processing and graphics prowess — let’s take a closer look.
For all of Qualcomm’s talk about 5G, the 855 is fascinating because it mainly relies on a new, built-in X24 LTE modem, not the Snapdragon X50 5G modem we’ve heard so much about lately. The X50 enables millimeter wave (mmWave) support for transfers over recently opened mmWave frequency bands, which can provide up to 20 times faster average performance than what you’d get on today’s phones and networks. The thing is, device makers will ultimately decide whether or not they want to offer hardware with 5G-enabled 855 chipsets — just buying an 855-powered phone doesn’t mean you’ll get access to those crazy data speeds.
Regardless of your 5G situation, you still might see some important data speed improvements. The Snapdragon 855 will also support WiFi 6 (also known as 802.11ax), which promises to bring about increased throughput and faster speeds. Qualcomm’s using features like 8×8 sounding to serve more devices more efficiently, and promises up to two times improvement over 4×4 sounding. The new CPU will also support mmWave WiFi via Qualcomm’s 60 GHz platform, an industry-first 802.11ay-based offering that can boost speeds up to 10 Gbps.
Wireless earbuds also get an upgrade, thanks to improved support for Qualcomm’s TrueWireless Stereo Plus technology that’s supposed to optimize for low latency between left and right earbuds and improve energy efficiency for longer battery life.
Performance and AI
With every new processor comes faster speeds and better performance, and the Snapdragon 855 promises these improvements as well. Specifically, the Kryo 485 CPU is supposed to be 45 percent faster than the Snapdragon 845, while the new Adreno 640 GPU will provide up to 20 percent faster graphics.
Qualcomm also continued to work on its AI-processing prowess, and the Snapdragon 855 comes with the company’s 4th generation multi-core AI engine. It’s supposed to be capable of more than 7 trillion operations per second and offer three times the AI performance over the 845.
Perhaps more important is the new Hexagon 690 processor on the chipset, which includes a tensor accelerator and four vector extensions to double the vector processing prowess. Qualcomm has been talking up the value of machine learning applications being used in tandem with 5G connections, but we’re looking forward to seeing how the company’s improved AI hardware stacks up to the competition. The GPU also has 50 percent more arithmetic logic units, while the Kryo CPU received new instructions to speed up AI processing. All told, AI tasks like facial recognition or context-based suggestions should be much faster on the Snapdragon 855.
Photography and video
We’re a world obsessed with cameras and the Snapdragon 855 will provide some photography updates that could improve next year’s phones. The chipset features a new Spectra 380 image signal processor (ISP), which integrates hardware-accelerated computer vision to speed up tasks like object recognition. Qualcomm says this is the world’s first announced computer vision ISP and that it should provide up to 4 times power savings.
The Spectra 380 supports hardware-based depth-sensing to enable video recording, object classification and segmentation in real time at 4K HDR at 60 fps. What that means is the Snapdragon 855 is powerful enough to distinguish between people and a scene’s background and even apply filters or green screen-like effects to your video in real time. This will all happen in the viewfinder as you’re recording your video. That’s a lot of data to process at once in real time, but the Snapdragon 855 is up for the task (according to Qualcomm anyway).
The Spectra 380 will also support video recording using the popular HDR10+ standard to capture more than 1 billion shades of color. To better store all this data, the Snapdragon 855 also features hardware acceleration for HEIF file format encoding, making files 50 percent smaller. Oh, and beyond all that, the Snapdragon 855 supports capturing images from multiple cameras at the same time.
Gaming and entertainment
In addition to helping next year’s flagships capture high-quality content, the Snapdragon 855 will also have the chops to show it all off. With the new chipset, Qualcomm is introducing the Snapdragon Elite Gaming experience, which will support gaming in “true HDR” and offer physically based rendering to recreate textures in games by drawing from readily available templates. Mobile games have already gotten almost startlingly close to console game quality — well, graphically, anyway — and physically based rendering should make that gap even smaller.
You can also expect HDR10+ playback on Snapdragon 855 devices, which Qualcomm notes is a commercial first on mobile. Beyond that, we’re also looking at improved H.265 and VP9 decoding (thanks to some handy hardware acceleration) — the specs themselves might not mean a whole lot to you, but long story short, your new 855-powered phone should let you watch videos for longer on a single charge. And the sheer horsepower available here means we’ll start to see volumetric virtual reality (VR) immersive video experiences running at 120fps and at resolutions up to 8K. We’ve hit a point where the VR industry lacks the sort of buzz it generated just a few years ago, but throwing more power at the situation definitely can’t hurt.
On paper at least, Qualcomm’s Snapdragon 855 chipset seems like a potent mix of clever architecture and sheer computing power, two things that have become all too important as we demand more from our phones. The thing is, Qualcomm has always talked a big game — thankfully, we won’t need to wait too long until we get to see if the company can actually deliver the goods.
Sex, booze, or Amazon? For some millennials, the choice is easy: online shopping.
44% of millennials said they would rather give up sex than quit Amazon for a year, according to a new survey from Max Borges Agency. And, 77% of those surveyed would choose Amazon over alcohol for a year.
Max Borges Agency polled 1,108 people from the ages of 18 to 34 who had bought consumer-tech products on Amazon in the last year.
Millennials prioritizing Amazon over sex and alcohol is just one sign of the e-commerce giant’s dominance.
Amazon was named America’s most loved brand for the second year in a row in Morning Consult’s annual report, released Wednesday. And, earlier in December, the company briefly became the world’s most valuable public company, reaching a market capitalization of $865 billion — ahead of Apple’s $864.8 billion valuation.
The e-commerce giant has dealt with backlash in recent months. Thousands of Amazon workers across Europe went on strike on Black Friday, to protest what they called "inhumane conditions" in warehouses. In October, the company announced it would raise the minimum wage for all of its workers to $15 an hour, after being slammed by politicians such as Sen. Bernie Sanders.
Forever is the short film creation of German filmmaker Nicolas Arnold. He was commissioned in July 2018 to create a “Motion Response” for Australia’s upcoming Pause Fest 2019 tackling the theme “The Future is Intimate”. Four months later he’d completed the absolute eye candy you see above.
The film is already very impressive on its own, but I was blown away when I saw just how much of it was created in-camera and that it isn’t just some CG special effects reel. Nicolas posted a behind-the-scenes video showing some of his design and practical effects processes, and it’s just as amazing as the film itself.
Nicolas told DIYP that given the theme of “The Future is Intimate”, he chose to visually translate one of his favourite quotes; Emily Dickinson’s “Forever – is composed of nows“. The sound design is all analogue, and all of the effects are practical with the exception of a little kinetic type for the titles.
For some of the effects, Nicolas used a laptop and a mirror to create some very unique lighting and reflections on spheres placed within the scene. But to keep those spheres perfectly aligned and not rolling around all over the place, the mirror has to be set perfectly level.
This also means that the laptop and camera have to be perfectly aligned with each other, as well as the mirror, in order to produce flawless reflections and refractions in the final shot.
Nicolas also used a technique which seems to be quite a common theme across his work, and that is the interaction of various liquids. And watching the process reel above, you can see just how much work went into the whole production.
It’s amazing to think just how much planning and work went into the creation of this film in such a short space of time. And when you watch the final result, it’s incredibly impressive. Nicolas was also asked to create a number of posters and other media for promotional material, which were made from assets created during the production of the film.
I’ve seen tablets, laptops and TVs used as backgrounds for images and video before, but this really takes things to the next level. And the liquids have such a surreal and alien quality to them.
You can see Nicolas’ full write up on this project over on his website. And if you want to see more of his work, be sure to follow him on Instagram and Behance.
Images used with permission.
from DIYPhotography.net -Hacking Photography, One Picture At A Time http://bit.ly/2AW9Dbl
Starting an auto launch event with a dancing car is… odd. Apparently, the new 2020 Mercedes GLE is a slave to the rhythm. But the tech behind the groovin’ GLE revealed in front of a San Antonio hotel has real-world uses that don’t involve entertainment.
E-Active Body Control suspension is more impressive when taken seriously
MBUX is still the best infotainment system on the market
4Matic handling is great for a car this size
HUD can be overwhelming
We’re going to be overloaded with videos of GLEs dancing
Route-Based Speed Adaptation is still too cautious around corners
A refreshed luxury SUV that’s had all the technology thrown at it, and it comes out the other side looking and driving great. The new MBUX continues to impress, and the E-Active Body Control suspension, while weird at first, should get a lot of people out of sandpits, but also let them show off their car’s dance moves.
The new Mercedes GLE (starting at $53,700) looks like any other SUV refresh, but under its attractive new design is a vehicle crammed with features that include the new MBUX infotainment system and the impressive E-Active Body Control suspension that makes cornering… weird but better. Oh and that “dancing,” it’ll actually help you get out of a sand pit.
A new suspension system controls each wheel’s spring and damping force independently. It’s how the car is able to dance. That demo I mentioned — while weird — was a good representation of what E-Active Body Control can do. Inside the vehicle, you can recreate that dance by adjusting the height of each corner of the GLE in real time. That’s helpful off-road when one tire is stuck in a ditch.
The real fun comes when you put the vehicle in off-road mode and turn on the rocking feature, which essentially bounces the SUV up and down. It’s for when the car is stuck in sand or soft dirt; the bouncing compresses the terrain, giving the vehicle more traction to free itself. Sadly, Mercedes didn’t have a sandpit for us to try this feature in, so I just pulled over to the side of the road and tried it until I stopped giggling.
The thing is, I’ve actually used this method as a teenager to help my friends get their trucks unstuck from mud, sand and even snow. It worked then, and it should work just as well without a bunch of teenagers bouncing up and down in the back of a pickup. At least there will be less of a chance of falling out of the vehicle.
One suspension trick you can’t pull off with a group of friends is the new Curve feature. When you go around a corner the vehicle actually leans into it. Like everything else with the new E-Active Body Control suspension, it’s weird at first. But, after about an hour, you miss it once you turn it off. It’s not available in Sport mode, so it’s not really built for aggressive driving, but for cruising, it reduces how much the passengers lean while the car corners.
You can push the GLE in Sport mode and it’ll deliver superior handling for a car of its size. Cornering is tight and body roll (if you don’t have Curve mode enabled) is kept to a minimum. The all-wheel-drive 4Matic system does a good job keeping you on the road, but I did encounter some understeer (the front wheels turn but the car continues straight).
Inside, the GLE’s beautiful 12.3-inch display houses the new MBUX infotainment system. I’m happy to report that it feels more responsive to voice commands than the pre-production A-Class I drove a few months back. I was already happy with MBUX in the A-Class; if this is what a few months of fine-tuning does to make it better, other automakers might want to take heed and see what Mercedes is doing.
The dash cluster is equally stunning, with its own 12.3-inch display showing off a myriad of different design modes and options. Mercedes also dropped a huge HUD (head-up display) into the car. Within it you can add, well, frankly too many things. If you keep it simple, it’s great; if you go overboard, it gets far too cluttered for driving in anything other than a long, boring highway straightaway.
That tedious freeway could also be suited to the updated Advanced Driver Assistance System. The stop-and-go feature is more effective than others, thanks to the Car-to-X communications for traffic jam assist — which alerts the vehicle that there’s traffic up ahead and primes it for the gridlock. Like the S-Class, it supports the ability to adjust the speed of the adaptive cruise control to what Mercedes deems safe around corners. Also, like the S-Class, that speed is usually way slower than I would take a corner and seems overly cautious to me.
Mercedes also added active lane change. When the driver assistance system is up and running with adaptive cruise control and lane-keep assist, tap the blinker to move into the next lane (if it’s deemed safe by the vehicle). It worked well during my tests and, like Route-Based Speed Adaptation, it is very cautious. Unlike my concern about cornering speed, I’m happy to have the car act less aggressively when dealing with traffic.
Once you get away from the soul-draining traffic of the city, the GLE is a happy cruiser if you opt for the GLE 350 with the 2.0-liter inline-4 turbo that puts out 255 horsepower and 273 foot-pounds of torque. If you want to kick up some dust, then the inline-6 GLE 450 (starting at $61,150) with 362 horsepower and 369 foot-pounds of torque is probably more your speed. Yes, the 450 is more fun, but if a majority of your driving is in town where a six-cylinder engine is constrained, you’re probably better off with the 350.
The GLE 450 also gets an additional 21 horsepower from the EQ Boost system, a small electric motor and battery that add a little bit of oomph and can potentially shut off the engine in certain driving conditions.
At its core, the new GLE is a good SUV made better. Handling is improved and it feels like Mercedes did more than tweak a few things under the hood. While driving and riding in the passenger and rear seats, the experience was the luxurious Mercedes ride you’ve come to expect. Massaging seats, cushioned headrests and a refreshed dash layout that, at first glance, I wasn’t sure about — but it grew on me once I actually got in the car.
That’s the magic of this SUV. All of these suspension features and the tweaked design seem a little weird. Then you get in and drive and it’s like, “ohhhh this is great.” You can’t ask for much more than that from an updated SUV.
This is the first time a crew has launched to the space station since a failed Soyuz launch in October led to an abort that brought both crewmembers (safely) back to Earth not long after launch.
That was the first crewed Soyuz failure in decades. The Monday launch marks the Soyuz’s return to form.
The new crew includes NASA astronaut Anne McClain, Canadian astronaut David Saint-Jacques, and Russian cosmonaut Oleg Kononenko. They’ll stay aboard the space station for about 6.5 months performing science and learning how to live in space.
And who knows, maybe they’ll also get to spot a rocket launch from space during their time in orbit.
Casual hookups can be fun, but there are plenty of people who crap all over them, believing they promote bad habits, can lead to higher odds of contracting an STI and that bumping uglies should be between two people who care about one another. Whatever, it’s no longer 1950, grandma, so there’s absolutely nothing wrong with casual hookups, just as long as it’s consensual and you’re using protection.
And, while some people think that casual hookups are bad because they lack intimacy — because it’s usually between two strangers — for those who think the mind, body, spirit and all that stuff is necessary to have a really great experience, a new study says that getting down and dirty with strangers offers just as much intimacy. Guys, this is awesome news.
The study, which was conducted by a team of researchers that included Binghamton University faculty and a team at Indiana University’s Kinsey Institute, reveals that casual hookups among young adults are a frequent source of intimacy, so take that, all you haters! Here’s what one of the researchers involved in the study, Ann Merriwether, a developmental psychologist and lecturer at Binghamton, had to say about the findings.
“We have a stereotype that casual sex (hookups) are just about meaningless sex, but this research shows this is not necessarily true. It shows intimacy is important and desired by many people, especially those who prefer hookups to more traditional relationships.”
This is big news, bros, because it kind of proves that, in a way, we’re all hopeless romantics who look for some intimacy with potential partners, even if they’re just casual hookups. Additionally, it shows that, even if we’re not interested in a full-on relationship, we do want a connection without all the commitment.
For the study, the researchers asked several hundred college students to answer questions about “affectionate and intimate activities” during sexual encounters, whether in a relationship or as a casual hookup. These included things like cuddling, foreplay, spending the night, eye gazing, etc., with researchers finding that the rate of intimacy during casual hookups was much greater than they first thought.
Look, having casual hookups is a great way to stay sexually healthy — again, as long as it’s consensual and is done safely — so it might be time to officially lose the stigma that everyone’s a bunch of floozies if they’re sleeping around and not in a relationship. Of course, going to an alley behind a bar together isn’t all that romantic or intimate, but bringing someone home and waking up in bed next to them can be, and this study says so.
In 2017, a Palestinian construction worker in the West Bank settlement of Beitar Illit, near Jerusalem, posted a picture of himself on Facebook in which he was leaning against a bulldozer. Shortly after, Israeli police arrested him on suspicion that he was planning an attack, because the caption of his post read “attack them.”
Except that it didn’t. The real caption of the post was “good morning” in Arabic. But for some unknown reason, Facebook’s artificial intelligence-powered translation service translated the text to “hurt them” in English or “attack them” in Hebrew. The Israeli Defense Force uses Facebook’s automated translation to monitor the accounts of Palestinian users for possible threats. In this case, they trusted Facebook’s AI enough not to have the post checked by an Arabic-speaking officer before making the arrest.
The Palestinian worker was eventually released after the mistake came to light—but not before he underwent hours of questioning. Facebook apologized for the mistake and said it took steps to correct it.
Advances in deep learning and neural networks have improved the precision of AI algorithms and enabled the automation of tasks that were previously thought to be the exclusive domain of human intelligence. But the precision in performance comes at a cost to transparency. Unlike with traditional software, we don’t always have an exact idea of how deep-learning algorithms work. Troubleshooting them is very difficult, and they often fail in unanticipated and unexplainable ways. Even the creators of deep-learning algorithms are often hard-pressed to investigate and interpret the logic behind their decisions.
The failure of Facebook’s machine-translation system is just one of the many cases in which the opacity of deep-learning algorithms has caused larger troubles.
What’s widely known as the AI “black box” problem has become the focus of academic institutions, government agencies, and tech companies that are researching methods to explain AI decisions or to create AI that is more transparent and open to investigation.
Their efforts will be crucial to the development of the AI industry — especially as deep learning finds its way into critical domains where mistakes can have life-changing consequences.
The rise of deep learning
In classical approaches to creating software, developers meticulously specify the rules that define the behavior of a system. In contrast, deep-learning algorithms develop their behavior by examining and comparing numerous examples. The concept and science behind deep learning have existed for decades, but only in recent years have abundant data and compute resources pushed it from research labs and academic papers into practical domains. And with its rise in popularity, deep learning has introduced changes in the way developers create software.
For Kate Saenko, who has been involved in computer vision since the early 2000s, those changes are very tangible. Computer vision is a field of artificial intelligence that enables computers to process and understand the context and content of digital images and videos. It is the technology used in a wide range of fields, including image classification, facial recognition, and the automated diagnosis of MRI and X-ray images. It’s one of the fields where rules-based programming has historically struggled, because the number of rules developers would have to write is virtually endless.
“Back in those days, we had a very different approach, where first you designed your features, and a lot of thought and design process went into that,” said Saenko, an associate professor at the Department of Computer Science at Boston University.
For instance, if developers wanted to detect cats, they had to manually write code that could probe pictures for cat features such as heads or tails. “You designed these features first, and then you designed methods to extract those features. And then you would do machine learning on top of the features,” Saenko said.
The process was arduous and lengthy because each of those features can vary in shape and size, depending on the species of the animal and the angle at which the picture was taken.
In contrast, a deep-learning algorithm that is meant to classify pictures as “cat” or “not cat” only needs to be given many cat pictures. It creates its own rules for detecting cats in pictures and performs much better than earlier methods built on manually engineered features. In 2012, researchers from the University of Toronto used deep learning to win the ImageNet computer-vision competition by a large margin. Deep learning has since found its way into many other fields, including voice recognition, natural language processing, fraud detection, and the arts.
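The contrast can be sketched with a toy example. Instead of hand-coding rules for what a cat looks like, a tiny perceptron learns its own decision rule from labeled examples. The two “features” here (ear pointiness, tail length) and all the numbers are made up purely for illustration; a real vision system learns from raw pixels, not hand-picked features:

```python
# Toy illustration of learning from examples instead of hand-writing rules.
# Each "image" is reduced to two hypothetical features: ear_pointiness, tail_length.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from (features, label) pairs; label is 1 for cat, 0 for not-cat."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            # Nudge the weights toward examples the current rule gets wrong.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Cats in this toy set have pointier ears and longer tails than the non-cats.
data = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.2, 0.1), 0), ((0.1, 0.3), 0)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # → [1, 1, 0, 0]
```

No one told the model what an ear or a tail means; it found a separating rule on its own, which is the essence of what Saenko describes, scaled down to four data points.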
“The reason deep learning is so successful is because there’s very little design that goes into neural networks,” said Saenko. “We just let the machine discover the most useful pattern from raw data. We’re not going to tell it what to look for. We’re not going to tell it any high-level features. We let it search through all of its training data and find those patterns that lead to the highest accuracy in solving the problem.”
The challenges of debugging deep-learning software
The benefits in accuracy that deep learning provides are not without their trade-offs.
“In classical computer programming, you have precision with the algorithm. You know exactly in mathematical terms what you are doing,” said Sheldon Fernandez, CEO of DarwinAI, an Ontario-based AI company. “With deep learning, the behavior is data-driven. You are not prescribing behavior to the system. You are saying, ‘Here’s the data, figure out what the behavior is.’ That is an inherently fuzzy and statistical approach.”
This means that when you let a neural network develop its own behavioral model, you essentially lose visibility into its reasoning process. In most cases, the inner parameters and connections that neural networks develop are so numerous and complex that they become too difficult for humans to understand.
As Saenko explained, when using deep learning, engineers must choose “between how much human-imposed, top-down design you put into something to make it more interpretable versus how much performance you lose as a result of that.”
“The real challenge of deep learning is that it’s not modeling, necessarily, the world around it. It’s modeling the data it’s getting,” Fernandez said. “And that modeling often includes bias and problematic correlations. It can include nonsensical correlations. And all those things can find [their] way into the behavior of the system.”
A while ago, Saenko developed a deep-learning algorithm that captioned images and videos with impressive accuracy. The problem was that her captioning application had developed a bias toward certain types of decisions, a problem that is common in deep-learning algorithms. For instance, in cooking videos, it often captioned kitchen workers as women — even when they were men. In science videos, on the other hand, the algorithm was more inclined to label scientists as men. But she couldn’t determine for certain why the network was making those mistakes. And without being able to find the reasons for those errors, she couldn’t fix them.
In some cases, the opacity of AI algorithms can cause frustration. But in other cases, not being able to explain the reasoning behind AI decisions can have more serious consequences.
In 2017, Fernandez, then a computer scientist at Avanade, an IT consulting company, was using deep learning to help a bank in the UK detect fraudulent transactions. The team trained a deep neural network on the bank’s historical transaction data and let it figure out for itself the patterns that defined fraudulent transactions.
Their algorithm was able to detect fraud 3 or 4 percent better than the client’s best-in-class system. The problem was that they had no idea why it was performing better. “We had no insight into what data the neural network was triggering off in order to make better predictions,” Fernandez said.
Naturally, the client could not entrust sensitive financial decisions to an automated system if they couldn’t understand the logic behind its decisions.
The financial industry is one of several domains where interpretability has become a requirement for the use of AI algorithms in critical decisions. Other fields where the opacity of deep learning has become a hurdle include health care and medicine, hiring and human resources, criminal justice, and the military. In all these domains, a bad decision can have a negative and irreversible effect on the career, health, or life of one or many humans, and can have severe legal consequences for the person who makes those decisions. That’s why experts are generally skeptical about trusting an automated system to make decisions on their behalf.
Moreover, the European Union’s General Data Protection Regulation (GDPR), which went into effect in May, requires organizations that use automated decision-making to provide meaningful information about the logic involved in those decisions when users or customers demand it. The GDPR, which is legally binding for any company or organization that does business in the EU, is considered a de facto gold standard for all tech companies handling personal information.
“One of the real powers of explainable AI is to illustrate how the AI is triggering data points to reach a decision, and surfacing those data points to a human for verification,” Fernandez said.
Investigating the AI black box
There are generally two pathways toward making decisions made by neural networks interpretable. The first, called “local explanations,” tries to understand the motives and parameters behind individual decisions made by an AI algorithm. “Global explanations” try to describe the general reasoning logic of an AI model.
After her neural networks failed to reveal the reasons they were mislabeling videos and pictures, Saenko and a team of researchers at Boston University engaged in a project to find the parameters that influenced those decisions. The result was RISE (Randomized Input Sampling for Explanation), a method for showing which parts of an input image an AI model bases its decisions on.
When you provide an image-classification network with an image input, what it returns is a set of classes, each associated with a probability. Normally, you’d have no insight into how the AI reached that decision. But RISE provides you with a heatmap that describes which parts of the image are contributing to each of those output classes.
For instance, in the above image, it’s clear that the network in question is mistaking brown sheep for cows, which might mean that it hasn’t been trained on enough examples of brown sheep. This type of problem happens often. Using the RISE method, Saenko was able to discover that her neural networks were specifying the gender of the people in the cooking videos based on pots and pans and other objects that appeared in the background instead of examining their facial and physical features.
The idea behind RISE is to randomly obscure parts of the input image and run it through the neural network to observe how the changes affect the output weights. By repeating the masking process multiple times, RISE is able to discern which parts of the image are more important to each output class.
Since RISE works by manipulating inputs, it is a “black box” explanation method, which means it is model-agnostic: It can work with any AI model, without the need to access its inner workings or its training examples.
Methods such as RISE can also help build trust with the end users of AI algorithms in fields such as radiology. “When you give a doctor an AI image model that can look at a medical image or an MRI and detect cancer with very high accuracy, they often still don’t trust it because they don’t know why it’s making that decision,” Saenko said. RISE can clarify why an AI is making a diagnosis by pointing out which parts of the image it considers relevant to the symptoms it is reporting.
Looking for what isn’t there
Most AI explanation methods focus on what’s present in the input. But sometimes, focusing on what’s missing can provide a better picture of the reasoning behind AI decisions.
“If you want to describe a colleague to me, a very natural kind of explanation you might use is, ‘He has long hair and is tall, but he doesn’t wear glasses,'” said Amit Dhurandhar, scientist at IBM Research. “However, none of the methods that do local explanations of AI models explicitly capture this idea.”
Contrastive Explainable Method (CEM), a joint project by researchers at IBM and the University of Michigan, tries to describe decisions made by neural networks by pointing out what it’s not seeing in the input. Like RISE, CEM is a local explanation method, which means it tries to interpret individual decisions made by an AI algorithm.
Basically, like other local explanation methods, CEM tries to tell you why a certain neural network has classified your input in a particular way. But it also tells you what could be added to the input to change its class. For instance, the image below was extracted from a classifier for digits that was run through the CEM probe. On the left is the original input image and the original prediction of the neural network. The middle images highlight in cyan which parts of the image contributed to the original prediction. On the right, the pink highlights show the minimal additions that could lead to a change in prediction.
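The contrastive idea can be sketched with a toy classifier over hand-made features. The feature names, the classification rule, and the brute-force search below are invented for illustration; the real CEM works on raw inputs with an optimization procedure, not an exhaustive search:

```python
from itertools import combinations

# Hypothetical stroke features a digit image might or might not contain.
FEATURES = ["loop", "vertical_stroke", "horizontal_bar", "curve"]

def classify(present):
    """Stand-in digit classifier: a '7' needs a horizontal bar plus a
    vertical stroke; otherwise the input falls back to being a '1'."""
    if "horizontal_bar" in present and "vertical_stroke" in present:
        return "7"
    return "1"

def pertinent_negative(present):
    """Find the smallest set of *absent* features whose addition would
    flip the classifier's decision (the 'pertinent negative')."""
    original = classify(present)
    absent = [f for f in FEATURES if f not in present]
    for size in range(1, len(absent) + 1):
        for extra in combinations(absent, size):
            if classify(present | set(extra)) != original:
                return set(extra)
    return set()

x = {"vertical_stroke"}
print(classify(x))             # classified as "1"
print(pertinent_negative(x))   # adding a horizontal bar would make it a "7"
```

The explanation is phrased in terms of what is missing: this input is a “1” partly because it lacks the horizontal bar that would make it a “7,” which mirrors the “he doesn’t wear glasses” style of description Dhurandhar gives.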
As Dhurandhar explained, medical diagnosis is one of the fields that stands to benefit greatly from this explanation method, because doctors reach conclusions not only by looking for the symptoms that are present but also by looking for those that are absent.
“If you go to a doctor, they will register facts such as whether your heart rate was normal. But they will also write things like arrhythmia was absent and a bunch of things that were not present,” Dhurandhar said. “The reason is that in your next checkup, if you have an issue, the doctor will know what you were checked for. Also, if you switch a doctor, it’s easy for the other person to know your diagnosis process.”
Therefore, with methods like CEM, a doctor will be better positioned to probe an automated decision both for the positive and negative contributing factors.
Understanding the general behavior of AI models
While local explanation methods are helpful for investigating individual AI decisions, some domains require full transparency of the behavioral model of the software they use.
A few years ago, Dhurandhar developed a deep-learning model that helped a semiconductor-chip-manufacturing company predict which chips would likely become defective further down the production line. The model performed much better than the company’s previous prediction software and enabled it to discard or fix chips at early production stages and improve its yield by several percent, which translated to millions of dollars in cost savings per year.
But the engineers controlling the system, whose jobs were on the line, weren’t willing to let the AI make decisions without knowing exactly how it worked. What they wanted was to improve their original software, not to replace it with a black box that, albeit more accurate, would not provide them with insights on how it worked.
“Since in many domains, there’s a human making the final decision — even if you have a higher-performing model, if the person doesn’t understand, the overall performance of the system might be lower than a lower-performing model that the person is able to understand,” Dhurandhar said.
Improving Simple Models with Confidence Profiles, another AI-explanation method Dhurandhar helped develop with other researchers at IBM, addresses this issue by trying to transfer the behavior of neural networks to interpretable software structures. This is a global explanation model, which means instead of trying to interpret individual decisions, it tries to paint a general picture of how an AI model works.
Dhurandhar describes the “improving simple models” method as trying to achieve “best of both worlds,” which means to benefit from the improvements that a neural network provides while adhering to other constraints that domain experts impose.
The method involves inserting software probes in the various layers of a neural network and monitoring its behavior as it trains on examples and evolves. In later stages, those probes try to replicate the observed behavior of the network on a decision tree, rule-based structure, or another model that is interpretable. In the case of the semiconductor company, Dhurandhar was able to map the behavior of the neural network on the software structure that the company already used.
The resulting model did not perform as well as the neural network, but it improved the performance of the company’s original software considerably while maintaining its interpretability. Effectively, the engineers were willing to trade some of the neural network’s accuracy for full visibility and control over how the prediction software worked.
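The general surrogate idea (querying a black box and fitting an interpretable rule that mimics its answers) can be sketched as follows. The stand-in “neural network,” the one-feature threshold rule, and all the numbers are invented for illustration; this is not IBM’s probe mechanism, which works on the network’s internal layers:

```python
def black_box(x):
    """Stand-in 'neural network': flags a chip as defective when feature 0
    exceeds a threshold; feature 1 is ignored. In reality this rule is
    hidden inside millions of opaque weights."""
    return 1 if x[0] > 0.6 else 0

def fit_stump(inputs, labels):
    """Find the single (feature, threshold) rule that best reproduces the
    labels -- a 'decision stump', the simplest interpretable surrogate."""
    best, best_acc = None, -1.0
    for f in range(len(inputs[0])):
        for t in sorted({x[f] for x in inputs}):
            preds = [1 if x[f] > t else 0 for x in inputs]
            acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
            if acc > best_acc:
                best_acc, best = acc, (f, t)
    return best, best_acc

# Query the black box on some inputs, then fit the surrogate to its answers.
inputs = [(0.1, 0.9), (0.4, 0.2), (0.65, 0.5), (0.9, 0.1)]
labels = [black_box(x) for x in inputs]
rule, acc = fit_stump(inputs, labels)
print(rule, acc)  # → (0, 0.4) 1.0
```

Here the surrogate recovers a human-readable rule (“defective when feature 0 exceeds 0.4”) that reproduces the black box’s behavior on the sampled inputs; in practice the surrogate is usually a decision tree or rule set, and it trades some accuracy for that readability, exactly the trade the engineers in the story accepted.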
Using AI to understand AI
Fernandez, who co-founded DarwinAI with University of Waterloo professor Alex Wong, reached AI explainability through a different approach. As an academic, Wong, who had years of experience in computer vision, had worked on a technique called evolutionary synthesis (it’s where the name DarwinAI comes from). Evolutionary synthesis is meant to make neural networks more efficient by treating them like organisms that evolve over time and shed their redundant components to become more efficient.
At DarwinAI, Wong helped develop Generative Synthesis, a new technology that builds on the ideas of evolutionary synthesis and takes it a step further.
“The idea behind Generative Synthesis is to take artificial intelligence itself and see if we can better understand and develop neural networks,” Fernandez said.
Generative Synthesis uses machine learning to probe and understand neural networks in a fundamental way. It then develops a complex mathematical representation of the model, which it uses to generate a second neural network that is just as accurate as the first one but is also more compact and faster. Making neural networks smaller makes them deployable in UAVs (unmanned aerial vehicles), driverless cars, and other edge environments that are resource-constrained or need real-time access to AI functionality.
But a byproduct of this approach is a thorough understanding of the way the neural network operates. Because it monitors and documents the entire evolution of a neural network, DarwinAI’s Generative Synthesis approach can point out the factors and data points that influenced each of the decisions its neural networks made.
“We had a kind of roundabout way of getting to the technology, but it’s really powerful in trying to understand how these neural networks are making decisions,” Fernandez said.
Beyond finding mistakes
“There are correlations that are demonstrably bad, that just shouldn’t happen, such as bias. We need to recognize it in the system and eradicate it,” Fernandez said. In the future, explainability methods can help find and fix those errors before they lead to an unjustified arrest or an unfairly declined loan.
But the benefits of interpreting deep-learning models expand beyond troubleshooting and fixing errors. In some cases, they can help shed light on previously unknown aspects of the domains they’re deployed in.
“Explainability can also work in another direction. It can also give you insights into correlations that you didn’t know existed,” Fernandez said. During his work on applying deep learning to the banking sector, Fernandez’s exploration of interpretable networks helped uncover new insights on the characteristics of fraudulent transactions.
For example, thanks to explainable AI, they discovered that if a person is using the Chrome browser, the chances of a transaction being fraudulent are higher than if they’re using Internet Explorer or Safari. That’s because, as technical people, cybercriminals are much more likely to use Chrome than their operating system’s preinstalled browser.
In another case, a travel agency was able to discover that some people were interested in hotels located on street corners. They later added this as an option for their clients.
“Getting these insights is just as important as eradicating bias, because these insights are valuable to business,” Fernandez said.