Solving AI’s ‘black box’ problem: Learning how algorithms make decisions



In 2017, a Palestinian construction worker in the West Bank settlement of Beitar Illit, near Jerusalem, posted a picture of himself on Facebook in which he was leaning against a bulldozer. Shortly after, Israeli police arrested him on suspicion that he was planning an attack, because the caption of his post read “attack them.”

Except that it didn’t. The real caption of the post was “good morning” in Arabic. But for some unknown reason, Facebook’s artificial intelligence-powered translation service translated the text to “hurt them” in English or “attack them” in Hebrew. The Israel Defense Forces use Facebook’s automated translation to monitor the accounts of Palestinian users for possible threats. In this case, they trusted Facebook’s AI enough not to have the post checked by an Arabic-speaking officer before making the arrest.

The Palestinian worker was eventually released after the mistake came to light—but not before he underwent hours of questioning. Facebook apologized for the mistake and said it took steps to correct it.

Advances in deep learning and neural networks have improved the precision of AI algorithms and enabled the automation of tasks that were previously thought to be the exclusive domain of human intelligence. But the precision in performance comes at a cost to transparency. Unlike with traditional software, we don’t always have an exact idea of how deep-learning algorithms work. Troubleshooting them is very difficult, and they often fail in unanticipated and unexplainable ways. Even the creators of deep-learning algorithms are often hard-pressed to investigate and interpret the logic behind their decisions.

The failure of Facebook’s machine-translation system is just one of the many cases in which the opacity of deep-learning algorithms has caused larger troubles.

What’s widely known as the AI “black box” problem has become the focus of academic institutions, government agencies, and tech companies that are researching methods to explain AI decisions or to create AI that is more transparent and open to investigation.

Their efforts will be crucial to the development of the AI industry — especially as deep learning finds its way into critical domains where mistakes can have life-changing consequences.

The rise of deep learning

In classical approaches to creating software, developers meticulously specify the rules that define the behavior of a system. In contrast, deep-learning algorithms develop their behavior by examining and comparing numerous examples. The concept and science behind deep learning have existed for decades, but only in recent years has the abundance of data and compute resources pushed it from research labs and academic papers into practical domains. And with its rise in popularity, deep learning has introduced changes in the way developers create software.

For Kate Saenko, who has been involved in computer vision since the early 2000s, those changes are very tangible. Computer vision is a field of artificial intelligence that enables computers to process and understand the context and content of digital images and videos. It is the technology used in a wide range of fields, including image classification, facial recognition, and the automated diagnosis of MRI and X-ray images. It’s one of the fields where rules-based programming has historically struggled, because the number of rules developers have to write down is virtually endless.

“Back in those days, we had a very different approach, where first you designed your features, and a lot of thought and design process went into that,” said Saenko, an associate professor at the Department of Computer Science at Boston University.

For instance, if developers wanted to detect cats, they had to write code manually that could probe pictures for cat features such as heads or tails. “You designed these features first, and then you designed methods to extract those features. And then you would do machine learning on top of the features,” Saenko said.

The process was arduous and lengthy because each of those features can vary in shape and size, depending on the species of the animal and the angle at which the picture was taken.

In contrast, a deep-learning algorithm that is meant to classify pictures as “cat” or “not cat” only needs to be given many cat pictures. It will create its own rules to determine how to detect cats in pictures, and it performs much better than previous methods that relied on manually engineered features. In 2012, researchers from the University of Toronto became the first to win a famous computer-vision competition using deep learning, beating the rest of the field by a large margin. Deep learning has since found its way into many other fields, including voice recognition, natural language processing, fraud detection, and art.

“The reason deep learning is so successful is because there’s very little design that goes into neural networks,” said Saenko. “We just let the machine discover the most useful pattern from raw data. We’re not going to tell it what to look for. We’re not going to tell it any high-level features. We let it search through all of its training data and find those patterns that lead to the highest accuracy in solving the problem.”

The challenges of debugging deep-learning software

The benefits in accuracy that deep learning provides are not without their trade-offs.

“In classical computer programming, you have precision with the algorithm. You know exactly in mathematical terms what you are doing,” said Sheldon Fernandez, CEO of DarwinAI, an Ontario-based AI company. “With deep learning, the behavior is data-driven. You are not prescribing behavior to the system. You are saying, ‘Here’s the data, figure out what the behavior is.’ That is an inherently fuzzy and statistical approach.”

This means that when you let a neural network develop its own behavioral model, you are basically losing visibility into its reasoning process. In most cases, the inner parameters and connections that neural networks develop are so numerous and complex that they become too difficult for humans to understand.

A simplified view of how data flows in neural networks. (Image: Akritasa / Wikimedia Commons)

As Saenko explained, when using deep learning, engineers must choose “between how much human-imposed, top-down design you put into something to make it more interpretable versus how much performance you lose as a result of that.”

Also, the reasoning that a neural network develops does not necessarily reflect that of humans, even though it produces accurate results most of the time.

“The real challenge of deep learning is that it’s not modeling, necessarily, the world around it. It’s modeling the data it’s getting,” Fernandez said. “And that modeling often includes bias and problematic correlations. It can include nonsensical correlations. And all those things can find [their] way into the behavior of the system.”

A while ago, Saenko developed a deep-learning algorithm that captioned images and videos with impressive accuracy. The problem was that her captioning application had developed a bias toward certain types of decisions, a problem that is common in deep-learning algorithms. For instance, in cooking videos, it often captioned kitchen workers as women — even when they were men. On the other hand, in science videos, the algorithm was more inclined to label scientists as men. But she couldn’t determine for certain why the network was making the mistakes. And without being able to find the reasons for those errors, she couldn’t fix them.

In some cases, the opacity of AI algorithms can cause frustration. But in other cases, not being able to explain the reasoning behind AI decisions can have more serious consequences.

In 2017, Fernandez, then a computer scientist at Avanade, an IT consulting company, was using deep learning to help a bank in the UK detect fraudulent transactions. They trained a deep neural network on all of the bank’s historical transaction data and let it figure out for itself the patterns that defined fraudulent transactions.

Their algorithm was able to detect fraud 3 or 4 percent better than the client’s best-in-class system. The problem was that they had no idea why it was performing better. “We had no insight into what data the neural network was triggering off in order to make better predictions,” Fernandez said.

Naturally, the client could not confer sensitive financial decision-making onto an automated system if they couldn’t understand the logic behind its decisions.

The financial industry is one of several domains where interpretability has become a requirement for the use of AI algorithms in critical decisions. Other fields where the opacity of deep learning has become a hurdle include health care and medicine, hiring and human resources, criminal justice, and the military. In all these domains, a bad decision can have a negative and irreversible effect on the career, health, or life of one or many humans and can have severe legal consequences for the person who makes those decisions. That’s why experts are generally skeptical about trusting an automated system to make decisions on their behalf.

Moreover, the European Union’s General Data Protection Regulation (GDPR), which went into effect in May, requires organizations that use automated decision-making to provide meaningful information about the logic involved in those decisions when users or customers demand it. The GDPR, which is legally binding for any company or organization that does business in the EU, is considered a de facto gold standard for all tech companies handling personal information.

“One of the real powers of explainable AI is to illustrate how the AI is triggering data points to reach a decision, and surfacing those data points to a human for verification,” Fernandez said.

Investigating the AI black box

There are generally two pathways toward making decisions made by neural networks interpretable. The first, called “local explanations,” tries to understand the motives and parameters behind individual decisions made by an AI algorithm. “Global explanations” try to describe the general reasoning logic of an AI model.

After her neural networks failed to reveal the reasons they were mislabeling videos and pictures, Saenko and a team of researchers at Boston University engaged in a project to find the parameters that influenced those decisions.

What came out of the effort was RISE, a method that tries to interpret decisions made by AI algorithms. Short for “randomized input sampling for explanation of black-box models,” RISE is a local explanation model.

When you provide an image-classification network with an image input, what it returns is a set of classes, each associated with a probability. Normally, you’d have no insight into how the AI reached that decision. But RISE provides you with a heatmap that describes which parts of the image are contributing to each of those output classes.
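For concreteness, the per-class probabilities at the end of such a network are conventionally produced by a softmax over the network's raw class scores; a minimal sketch (the class names in the comment are made up for illustration):

```python
import math

def softmax(logits):
    """Turn raw class scores ("logits") into probabilities that sum to 1.

    Subtracting the maximum score first avoids overflow in exp()
    without changing the result.
    """
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for the classes ["sheep", "cow", "dog"]
probs = softmax([2.0, 1.0, 0.1])
```

RISE starts from exactly these per-class probabilities and asks which parts of the input each one depends on.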

RISE is a method that tries to explain decisions about images made by AI algorithms with a heatmap. (Image: Cornell University Library)

For instance, in the above image, it’s clear that the network in question is mistaking brown sheep for cows, which might mean that it hasn’t been trained on enough examples of brown sheep. This type of problem happens often. Using the RISE method, Saenko was able to discover that her neural networks were specifying the gender of the people in the cooking videos based on pots and pans and other objects that appeared in the background instead of examining their facial and physical features.

The idea behind RISE is to randomly obscure parts of the input image and run it through the neural network to observe how the changes affect the output weights. By repeating the masking process multiple times, RISE is able to discern which parts of the image are more important to each output class.

How RISE works: It randomly obscures parts of an input image, running them through the neural network to observe how the changes affect the output weights. (Image: Cornell University Library)

Since RISE works by manipulating inputs, it is a “black box” explanation method, which means it is model-agnostic: It can work with any AI model, without the need to access its inner workings or its training examples.
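A minimal sketch of that masking loop, with a toy scoring function standing in for the real network (the function names here are hypothetical, and the actual implementation uses smooth upsampled masks and batched inference rather than this naive loop):

```python
import random

def rise_saliency(model, image, n_masks=500, p_keep=0.5, seed=0):
    """RISE-style saliency: average the class score over random masks.

    model(image) -> score (probability) for the class of interest.
    image is a 2D list of pixel values. Each pixel's saliency is the
    mean score over the masks in which that pixel stayed visible.
    """
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    totals = [[0.0] * w for _ in range(h)]
    counts = [[0] * w for _ in range(h)]
    for _ in range(n_masks):
        # randomly keep each pixel with probability p_keep
        mask = [[1 if rng.random() < p_keep else 0 for _ in range(w)]
                for _ in range(h)]
        masked = [[image[i][j] * mask[i][j] for j in range(w)]
                  for i in range(h)]
        score = model(masked)
        for i in range(h):
            for j in range(w):
                if mask[i][j]:
                    totals[i][j] += score
                    counts[i][j] += 1
    return [[totals[i][j] / max(counts[i][j], 1) for j in range(w)]
            for i in range(h)]
```

A toy "classifier" that only looks at one pixel produces a heatmap that lights up at exactly that pixel; on a real network, the same evidence is what reveals shortcuts like the pots-and-pans cue described above.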

Methods such as RISE can also help build trust with the end users of AI algorithms in fields such as radiology. “When you give a doctor an AI image model that can look at a medical image or an MRI and detect cancer with very high accuracy, they often still don’t trust it because they don’t know why it’s making that decision,” Saenko said. RISE can clarify why an AI is making a diagnosis by pointing out which parts of the image it is considering relevant to the symptoms it is reporting.

Looking for what isn’t there

Most AI explanation methods focus on what’s present in the input. But sometimes, focusing on what’s missing can provide a better picture of the reasoning behind AI decisions.

“If you want to describe a colleague to me, a very natural kind of explanation you might use is, ‘He has long hair and is tall, but he doesn’t wear glasses,’” said Amit Dhurandhar, a scientist at IBM Research. “However, none of the methods that do local explanations of AI models explicitly capture this idea.”

The Contrastive Explanations Method (CEM), a joint project by researchers at IBM and the University of Michigan, tries to describe decisions made by neural networks by pointing out what is absent from the input. Like RISE, CEM is a local explanation method, which means it tries to interpret individual decisions made by an AI algorithm.

Basically, like other local explanation methods, CEM tries to tell you why a certain neural network has classified your input in a particular way. But it also tells you what could be added to the input to change its class. For instance, the image below was extracted from a classifier for digits that was run through the CEM probe. On the left is the original input image and the original prediction of the neural network. The middle images highlight in cyan which parts of the image contributed to the original prediction. On the right, the pink highlights show the minimal additions that could lead to a change in prediction.

CEM is another method for interpreting individual decisions made by an AI algorithm, highlighting parts of an image in cyan (middle) to show what contributed to the original prediction (left), and showing possible additions in pink (right) that could lead to a different outcome. (Image: Cornell University Library)
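The real CEM finds these additions by solving a regularized optimization problem, but the idea of a "pertinent negative" (the smallest set of absent features that would change the verdict) can be illustrated with a greedy toy search; everything below, including the model, is hypothetical:

```python
def pertinent_negative(model, x, budget=5):
    """Greedy sketch of a CEM-style pertinent-negative search.

    Finds a small set of absent features (0s in x) to switch on so
    that the model's predicted class changes. The real method poses
    this as an elastic-net-regularized optimization; this loop only
    illustrates the concept.
    """
    original = model(x)
    x = list(x)
    added = []
    for _ in range(budget):
        # first, try every single absent feature on its own
        for i, v in enumerate(x):
            if v == 0:
                trial = x[:i] + [1] + x[i + 1:]
                if model(trial) != original:
                    return added + [i]  # prediction flipped
        # otherwise commit one addition and keep searching
        for i, v in enumerate(x):
            if v == 0:
                x[i] = 1
                added.append(i)
                break
        else:
            return None  # nothing left to add
    return None
```

With a toy digit classifier that reads a "7" as a "9" once one extra segment is present, the search returns exactly that segment, mirroring the pink highlights in the figure above.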

As Dhurandhar explained, medical diagnosis is one of the fields that stands to benefit much from this explanation method, because doctors reach conclusions not only by looking for the symptoms that are present but also by looking for those that are absent.

“If you go to a doctor, they will register facts such as whether your heart rate was normal. But they will also write things like arrhythmia was absent and a bunch of things that were not present,” Dhurandhar said. “The reason is that in your next checkup, if you have an issue, the doctor will know what you were checked for. Also, if you switch a doctor, it’s easy for the other person to know your diagnosis process.”

Therefore, with methods like CEM, a doctor will be better positioned to probe an automated decision both for the positive and negative contributing factors.

Understanding the general behavior of AI models

While local models are helpful in investigating individual AI decisions, some domains require full transparency of the behavioral model of the software they use.

A few years ago, Dhurandhar developed a deep-learning model that helped a semiconductor-chip-manufacturing company predict which chips would likely become defective further down the production line. The model performed much better than the company’s previous prediction software and enabled it to discard or fix chips at early production stages and improve its yield by several percent, which translated to millions of dollars in cost savings per year.

But the engineers controlling the system, whose jobs were on the line, weren’t willing to let the AI make decisions without knowing exactly how it worked. What they wanted was to improve their original software, not to replace it with a black box that, albeit more accurate, would not provide them with insights on how it worked.

“Since in many domains, there’s a human making the final decision — even if you have a higher-performing model, if the person doesn’t understand, the overall performance of the system might be lower than a lower-performing model that the person is able to understand,” Dhurandhar said.

Improving Simple Models with Confidence Profiles, another AI-explanation method Dhurandhar helped develop with other researchers at IBM, addresses this issue by trying to transfer the behavior of neural networks to interpretable software structures. This is a global explanation model, which means instead of trying to interpret individual decisions, it tries to paint a general picture of how an AI model works.

Dhurandhar describes the “improving simple models” method as trying to achieve the “best of both worlds”: to benefit from the improvements that a neural network provides while adhering to the constraints that domain experts impose.

The method involves inserting software probes in the various layers of a neural network and monitoring its behavior as it trains on examples and evolves. In later stages, those probes try to replicate the observed behavior of the network on a decision tree, rule-based structure, or another model that is interpretable. In the case of the semiconductor company, Dhurandhar was able to map the behavior of the neural network on the software structure that the company already used.
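The probe-and-replicate step amounts to distillation: label inputs with the network's predictions, then fit the simple structure that agrees with those predictions most often. A toy version, fitting a single threshold rule (a "decision stump") rather than the full decision-tree or rule-based structures the method supports, might look like this (all names are illustrative):

```python
def distill_to_stump(network, inputs):
    """Mimic a black-box binary model with one threshold rule.

    Labels each input with the network's (boolean) prediction, then
    searches every feature/threshold pair for the rule `x[f] > t`
    that agrees with the network most often. Real methods fit
    decision trees or richer rule lists in the same spirit.
    """
    labels = [network(x) for x in inputs]
    best = None  # (agreement_count, feature_index, threshold)
    for f in range(len(inputs[0])):
        for t in sorted({x[f] for x in inputs}):
            agree = sum(1 for x, y in zip(inputs, labels)
                        if (x[f] > t) == y)
            if best is None or agree > best[0]:
                best = (agree, f, t)
    _, f, t = best
    return (lambda x: x[f] > t), best
```

The stump typically agrees with the network on most inputs while staying fully inspectable, which is the accuracy-for-transparency trade described next.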

The resulting model did not perform as well as the neural network but managed to improve the performance of the company’s original software considerably while also maintaining its interpretability. Effectively, the engineers were willing to trade some of the accuracy of the neural network for full visibility into and control over how the prediction software worked.

Using AI to understand AI

Fernandez, who co-founded DarwinAI with University of Waterloo professor Alex Wong, reached AI explainability through a different approach. As an academic, Wong, who had years of experience in computer vision, had worked on a technique called evolutionary synthesis (it’s where the name DarwinAI comes from). Evolutionary synthesis is meant to make neural networks more efficient by treating them like organisms that evolve over time and shed their redundant components to become more efficient.

At DarwinAI, Wong helped develop Generative Synthesis, a new technology that builds on the ideas of evolutionary synthesis and takes them a step further.

“The idea behind Generative Synthesis is to take artificial intelligence itself and see if we can better understand and develop neural networks,” Fernandez said.

Generative Synthesis uses machine learning to probe and understand neural networks in a fundamental way. It then develops a complex mathematical representation of the model, which it uses to generate a second neural network that is just as accurate as the first one but is also more compact and faster. Making neural networks smaller makes them deployable in UAVs (unmanned aerial vehicles), driverless cars, and other edge environments that are resource-constrained or need real-time access to AI functionality.

But a byproduct of this approach is a thorough understanding of the way the neural network operates. By having monitored and documented the entire evolution of a neural network, DarwinAI’s Generative Synthesis approach was able to point out the factors and data points that influenced each of the decisions its neural networks made.

“We had a kind of roundabout way of getting to the technology, but it’s really powerful in trying to understand how these neural networks are making decisions,” Fernandez said.

Beyond finding mistakes

“There are correlations that are demonstrably bad, that just shouldn’t happen, such as bias. We need to recognize it in the system and eradicate it,” Fernandez said. In the future, explainability methods can help find and fix those errors before they lead to an unjustified arrest or an unfairly declined loan.

But the benefits of interpreting deep-learning models expand beyond troubleshooting and fixing errors. In some cases, they can help shed light on previously unknown aspects of the domains they’re deployed in.

“Explainability can also work in another direction. It can also give you insights into correlations that you didn’t know existed,” Fernandez said. During his work on applying deep learning to the banking sector, Fernandez’s exploration of interpretable networks helped uncover new insights into the characteristics of fraudulent transactions.

For example, thanks to explainable AI, they discovered that if a person is using the Chrome browser, the chances of a transaction being fraudulent are higher than if they’re using Internet Explorer or Safari. That’s because cybercriminals, being technical people, are much more likely to use Chrome than their operating system’s preinstalled browser.

In another case, a travel agency was able to discover that some people were interested in hotels located on street corners. They later added this as an option for their clients.

“Getting these insights is just as important as eradicating bias, because these insights are valuable to business,” Fernandez said.

This article was originally published at PCMag.

from Mashable! http://bit.ly/2EeOL38
via IFTTT

Google’s cross-platform Flutter UI toolkit hits version 1.0


Flutter, Google’s UI toolkit for building mobile Android and iOS applications, hit its version 1.0 release today. In addition, Google announced a set of new third-party integrations with the likes of Square, as well as a couple of new features that make it easier to integrate Flutter with existing applications.

The open source Flutter project made its debut at Google’s 2017 I/O developer conference. Since then, it’s quickly grown in popularity and companies like Groupon, Philips Hue, Tencent, Alibaba, Capital One and others have already built applications with it, despite the fact that it had not hit version 1.0 yet and that developers have to write their apps in the Dart language, which is an additional barrier to entry.

In total, Google says, developers have already published “thousands” of Flutter apps to the Apple and Google app stores.

“Flutter is our portable UI toolkit for creating a beautiful native experience for iOS and Android out of just a single code base,” Tim Sneath, Google’s group product manager for Dart, explained. “The problem we’re solving is the problem that most mobile developers face today. As a developer, you’re kind of forced to choose. Either you build apps natively using the platform SDK, whether you’re building an iOS app or an Android app. And then you have to build them twice.”

Sneath was also part of the Silverlight team at Microsoft before he joined Google in 2017, so he’s got a bit of experience in learning what doesn’t work in this space of cross-platform development. It’s no secret, though, that Facebook is trying to solve a very similar problem with React Native, which is also quite popular.

“I mean, React Native is obviously a technology that’s proven quite popular,” Sneath said. “One of the challenges that React Native developers face, or have reported in the past, is that React Native code is written in JavaScript, which means that it’s run using the browser’s JavaScript engine, which immediately moves this a little bit away from the native model of the platform. The bit where they are very native is that they use the operating system’s own controls. And while on the surface that seems like a good thing, in practice it had quite a few challenges for developers around compatibility.”

Google obviously believes that its ability to compile to native code — and the speed gains that come with that — sets its platform apart from the competition. In part, it does this by using a hardware-accelerated 2D engine and, of course, by compiling the Dart code to native ARM code for iOS and Android. The company also stresses that developers get full control over every pixel on the screen.

With today’s launch, Google is also announcing new third-party integrations for Flutter. The first is with Square, which announced two new Flutter SDKs for building payment flows, both for in-app experiences and for in-person terminals using a Square reader. Others are 2Dimensions, for building vector animations and embedding them right into Flutter, and Nevercode, which announced a tool for automating the build and packaging process for Flutter apps.

As for new Flutter features, Google today announced ‘Add to App,’ a new feature that makes it easier for developers to slowly add Flutter code to existing apps. In its early days, Flutter’s focus was squarely on building new apps from scratch, but as it has grown in popularity, developers now want to use it for parts of their existing applications as they modernize them.

The other new feature is ‘Platform Views,’ which is essentially the opposite of ‘Add to App’ in that it allows developers to embed Android and iOS controls in their Flutter apps.

from TechCrunch https://tcrn.ch/2UecCVF
via IFTTT

One Man Spent A Decade Studying Hangovers In Hopes Of Finding A Cure–Here Is His Best Practice



iStockphoto

One of the many things I was blindsided by in adulthood was the ever-expanding awfulness of hangovers as life goes on. If I binge drink for back-to-back nights, I leave a note for my mother telling her I love her in case I don’t make it. I find some solace in knowing that I am not alone, as there’s a scientific explanation for why hangovers become more debilitating.

  1. You don’t have as many liver enzymes to rid the body of the toxic element of alcohol.
  2. The recovery process of your body is weaker, no thanks to metabolism and neuroplasticity.
  3. Your meds exacerbate the problem.
  4. Everything else sucks as you age, so why should hangovers be any different!

Everyone has their own hangover routine to soften the blow. The late, great Anthony Bourdain preferred aspirin, a Coca-Cola, a joint, and spicy Szechuan food. No one, however, has yet found a cure.

Toronto-based writing professor and former bar owner Shaughnessy Bishop-Stall has spent nearly a decade studying hangovers in hopes of making them a thing of the past, even writing a book titled “Hungover: The Morning After and One Man’s Quest for the Cure.”

“We seem to be so adept at progressing scientifically . . . except when it comes to this strange little phenomenon,” the 44-year-old told the New York Post.

In order to find the cure, Bishop-Stall subjected himself to countless nights of relentless drinking and recorded everything he drank and the severity of his symptoms. He then tried hundreds of remedies, from old “cures” like eels and pickled sheep’s eyes to high-end nutrient IVs. After extensive research, here is his big takeaway.

Thankfully, it’s not pickled animal parts, but a handful of easily obtainable over-the-counter supplements, taken between “your last drink and before you pass out.”

The hero ingredient, per Bishop-Stall, is a “high dose” — about 1,500 milligrams — of an amino acid called N-acetylcysteine (NAC). NAC, he explains, is “sort of a magic ingredient”: It helps the body produce a powerful antioxidant called glutathione. Plus, it’s earned its reputation as a toxicity cure: NAC is used in hospital settings to treat Tylenol overdoses.

Along with NAC, Bishop-Stall recommends taking vitamins B1, B6 and B12, which purportedly make NAC more effective, along with boswellia (frankincense), a supposed anti-inflammatory, and milk thistle, an herb that contains even more glutathione.

Bishop-Stall goes on to stress that timing is everything. If you’ve done nothing to prevent a hangover while drinking, by the morning you have a huge mountain to climb.

[h/t New York Post]

 

from BroBible.com http://bit.ly/2Ru46Ql
via IFTTT

Why Does Weed Make You Hungry? We Turned To Science To Get To The Bottom Of The Munchies




iStockphoto

Around the turn of the 20th century, a dangerous new drug burst onto the scene that threatened to bring America to its knees and tear apart the fabric of society as we know it.

Over the next few decades, vigilant activists did everything they could to protect the people of the United States from falling victim to this vicious menace before eventually outlawing it altogether to protect the population from succumbing to the horrors of Public Enemy No. 1: marijuana.

As we all know by now, the War on Drugs has been a resounding success and made it virtually impossible for people to get their hands on illicit substances in any shape or form — with the notable exception of every single drug in existence.

It took over a century, but the world has slowly started to accept the fact that smoking weed might not actually drive someone to jump out of a window on a whim or mercilessly slaughter someone in cold blood.

That’s not to say marijuana is totally harmless, as you do run the risk of accidentally eating enough Flamin’ Hot Cheetos to put you in the hospital.

Virtually everyone who’s smoked weed has fallen victim to a serious case of the munchies, a crippling condition that can drive a person to consume an entire Domino’s pizza and multiple cans of Arizona in a single sitting.

Why does marijuana make you so hungry in the first place? I’ve come across a number of conflicting explanations over the course of my time online and decided it was time to get to the bottom of things once and for all.

Why Does Weed Give You The Munchies? 


iStockphoto

Marijuana contains dozens of different cannabinoids, all of which impact the brain and body in different ways.

Cannabinoids themselves are a naturally-occurring substance produced by the brain to keep your body in a state of homeostasis and help it adjust to external factors like changes in temperature and stress. If you happen to injure yourself, it’s cannabinoids that are responsible for instructing your nerves to compensate and reduce inflammation and pain in the process.

If you’ve ever tried (and failed) to resist the urge to order three fortune cookies worth of Chinese, there’s one major cannabinoid to blame: THC (a.k.a. “the thing that gets you high”).

THC is one of the most active ingredients in marijuana and impacts different areas of the brain in a variety of ways; for one, it serves as an inhibitor, negatively impacting memory, coordination, and reaction time (as highlighted by this vintage piece of anti-weed propaganda).

A study published in 2015 found that THC latches onto a certain type of cannabinoid receptor in the brain known as “CB1.” CB1s are found in multiple parts of your body’s control center — including the hypothalamus, the part of the brain responsible for telling you your stomach wants some attention.

That rumble you feel in your midsection comes courtesy of a naturally-occurring hormone called “ghrelin,” which is commonly referred to as the “hunger hormone.” Absorbing THC causes your body to start pumping out ghrelin at an accelerated rate and stimulates your hypothalamus in the process, which results in an increased craving for nourishment.

Most fans of the Devil’s lettuce are likely familiar with one of weed’s other side effects: the ability to make even the most mediocre food taste like the most delicious thing to ever grace your tongue.

This is the result of THC impacting CB1s in multiple areas of the brain, including one that makes eating more enjoyable and another that makes foods more pleasurable to the palate.

It also affects the olfactory system and increases your sensitivity to odors. Your sense of smell plays a major factor in how you perceive taste, and in turn, THC ups your ability to detect and appreciate certain flavor combinations you might not normally pick up on.

When you consider that junk food producers engineer their products to keep you coming back for more, it makes sense that you find it harder than usual to resist the urge to house a box of Zebra Cakes when you’re high.

And that, my friend, is why weed makes you hungry.

Yay science!

from BroBible.com http://bit.ly/2rmylxb
via IFTTT

Vinyl record production has finally joined the modern age



When you think of manufacturing in the US, vinyl records probably aren’t the first thing that springs to mind, but the industry has been chugging along as best it can. For decades, pressing plants have been using aging machines that require a complex infrastructure of piping for the steam-based heating (and cooling) mechanisms — not to mention an engineering support team to keep them in working order. New vinyl presses just weren’t being made, at least until a few years ago.

Two companies emerged to fill that need. Newbilt Machinery launched around 2015 in Germany with slightly updated (cloned) versions of old presses, adding electronic controls and hydraulic power. In February 2017, Jack White’s Third Man pressing plant opened in Detroit running Newbilt’s manual Duplex machines.

That same year, Toronto-based Viryl Technologies joined the market with its WarmTone presses. These machines weren’t clones, but were built fresh from the ground up, with modular construction, fully automated operation and remote machine monitoring (even from a mobile device) via its ADAPT software. Viryl’s tech support can log into the system remotely to help troubleshoot any problems. Still, like Newbilt’s, they required a large boiler system and network of piping to support their operation. Anyone looking to start a pressing plant still faced hefty startup and maintenance costs, a difficult permit and zoning process, as well as a less-than-ideal impact on the environment.

Very recently, this all changed. Viryl has developed a first in the industry: a steamless system that will make massive boilers and piping systems a thing of the past. Not only does it obviate some of the costs and permits previously involved, but it also makes for a more environmentally friendly process. Vinyl record pressing has finally bootstrapped itself into the modern age on all counts and stands to encourage new pressing plants to support vinyl’s resurgent popularity.

Traditionally, the molds used to stamp out vinyl discs are heated by steam, which is delivered to the press from a boiler. Viryl’s steamless module electrically heats water to the desired 285 degrees Fahrenheit so the molds can melt pucks of PVC into a record. This new method of heating removes gas, the boiler and extensive plumbing from the equation.


A vinyl mold used to press one side of a record.

This new setup is a closed system that can live right next to the press, allowing for a smaller footprint in your workspace. It also reduces water waste, although you’ll still need cooling lines. One of the biggest factors here, though, is that no boiler means none of the treatment chemicals used to keep a boiler in working order, so the environment wins. A setup that requires less square footage could also make Viryl’s new presses a more attractive solution when space is limited or at a premium. Existing customers luck out as well, since it’s possible to retrofit presses with the new option. Modularity FTW.

Still, the steamless module is very new. In fact, only one WarmTone fitted with it has shipped so far: to Smashed Plastic, one of the first new pressing plants in the Chicago area in decades, although it won’t officially open until February 2019. It’s a joint venture between CHIRP Radio DJs and founders Andy Weber and John Lombardo, along with Matt Bradford and Stationary Heart label owner Steve Polutnik.

The timing couldn’t have been better for Weber and Smashed Plastic, as they were actively searching for a record press to start the business — and learning the trade as they went. Newbilt machines had been considered, especially knowing that Third Man had adopted them. Word of production delays, though, kept them looking around for other solutions. They soon discovered Viryl Technologies, which was a relatively convenient eight-hour drive away. The WarmTone’s feature set, modular construction and the level of customer service and support at Viryl were what initially drew them to the company. But after several pitches for the steamless version, the decision became obvious.

While they’d been considering Viryl’s machinery, the Smashed Plastic crew had been scouring the area for a viable location, pulling in quotes for boiler setup and inquiring about necessary permits with the city. Getting a manufacturing business off the ground is never easy, but even the few hard-won answers still left a lot of questions, especially in a union-heavy area like Chicago. With all the restrictions and hassle involved in setting up a vinyl record pressing plant, going steamless made it that much easier.

Smashed Plastics / Viryl

The Steamless WarmTone vinyl press at Smashed Plastic in Chicago.
The heating module is the dark grey unit, bottom right.

All this talk about automation and the benefits of a steamless operation shouldn’t downplay the seriousness of this industry, or the investment required. (New presses cost around $200,000.) Weber made it clear that it’s still an artful and technological craft, and not a plug-and-play type of operation. So far, Smashed Plastic has been doing unofficial test runs for a select few customers and there’s still lots of fine tuning left before the grand opening next year.

The need for a pressing plant, at least locally in Chicago, seems obvious when you consider that Smashed Plastic is already quoting dozens of orders in advance of its opening. Weber tells me they don’t currently have plans to press for customers outside of the city, as there’s plenty of local demand already. That demand is largely due to most big pressing plants being tied up with major-label reissues or new releases, making it difficult for smaller clients to get their records pressed without extensive delays.

Oh the irony: Smashed Plastic’s factory space in Chicago’s Workshop 4200 comes complete with a disconnected boiler unit.

Viryl hopes its new steamless option may expand opportunities for others like Smashed Plastic looking to press vinyl, helping to feed the current need that’s obvious in the market. It’s also encouraging those interested in pressing records in less traditional surroundings, since the infrastructure requirements have been drastically reduced.

Still, you’re not getting a machine that can easily be dropped into a boutique, where small-batch records are pressed as you browse clothing racks. “You have to have a water chiller, a powerful electrical setup and drains, so it’s not a pop-up shop item. It’s industrial gear for sure,” says Weber.

The record pressing industry has been long overdue for these advancements, and the environmental benefits of Viryl’s Steamless WarmTone (or the scaled-down Steamless LiteTone) come at the perfect time, as society tries to rein in its bad habits yet still craves the analog tones of freshly pressed vinyl.

Images: Viryl Technologies (Record mold, WarmTone installation); Smashed Plastic (Boiler image)

from Engadget https://engt.co/2Swx7LH
via IFTTT

Tumblr will delete all porn from its platform


Tumblr, a microblogging service whose impact on internet culture has been massive and unique, is preparing for a sweeping change that’s sure to upset many of its millions of users.

On December 17, Tumblr will be banning porn, errr “adult content,” from its site and encouraging users to flag that content for removal. Existing adult content will be set to a “private mode” viewable only to the original poster.

What does “adult content” even mean? Well, according to Tumblr, the ban means the removal of any media that depicts “real-life human genitals or female-presenting nipples, and any content—including photos, videos, GIFs and illustrations—that depicts sex acts.”

This is a lot more complicated than just deleting some hardcore porn from the site; over the past several years Tumblr has become a hub for communities and artists with more adult themes. This has largely been born out of the fact that adult content has been disallowed from other multimedia-focused social platforms. There are bans on nudity and sexual content on Instagram and Facebook, though Twitter has more relaxed standards.

Why now? The Tumblr app was removed from the iOS app store several weeks ago due to an issue with its content filtering that led the company to issue a statement. “We’re committed to helping build a safe online environment for all users, and we have a zero tolerance policy when it comes to media featuring child sexual exploitation and abuse,” the company had detailed. “We’re continuously assessing further steps we can take to improve and there is no higher priority for our team.”

We’ve reached out to Tumblr for further comment.

Update: In a blog post titled “A better, more positive Tumblr,” the company’s CEO Jeff D’Onofrio downplayed claims that the content ban was related to recent issues surrounding child porn, saying it is instead intended to make the platform one “where more people feel comfortable expressing themselves.”

“As Tumblr continues to grow and evolve, and our understanding of our impact on our world becomes clearer, we have a responsibility to consider that impact across different age groups, demographics, cultures, and mindsets,” the post reads. “Bottom line: There are no shortage of sites on the internet that feature adult content. We will leave it to them and focus our efforts on creating the most welcoming environment possible for our community.”

The imminent “adult content” ban will not apply to media connected with breastfeeding, birth or more general “health-related situations” like surgery, according to the company.

Tumblr is attempting to minimize the impact on the site’s artistic community as well, but this level of nuance is going to be incredibly difficult to enforce uniformly and will more than likely lead to a lot of frustrated users being told that their content does not qualify as “art.”

Tumblr also says exceptions will be made for more artistic storytelling: “erotica, nudity related to political or newsworthy speech, and nudity found in art, such as sculptures and illustrations, are also stuff that can be freely posted on Tumblr.”

I don’t know how much it needs to be reiterated that child porn is a major issue plaguing the web, but a blanket ban on adult content on a platform that has gathered so many creatives working with NSFW themes is undoubtedly going to be a pretty controversial decision for the company.

from TechCrunch https://tcrn.ch/2Ed01x6
via IFTTT

Pager service in Japan is finally coming to an end




ET1972 via Getty Images

After nearly five decades, Japan is finally ending pagers for good. The last service provider in the country, Tokyo Telemessage, announced that it will terminate its service in September 2019, according to SoraNews24. The company said about 1,500 people still use pagers in its service area, which covers Tokyo and several neighboring regions.

Pagers, which are known as “poke-beru” or “pocket bell” in Japan, certainly had their day. The small devices that send short messages via radio waves reached peak popularity in 1996 when as many as 10 million units were in use in Japan. Pagers were quickly overtaken by cellphones once the devices became widely available. Major telecommunications firm NTT — the company that first introduced pagers in the country — discontinued service for the devices in 2007. Now over a decade later, Tokyo Telemessage is following suit and pulling the plug on its last users.

While the remaining 1,500 subscribers are likely to be disappointed to learn their pagers’ days are numbered, they probably should have seen this coming. Tokyo Telemessage stopped manufacturing pager devices 20 years ago. That said, old technology has a way of sticking around in Japan. Faxes are still a popular means of communication in the country, and you can still find cassette tapes in convenience stores.

from Engadget https://engt.co/2Rwd13R
via IFTTT