Telegram Passport stores your real-world IDs in the cloud


Telegram has rolled out a massive update for mobile, which gives it the ability to store copies of your IDs in the cloud. The new feature, called Passport, can share your identification documents with other apps and services whenever needed. Telegram describes it as a “unified authorization method” that financial services and other industries can use to verify your identity, so you won’t need to upload photos of your passport or driver’s license again and again.

While it has the potential to become quite a useful tool, it’s easy to see why the security-conscious would balk at the idea of storing sensitive info in the cloud. Telegram says your documents are protected by end-to-end encryption, though, and the company apparently can’t see whatever you upload. It also plans to move all Passport data to a decentralized cloud in the future, meaning the data will be distributed across multiple computers to make the system safer.

Services that choose to integrate Passport into their systems will offer the option to sign up using Telegram’s new feature. Doing so gives them a way to request the documents you’ve stored in Passport. The company aims to introduce third-party verification in the near future, and when it arrives, firms that use Passport won’t even need to request your documents. They’ll let you sign up for their services knowing that a verification provider has already confirmed your IDs are legit.
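For developers wondering what such an integration might look like, here is a minimal sketch of a “Log in with Telegram Passport” button built with Telegram’s Passport JavaScript SDK. The option names (bot_id, scope, public_key, nonce, callback_url) and the scope format are recalled from the SDK’s documentation and should be treated as assumptions rather than definitive, and the bot ID, key and callback URL below are placeholders.

```typescript
// Minimal sketch of a Passport sign-in button on a service's sign-up page.
// Assumes Telegram's Passport JS SDK (telegram-passport.js) is loaded and
// exposes a global `Telegram.Passport` object; exact details are assumptions.
declare const Telegram: {
  Passport: {
    createAuthButton(elementId: string, options: Record<string, unknown>): void;
  };
};

Telegram.Passport.createAuthButton('telegram_passport_auth', {
  bot_id: 123456789,                                          // placeholder: the service's Telegram bot ID
  scope: { data: ['id_document', 'address_document'], v: 1 }, // documents the service requests (assumed format)
  public_key: '-----BEGIN PUBLIC KEY-----\n...',              // placeholder: key used to encrypt the returned data
  nonce: 'unique-request-id',                                 // anti-replay value the service's backend later verifies
  callback_url: 'https://example.com/passport/callback',      // placeholder: redirect target after authorization
});
```

When the user approves the request in the Telegram app, the service’s bot receives the documents in encrypted form and decrypts them server-side with the private key matching the public key above.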

The updated app is now available for both Android and Apple devices, further quashing fears that the iOS version will lag behind its Android counterpart. Back in May, Telegram’s founder said Apple had blocked the messenger’s updates from rolling out after the app was banned in Russia. Cupertino started approving updates again shortly after he aired his grievances. To access the new feature, go to Settings > Privacy & Security > Telegram Passport on Android or Settings > Telegram Passport on iOS.

from Engadget https://engt.co/2LRZtgl
via IFTTT

How (and how not) to fix AI

Joshua New, Contributor
Joshua New is a senior policy analyst at the Center for Data Innovation, a think tank studying the intersection of data, technology and public policy.

While artificial intelligence was once heralded as the key to unlocking a new era of economic prosperity, policymakers today face a wave of calls to ensure AI is fair, ethical and safe. New York City Mayor Bill de Blasio recently announced the formation of the nation’s first task force to monitor and assess the use of algorithms. Days later, the European Union enacted sweeping new data protection rules that require companies to be able to explain any automated decisions to consumers. And high-profile critics, like Elon Musk, have called on policymakers to do more to regulate AI.

Unfortunately, the two most popular ideas — requiring companies to disclose the source code of their algorithms and to explain how they make decisions — would cause more harm than good. Both would regulate the business models and inner workings of companies that use AI rather than hold those companies accountable for outcomes.

The first idea — “algorithmic transparency” — would require companies to disclose the source code and data used in their AI systems. Beyond its simplicity, this idea has little real merit as a wide-scale solution. Many AI systems are too complex to fully understand by looking at source code alone. Some rely on millions of data points and thousands of lines of code, and their decision models can change over time as they encounter new data. It is unrealistic to expect even the most motivated, resource-flush regulators or concerned citizens to spot all potential malfeasance when a system’s own developers may be unable to do so.

Additionally, not all companies have an open-source business model. Requiring them to disclose their source code reduces their incentive to invest in developing new algorithms, because it invites competitors to copy them. Bad actors in China, which is fiercely competing with the United States for AI dominance but routinely flouts intellectual property rights, would likely use transparency requirements to steal source code.

The other idea — “algorithmic explainability” — would require companies to explain to consumers how their algorithms make decisions. The problem with this proposal is that there is often an inescapable trade-off between explainability and accuracy in AI systems. An algorithm’s accuracy typically scales with its complexity, so the more complex an algorithm is, the more difficult it is to explain. While this could change in the future as research into explainable AI matures — DARPA devoted $75 million in 2017 to this problem — for now, requirements for explainability would come at the cost of accuracy. This is enormously dangerous. With autonomous vehicles, for example, is it more important to be able to explain an accident or avoid one? The cases where explanations are more important than accuracy are rare.


Rather than demanding companies reveal their source code or limiting the types of algorithms they can use, policymakers should instead insist on algorithmic accountability — the principle that an algorithmic system should employ a variety of controls to ensure the operator (i.e. the party responsible for deploying the algorithm) can verify it acts as intended, and identify and rectify harmful outcomes should they occur.

A policy framework built around algorithmic accountability would have several important benefits. First, it would make operators, rather than developers, responsible for any harm their algorithms might cause. Not only do operators have the most influence over how algorithms impact society, but they already have to comply with a variety of laws designed to make sure their decisions don’t cause harm. For example, employers must comply with anti-discrimination laws in hiring, regardless of whether they use algorithms to make those decisions.

Second, holding operators accountable for outcomes rather than the inner workings of algorithms would free them to focus on the best methods to ensure their algorithms do not cause harm, such as confidence measures, impact assessments or procedural regularity, where appropriate. For example, a university could conduct an impact assessment before deploying an AI system designed to predict which students are likely to drop out to ensure it is effective and equitable. Unlike transparency or explainability requirements, this would enable the university to effectively identify any potential flaws without prohibiting the use of complex, proprietary algorithms.

This is not to say that transparency and explanations do not have their place. Transparency requirements, for example, make sense for risk-assessment algorithms in the criminal justice system. After all, there is a long-standing public interest in requiring the judicial system be exposed to the highest degree of scrutiny possible, even if this transparency may not shed much light on how advanced machine-learning systems work.

Similarly, laws like the Equal Credit Opportunity Act require companies to provide consumers an adequate explanation for denying them credit. Consumers will still have a right to these explanations regardless of whether a company uses AI to make its decisions.

The debate about how to make AI safe has ignored the need for a nuanced, targeted approach to regulation, treating algorithmic transparency and explainability like silver bullets without considering their many downsides. There is nothing wrong with wanting to mitigate the potential harms AI poses, but the oversimplified, overbroad solutions put forth so far would be largely ineffective and likely do more harm than good. Algorithmic accountability offers a better path toward ensuring organizations use AI responsibly so that it can truly be a boon to society.

from TechCrunch https://tcrn.ch/2uPzy2r
via IFTTT

Tom Cruise and James Corden’s skydiving trip is 11 minutes of hilarious tension


Tom Cruise and James Corden always seem to have fairly energetic meet-ups. First they acted out Cruise’s entire film career in 9 minutes, then they went on a boat trip, and now they’re going on their most ambitious day out yet — an actual skydive.

"The worst problem is, in all of this, if we both die I will get zero press," says a nervous pre-dive Corden. "I will be a footnote."

Well, he needn’t have worried. Terrified swearing aside, they ended up having a lovely time.


from Mashable! http://bit.ly/2LJXZI4
via IFTTT