A Google company built artificial intelligence that just taught itself how to walk


Alphabet, Google’s parent company, owns an artificial intelligence company called DeepMind. DeepMind has developed an AI that taught itself how to walk, run, jump, and climb without any prior guidance. The result is as impressive as it is goofy.


from SAI http://read.bi/2tF2ee5
via IFTTT

Trump sued for blocking users on Twitter


The Knight First Amendment Institute at Columbia University warned President Trump to stop blocking Twitter users. The free-speech organization sent the president a letter last month arguing that when Twitter is used by a president, it operates as a "designated public forum," like a city council or school board meeting. When the president blocks users for mocking or critical tweets, it argues, he is violating their First Amendment rights. While the letter did not explicitly promise legal action, the implication was clear. Now the same institute has filed suit, asserting that President Trump and "his communications team are violating the First Amendment by blocking individuals from the @realDonaldTrump Twitter account because they criticized the president or his policies."

The lawsuit, filed in the Southern District of New York on behalf of seven individual Twitter users who have been blocked by the president or his team, argues that the blocks prevent those individuals from reading, responding to, or participating in discussions around the president’s tweets. It asks the court to declare the blocks unconstitutional and to order the White House to restore access for the people named in it. In addition to arguing that the account is a "public forum," the suit also claims that the individuals have a right to petition the president through @realDonaldTrump for redress of grievances.

In addition, the blocks prevent everyone else from hearing dissenting voices. "The White House is transforming a public forum into an echo chamber," said Knight Institute attorney Katie Fallow in a statement. "Its actions violate the rights of the people who’ve been blocked and the rights of those who haven’t been blocked but who now participate in a forum that’s being sanitized of dissent."

Source: Knight Institute

from Engadget http://engt.co/2ufj312
via IFTTT

Building “the switch” using machine learning


If you have been around algorithmic trading for a while, you have probably heard some version of the “switch” concept. It is one of the holy grails of systematic trading: the ability to change how one acts in the market according to market conditions. Today I want to talk about our quest at Asirikuy to build a practical, adaptable and ever-learning switch, a dream that seems closer each day thanks to advances in machine learning and the large amounts of data we have gathered from our price-action-based system mining projects using GPU technology. In this post I will talk about what we want to build, how we have been building it and some of what we have achieved so far.

In the early days of trading systems, back in the 1960s and ’70s, some people started to notice that specific groups of trading systems generated most of their returns under specific sets of market conditions. Most notably, trend-following strategies generated most of their returns when the market had strong momentum, while they suffered and entered drawdown periods when the market was stuck in ranges. This is when the idea of “the switch” first came up: traders imagined they could trade much more profitably if they could “flip a switch” at the right point in time and change from a trend-following system to a trend-fading, or range-trading, strategy.

The problems with this approach were obvious from the start. The main issue is that a financial time series that has trended in the past will not necessarily trend in the future, or vice versa. Since what you are trying to do is predict what will be best to trade in the future, not in the past, analyzing past conditions is of very limited utility. More often than not, traders found that they “lagged” the market significantly: when they chose their trend follower the instrument started ranging, and when they chose their range trader the instrument started trending. This happened with enough randomness that most traders decided it was best to compromise and trade portfolios of trend and range strategies that could survive the bad periods, rather than trying to predict more precisely what was going to happen. This worked for a while, but eventually the alpha decay of strategies alone implied that some sort of switch was indeed necessary.
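
To see the lag problem concretely, here is a toy simulation (a self-contained sketch added for illustration, not an actual trading system): the market flips between a trending and a ranging regime following a persistent Markov chain, and a naive switch trades whichever strategy suited the regime it detected a few bars ago.

```python
# Toy illustration of the "lag" problem (not an actual trading system).
# The market alternates between two regimes (0 = trending, 1 = ranging)
# following a persistent Markov chain. A naive switch trades whichever
# strategy suited the regime it observed `lag` bars ago; its hit rate
# decays toward a coin flip as the detection lag grows.
import numpy as np

rng = np.random.default_rng(0)
p_stay = 0.9          # assumed probability the current regime persists one more bar
n = 100_000           # number of simulated bars

regime = np.empty(n, dtype=np.int64)
regime[0] = 0
flips = rng.random(n) > p_stay   # True where the regime switches
for t in range(1, n):
    regime[t] = regime[t - 1] ^ flips[t]

for lag in (1, 5, 10, 20):
    guess = regime[:-lag]        # regime the switch believes is active
    truth = regime[lag:]         # regime actually in force when it trades
    acc = (guess == truth).mean()
    print(f"detection lag {lag:2d} bars -> switch accuracy {acc:.3f}")

# With p_stay = 0.9 the accuracy falls from roughly 0.90 at lag 1 toward
# roughly 0.50 at lag 20: the switch degenerates into coin flipping, which
# is exactly the "lagging the market" effect described above.
```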

Things are not that simple anymore, however. Strategies are no longer so simple that they can always be classified as “trend followers” or “trend faders,” so much more complex analysis is needed to group systems and to decide when a system should be traded or not. In essence, the idea of “the switch” is simply to make successful predictions about whether a system will or will not be profitable over some future period, given some set of inputs (past system results, properties of the financial time series, etc.). The level of inference required to build such a switch is now so dynamic and complex that it exceeds what a human brain can easily carry out.
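
To make this concrete, here is a minimal sketch of a switch framed as supervised classification. The features, labels and model choice here are illustrative assumptions, not our production setup:

```python
# Sketch: "the switch" as binary classification (illustrative assumptions
# throughout; this is not a production pipeline).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

rng = np.random.default_rng(1)

def make_features(system_returns, prices):
    """Example inputs one might feed the switch: recent system results plus
    simple properties of the financial time series."""
    r = np.asarray(system_returns, dtype=float)
    px = np.asarray(prices, dtype=float)
    equity = np.cumsum(r)
    drawdown = (np.maximum.accumulate(equity) - equity).max()
    log_ret = np.diff(np.log(px))
    return np.array([
        r.mean() / (r.std() + 1e-9),   # rolling Sharpe proxy for the system
        drawdown,                      # recent maximum drawdown
        log_ret.std(),                 # realized volatility of the instrument
        px[-1] / px[0] - 1.0,          # recent momentum of the instrument
    ])

# One example feature row built from simulated returns and prices.
demo_row = make_features(rng.normal(0, 0.01, size=50),
                         100.0 * np.exp(np.cumsum(rng.normal(0, 0.01, size=50))))
print("example feature row:", demo_row.round(4))

# In practice X holds one row per (system, date) built as above, and y marks
# whether that system was profitable over the *following* window. Random
# stand-in data keeps the sketch self-contained.
X = rng.normal(size=(2000, 4))
y = rng.integers(0, 2, size=2000)

model = GradientBoostingClassifier()
# Time-ordered splits: the model is always scored on data newer than what
# it was trained on, mimicking a real forward prediction.
scores = cross_val_score(model, X, y, cv=TimeSeriesSplit(n_splits=5))
print("forward-fold accuracy:", scores.round(3))
```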

At Asirikuy we have tackled the above problem using machine learning. Our idea is to create machine learning models that use information from our price-action-based system repository to decide when a system should be traded or not. The first image in this blog post shows our current workflow (the last step, using individual per-system models, is currently being implemented). As you can see, we seek to build a mechanism that lets us trade with some quantitatively derived forward expectation, since merely creating systems that are profitable on historical data is not enough (that is a trivial problem almost anyone can solve). Although historical profitability does increase the probability of future success, it is not sufficient; nowadays you cannot trade blindly without knowing whether you can reasonably expect to be profitable.

Another great advantage of these models is that they become better with time, as more data comes in (see here). Since what you are doing with machine learning is basically inference, your inference becomes more powerful as the amount of data grows. Not only that, but you can also move to more complex models as you gather more data, since a larger data set means you can add model complexity without increasing curve-fitting (compared with a simpler model trained on less data). This gives you access to deeper and deeper insights that are simply not reachable from a human-level perspective, because of all the complexity involved in the data.
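
A learning curve is a direct way to check this claim: score the model on held-out data at increasing training-set sizes and watch whether the curve rises. A small sketch with scikit-learn, using random stand-in data rather than a real repository:

```python
# Sketch: checking that inference strengthens with more data via a learning
# curve. On real repository data the validation score of a well-regularized
# model should climb as history accumulates; on the random stand-in below it
# hovers near chance, which is itself a useful sanity baseline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve

rng = np.random.default_rng(2)
X = rng.normal(size=(3000, 10))
y = rng.integers(0, 2, size=3000)

train_sizes, _, val_scores = learning_curve(
    RandomForestClassifier(n_estimators=100, random_state=0),
    X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),
    cv=5,
)
for n_rows, score in zip(train_sizes, val_scores.mean(axis=1)):
    print(f"{n_rows:5d} training rows -> validation accuracy {score:.3f}")
```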

So although the days of looking for a simple trend/range switch are over, we can now look for much more powerful switches that let us predict out-of-sample trading profitability for a wide range of strategies with an almost endless array of characteristics. If you would like to learn more about our machine learning models and how you can trade using their selections, please consider joining Asirikuy.com, a website filled with educational videos, trading systems, development and a sound, honest and transparent approach towards automated trading.

from Mechanical Forex http://bit.ly/2u5e8PJ
via IFTTT

Audi’s new A8 will have Level 3 autonomy via ‘traffic jam pilot’


The new Audi A8, revealed by the automaker today, features a new level of self-driving for production vehicles: Level 3 autonomy, which allows drivers to stop monitoring the vehicle under certain conditions, where the law permits it. Level 3 is a tier that some autonomous-driving experts have argued shouldn’t even exist, since it still requires a driver to be ready to resume control, but Audi’s production implementation of the tech is on its way regardless.

The A8’s new automated driving feature is called “traffic jam pilot,” and it kicks in only when the car finds itself in slow-moving traffic on divided highways at speeds of 37.3 mph (60 km/h) or below. That’s very different from Level 2 highway-assistance features like Tesla’s Autopilot, and it’s clearly intended to mitigate the risks associated with use of the system. But it’s also a feature that sounds very attractive from a user perspective, since it handles driving in exactly those conditions where manual control is no fun at all: stop-and-go freeway traffic.

Traffic jam pilot handles all aspects of driving in these conditions, including starting from a stop, accelerating, steering and braking. Drivers can take their hands off the wheel entirely, and (where legal) can even text, watch the built-in TV or read. The system is designed to alert the driver in plenty of time to get them to resume manual control where necessary.

That’s a tough needle to thread, which is why some experts and automakers, including Ford, have said they’re skipping Level 3 entirely and heading straight to Level 4, where the vehicle assumes full control of all driving operations. Audi clearly has a lot of confidence in its solution, which is powered by a central driver-assistance controller that computes a fused picture of the vehicle’s surroundings from radar, a front camera, ultrasonic sensors and laser scanning.

Rollout of traffic jam pilot will vary with market-by-market testing and regulatory approvals, so the feature may not be available everywhere when the A8 ships. Still, it’ll be a big step forward in autonomy when it does make it to consumers.

The A8 also includes AI-based remote parking for both surface spaces and garages, which doesn’t even require the driver to be seated in the car to hail the vehicle or send it to its spot. All of that is controlled via a mobile app on the driver’s smartphone. Audi could find itself at the leading edge of autonomy with the A8, but being this close to the frontier means it’ll be closely monitored for risk, too.

from TechCrunch http://tcrn.ch/2v91a09
via IFTTT

Lumosity doesn’t actually improve your cognitive skills


Brain-training apps such as Lumosity and Elevate are supposed to keep your cognitive skills sharp, but quite a bit of doubt has been cast on whether they are actually useful. Now scientists led by University of Pennsylvania psychologist Joseph Kable are chiming in. As published in The Journal of Neuroscience, the team "found no evidence that cognitive training influences neural activity during decision-making, nor did we find effects of cognitive training on measures of delay discounting or risk sensitivity."

The team’s interest centered on instant gratification. Specifically, they wanted to see whether cognitive training could change behavior, leading users to prefer delayed or less risky rewards. They set up young adults with the brain-training app Lumosity; each participant completed a total of 50 sessions over 10 weeks.

The results were pretty clear. There was no change in choices or decision-making behavior by study participants. The exception was "specifically trained" cognitive task performance. In other words, the only thing that using Lumosity improved was users’ ability to play the games in Lumosity.

This isn’t all that surprising, given the history of brain training apps such as Lumosity. Last year, parent company Lumos Labs was ordered to pay $2 million to the FTC because of charges it misled the public. And way back in 2014, a Florida State University-based team determined that playing Portal 2 actually improves cognitive skills, while Lumosity does not. This current study is just another nail in the coffin for the brain training fad.

Source: The Journal of Neuroscience

from Engadget http://engt.co/2uaal3a
via IFTTT

Fabric 1.0: Hyperledger Releases First Production-Ready Blockchain Software


Open-source software isn’t so much built as grown.

And today, the open-source blockchain consortium Hyperledger has announced that its first production-ready solution for building applications, Fabric, has finished that process.

But even before the formal release of Fabric 1.0 today, hundreds of proofs-of-concept had been built with it. With contributions coming from 159 engineers at 28 organizations, no single company owns the platform for building shared, distributed ledgers across a number of industries; it is hosted by the Linux Foundation.

For those going forward with that work, the group’s executive director Brian Behlendorf indicated that production-grade functionality is just a download and a few tweaks away.

Behlendorf told CoinDesk:

“It’s not as easy as drop in and upgrade. But the intent is that anyplace where there were changes, that those changes will be justified.”

Once existing users of Fabric’s previous versions “grab” the new version 1.0 code, as Behlendorf described the process, a few changes to the interface will need to be made, and any “chaincode” written against the earlier version will need to be modified.

While changes to the application programming interface (API) that integrates a user’s software with Fabric were kept to a minimum, Behlendorf said the improvements will be noticeable.

Specifically, he highlighted improved support for Fabric’s “private channels,” which enable transactions in a “subset of the broader chain” with the same degree of reliability as the overall network.

According to Behlendorf, these improvements are fundamental for granting varying degrees of access to information (a provenance-tracking company, for example, may need to prove the origin of an object back to its source) while still protecting details such as the price paid in a business transaction.

“You’ll still be able to provide proof of those transactions to the broader network if you ever need,” he explained. “But at least on that private channel you can get the speed and confidentiality that you get with direct connection.”

Already in use

Even before today’s launch, an unknown number of companies were already building increasingly mature products using earlier versions of Fabric.

Though the exact number of projects using the open-source software is impossible to gauge due to an intentional lack of tracking software, Behlendorf estimates it is in the “high hundreds to low thousands,” based on the consortium’s membership and publicly disclosed endeavors.

But to give an idea of the diversity of companies exploring the technology, contributions to the Fabric codebase were made by engineers with day jobs at the Depository Trust and Clearing Corporation (DTCC), Digital Asset Holdings, Fujitsu, GE, Hitachi, Huawei Technologies, State Street Bank and more, according to a statement.

Among the 30 or so projects tracked on the Hyperledger site are efforts involving companies such as the Santiago Stock Exchange, Swift and the TMX Group.

Rob Palatnick, chief technology architect of the DTCC, a founding Hyperledger member, explained in a statement why his company was an advocate of the open-source technology.

Palatnick said:

“The Hyperledger Fabric 1.0 release marks a significant milestone in the evolution of enterprise DLT technology, and represents another step forward in making DLT adoption across critical sectors a reality.”

Beyond Fabric

While Fabric was the first Hyperledger project to be incubated after IBM donated its original codebase to the Linux Foundation, and the first to enter active status earlier this year, it is far from the consortium’s only offering.

Of the 145 members, several have made other open-source contributions for further development.

Notably, Behlendorf said that blockchain identity platform Indy has completed its migration from the Sovrin Foundation that originally developed it, and that “development is picking up steam.”

Also, developers from the Intel-contributed Sawtooth Lake project are now “working with” developers from the Monax-contributed Burrow “to get the Ethereum virtual machine running on top of Sawtooth” as its smart-contract engine, he added.

“That’s the kind of modularity that we’d like to see happen across our different projects,” Behlendorf said, concluding:

“The collaboration between these different things suggests a future path of an emergent architecture coming out of the soup.”



from CoinDesk http://bit.ly/2uNiHM8
via IFTTT

Microsoft’s Plan to Beam Internet Over TV Frequencies Is So Crazy It Might Work


In the same hotel where Alexander Graham Bell once demoed coast-to-coast telephone calls, Microsoft will announce plans for a new white space internet service on Tuesday. This ludicrous technology sends broadband internet wirelessly over the unused channels of the television spectrum. It’s also ingenious.

Understandably, you probably have some questions about this postmodern concept. If you were born before 1985, you might remember the days when TV signals floated through thin air, delivering episodes of Married With Children to homes across America without any wires. Those TV signals still exist, and in between the channels there’s unused spectrum called white space. Enterprising scientists have figured out how to turn that white space into a sort of super wi-fi, broadcasting internet service over a radius of many miles. What’s extra special is that, unlike wi-fi or cellular service, the stronger TV signal can penetrate buildings and other obstacles. This makes it ideal for rural areas, where conventional broadband service is either unavailable or prohibitively expensive.


As Tuesday’s announcement makes clear, Microsoft scientists have been on the bleeding edge of white space research. The increasingly hip company intends to drop $10 billion to launch a new white space service in 12 states, including New York and Virginia, connecting an estimated two million Americans to the internet, The New York Times reports. This plan ought to please FCC chairman Ajit Pai, who’s made expanding high speed internet access a priority since he took the helm of the agency. Then again, many believe that Pai’s mission amounts to an empty promise, one that stands to line the pockets of big telecom companies instead of actually helping rural America. But that’s a whole ‘nother can of beans.


Exciting as it may sound, Microsoft’s new white space initiative does face some tricky challenges. Infrastructure is a big one. While white space internet service utilizes the very familiar TV spectrum, connecting to the internet requires some special hardware. On the regional level, special base stations will need to be built, equipped with white space antennas and supplied with electricity. (Solar power is an option for base stations off the electric grid.) On the local level, white space customers will need access to special receivers that can turn the white space signal into something their computers understand, like wi-fi. All of this will cost money.

The good news is that Microsoft has a lot of money. It’s not yet clear how much the company will charge for the new service, but presumably it will cover the expense of building the new base stations. Customers will have to buy hardware for their homes at a sobering price of $1,000 or more, though Microsoft says these costs will come down to $200 per device by next year. That’s not nothing for a lot of rural Americans, and then they’ll have to pay for access, a fee that Microsoft says will be “price competitive” with regular old cable internet (again: not cheap).

But hey, progress matters. While this white space internet technology has been in development for years, Microsoft is set to become the first major company to bring it to the masses, and that might just mean others will follow. Far-future solutions for rural broadband access like Facebook’s laser-powered drones, Google’s silly balloons, or Elon Musk’s pie-in-the-sky satellites remain theoretical for the time being, while white space already works. And soon, it could be working in a middle-of-nowhere near you.


[New York Times]

from Gizmodo http://bit.ly/2sLS34V
via IFTTT

Amazon’s Prime Day event could lead to $10 billion in lost productivity, says CNBC analyst (AMZN)



Monday night marked the beginning of Amazon Prime Day, a day-long sales event in which Amazon advertises a steady stream of special offers to members of its Prime subscription service.

The event lasts through Tuesday, and, if history is any indication, is likely to reel in plenty of sales for Amazon.

But the prospect of new sales at such a massive online shopping presence could also have a less-than-ideal side effect: people getting distracted at work.

To put a number on it: Prime Day could result in roughly $10 billion in lost productivity, according to CNBC data journalist Eric Chemi. Chemi suggests that with most of the roughly 85 million Prime members spending an estimated average of one minute on Amazon every time a new round of deals pops up, a 30-hour event adds up to a lot of hours not spent on spreadsheets and word processors. And that should only be exacerbated by the number of people who’ll think about buying Prime to see what’s available.
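
For the curious, arithmetic like this is easy to reconstruct. The parameters below are illustrative guesses (CNBC did not publish its exact inputs), but they show how quickly such estimates compound:

```python
# Back-of-envelope reconstruction of a "lost productivity" estimate.
# Figures marked "assumed" are illustration only; CNBC's exact inputs
# were not published.
members          = 85_000_000   # Prime members (figure cited in the post)
event_hours      = 30           # length of the event (from the post)
deal_interval_h  = 5 / 60       # assumed: a new round of deals every 5 minutes
minutes_per_look = 1            # per the post: ~1 minute per round
work_fraction    = 0.55         # assumed: share of looks during paid work time
hourly_cost      = 35.0         # assumed: fully loaded cost of an hour of work

rounds = event_hours / deal_interval_h                  # 360 deal rounds
hours_lost_per_member = rounds * minutes_per_look / 60  # 6 hours if you never miss one
total_lost = members * hours_lost_per_member * work_fraction * hourly_cost
print(f"estimated lost productivity: ${total_lost / 1e9:.1f} billion")
# ~ $9.8 billion under these assumptions, and wildly sensitive to each one;
# headline numbers like this are upper-bound arithmetic, not measurements.
```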

At the same time, Chemi notes, it’s not like everyone in the office spends their entire day working. Amazon will likely cut into the time spent on Facebook, Reddit, and other popular distractions as well as work.

Whatever the case, that Amazon has managed to manufacture an event like Prime Day, whose deals are regularly hit-or-miss, is a testament to its popularity. And as it continues to use events like this to gobble up Prime members, who spend roughly twice as much on Amazon as their non-Prime counterparts, its vise grip on the online shopping market is likely to grow, and its contribution to time-wasting along with it.


from SAI http://read.bi/2ueroll
via IFTTT

Alexa, How Are Voice-Activated Virtual Assistants Changing Shopping, Search, and Media Behavior? [Infographic]


The rapid rise in popularity of voice-activated virtual assistants—artificial intelligence (AI) devices such as Amazon Alexa and Google Home—is significantly influencing consumer behavior, according to a study by Toluna.

The real-time digital consumer-insights company surveyed more than 1,000 US consumers to learn how these devices affect the shopping, search, and media consumption behaviors of men and women, and compiled the results into an infographic.

For example, the study found only 6% of men said the use of personal assistants has no effect on their shopping behavior, whereas 22% of women said the same.

These devices have at least somewhat changed the buying behavior of most people who use them, which signals a shift in the overall e-commerce landscape and creates new opportunities for savvy brands to engage customers.

The study also found that the top barrier to owning a virtual assistant is price. But with more options coming to market each year, that may not be a barrier for long.

To check out the results of the survey, click on the infographic to view a larger version:

Laura Forer is the manager of MarketingProfs: Made to Order, Original Content Services, which helps clients generate leads, drive site traffic, and build their brands through useful, well-designed content.


from Marketing Profs – Concepts, Strategies, Articles and Commentarie http://bit.ly/2uarYj8
via IFTTT

Aphex Twin talks to Korg’s Tatsuya Takahashi, blows our minds


Richard D. James worked with KORG on the microtuning feature of the Monologue. Warp has published a sprawling, nerdy interview he conducted with Tatsuya.

And there’s an all-KORG track, too:

Listening to former KORG engineer Tatsuya Takahashi, who led teams that developed the volca series, monotron, and others, interviewed by Aphex Twin – well, that’s got to be a nerdgasm.

And they deliver. They get deep into design and engineering philosophy, into tuning (naturally), into slop and imperfection, and then into geometry and a whole lot more.
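
On the tuning front, the math behind a microtuning feature like the Monologue’s is compact: a frequency ratio maps to cents via 1200 × log2(ratio), and a microtuned scale is typically stored as per-degree cent offsets from equal temperament. A quick illustrative sketch (not Korg’s implementation):

```python
# Illustrative only (not Korg's code): the arithmetic behind microtuning.
# A scale is a set of frequency ratios; each ratio becomes a cent value
# (1200 cents per octave), usually stored as an offset from 12-tone
# equal temperament.
import math
from fractions import Fraction

def ratio_to_cents(ratio) -> float:
    """Convert a frequency ratio to cents."""
    return 1200.0 * math.log2(ratio)

# A 5-limit just intonation major scale, and where each degree sits in 12-TET.
just_ratios = [Fraction(1, 1), Fraction(9, 8), Fraction(5, 4), Fraction(4, 3),
               Fraction(3, 2), Fraction(5, 3), Fraction(15, 8), Fraction(2, 1)]
et_semitones = [0, 2, 4, 5, 7, 9, 11, 12]

for semis, ratio in zip(et_semitones, just_ratios):
    cents = ratio_to_cents(ratio)
    offset = cents - 100.0 * semis   # deviation from equal temperament
    print(f"{str(ratio):>5}: {cents:7.1f} cents ({offset:+6.1f} from 12-TET)")
# The famous results fall out: the just major third 5/4 sits ~13.7 cents
# flat of the equal-tempered third, and the fifth 3/2 about 2 cents sharp.
```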

Richard D. James speaks to Tatsuya Takahashi [Warp]

In case you’re wondering, “Tats” has moved on from Korg to a position in Germany at Yadastar. More on that soon, hopefully.


from Create Digital Music http://bit.ly/2uMOOLL
via IFTTT