This scale reveals more than your weight — it can actually measure heart health

Many people try to lose weight in order to get healthier, but body weight alone reveals little about a person’s overall health. Body Cardio, a smart digital scale made by Withings, captures a much wider range of information about the user’s body, so they can make better-informed health decisions.

from SAI http://ift.tt/2ePVFeg
via IFTTT

WTF is computer vision?

Someone across the room throws you a ball and you catch it. Simple, right?

Actually, this is one of the most complex processes we’ve ever attempted to comprehend – let alone recreate. Inventing a machine that sees like we do is a deceptively difficult task, not just because it’s hard to make computers do it, but because we’re not entirely sure how we do it in the first place.

What actually happens is roughly this: the image of the ball passes through your eye and strikes your retina, which does some elementary analysis and sends it along to the brain, where the visual cortex more thoroughly analyzes the image. It then sends it out to the rest of the cortex, which compares it to everything it already knows, classifies the objects and dimensions, and finally decides on something to do: raise your hand and catch the ball (having predicted its path). This takes place in a tiny fraction of a second, with almost no conscious effort, and almost never fails. So recreating human vision isn’t just a hard problem, it’s a set of them, each of which relies on the other.

Well, no one ever said this would be easy. Except, perhaps, AI pioneer Marvin Minsky, who famously instructed a graduate student in 1966 to “connect a camera to a computer and have it describe what it sees.” Pity the kid: 50 years later, we’re still working on it.

Serious research began in the 50s and started along three distinct lines: replicating the eye (difficult); replicating the visual cortex (very difficult); and replicating the rest of the brain (arguably the most difficult problem ever attempted).

To see

Reinventing the eye is the area where we’ve had the most success. Over the past few decades, we have created sensors and image processors that match and in some ways exceed the human eye’s capabilities. With larger, more optically perfect lenses and semiconductor subpixels fabricated at nanometer scales, the precision and sensitivity of modern cameras is nothing short of incredible. Cameras can also record thousands of images per second and detect distances with great precision.

An image sensor one might find in a digital camera.

Yet despite the high fidelity of their outputs, these devices are in many ways no better than a pinhole camera from the 19th century: They merely record the distribution of photons arriving from a given direction. The best camera sensor ever made couldn’t recognize a ball — much less catch it.

The hardware, in other words, is severely limited without the software — which, it turns out, is by far the greater problem to solve. But modern camera technology does provide a rich and flexible platform on which to work.

To describe

This isn’t the place for a complete course on visual neuroanatomy, but suffice it to say that our brains are built from the ground up with seeing in mind, so to speak. More of the brain is dedicated to vision than any other task, and that specialization goes all the way down to the cells themselves. Billions of them work together to extract patterns from the noisy, disorganized signal from the retina.

Sets of neurons excite one another if there’s contrast along a line at a certain angle, say, or rapid motion in a certain direction. Higher-level networks aggregate these patterns into meta-patterns: a circle, moving upwards. Another network chimes in: the circle is white, with red lines. Another: it is growing in size. A picture begins to emerge from these crude but complementary descriptions.

A "histogram of oriented gradients," finding edges and other features using a technique like that found in the brain's visual areas.

A “histogram of oriented gradients,” finding edges and other features using a technique like that found in the brain’s visual areas.

 

Early research into computer vision, treating these networks as unfathomably complex, took a different approach: “top-down” reasoning — a book looks like this, so watch for this pattern, unless it’s on its side, in which case it looks more like this. A car looks like this and moves like this.

For a few objects in controlled situations, this worked well, but imagine trying to describe every object around you, from every angle, with variations for lighting and motion and a hundred other things. It became clear that to achieve even toddler-like levels of recognition would require impractically large sets of data.

A “bottom-up” approach mimicking what is found in the brain is more promising. A computer can apply a series of transformations to an image and discover edges, the objects they imply, perspective and movement when presented with multiple pictures, and so on. The processes involve a great deal of math and statistics, but they amount to the computer trying to match the shapes it sees with shapes it has been trained to recognize — trained on other images, the way our brains were.
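
To make that concrete, here is a minimal sketch, assuming nothing but Python and numpy (the article names no tools), of the first of those transformations: measuring the strength and direction of brightness changes at each pixel, the raw material a “histogram of oriented gradients” is built from.

    import numpy as np

    def edge_strength_and_orientation(image):
        """Estimate per-pixel edge strength and angle with simple
        finite-difference gradient filters."""
        img = image.astype(float)
        gx = np.zeros_like(img)
        gy = np.zeros_like(img)
        gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal brightness change
        gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical brightness change
        magnitude = np.hypot(gx, gy)             # how strong the edge is
        orientation = np.arctan2(gy, gx)         # which way it points
        return magnitude, orientation

    def orientation_histogram(magnitude, orientation, bins=9):
        """Summarize a patch the way HOG does: a histogram of edge
        angles, each vote weighted by edge strength."""
        hist, _ = np.histogram(orientation, bins=bins,
                               range=(-np.pi, np.pi), weights=magnitude)
        return hist / (hist.sum() + 1e-9)        # normalize against lighting

    # Example: a synthetic image containing one bright vertical stripe.
    img = np.zeros((32, 32))
    img[:, 14:18] = 255.0
    mag, ang = edge_strength_and_orientation(img)
    print(orientation_histogram(mag, ang))       # energy piles into a few bins

A real detector computes this histogram over every small patch of the image; stacked together, those local descriptions are the “shapes” the computer matches against the ones it was trained on.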

Researchers are working to enable smartphones and other mobile devices to understand and immediately identify objects in a camera’s field of view, overlaying lines of text that describe items in the environment. Here, a street scene is labeled by the prototype, running up to 120 times faster than a conventional cell-phone processor. (Purdue University image/e-Lab)

What an image like the one above (from Purdue University’s e-Lab) shows is the computer reporting that, by its calculations, the highlighted objects look and act like other examples of those objects, to a certain level of statistical certainty.

Proponents of bottom-up architecture might have said “I told you so.” Except that until recent years, the creation and operation of artificial neural networks was impractical because of the immense amount of computation they require. Advances in parallel computing have eroded those barriers, and the last few years have seen an explosion of research into and using systems that imitate — still very approximately — the ones in our brain. The process of pattern recognition has been sped up by orders of magnitude, and we’re making more progress every day.
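
For a feel of what such a network looks like in code, here is a toy sketch in PyTorch (my choice of framework; the article names none) of the layered pattern described above: early layers respond to small local patterns like edges, later layers aggregate them, and a final layer scores candidate object classes.

    import torch
    import torch.nn as nn

    # A toy convolutional network: each layer aggregates the patterns the
    # previous one detected, loosely echoing the visual cortex hierarchy.
    class TinyVisionNet(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1),   # edge-like detectors
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=3, padding=1),  # combinations of edges
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(16 * 8 * 8, num_classes)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    net = TinyVisionNet()
    fake_image = torch.randn(1, 1, 32, 32)   # one 32x32 grayscale image
    scores = net(fake_image)                 # one score per candidate class
    print(scores.shape)                      # torch.Size([1, 10])

Untrained, its guesses are random; what the parallel-computing advances bought us is the ability to tune millions of such weights on millions of labeled images in a reasonable amount of time.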

To understand

Of course, you could build a system that recognizes every variety of apple, from every angle, in any situation, at rest or in motion, with bites taken out of it, anything — and it wouldn’t be able to recognize an orange. For that matter, it couldn’t even tell you what an apple is, whether it’s edible, how big it is or what it’s used for.

The problem is that even good hardware and software aren’t much use without an operating system.

Artificial intelligence and cybernetics

For us, that’s the rest of our minds: short and long term memory, input from our other senses, attention and cognition, a billion lessons learned from a trillion interactions with the world, written with methods we barely understand to a network of interconnected neurons more complex than anything we’ve ever encountered.

This is where the frontiers of computer science and more general artificial intelligence converge — and where we’re currently spinning our wheels. Between computer scientists, engineers, psychologists, neuroscientists and philosophers, we can barely come up with a working definition of how our minds work, much less how to simulate them.

That doesn’t mean we’re at a dead end. The future of computer vision is in integrating the powerful but specific systems we’ve created with broader ones that are focused on concepts that are a bit harder to pin down: context, attention, intention.

That said, computer vision even in its nascent stage is still incredibly useful. It’s in our cameras, recognizing faces and smiles. It’s in self-driving cars, reading traffic signs and watching for pedestrians. It’s in factory robots, monitoring for problems and navigating around human co-workers. There’s still a long way to go before they see like we do — if it’s even possible — but considering the scale of the task at hand, it’s amazing that they see at all.

Featured Image: Bryce Durbin

from TechCrunch http://ift.tt/2fpzsaU
via IFTTT

How to take a smartphone picture of the supermoon that isn’t a blurry blob

A supermoon is a majestic sight — a glowing orb that reminds us we are but human, here for a short time in an infinite universe. 

And yet somehow, it ends up looking like discarded chewing gum in your Instagram feed. Life doesn’t have to be this way.

Monday’s supermoon is well worth capturing — it will be the largest and brightest in 70 years. Unfortunately, it won’t look too much more impressive than your average moon, but there are likely to be good opportunities for photography at sunset when it seems biggest, especially along the coast.

If you don’t have a fancy camera, here are some tips for capturing its glory with only your smartphone, weather permitting.

Birds fly past as a supermoon rises in the sky on Aug. 10, 2014 in Rio de Janeiro, Brazil.

1. Find a vantage point

Remember to scope your spot in advance.

The shot will be most impressive if it conveys a sense of scale. Try capturing the moon when it’s close to the horizon, so the image includes a foreground: as it rises over a city or a local landmark, for example, or over headlands if you’re near the coast.

NASA’s senior photographer Bill Ingalls recommends being in an urban area where there’s more light.

"It’s all relative. For me, it would be maddening and frustrating — yet it may be a good challenge, actually," he told NASA. "You’re not going to get a giant moon in your shot, but you can do something more panoramic, including some foreground that’s interesting."

2. Get your gear

According to smartphone photography specialist Leigh Stark, a smartphone with some extra camera oomph can help.

"iPhone 7 Plus owners should get the most fun out of this simply due to the zoom afforded by that extra camera, as it will let you get closer optically," he told Mashable in an email. "That’s what we want, by the way, since digital zoom — your regular pinch to zoom — blows up pixels. 

"The iPhone 7 Plus will let you zoom in with an optical zoom, while the iPhone 7 works with digital zoom."

The supermoon rises behind Glastonbury Tor on Sept. 27, 2015 in Glastonbury, England.

It’s also worth using a tripod to eliminate wobble for the crispest photo possible. Failing that, prop your phone against a book or a post.

Also, don’t forget you could use a telescope. 

"You may be able to find a mount that lets you connect your iPhone to the telescope for the best close-ups of the moon," Stark added. "You shouldn’t need one during a supermoon unless you desperately want to see what the craters look like up close!"

3. Get the right apps

Stark recommends budding phone photographers download the free Adobe Lightroom app for iOS and Android, which comes with an in-app camera. Its most important feature? RAW support.

"If you’re not familiar with RAW, think of it as a digital negative, capturing more information and detail than a standard JPEG, and allowing you to extract this later on," he explained.

In the app, tap the camera icon, where two options should be offered — “JPG” or “DNG.”

Choose DNG. "DNG is Adobe’s RAW format, also known as Digital Negative, and when you capture images in this format, you’ll be able to get more detail out of them later on," Stark explained. "Or to put it more simply, think of RAW and DNG as the same format professional photographers capture in."

By capturing in RAW format, you could blend images later to get the best overall shot.
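
As one illustration of that blending step, here is a short sketch of exposure fusion with OpenCV (one possible tool among many; Stark doesn’t prescribe any, and the filenames are hypothetical). It combines a frame exposed for the moon with a frame exposed for the foreground.

    import cv2

    # Two shots of the same framing: one exposed for the moon,
    # one exposed for the darker foreground. (Hypothetical filenames;
    # convert your DNGs to a format OpenCV reads, such as TIFF, first.)
    moon_exposure = cv2.imread("moon_exposed.tif")
    foreground_exposure = cv2.imread("foreground_exposed.tif")

    # Mertens exposure fusion keeps the best-exposed parts of each frame
    # and, unlike full HDR, needs no exposure metadata.
    merge = cv2.createMergeMertens()
    fused = merge.process([moon_exposure, foreground_exposure])  # floats in [0, 1]

    cv2.imwrite("supermoon_fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))

A tripod matters doubly here: the frames must line up for the blend to look natural.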

An eclipsed supermoon is shown on Sept. 27, 2015 in Los Angeles, California.

4. Hone your focus technique

If you’re only using a smartphone camera, make sure you tap the moon on screen to set focus, and adjust for brightness as well.

"If you’re trying to get the moon in a scene, make sure to focus on where you want the light to be," Stark said. 

"For instance, if you’re up high enough to capture the moon over your capital city, hanging in the skyline, focusing on the moon could provide enough light for the buildings, while focusing on the darker buildings and exposing for this could turn the super bright light into a sun found at night."

Go forth and photo. Your Instagram followers will thank you.

from Mashable! http://ift.tt/2eVUFbH
via IFTTT

One laptop can take down major internet servers

You don’t need a massive botnet to launch overwhelming denial of service attacks — in some cases, a personal PC and so-so broadband are all that’s required. Researchers at TDC Security Operations Center have revealed a new attack technique, BlackNurse, that can take down large servers using just one computer (in this case, a laptop) and at least 15 Mbps of bandwidth. Instead of bombarding a server with traffic, you send specially formed Internet Control Message Protocol (ICMP) packets that overwhelm the processors on server firewalls from Cisco, Palo Alto Networks and others. The firewalls end up dropping so much data that they effectively knock servers out of commission, even if they have tons of network capacity.

The good news? There are ways to fight against BlackNurse. TDC recommends setting up software filters to prevent this kind of flooding. Also, this is mainly a concern with firewall makers that allow ICMP packets from outside. Palo Alto, for instance, notes that its firewalls drop those kinds of requests by default — unless you change the settings and don’t follow its guidelines for anti-flood protection, you’re safe. Cisco doesn’t see a major issue, either.
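
TDC’s advisory describes the fix as filtering rather than code, but the monitoring side is easy to sketch. The Python script below is a hypothetical illustration using scapy (it needs packet-capture privileges, and the threshold is a placeholder, not TDC’s figure): it watches for the packet type BlackNurse abuses, ICMP Type 3 Code 3 (“destination unreachable: port unreachable”), and warns when the rate looks like a flood.

    import time
    from collections import deque

    from scapy.all import sniff, ICMP  # needs root/admin for live capture

    WINDOW_SECONDS = 1.0
    THRESHOLD = 1000  # packets per second; an illustrative figure, not TDC's

    timestamps = deque()

    def inspect(pkt):
        # BlackNurse abuses ICMP Type 3 ("destination unreachable"),
        # Code 3 ("port unreachable") packets.
        if ICMP in pkt and pkt[ICMP].type == 3 and pkt[ICMP].code == 3:
            now = time.time()
            timestamps.append(now)
            while timestamps and now - timestamps[0] > WINDOW_SECONDS:
                timestamps.popleft()
            if len(timestamps) > THRESHOLD:
                print(f"Possible BlackNurse-style flood: {len(timestamps)} pkts/s")

    sniff(filter="icmp", prn=inspect, store=False)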

The danger is that not every firewall is guaranteed to follow similar rules, and that some businesses may have reasons to tweak their settings to let ICMP data in. Even if the threat isn’t high, the discovery is a reminder that denial of service attacks can take many shapes. In the right circumstances, one person at home could be just as dangerous as a dedicated cyberattack group.

Via: Ars Technica

Source: TDC SOC (PDF)

from Engadget http://ift.tt/2g559rL
via IFTTT

Facebook opens analytics and FbStart to developers of Messenger’s 34,000 bots

Facebook has been putting a lot of effort into growing Messenger as a bot platform this year, and now there are 34,000 of them in existence, built to automatically give you news and entertainment, let you shop, and more, expanding Messenger’s use beyond simple chats with friends. Today, that strategy is getting a significant boost: Facebook says it will now make bots trackable on its free analytics platform, alongside analytics for ads and apps. And Facebook is also opening up its developer program, FbStart, to bot developers as well.

Both moves potentially give bot makers more reasons to build, and better ways to monitor how their new widgets are working.

Josh Twist, a product manager for Facebook’s bot efforts in Messenger, tells me that Facebook expanded the analytics and FbStart tools after a lot of requests from developers.

“Getting bot support for messenger is the most frequently requested feature,” he said. This shouldn’t be too much of a surprise: Facebook already provided these kinds of tools to other developers on its platform, and bots have seen a huge surge of interest, both from users trying them out to see how they work and from developers keen to see if this is the next big thing.

Analytics, of course, is an essential tool for developers, both to track how well something is working and to gather other kinds of feedback. Here, Facebook says the features will include reach across mobile and desktop devices, as well as measurement of customers’ journeys across apps and websites.

Developers will also be able to view reports on messages sent, messages received, and the people who block or unblock their app. In addition, they will get access to anonymized data reports on bot demographics, including details like age, gender, education, interests, country and language, to figure out who is using their bot.
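
As a rough illustration of the developer side, here is a sketch of logging a custom bot event through the Graph API’s app-events endpoint, as the Messenger documentation described it around this time (treat the field names as assumptions and verify against the current docs; the IDs and event name are placeholders).

    import json

    import requests

    APP_ID = "YOUR_APP_ID"    # placeholder
    PAGE_ID = "YOUR_PAGE_ID"  # placeholder
    PSID = "USER_PSID"        # the user's page-scoped ID from a webhook event

    # Log a custom event ("order_placed") against the bot so it shows up
    # alongside the built-in metrics in Facebook Analytics.
    resp = requests.post(
        f"https://graph.facebook.com/{APP_ID}/activities",
        data={
            "event": "CUSTOM_APP_EVENTS",
            "custom_events": json.dumps([{"_eventName": "order_placed"}]),
            "advertiser_tracking_enabled": 0,
            "application_tracking_enabled": 0,
            "extinfo": json.dumps(["mb1"]),  # flags this as a Messenger bot event
            "page_id": PAGE_ID,
            "page_scoped_user_id": PSID,
        },
    )
    print(resp.status_code, resp.text)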

FbStart, meanwhile, currently has some 9,000 members who get feedback from Facebook on their apps, ads and bots, as well as Facebook ads credits and other free tools from partners like Amazon, Dropbox, and Stripe. If Facebook was looking at ways of swelling those ranks, tapping 34,000 developers could be one way of doing that.

Twist points out that while there are a lot of standalone bot developers coming to Facebook for the first time, there is a lot of crossover with other Facebook services like apps and ads. Those who are leveraging these together — for example using the recent ability to channel a person from a News Feed ad through to your Messenger experience — will be able to look at the effectiveness of those efforts now, and potentially make more ad buys based on them.

Twist tells me that for now, the analytics will cover bots built just for Messenger, although don’t be surprised if Facebook expands it to other platforms. “It is something we have talked about and haven’t ruled it out,” he said. “It’s possible, absolutely, since we already support analytics for other platforms for apps. But right now we’re prioritizing support for Messenger bots.”

from TechCrunch http://ift.tt/2fPavm1
via IFTTT

Microsoft announces Visual Studio for Mac will launch in November

Fans of cross-platform coding will be happy to know that Visual Studio, “a true mobile-first, cloud-first development tool for .NET and C#,” will arrive for the Mac during the Connect() conference in November. The move brings Microsoft’s IDE to Macs, following Visual Studio Code, Microsoft’s code editor, to OS X.

Why is Microsoft seemingly abandoning the quest for Windows hegemony? The writing is on the wall: cloud computing is the future, and tools like AWS and Azure are quickly replacing the local server. Microsoft is losing out to tools like Docker and Heroku on the web, and it’s only a matter of time before coders are more comfortable with their MacBooks and Vim than with Windows.

“They make their money off Azure and other services. In other words, they are making their money mainly off of developers now and its in their best interest to get on the good side of devs which is why they suddenly have a vested interest in open sourcing tools and helping Mac/Linux,” wrote Hacker News user BoysenberryPi.

The IDE is very similar to the one found on Windows. In fact, that is presumably the point. By making it easy for OS X users to switch back and forth between platforms, Microsoft is able to ensure coders can quickly become desktop agnostic or, barring that, give Windows a try again. From the release:

At its heart, Visual Studio for Mac is a macOS counterpart of the Windows version of Visual Studio. If you enjoy the Visual Studio development experience, but need or want to use macOS, you should feel right at home. Its UX is inspired by Visual Studio, yet designed to look and feel like a native citizen of macOS. And like Visual Studio for Windows, it’s complemented by Visual Studio Code for times when you don’t need a full IDE, but want a lightweight yet rich standalone source editor.

You can read more about the platform here and prepare yourself for a little C# coding with Visual Studio Code.

from TechCrunch http://ift.tt/2ewPO22
via IFTTT

Google Play Music will now offer tunes based on where you are and what you do

If you don’t mind Google knowing everything about your life, then there are some benefits to handing over your data to the big G. 

The overhauled Google Play Music, which will launch this week on Android, iOS and the web, will now offer streaming mixes based on your location and activity.

For example, if you’re heading out to the gym, Google will assume you want your preferred workout music to play. On the other hand, if you’re commuting to work, the service might soothe you with the tunes you like to listen to on the subway.

Google’s machine learning algorithms also take the weather into account, so you’ll get a different set of tunes when it’s raining than you would on a beautiful sunny day. 

And if you’re concerned about privacy, don’t worry: the service is opt-in.

The new Google Play Music also has a revamped home screen, which recommends music based on all the contextual data mentioned above, as well as your listening habits. To deliver the experience, Google mixes machine learning with human curators, and it’s likely to get better at guessing what you like over time. 

For those moments when you have no internet connectivity, Google will also always have an offline playlist handy, even if you don’t download any music yourself.

Google’s competitors such as Spotify and Apple Music also offer a music discovery feature, but Google’s trove of data on user habits promises more precision when it comes to matching your mood, location and activity. 

Google says the new Play Music will start to roll out in 62 countries this week. For a full list of the markets where the service (and other Google digital content services) is available, go here.

from Mashable! http://ift.tt/2f86FVj
via IFTTT

Warsaw’s ReaktorX aims to help busy people start businesses

While some entrepreneurs are independently wealthy and have no need for a day job, most aren’t. That’s why Marta Diana Koziarska and Borys Musielak created ReaktorX, an accelerator for people who hold day jobs, or who lack the time or know-how to build a business from scratch, and who want to connect with like-minded professionals. Koziarska and Musielak also run Reaktor Warsaw, a traditional accelerator in Poland.

The accelerator will give around $15,000 per company, and the founders have raised funding from Nowa Era, Aviva, Toolbox for HR and SGP Legal, companies that would like early access to these fledgling startups.

ReaktorX is a non-profit organization and Musielak says he is “doing it to help Polish founders succeed.”

The accelerator comes out of a frustration the Reaktor team has seen within the startup community. In short, the idea people and the coders are at odds, and Musielak wants to bring them together.

“I’ve been meeting wannabe founders, often professional bankers, physiotherapists, doctors, teachers, who have been pitching the same concept for months without doing anything to make it happen,” he said. “I’ve been meeting tech people who thought startups are about shipping code, the sooner the better, with no interest in customer development or UX or actually building something people want.”

“Those two groups don’t talk to each other. They don’t know how to find each other. This is true even on a college level. Business students from SGH don’t talk to those losers from Politechika. It’s insane. It needs to change if we are to create a sustainable startup ecosystem here in Warsaw,” he said.

The plan is to put these groups together in coding, PR, and product design workshops and hope that a match takes place. The events and classes are held after hours.

The application deadline is December 12. It’s a fascinating effort and one badly needed. I’ve seen countless “corporate types” who want to build something cool but don’t know where to start. This, at the very least, gives them some of the tools and the ability to find a technical co-founder.

“We’ve been helping startups over the last 5 years and we helped the Warsaw startup scene grow from non-existent to one of the leaders in CEE. This is the next step,” he said.

from TechCrunch http://ift.tt/2ewXLV5
via IFTTT