Earth just hit a terrifying milestone for the first time in more than 800,000 years



  • The average concentration of carbon dioxide in Earth’s atmosphere just topped 410 parts per million, according to measurements from Mauna Loa Observatory in Hawaii.
  • This is the highest level CO2 has reached in the 800,000 years for which we have reliable data.
  • This is expected to have a catastrophic effect on human health and the planet itself.

We have a pretty good idea of what Earth’s atmosphere has looked like for the past 800,000 years.

Humans like us — Homo sapiens — only evolved about 200,000 years ago, but ice-core records reveal intricate details of our planet’s history from long before humans existed. By drilling more than 3 kilometers deep into the ice sheets over Greenland and Antarctica, scientists can see how temperature and atmospheric carbon dioxide levels have changed from then until now.

From that record, we know the air we breathe has never held as much carbon dioxide as it does today.

For the first time in recorded history, the average monthly level of CO2 in the atmosphere exceeded 410 parts per million (ppm) in April, according to observations made at the Mauna Loa Observatory in Hawaii.

The new record is not a coincidence — humans have rapidly transformed the air we breathe by pumping CO2 into it over the past two centuries. In recent years, we’ve pushed those gas levels into uncharted territory.

That change has inevitable and scary consequences. Research indicates that, left unchecked, this trend could directly lead to tens of thousands of pollution-related deaths, push CO2 concentrations to levels that measurably slow human cognition, and drive the rising sea levels, searing heat waves, and superstorms that scientists project as effects of climate change.

"As a scientist, what concerns me the most is what this continued rise actually means: that we are continuing full speed ahead with an unprecedented experiment with our planet, the only home we have," climate scientist Katharine Hayhoe said on Twitter about the new record.


Breathing the air of a new world

For the 800,000 years we have records of, average global CO2 levels fluctuated between about 170 ppm and 280 ppm. Once humans started to burn fossil fuels in the industrial era, things changed rapidly.

Only in the industrial era has the number risen above 300 ppm. The concentration first crept above 400 ppm in 2013, and continues to climb. 

Scientists debate the last time CO2 levels were this high. It might have happened during the Pliocene epoch, between 2 and 4.6 million years ago, when sea levels were at least 60 to 80 feet higher than today. Or it may have been in the Miocene epoch, 10 to 14 million years ago, when seas were more than 100 feet higher than now.

In our 800,000-year record, it took about 1,000 years for CO2 levels to increase by 35 ppm. We’re currently averaging an increase of more than 2 ppm per year, meaning that we could hit an average of 500 ppm within the next 45 years, if not sooner.
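As a quick sanity check on that arithmetic, here’s a back-of-the-envelope sketch in Python, assuming the recent rate of roughly 2 ppm per year simply holds steady (a conservative assumption, since the rate has been accelerating):

```python
# Back-of-the-envelope projection from the figures above. Assumes the
# current growth rate stays constant; in reality it has been accelerating,
# so 500 ppm could arrive even sooner.
current_ppm = 410        # April monthly average at Mauna Loa
growth_per_year = 2.0    # recent average annual increase, in ppm
target_ppm = 500

years_to_target = (target_ppm - current_ppm) / growth_per_year
print(f"~{years_to_target:.0f} years until {target_ppm} ppm")  # ~45 years
```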

Humans have never had to breathe air like this. And it does not seem to be good for us.

Global temperature tracks very closely to atmospheric levels of CO2. The potential impacts of higher average temperatures include tens of thousands of deaths from heat waves, increased air pollution that leads to lung cancer and cardiovascular disease, higher rates of allergies and asthma, more extreme weather events, and the spread of diseases carried by ticks and mosquitoes — an effect we’re already seeing.

Chart: Global annual temperature and CO2 levels, 1959–2016

Higher levels of CO2 also exacerbate ozone pollution. One 2008 study found that for every degree Celsius the temperature rises because of CO2 levels, ozone pollution can be expected to kill an additional 22,000 people via respiratory illness, asthma, and emphysema. A recent study calculated that overall, air pollution already kills 9 million people every year.

Other research has raised even more reasons for concern. The average CO2 level doesn’t represent the air most of us breathe. Cities tend to have far more CO2 than average — and those levels rise even higher indoors. Some research indicates that this may have a negative effect on human cognition and decision-making. (There’s a full list of possible ways climate change will affect human health on an archived EPA page.)

President Obama’s EPA ruled in 2009 that CO2 was a pollutant that needed to be regulated under the Clean Air Act, though the Trump administration is re-evaluating that decision.

Pedestrians cross a road amidst smog on a polluted day in Nanjing, Jiangsu province, China January 30, 2018. REUTERS/Stringer

Drowning in CO2

The human health effects from CO2 increases are just one part of the bigger story here.

The change we’ve seen in CO2 levels recently has been much more rapid than the natural historical trends. Some experts think we’re on track to hit 550 ppm by the end of the century, a level some projections suggest could raise average global temperatures by as much as 6 degrees Celsius. (For context, the increase in superstorms, rising sea levels, and spreading tick-borne disease that we’re already seeing comes after a rise of just 0.9 degrees.)


Sea-level rise projections will only get bigger as CO2 levels continue to climb.

Right now, carbon-dioxide emissions are still rising. The goal set in the Paris agreement is to limit the global temperature increase to 2 degrees C or less. But as a recent feature in Nature put it, we’re currently on track for more than 3 degrees of warming.

The latest measurements from Mauna Loa show that if we want to avoid that, we’ll need to make some dramatic changes very quickly.




from SAI https://read.bi/2HY7Hpj
via IFTTT

Microsoft’s AI future is rooted in its gaming past




The Kinect will never die.

Microsoft debuted its motion-sensing camera on June 1st, 2009, showing off a handful of gimmicky applications for the Xbox 360; it promised easy, controller-free gaming for the whole family. Back then, Kinect was called Project Natal, and Microsoft envisioned a future where its blocky camera would expand the gaming landscape, bringing everyday communication and entertainment applications to the Xbox 360, such as video calling, shopping and binge-watching.

This was the first indication that Microsoft’s plans for Kinect stretched far beyond the video game industry. With Kinect, Microsoft popularized the idea of yelling at our appliances — or, as it’s known today, the IoT market. Amazon Echo, Google Home, Apple’s Siri and Microsoft’s Cortana (especially that last one) are all derivative of the core Kinect promise that when you talk to your house, it should respond.

Kinect for Xbox 360 landed in homes in 2010 — four years before the first Echo — and by 2011 developers were playing around with a version of the device specifically tailored for Windows PCs. Kinect for Windows hit the market in 2012, followed by an Xbox One version in 2013 and an updated Windows edition in 2014.

None of these devices disrupted the video game or PC market on a massive scale. Even as artists, musicians, researchers and developers found innovative uses for its underlying technology, Kinect remained an unnecessary accessory for many video game fans. Support slowed and finally disappeared in October 2017, when Microsoft announced it would cease production of the Kinect, which had sold 35 million units over its lifetime.

However, the Kinect lives on today in some of Microsoft’s most forward-looking products, including drones and artificial intelligence applications. Kinect sensors are a crucial component in HoloLens, the company’s augmented reality glasses, for example. And just today, Microsoft revealed Project Kinect for Azure, a tiny device with an advanced depth sensor, a 360-degree mic array and an accelerometer, all designed to help developers overlay AI systems on the real world.

“Our vision when we created the original Kinect for Xbox 360 was to produce a device capable of recognizing and understanding people so that computers could learn to operate on human terms,” said Alex Kipman, Technical Fellow for AI Perception and Mixed Reality at Microsoft. “Creative developers realized that the technology in Kinect (including the depth-sensing camera) could be used for things far beyond gaming.”

While Kipman’s version of events makes it sound like the Kinect’s evolution as an AI tool was happenstance, Microsoft has long recognized video games’ impact on broader industries, and it’s not afraid to use the Xbox platform as a proving ground for new technologies. Just nine days after the first public demonstration of Project Natal in 2009, Microsoft published a presentation called Video Games and Artificial Intelligence, which dives into the myriad ways video games can be used as AI testbeds.

“Let us begin with a provocative question: In which area of human life is artificial intelligence (AI) currently applied the most? The answer, by a large margin, is Computer Games,” the presentation’s synopsis said. “This is essentially the only big area in which people deal with behavior generated by AI on a regular basis. And the market for video games is growing, with sales in 2007 of $17.94 billion marking a 43 percent increase over 2006.”

Today, the video game market is worth more than $100 billion — a figure that continues to climb year-over-year. Not only is Microsoft putting the ghost of Kinect to work in its newest AI and AR systems, but it’s planning to test the limits of its machine learning initiative within the gaming realm. During the Game Developers Conference this year, Microsoft touted some practical applications of its new Windows Machine Learning API — namely, it wants developers to use dynamic neural networks to create personalized experiences for players, tailoring battles, loot and pacing to individual play styles. Of course, Microsoft will be collecting all of this data along the way, learning from players, developers and games themselves.
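Microsoft hasn’t published details of how developers would wire up that kind of personalization, but a hypothetical sketch of the idea might look like the following, with a simple stand-in scoring function where a trained network would go (the feature names and weights are illustrative, not Microsoft’s API):

```python
import numpy as np

# Hypothetical sketch: nudge encounter difficulty based on a player's
# recent history. The features and weights are illustrative stand-ins
# for a model trained on real telemetry.
def difficulty_score(avg_deaths, win_rate, session_minutes):
    features = np.array([avg_deaths, win_rate, session_minutes / 60.0])
    weights = np.array([-0.5, 1.0, 0.2])  # stand-in for learned parameters
    raw = features @ weights
    # Clamp to a usable range: 0.1 = gentlest encounters, 1.0 = hardest.
    return float(np.clip(0.5 + raw, 0.1, 1.0))

# A struggling player gets gentler fights; a dominant one gets harder ones.
print(difficulty_score(avg_deaths=3.0, win_rate=0.2, session_minutes=30))  # 0.1
print(difficulty_score(avg_deaths=0.5, win_rate=0.9, session_minutes=90))  # 1.0
```

In a real pipeline, that weight vector would be a model trained on telemetry from many players and updated continuously, which is exactly the data collection the paragraph above describes.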

Video games are the perfect proving ground for AI systems, as the industry continues to pioneer new technologies. Just take a look at virtual reality, a field that found its momentum in video games and has since exploded onto the mainstream stage. Even Microsoft’s digital assistant, Cortana, is named after a character in Halo, one of the company’s most beloved gaming franchises. Kinect was doing visual overlay and responding to audio commands years before Snapchat or an Echo came out, and now Microsoft is implementing its systems into HoloLens, the most prominent consumer-facing AR headset on the market.

“With HoloLens we have a device that understands people and environments, takes input in the form of gaze, gestures and voice, and provides output in the form of 3D holograms and immersive spatial sound,” Kipman wrote. “With Project Kinect for Azure, the fourth generation of Kinect now integrates with our intelligent cloud and intelligent edge platform, extending that same innovation opportunity to our developer community.”

At Microsoft, there’s a clear highway from game development to everyday, mainstream applications — and this road travels both ways. As video games feed the company’s AI and AR applications, serving as testing grounds for new technologies, advances in AI feed the game-development process, allowing creators to build smarter, larger, more personalized and more beautiful titles. However, a lot of this technology doesn’t end with games. More often than not, video games are just the beginning.


Images: Will Lipman for Engadget (Xbox Kinect)

from Engadget https://engt.co/2I5osef
via IFTTT

Watch a guy named Feliks Zemdegs solve a Rubik’s Cube in 4.22 seconds



Most of us can’t solve a Rubik’s Cube to save our lives, let alone finish the puzzle in seconds.

Australian speedcuber Feliks Zemdegs certainly makes it look easy, especially when he broke the Rubik’s Cube world record (again) at the Cube for Cambodia competition in Melbourne, Australia on Sunday.

Zemdegs managed to solve a Rubik’s Cube in a frighteningly quick 4.22 seconds, beating the previous record of 4.59 seconds, which he had shared with South Korea’s SeungBeom Cho.

Correction: A previous version of this article stated that Patrick Ponce was the previous record holder, when it was both Zemdegs and Cho who simultaneously held the record.


from Mashable! http://bit.ly/2rqLupA
via IFTTT

MIDI Polyphonic Expression is now a thing, with new gear and software

Standard

MIDI Polyphonic Expression (MPE) is now an official part of the MIDI standard. And Superbooth Berlin shows it’s catching on everywhere from granular synths to modular gear.

For decades now, it’s been easy enough to add expression to a single, monophonic line via various additional controls. But humans have more than one finger, and with MIDI there was, until recently, no standard way to shape multiple notes or fingers independently at the same time. All of that changed with the adoption of the MPE (MIDI Polyphonic Expression) specification.
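The core trick behind MPE is channel rotation: each sounding note gets its own MIDI channel, so per-note pitch bend and pressure can ride alongside that note without smearing across the whole chord. Here’s a minimal sketch using the Python mido library; the port name is a placeholder, and the zone setup is simplified to the common lower-zone default (master on MIDI channel 1, member notes on the channels above it):

```python
import mido

# Minimal MPE-style sketch: every note gets its own channel, so bending
# one note leaves the others untouched. The port name is a placeholder;
# pick a real one from mido.get_output_names().
out = mido.open_output('MPE Synth')

# Two simultaneous notes on member channels. (mido channels are 0-indexed,
# so channel 0 here is MIDI channel 1, the lower-zone master channel.)
out.send(mido.Message('note_on', channel=1, note=60, velocity=100))  # C4
out.send(mido.Message('note_on', channel=2, note=64, velocity=100))  # E4

# Bend only the C4 upward; the E4 holds steady.
out.send(mido.Message('pitchwheel', channel=1, pitch=2048))

out.send(mido.Message('note_off', channel=1, note=60))
out.send(mido.Message('note_off', channel=2, note=64))
```

On a conventional single-channel synth, that pitchwheel message would bend every sounding note at once – which is exactly the limitation MPE was written to remove.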

Here’s a nice video explanation from our friend, musician and developer Geert Bevin:

“Oh, fine,” naysayers could say, “but is that really for very many people?” And sure enough, there haven’t been many instruments that know what to do with the MPE data from a controller. So while you can pick up a controller like the ROLI Seaboard (or more boutique items from Roger Linn and Madrona Labs), and see support in major DAWs like Logic, Cubase, Reaper, GarageBand, and Bitwig Studio, mostly what you’d play would be specialized instruments made for them.

But that’s changing. It’s changing fast enough that you could spot the theme even at an analog-focused show like Superbooth.

Here’s a round-up of what was on display at Superbooth alone – and even that isn’t a complete list of the hardware and software support available now.

Thanks to Konstantin Hess from ROLI, who helped me compile this list and provided some photos, as well as to ROLI’s Jean-Baptiste Thiebaut.

So many controllers. In addition to old favorites like ROLI, Roger Linn Designs, and Madrona Labs, who worked together to champion the standard (and all of which were on-site at Superbooth), you now have many new devices that send MPE data.

Playful Instruments Joué. With engineering from the original creator of the JazzMutant Lemur hardware, Joué has impressively sensitive tracking – good enough that it can read through other materials. (So, if you don’t like the feeling of vinyl and whatnot, you could try leather or felt.) It’s a USB device with custom overlays. (Pictured above.)

The Sensel Morph employs a similar idea, with the advantage of a bunch of pro app overlays – so your MIDI controller can help you get through video editing, too, for instance.

Expressive E Touché. This is a different concept, adding expression to existing controllers in the form of a beautiful wooden paddle. It’s terrifically sensitive, the design provides kinetic feedback to your hand, and it demonstrates that MPE can go very different directions. I’ll have a review this month; I’ve been applying it to various applications to test.

Enhancia is a 3-axis MIDI ring currently on Kickstarter. (I missed trying this one, but it was somewhere at Superbooth – as was a competing gyro ring from Sweden.)

Polyend/Dreadbox Medusa. This all-in-one sequencer/synth is one I’ll write up separately. Its grid has dedicated X/Y/Z movement, and it’s terrifically expressive. What’s great is that it uses MPE, so you can record and play that data in supported hosts – or presumably use the same data to sequence other MPE-compatible gear. And that also means:

Polyend SEQ. The Polish builder’s standalone sequencer also supports MPE. As on the Medusa, you can play that data live, increment through it, or step-sequence control input.

Tasty Chips GR-1 Granular Synthesizer. Granular instruments have always posed a challenge when it comes to live performance, because they require manipulating multiple parameters at once. That of course makes them a natural for MPE – and sure enough, when Tasty Chips crowd-funded their GR-1 grain synth, they made MPE one of the selling points. Connect something like a Seaboard, and you have a granular instrument at your command. (An ultra-mobile, affordable Seaboard BLOCK was there for the demo in Berlin.)

The singular Gaz Williams recently gave this a go:

Audio Damage Quanta. The newest iOS app/desktop plug-in from Audio Damage isn’t ready for release yet, but an early build was already at Superbooth, connected to both a Linnstrument and a ROLI Seaboard for control. Pair an iPad with your controller, and you have a mobile grain-instrument solution.

Expert Sleepers FH-1. The FH-1 is a unique MIDI-to-CV modular interface, with both onboard USB host capabilities and polyphonic support. But what would polyphonic input be if you couldn’t also add polyphonic expression? And sure enough, the FH-1 is adding support for that natively. I’m hopeful that Bastl Instruments will choose to do the same with their own 1983 MIDI module.

Polyend Poly module. Also from Polyend, the Poly is designed around polyphony – note the eight-row matrix of CV out jacks, which makes it a sophisticated gateway from MIDI and USB MIDI to voltage. But this digital-to-analog gateway also has native support for MPE, meaning the moment you connect an MPE-sending controller, you can patch that expression into whatever you like.

Endorphin.es Shuttle Control. Shuttle Control is both a (high res) 12-bit MIDI-to-CV converter and practically a little computer-in-a-module all its own. It’s got MPE support, and was showing off that capability at Superbooth.

Once you have that MIDI bridge to voltage, of course, MPE gives you additional powers over a modular rig, so this opens up a lot more than just the stuff mentioned here.

I even know some people switching from Ableton Live to Bitwig Studio just for the added convenience of native MPE support. (That’s a niche, for sure, but it’s real.) I guess the key here is, it takes just one instrument or one controller you love to get you hooked – and then sophisticated modular and software environments can connect to still more possibilities.

It’s not something you’re going to need for every bassline or use all the time, but for some instruments, it adds another dimension to sound and playability.

Got some MPE-supporting picks of your own, or your own creations? Do let us know.

The post MIDI Polyphonic Expression is now a thing, with new gear and software appeared first on CDM Create Digital Music.

from Create Digital Music http://bit.ly/2IlbCvF
via IFTTT