This team of 17-year-old high-school seniors in California created a device that could help prevent future wildfires


Smart Wildfire Sensor

  • Sanjana Shah and Aditya Shah are seniors at Monta Vista High School in Cupertino, California, who have created a device to help predict and prevent wildfires.
  • Their device, dubbed the "Smart Wildfire Sensor," captures photos of nearby fallen branches and leaves (known as "fuel") and uses machine learning to categorize and determine wildfire threat levels. 
  • Today, forest crews do not have real-time fuel conditions because they have to send out teams to physically record this information. 
  • Sanjana and Aditya have entered their Smart Wildfire Sensor into Google’s AI for Social Good program, which will provide $25 million in grant funding to winning teams.

Sanjana Shah was interning at the Lawrence Berkeley National Laboratory in the summer of 2017 when a wildfire broke out in the nearby hills.

She and her co-workers were told to immediately leave the lab, evacuating with the red flames and black smoke at their backs. Sanjana made it to the nearest library on Berkeley’s campus and texted her parents, letting them know she needed a ride home early that day.

“It was just a really traumatic experience,” Sanjana remembers. She was 15 years old.

Two years later, Sanjana and her Monta Vista High School classmate Aditya Shah (the two aren’t related, though they share the same last name) teamed up to fight an issue close to their hearts — wildfires. The 17-year-old Cupertino, California natives have both witnessed the destruction wildfires can cause (most recently with the Camp Fire, around 150 miles from the Bay Area) and are putting their bright engineering minds together to find a better solution.

“The current problem with wildfire prediction is that forest crews do not have up-to-date fuel conditions in real-time because they physically have to go to each and every single forest site and classify fuels manually,” Sanjana told Business Insider. “We’re trying to prevent all of this manual labor from happening by predicting where a wildfire could occur in the first place.”

The two are creating what they call a “Smart Wildfire Sensor” to help predict areas of a forest that are highly susceptible to wildfires and provide alerts to local fire departments.

Smart Wildfire Sensor

Their device, which is still in its beta phase, works by being affixed to trees roughly every square mile in a forest, capturing images of nearby fallen branches and leaves. Those photos are then classified using machine learning into 13 different categories of varying threat. Sanjana and Aditya are using an open-source machine-learning tool by Google called TensorFlow to process and categorize the photos.

When implemented, alerts will be sent to nearby fire crews when the forest fuel density and dryness reach a certain threat level.

“Especially in the last month with the Camp Fire taking around 60 lives, knowing that our device is actually able to prevent wildfires from occurring in the first place and knowing that we’ve been able to hone the technology in our generation to solve problems that have been existing for millions and millions of years,” Sanjana explains, “That’s the satisfaction we’ll receive after we’re done with the prototype to prove that it actually works.”

Sanjana and Aditya are already in talks with Cal Fire to begin testing their Smart Wildfire Sensor, though discussions have been halted due to the recent fires.

Read more: Authorities are still searching for the remaining 993 missing people after the Camp Fire roared through Paradise

The high school senior duo are also entering their device to compete in Google’s AI for Social Good program, which will provide $25 million in grant funding to teams who are “[using] AI to help address some of the world’s greatest social, humanitarian and environmental problems,” according to the company’s website. 

If they were to receive funding from Google, Aditya says, “that would be really amazing. We would definitely use that money to benefit the social good by combating wildfires using our Smart Wildfire Sensor and developing it further.”

As for skipping college if they were awarded, say, $5 million from Google’s program, Sanjana and Aditya both said that idea wasn’t on their minds.

“We both think education is really important to us,” Sanjana says. “We’re both interested in engineering, whether it’s biology or computer software. We’re really interested to further our education. So we’d definitely be continuing our education even if we were to win $5 million.”


from SAI

NASA chooses the landing site for its Mars 2020 rover mission


Five years and sixty potential locations later, NASA has chosen the Jezero Crater as the landing site for its Mars 2020 rover mission.

Slated to launch in July 2020, the Mars 2020 rover mission will touch down at the Jezero Crater as NASA’s exploration of the Red Planet enters its next phase.

The rover will be looking for signs of habitable conditions — and past microbial life — while also collecting rock and soil samples that will be stored in a cache on the Martian surface.

Alongside the European Space Agency, NASA is already studying future missions that will allow the agencies to retrieve the samples and return them to Earth. According to NASA, this new landing is the first step of a planned decade-long exploration of Mars.

“The landing site in Jezero Crater offers geologically rich terrain, with landforms reaching as far back as 3.6 billion years old, that could potentially answer important questions in planetary evolution and astrobiology,” said Thomas Zurbuchen, associate administrator for NASA’s Science Mission Directorate, in a statement. “Getting samples from this unique area will revolutionize how we think about Mars and its ability to harbor life.”

The crater is located on the western edge of Isidis Planitia, a giant impact basin just north of the Martian equator, with some of the oldest and most scientifically interesting landscapes Mars has to offer, according to NASA scientists.

Mission scientists believe the 28-mile-wide crater once held an ancient river delta, and could have collected and preserved organic molecules and other potential signs of microbial life from the water and sediments that flowed into the crater.

NASA thinks it can collect up to five different kinds of Martian rock, including clays and carbonates that may preserve indicators of past life. There’s also the hope that minerals have been swept into the crater over the last billion years, which the rover could also collect.

It was the geologic diversity of Jezero Crater that ultimately tipped the scales for NASA scientists, but the site’s contours will make things a bit trickier for NASA’s entry, descent and landing engineers, according to a statement from the agency.

“The Mars community has long coveted the scientific value of sites such as Jezero Crater, and a previous mission contemplated going there, but the challenges with safely landing were considered prohibitive,” said Ken Farley, project scientist for Mars 2020 at NASA’s Jet Propulsion Laboratory, in a statement. “But what was once out of reach is now conceivable, thanks to the 2020 engineering team and advances in Mars entry, descent and landing technologies.”

This Mars mission will be the first to feature new Terrain Relative Navigation technologies to allow the rover to avoid hazardous areas during the “sky crane” descent stage — when the rocket-powered system carries the rover down to the surface.

The site selection is dependent upon extensive analyses and verification testing of the TRN capability. A final report will be presented to an independent review board and NASA Headquarters in the fall of 2019.

“Nothing has been more difficult in robotic planetary exploration than landing on Mars,” said Zurbuchen. “The Mars 2020 engineering team has done a tremendous amount of work to prepare us for this decision. The team will continue their work to truly understand the TRN system and the risks involved, and we will review the findings independently to reassure we have maximized our chances for success.”

Now that the site has been selected, rover drivers and NASA’s science operations team can start planning for the exploration of the crater once the rover is on the ground. Using information from Mars orbiters, they will map the terrain and try to identify regions that could be the most interesting for the rover to explore.

Mars 2020 will launch from Cape Canaveral Air Force Station in Florida.

from TechCrunch

How to Stop Using So Much Disposable Plastic

Photo: Louis Hansel

We are drowning the world in plastic. It washes onto our beaches, it sits entombed for centuries in landfills, it floats around the ocean in a cloud of microscopic particles twice the size of Texas. Most of it—75% in the U.S.—never gets recycled. And while recycling takes just 10% of the energy of making new plastic, a real saving, it would be nice to use a little less in the first place.

Reporter Jenna Wortham asked Twitter followers for some ways to use less disposable plastic. You don’t have to follow all of the suggestions in the replies; I certainly don’t plan to bring my own take-home containers to restaurants. Sorry. But would it be that terrible to try one or two of these tips? No, it would not be terrible. It actually might be enjoyable.

Replace plastic bags with reusable bags

Bring reusable produce bags to the grocery store, says Emily M-M. Keep tote bags in your car or your purse. (Get the ones that fold up small.) Reuse your Ziplocs, at least the dry ones.


Here is a major caveat: Do not buy a plastic replacement unless you’re ready to use it. If you don’t use them, reusable grocery bags are just a new innovation in garbage. Decline freebies or find a good use. (I turn all the mediocre totes that inexplicably pile up in my home into “giveaway bags” that store all my donations to the charity shop.)

Replace your other disposable plastic

Try some reusable food wraps instead of Saran Wrap, like Kasia Mychajlowycz. Try bar soap instead of plastic-bottled body wash, like Aimee Louise Sison. Look for brands that sell the same thing in paper instead of plastic. Next time you host a party, see if you have enough real dishes to cover everyone.

When you do buy disposable stuff, lean toward paper again.

Carry your own food containers

Easy level: Make your own coffee and if you want it on the go, use a thermos. One thermos, which you keep for years. Keep a glass water bottle at your desk, and a collapsible water bottle in your bag.

Medium level: bring your own reusable silverware around, like Christine Friar. It’s a little weirder, but as a bonus, you never have to use the really shitty kind of plastic fork—the one that seems designed to hold onto zero food—again.


Hard level: Bring your own takeout containers to restaurants, says Ana Cecilia Alvarez. Handle your own doggie bag instead of making the staff go fetch you some fresh garbage for your leftovers. It’s a little awkward and it takes more planning, but hey, you get to choose your own Tupperware.

Get less takeout

This is a tough one for me. While Alvarez is right that we go through actual tons of plastic with takeout and delivery, switching to more home meals is an actual time commitment. But I certainly don’t think enough about how much crap I’m throwing away every time I order Indian from across the street.

Look up your local recycling rules

Bad recycling drives me crazy. Some people at the Lifehacker office—certainly not Lifehacker staffers!—throw goddamn plastic bags into the blue bin. You cannot do that! Not in most places! You are creating problems for the recycling center and you are ruining everyone else’s good work!

Now yeah, it’s frustrating that different cities and states have different recycling rules. But it’s not arbitrary—different recycling facilities have different capabilities—and all you really need to learn are the rules for where you live and where you work.


If you want to recycle plastic bags, you need to do the tiniest bit more work: stick them in your pocket or purse and drop them off at the nearest grocery store or drugstore chain. A lot of these chains have a plastic bag recycling bin right out front; several states mandate this. Twitter’s @shityeahitscool points to a directory where you can find nearby dropoff locations. Not a difficult habit to get into. Worst case, you end up with some bags in your pocket when you’re at the grocery store and you forgot your tote again.

These tricks will not save the world. Plastic is just 10% of our country’s garbage, and our individual plastic-pinching lives in the shadow of massive corporate and industrial waste. But practicing and normalizing attentive consumption is healthy, and reducing the demand for petroleum hammers one more little dent into the power of those industrial giants.

from Lifehacker

Understanding Backpropagation



By Varun Divakar

In the previous blog, we learned how to perform forward propagation. In this blog, we will continue the same example and rectify the errors in prediction using the back-propagation technique.

What is Back Propagation?

Recall that we created a 3-layer (2 input, 2 hidden, and 2 output neurons) network. But once we added the bias terms to our network, it took the following shape.

After completing forward propagation, we saw that our model was incorrect, in that it assigned a greater probability to Class 0 than Class 1. Now, we will correct this using backpropagation.

Why Backpropagation?

During forward propagation, we initialized the weights randomly. Therein lies the issue with our model: given that the weights were random, the probabilities we get as output are also random. Thus, we must have some means of making our weights more accurate so that our output will be more accurate. We adjust these random weights using backpropagation.

Loss Function

While performing back-propagation, we need to measure how good our predictions are. To do this, we use a Loss (or Cost) function, which quantifies the difference between our predicted and actual values. We then minimize this function to optimize our model and improve our predictions’ accuracy. In this document, we will discuss one such technique, called Gradient Descent, which is used to reduce the Loss. The choice of loss function depends on the problem; in this example, we will use the Mean Squared Error (MSE) method, in which the Loss is calculated as the sum of the squares of the differences between the actual and predicted values.

Loss = Sum (Predicted – Actual)²
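The MSE loss above can be sketched in a few lines of Python (the function name and sample values here are ours, for illustration only):

```python
def mse_loss(predicted, actual):
    """Sum of squared differences between predicted and actual values."""
    return sum((p - a) ** 2 for p, a in zip(predicted, actual))

# Hypothetical predicted probabilities vs. target values
loss = mse_loss([0.73, 0.77], [0.75, 0.80])   # (-0.02)^2 + (-0.03)^2 ≈ 0.0013
```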

Let us say that our Loss or error in prediction looks like this:

Error in prediction

We aim to reduce the loss by changing the weights such that the loss converges to its lowest possible value. We reduce the loss in a controlled way, taking small steps towards the minimum. This process is called Gradient Descent (GD). While performing GD, we need to know the direction in which the weights should move, i.e., whether to increase or decrease them. To find this direction, we take the derivative of our Loss function, which gives us the direction of change of the function. Below is an equation that shows how to update the weights using Gradient Descent.

Weights using gradient descent

Here the alpha term, α, is known as the learning rate and is multiplied by the derivative of our Loss function (J). (Recall that we discussed how to calculate the derivatives of a function in the chain rule of derivatives document.) We subtract this product from the current weight to update it. Note that this form of the derivative is a partial derivative: while taking it, the remaining terms are treated as constants.
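The update rule can be written as a one-line helper; the weight and gradient values below are illustrative, not taken from the article:

```python
def gradient_descent_step(weight, gradient, alpha=0.1):
    """Update a weight by stepping against the loss gradient.

    The partial derivative points in the direction in which the loss
    increases, so we subtract alpha times the gradient."""
    return weight - alpha * gradient

w = gradient_descent_step(0.5, 0.2)   # 0.5 - 0.1 * 0.2 = 0.48
```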


If you consider the curve in the above figure as our loss function with respect to a feature, then the derivative is the slope of our loss function and represents the instantaneous rate of change of y with respect to x. While performing back-propagation, we need to find the derivative of our Loss function with respect to our weights. In other words, we are asking, “How does our Loss function change when we change our weights by one unit?”. We then multiply this by the learning rate, alpha. The learning rate controls the step size of the movement towards the minima: intuitively, a large learning rate means big steps, while a small learning rate means small steps. Thus, the learning rate multiplied by the derivative can be thought of as a step taken over the domain of our Loss function. Once we make this step, we update our weights, and the process is repeated for each weight.

In the example below, we will demonstrate the process of backpropagation in a stepwise manner.

Backpropagation Stepwise

Let’s break the process of backpropagation down into actionable steps.

  1. Calculate the Loss function (i.e., the total error of the neural network)
  2. Calculate the partial derivatives of the total error w.r.t. each weight
  3. Perform Gradient Descent and update the weights

The first thing that we need to do is to calculate our error. We define our error using the MSE formula as follows:

Error = (Target – Output) ²

This is the error for a single class. To compute the error in predicted probabilities for both classes of an example, we combine the errors as follows.

Total Error = Error₁ + Error₂

Where Error₁ and Error₂ represent the errors in predictions for the two classes.
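In code, combining the per-class errors might look like this (the target and output values are placeholders; the article’s exact numbers appear only in its figures):

```python
def class_error(target, output):
    """Squared error for a single class: (Target - Output)^2."""
    return (target - output) ** 2

# Placeholder targets and outputs for the two classes
targets = [0.01, 0.99]
outputs = [0.04, 0.90]

total_error = class_error(targets[0], outputs[0]) + class_error(targets[1], outputs[1])
# 0.0009 + 0.0081 = 0.009
```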
Recall that our output was a3, which was computed to be:

Error in prediction

indicating a lower predicted probability for Class 1 than for Class 0. In our example, we stated that Class 1 should have had the greater probability and thus been our predicted class label. To further illustrate this, we create some hypothetical target probability values for Class 0 and Class 1 for ease of understanding.

Let us assign the following target values (t) for the output layer probabilities:

Target values for output probabilities

Now, let’s compute the errors.

Error calculation


So, the Total Error in the prediction is 0.009895.


Each error term contains a predicted value, and each predicted value is a function of the weights and the inputs from the previous layer. Extending this logic, our total error is a function of the different weights; in other words, it is multivariate. Because we have multiple weights, we must use partial derivatives to find out how a change in one specific weight changes our total error. This means that we must use the chain rule to decompose the errors.

Once we have computed the partial derivative of our error function with respect to a weight, we can apply the Gradient Descent equation to update that weight. We repeat this for each of the weights and for all the examples in the training data. Every such pass over all the examples is called an Epoch, and this process is repeated many times. We keep performing these passes until the loss converges, i.e., the loss function stops improving.
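The loop described above (forward pass, loss, backward pass, weight update, repeated over epochs until the loss converges) can be sketched with a deliberately tiny stand-in model, a single-weight linear fit rather than the article’s 2-2-2 network, whose details live in its figures:

```python
# Self-contained sketch of the training loop. One full pass over all
# the examples is one epoch; we stop when the loss stops improving.
def train(w, data, alpha=0.1, max_epochs=10000, tol=1e-12):
    prev_loss = float("inf")
    for _ in range(max_epochs):
        loss = 0.0
        for x, target in data:
            output = w * x                    # "forward propagation"
            loss += (target - output) ** 2    # accumulate the MSE loss
            grad = 2 * (output - target) * x  # dLoss/dw via the chain rule
            w -= alpha * grad                 # gradient descent update
        if abs(prev_loss - loss) < tol:       # convergence check
            break
        prev_loss = loss
    return w

w = train(0.0, [(1.0, 2.0), (2.0, 4.0)])      # targets follow 2 * x, so w -> ~2.0
```

The per-example update inside the inner loop is exactly the Gradient Descent equation from earlier, applied once per weight per example.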

Now that we have understood the process of backpropagation, let’s implement it. To perform backpropagation, we need to find the partial derivatives of our error function w.r.t. each of our weights. Recall that we have a total of eight weights (i.e., before adding the bias terms): two weights from our first input neuron to the hidden layer, two from our second input neuron to the hidden layer, and four from our hidden layer to the output layer.
Let’s label these weights as follows.

Labeling weights


We call the weights from our first input neuron w1 and w3, and the weights from the second input neuron w2 and w4. The weights from our hidden layer’s first neuron are w5 and w7, and the weights from the second neuron in the hidden layer are w6 and w8.
In this example, we will demonstrate backpropagation for the weight w5. Note that the same process can be used to update all the other weights in the network.

Let us see how to represent the partial derivative of the loss with respect to the weight w5, using the chain rule.

Chain rule

Where ‘i’ in the subscript denotes the first neuron in the output layer.
To compute the first derivative of the chain, we express our total error equation as:

Total error equation

Here j in the subscript denotes the second neuron in the output layer.
The partial derivative of our error equation with respect to the output is:

Partial error equation

Substituting the corresponding values, we will get:

Next, we find the second term in our equation. Recall that in the forward propagation step, we used the sigmoid or logistic function as our activation function. So, for calculating the second element in the chain we must take the partial derivative of the sigmoid with respect to its input.

Now, recollect that the sigmoid function is as follows:

Sigmoid function

The derivative of this activation function can also be written as follows:

Derivative of activation function

The derivative can be applied for the second term in the chain rule as follows:

Second term in chain rule

Substituting the output value in the equation above we get:

0.733(1 – 0.733) ≈ 0.1957
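This convenient identity, σ′(z) = σ(z)(1 − σ(z)), is easy to verify numerically in Python:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_derivative(a):
    """Derivative of the sigmoid in terms of its own output a = sigmoid(z)."""
    return a * (1.0 - a)

# With the article's activation of ~0.733:
d = sigmoid_derivative(0.733)     # ≈ 0.196

# Sanity check against a central-difference numerical derivative at z = 1
z = 1.0
numeric = (sigmoid(z + 1e-6) - sigmoid(z - 1e-6)) / 2e-6
```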

Next, we compute the final term in the chain equation. Our third term encompasses the inputs that we used to pass into our sigmoid activation function. Recall that during forward propagation, the outputs of the hidden layer are multiplied by the weights. These linear combinations are then passed into the activation function and the final output layer.

Recollect that these weights are given by Theta2.


And let us say that the outputs from our Hidden Layer are given as follows.

Output of hidden layer

To visualize the matrix multiplication that follows, please see the diagram below:

Here, H1 and H2 denote the hidden layer neurons.

Our equation for the third term is concerned with the partial derivative of the input into the node with respect to our fifth weight. Our fifth weight is associated with the first neuron in our hidden layer, as shown above. So, when we take the partial derivative with respect to w5, all the other weights are treated as constants and their derivatives are zero.

So, when we differentiate the input (the value we received from the combination of Theta2 and the outputs of our Hidden Layer) with respect to w5, the result looks like this:

Hidden neuron output


Where output is the Hidden neuron H1’s output.

Now that we have found the value of the last term in our equation, we can compute the product of all three terms to derive the partial derivative of our error function w.r.t w5.

Partial derivative of error function

We can now use this partial derivative in our Gradient Descent equation as shown, to adjust the weight w5.

w5 gradient descent equation


So, the updated weight w5 is 0.3995. As you can see, the value of w5 has changed only a little, since our learning rate (0.1) is very small. This small change in w5 may not affect the final probability much. But if the same process is performed multiple times for both the examples, and the weights are adjusted on every pass (epoch), we will get a final neural network that makes the expected prediction.
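Numerically, the update is one subtraction. The initial value of w5 and the gradient below are assumptions chosen to reproduce the stated result of 0.3995; the article’s exact figures live in its images:

```python
alpha = 0.1        # learning rate used in the article
w5 = 0.4           # assumed initial value of w5 (not given in the text)
dloss_dw5 = 0.005  # assumed product of the three chain-rule terms

w5_updated = w5 - alpha * dloss_dw5   # 0.4 - 0.1 * 0.005 = 0.3995
```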

Updating our Model

After completing backpropagation and updating both the weight matrices across all the layers multiple times, we arrive at the following weight matrices corresponding to the minima.

weight matrices

We can now use these weights and complete the forward propagation to arrive at the best possible outputs. Recall that the first step in this process is to multiply the weights with inputs as shown below.


Recall that we take the transpose of our X matrix to ensure that our weights line up. Here we are using our new updated weights for Theta1, and our matrix multiplication will now look like the following:

Updated multiplication table

This is our new z² matrix, or the output of the first layer.

Recall that our next step in forward propagation was to apply the sigmoid function element-wise to our matrix. This will yield the following:

Application of sigmoid function

This is the output of the hidden layer.
Again, this is our activation layer and will serve as the new input into our final layer. We add back our bias term, and thus our new a² looks like the following:

Final layer

Now we will use the new values of our Theta2 weight matrix to create the input for our output layer. We perform the following computation to arrive at the new value of z3:

Output matrix

Output matrix 1

After this matrix multiplication, we apply our sigmoid function element-wise and arrive at the following for our final output matrix.

Final back propagation

We can see here that after performing backpropagation and using Gradient Descent to update our weights at each layer we have a prediction of Class 1 which is consistent with our initial assumptions.
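The full forward pass with updated weights can be sketched end to end. The weight matrices below are hypothetical stand-ins, since the article’s trained Theta1 and Theta2 appear only in its figures:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, theta1, theta2):
    """Forward pass through a 2-2-2 network with bias terms.

    Each weight-matrix row is [bias, w_from_neuron_1, w_from_neuron_2]."""
    a1 = [1.0] + x                                          # add bias to the input
    z2 = [sum(w * a for w, a in zip(row, a1)) for row in theta1]
    a2 = [1.0] + [sigmoid(z) for z in z2]                   # activate, add bias back
    z3 = [sum(w * a for w, a in zip(row, a2)) for row in theta2]
    return [sigmoid(z) for z in z3]                         # class probabilities

# Hypothetical "trained" weights, not the article's values
theta1 = [[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]]
theta2 = [[-0.3, 0.2, 0.4], [0.1, 0.6, 0.8]]

probs = forward([1.0, 2.0], theta1, theta2)
predicted_class = max(range(len(probs)), key=probs.__getitem__)
```

With these particular weights the second output is the larger one, so Class 1 is predicted, matching the conclusion the article reaches with its own trained weights.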

If you want to learn how to apply Neural Networks in trading, then please check our new course on Neural Networks In Trading.


10 of the coolest things in space that you had no idea existed



Although we don’t know much about our expanding and potentially infinite universe, what we have found so far is a mix of awe-inspiring, terrifying, and downright weird.

Here are a few space oddities that you had no idea existed.


There’s a giant space cloud that might smell like rum.

Space cloud Sagittarius B2 is a vast cloud of dust and gas at the center of our galaxy. The cloud contains large amounts of ethyl formate, the molecule that gives rum its distinctive aroma and provides raspberries with their fruity taste.

So if you were to float through Sagittarius B2, you might be surrounded by the aroma of rum and the taste of raspberries.

Scientists have found a planet that might be made of solid diamond.

In 2017, an international research team of astronomers discovered what may be a planet made of solid diamond.

Pulsars are tiny, dead neutron stars that are only around 12.4 miles (20 kilometers) in diameter and spin hundreds of times a second while emitting beams of radiation.

This planet is paired with pulsar PSR J1719-1438 and scientists think it is entirely made of carbon so dense that it must be crystalline, meaning a large part of the world would be diamond. Incredibly, the planet "orbits its star every two hours and 10 minutes, has slightly more mass than Jupiter but is 20 times as dense," according to Reuters. 

There’s also a planet that’s made completely of ice – but it’s on fire.

Gliese 436b is a bit of a paradox. The faraway exoplanet is made mostly out of ice. But strangely, this ice appears to be on fire.

The surface of Gliese 436b is a searing 822 degrees Fahrenheit (439 degrees Celsius), but the planet’s icy landscape stays frozen due to the immense gravitational force exerted by the planet’s core. This force keeps the ice much denser than the ice we’re familiar with here on Earth and is thought to even compress any water vapor that might evaporate.


from SAI

Gallery: a new documentary digs into techno’s 80s Detroit roots


“God Said Give ‘Em Drum Machines” tells the story of Detroit techno through the eyes of a documentary team that grew up in Detroit – and with time running out, they’re short of their funding goal. Happily, you have the power to change that.

God Said Give ‘Em Drum Machines: The Story of Detroit Techno

Behind all the history and legend, there’s always a human story of how things happen. What’s appealing about this film above others is that it’s not just one icon or one machine, but the relationships between the artists that take the spotlight. And it’s at last a film about Detroit’s influence from Detroit’s perspective – not just the European scene where the genre eventually turned into a runaway financial success.

The requisite originators all star – Juan Atkins, Kevin Saunderson, Derrick May, Eddie Fowlkes, Blake Baxter, and more – so this is definitely one I look forward to watching.

Of course, funding independent film these days is a major ordeal, particularly for American filmmakers. And so it’s disheartening to see that, with days running out on the crowdfunding campaign, the filmmakers haven’t met their very modest funding goal. There are some lovely perks in there – just US$5 gets you an exclusive mixtape – so I hope you’ll get the chance to give this a nod.

Motor City natives Kristian Hill and Jennifer Washington are looking for just the finishing funds to put this out.

I asked Jennifer to walk us through some stills from the film, so here’s an exclusive gallery for CDM.

Young child at Movement Festival, Detroit.

Motor City, now.

Cover of Record Mirror, June 1988.

The Scene Dance Show, Detroit, circa 1983.

Cybotron’s vision of future cities, 1983.

Blake Baxter plays those drum machines.

Kevin Saunderson, Derrick May, Juan Atkins.

Juan Atkins, Eddie Fowlkes.

Classic Transmat label, illustrated by Alan Oldham.

Mike Huckaby.

Kevin Saunderson.

God Said Give ‘Em Drum Machines: The Story of Detroit Techno [Kickstarter]


Detroit techno, the 90s comic book – and epic new DJ T-1000 techno

In a documentary film, a return to Detroit and speaker f***ing

from Create Digital Music