“Mounted Chimeras” Features Fantastic Creatures “Made” Through Multiple Exposures





“Mounted Chimeras” demonstrates one of the most interesting and clever uses of analog double/multiple exposures we’ve seen so far.

Ever wanted to make something different using double or even multiple exposures with film cameras? Something that’s not a silhouette of someone set against cityscapes or flowers? We’ve found an interesting project that makes use of this beloved analog technique in a rather fantastic way — creating surreal, mythological creatures.

Italian photographer and filmmaker Silvia Kuro describes her aptly titled project, Mounted Chimeras, as “an analog photography bestiarium where fantastic animals, trapped inside museums, are created with multiple exposures on BW film.” Indeed, her creatures will tickle the imagination: an ostrich with the head of an antelope, a deer with the body of a cheetah, and a great hornbill with the body of what looks like a deer. With every creature — or Chimera — seamlessly “made” without post-production or manipulation, one can’t help but wonder about the ideas and process behind the project.

Kuro is currently raising $3,437 for Mounted Chimeras on Kickstarter, with the funds going toward film and chemicals for making new Chimeras and toward printing a book. On the campaign page, she describes how the project began with a chance visit to a zoology museum, where she came across taxidermy animals. She also drew inspiration from the many representations of animals in prehistoric art, including peculiar petroglyphs and paintings of human-animal hybrids.

Likewise, the bestiarium of the Middle Ages, with its drawings and descriptions of both real and fantastic animals, became another source of inspiration for the project. “This approach was dictated by the few notices of travellers about some of the animals, for example the crocodile was often drawn with ears and body similar to a dog. The manticore instead was a creature from Persia, the body was that of a lion, scorpion’s tail and human head!”

If this project is something you’d like to support, head to the Mounted Chimeras Kickstarter campaign to learn more and make your pledge.

 

All images from the Kickstarter campaign by Silvia Kuro


from The Phoblographer http://bit.ly/2AqX8G2
via IFTTT

Hubble Telescope Was Broken And NASA Fixed It After Turning It Off And On Again



The Hubble Space Telescope was broken and NASA engineers weren’t sure how to fix it. So they turned it off and on again. Now the Hubble Space Telescope works perfectly once more. The Hubble Space Telescope… it’s just like your internet router!

Two weeks ago, the Hubble Space Telescope entered safe mode after a gyroscope failure. The telescope uses three motion-sensing gyroscopes to keep itself stable while floating freely in space. In early October, one of the gyroscopes failed and the backup gyro didn’t pick up the slack: having sat deactivated for more than seven and a half years, it was rotating too fast to hold the telescope in place.

NASA engineers restarted the gyro on October 16th by turning it off for one second and then turning it back on. Then the Hubble Space Telescope did a series of maneuvers that switched the gyro from high-rotation mode to low-rotation mode to “dislodge any blockage that may have accumulated around the float.” The relatively simple fix worked and the gyroscopes started performing normally once again.


They essentially turned it off and on, then wiggled the wires to get the space telescope working correctly again. So basically, the same procedure customer service walks you through for any malfunctioning home electronics is what NASA used to fix the $1.5 billion spacecraft.

[Engadget]

from BroBible.com http://bit.ly/2O2qr5d
via IFTTT

Watch as a ghostly creature swims through dark waters off the California coast



On Tuesday, at some 10,000 feet beneath the sea, marine scientists spotted a little-seen octopus swimming through the dark waters.

A robotic remotely operated vehicle (ROV) piloted by the Ocean Exploration Trust filmed the bell-shaped octopus, a member of the genus Grimpoteuthis, as the ROV maneuvered around a deep-sea reef off the central California coast.

This specific area, near the inactive volcano known as the Davidson Seamount, is an uncharted deep-sea world, according to the exploration group. 

In these hard-to-reach, largely alien places, scientists regularly observe life that has never been seen or documented before.

A Grimpoteuthis octopus.

Image: Ocean Exploration Trust

Grimpoteuthis, however, has been studied before, if only to a limited degree. Even so, the genus isn’t well known to science.

While these octopuses may generally be little-known, scientists have identified 14 species of the Grimpoteuthis genus, though Ocean Exploration Trust scientists couldn’t determine which species they captured on the ROV’s camera.

What is known, however, is that Grimpoteuthis are largely deep-dwelling critters, and they have two U-shaped fins on their sides that they often use to propel themselves through the water.

Moving through a light fall of marine snow, the octopus travels through the water almost like a jellyfish before revealing its long, webbed tentacles.

Marine scientists acknowledge that our vast oceans are poorly explored. Much of the deep sea remains uncharted — similar to distant moons orbiting Saturn and Jupiter. 

Earth’s undersea realm largely remains a mystery.

from Mashable! http://bit.ly/2OKSoUd
via IFTTT

The Concorde made its final flight 15 years ago and supersonic air travel has yet to recover — here’s a look back at its awesome history


Concorde's last takeoff

  • British Airways operated its final commercial Concorde flight on October 24, 2003, from New York’s JFK International Airport to London Heathrow. 
  • It was the last commercial passenger flight for the Concorde in a career that began in 1976.
  • A total of 14 Concordes entered service with British Airways and Air France.
  • Co-developed by the British and the French, Concorde was the first and only viable supersonic commercial airliner. 
  • The Concorde could cruise at Mach 2.02, or around 1,340 mph, and fly comfortably at altitudes of up to 60,000 feet.

For a fleeting 27 years, supersonic commercial air travel was a reality. But on October 24, 2003, that era came to an abrupt end.

That day, British Airways operated its last commercial Concorde service, from JFK International Airport to London Heathrow. Air France had pulled its Concordes from service a few months earlier, making this the Concorde’s last-ever commercial flight and the end of a career that started in January 1976.

The Anglo-French Concorde was co-developed by BAC, a forerunner of BAE Systems, and Aerospatiale, now a part of Airbus.

The Concorde was never the commercial success its creators had hoped for. Environmental and operational restrictions limited its appeal among airline customers. Only 20 of the planes were ever built and just 14 of them were production aircraft. The Concorde saw service with only two airlines — Air France and British Airways — on just two routes.

However, its lack of commercial success doesn’t diminish its role as an icon of modern aviation and as a technological marvel. 

In fact, 15 years after its last flight for British Airways, the world is still without a viable form of supersonic passenger service. 

Here’s a look back at the awesome history of the Aerospatiale-BAC Concorde supersonic airliner: 


As soon as Chuck Yeager broke the sound barrier in 1947, commercial aviation companies began planning to take passengers past Mach 1.

On November 29, 1962, the governments of France and Great Britain signed an agreement, a concord, to build a supersonic jetliner, hence the name of the plane that resulted: Concorde.

Together, Aérospatiale — a predecessor of Airbus Industries — and British Aircraft Corporation agreed to produce a four-engine, delta-wing supersonic airliner.

See the rest of the story at Business Insider

from SAI https://read.bi/2R7y6RA
via IFTTT

The winning photos of Astronomy Photographer of the Year 2018 contest are out of this world



The winners of the 2018 Astronomy Photographer of the Year contest have just been announced. This is the tenth year of the competition, and just like before, the winning images didn’t disappoint. The judges had the difficult task of selecting 31 winners out of 4,200 images from 91 countries, and the selected best of the best will take your breath away.

The 2018 Astronomy Photographer of the Year is run by the Royal Observatory Greenwich, in association with Insight Investment and BBC Sky at Night Magazine. Professional and amateur photographers alike submitted their work, competing in nine categories:

  • People and Space
  • Aurorae
  • Galaxies
  • Our Moon
  • Our Sun
  • Planets, Comets and Asteroids
  • Skyscapes
  • Stars and nebulae
  • Young Astronomy Photographer of the Year

Additionally, there are two special prizes: The Sir Patrick Moore Prize for Best Newcomer and the Robotic Scope prize.

American photographer Brad Goldpaint was selected as the overall winner for his photo titled “Transport the Soul.” He received the main prize of £10,000 (around $13,000) for his stunning photo, while the winners of subcategories won £1,500 (around $1,950).

Dr. Melanie Vandenbrouck, Curator of Art at Royal Museums Greenwich and a judge for the competition, said that picking just 31 winners from the 134 shortlisted images was “fiendishly difficult”:

“With a competition that keeps on flourishing over the years, the growing community of amateur astrophotographers have time after time surprised us with technically accomplished, playfully imaginative and astoundingly beautiful images that sit at the intersection of art and science. This year did not disappoint. Their mesmerising, often astonishing photographs, show us the exquisite complexity of space, and movingly convey our place in the universe. And to see our young winners compete with seasoned photographers in their skill, imagination, and aesthetic sense, remains the greatest reward of all.”

The winning photos will be exhibited at the National Maritime Museum from 24 October 2018, so don’t miss it if you’re in London. But for all of you living far away (like I do), here are the winning images from all categories. I’m sure you’ll enjoy them!

People and Space

© Brad Goldpaint (USA) – Transport the Soul
Category winner and overall winner
Nikon D810 camera, 14 mm f/4.0 lens, ISO 2500, 20-second exposure

© Andrew Whyte (UK) – Living Space
Runner-up
Sony ILCE-7S camera, 28-mm f/2 lens, ISO 6400, 15-second exposure

© Mark McNeill (UK) – Me versus the Galaxy
Highly commended
Nikon D810 camera, 20-mm f/1.4 lens, ISO 5000, 10-second exposure

Aurorae

© Nicolas Lefaudeux (France) – Speeding on the Aurora lane
Winner
Sony ILCE-7S2 camera, 20-mm f/1.4 lens, ISO 2000, 3.2-second exposure

© Matthew James Turner (UK) – Castlerigg Stone Circle
Runner-up
Sony ILCE-7R camera, 22-mm f/4 lens, ISO 1000, 30-second exposure

© Mikkel Beiter (Denmark) – Aurorascape
Highly commended
Canon EOS 5DS R camera, 17-mm f/2.8 lens, ISO 2000, 8-second exposure

Galaxies

© Steven Mohr (Australia) – NGC 3521, Mysterious Galaxy
Winner
Planewave CDK 12.5 telescope, Astrodon Gen II LRGB, Baader H lens at 2541 mm f/8, Astro Physics 900 mount, SBIG STXL-11000 camera, Luminance: 33 x 1200 seconds [11hrs], H: 12 x 1200 seconds [4hrs], Red-Green-Blue: 450 x 12–18 seconds

© Raul Villaverde Fraile (Spain) – From Mirach
Runner-up
Takahashi FSQ 106ED telescope, Idas lps 2-inch lens, SkyWatcher Nq6pro mount, Canon 6D camera, 414-mm f/3.9 lens, ISO 1600, 24x30x400″ exposure

© César Blanco (Spain) – Fireworks Galaxy NGC 6939
Highly commended
Takahashi FSQ 106 ED telescope, LRGB Baader filters, ORION ATLAS EQ-G mount, QSI 583ws camera, 530-mm f/5 lens, 36 hours 30 mins exposure

Our Moon

© Jordi Delpeix Borrell (Spain) – Inverted Colours of the boundary between Mare Serenitatis and Mare Tranquilitatis
Winner
Celestron 14 telescope, Sky-Watcher NEQ6 Pro mount, ZWO ASI 224MC camera, 4,200-mm f/12 lens, multiple 20ms exposures

© Peter Ward (Australia) – Earth Shine
Runner-up
Takahashi FSQ85 telescope, Losmandy Starlapse mount, Canon 5D Mark IV camera, 500-mm f/5 lens, 9 exposures ranging from ISO 100 to 900, 150 2-seconds through to 1/4000th second exposures

© László Francsics (Hungary) – From the Dark Side
Highly commended
Homemade 250-mm f/4 Carbon Newton telescope, f/11, 250/1000 mirror lens, Skywatcher EQ6 mount, ZWO ASI 174 MM camera, 6250 mm f/4 lens increased to f/11, multiple 1/200-second exposures

Our Sun

© Nicolas Lefaudeux (France) – Sun King, Little King, and God of War
Winner
AF-S NIKKOR 105-mm f/1.4E ED lens, Nikon D810 camera on an untracked tripod, 105 mm f/1.4 lens, ISO 64, multiple exposures of 0.3-second, 0.6-second and 1.3-second

© Stuart Green (UK) – Coloured Eruptive Prominence
Runner-up
Home-built telescope based on iStar Optical 150mm f/10 lens, double stacked hydrogen-alpha filter at 5250 mm, Sky-Watcher EQ6 Pro mount, Basler acA1920-155um camera, 150-mm f/35 lens, multiple 0.006-second exposures as an AVI

© Haiyang Zong (China) – AR2673
Highly commended
Sky-Watcher DOB10 GOTO telescope, Optolong R Filter, QHY5III290M camera, 3,600-mm f/4.7 lens, ISO 160, 0.7ms exposure

Planets, Comets and Asteroids

© Martin Lewis (UK) – The Grace of Venus
Winner
Home-built 444-mm Dobsonian reflecting telescope, Astronomik 807nm IR filter, Home-built Equatorial tracking platform, ZWO ASI174MM camera, 12.4-m f/28 lens, 6msec frame time, 5.3sec total exposure duration

© Martin Lewis (UK) – Parade of the Planets
Runner-up
Home-built 444-mm Dobsonian Newtonian reflector telescope (Mercury used 222-mm Dobsonian), various IR filters for Uranus, Neptune, Mercury, Saturn (L). UV filter for Venus, home-built Equatorial Platform, ZWO ASI174MC/ASI174MM/ ASI290MM camera, various focal lengths f/12 to f/36, various exposures

© Gerald Rhemann (Austria) – Comet C/2016 R2 Panstarrs the blue carbon monoxide comet
Highly commended
ASA 12-inch (300 mm) Astrograph telescope at f/3.62, ASA DDM 85 telescope mount, ASI ZWO 1600 MC colour CCD camera, exposure: RGB composite, 4.6-hours total exposure

Skyscapes

© Ferenc Szémár (Hungary) – Circumpolar
Winner
Minolta 80–200 f/2.8 telescope, tripod, Sony SLT-A99V camera, 135-mm f/2.8 lens, ISO 640, 50 x 300-second exposures

© Chuanjin Su (China) – Eclipsed Moon Trail
Runner-up
Sony ILCE-7RM2 camera, 17-mm f/4 lens, ISO 100, 950 x 15-seconds

© Ruslan Merzlyakov (Latvia) – Midnight Glow over Limfjord
Highly commended
Canon EOS 6D camera, 14-mm f/2.8 lens, ISO 400, 10-second exposure

Stars and nebulae

© Mario Cogo (Italy) – Corona Australis Dust Complex
Winner
Takahashi FSQ 106 ED telescope, Astro-Physics 1200 GTO mount, Canon EOS 6D Cooling CDS Mod camera, 530-mm f/5 lens, ISO 1600, total 6-hours exposure

© Mario Cogo (Italy) – Rigel and the Witch Head Nebula
Runner-up
Takahashi FSQ 106 ED telescope, Astro-Physics 1200 GTO mount, Canon EOS 6D Cooling CDS Mod camera, 383-mm f/3.6 lens, ISO 1600, 1, 3 and 6 min, total 5 Hours exposure

© Rolf Wahl Olsen (Denmark) – Thackeray’s Globules in Narrowband Colour
Highly commended
Homebuilt 12.5-inch f/4 Serrurier Truss Newtonian telescope, Losmandy G-11 mount, QSI 683wsg-8 camera, 1,450-mm 12.5” f/4 lens, 14 hours and 40 minute exposure

Young Astronomy Photographer of the Year

© Fabian Dalpiaz (Italy – aged 15) – Great Autumn Morning
Winner
Canon EOS 5D Mark III camera, 50-mm panorama f/2.0 lens, ISO 6400, 8-second exposure

© Logan Nicholson (Australia – aged 13) – The Eta Carinae Nebula
Runner-up
Takahashi MT-160 telescope, f/4.8 reducer for MT-160, Celestron CGEM mount, Canon EOS 700D camera, 776-mm f/4.8 lens, ISO 800, 12 x 5 minute exposures

© Thea Hutchinson (UK – aged 11) – Inverted Sun
Highly commended
Lunt LS60 telescope, Celestron CGE Pro mount, ZWO ASI174MM camera, 1250 (500-mm with x2.5 Powermate) f/21 (f/8.3 x 2.5) lens, 2000 frames best 20% retained

© Casper Kentish (UK – aged 8) – First Impressions
Highly commended
SkyWatcher Skyliner 200 p, SkyWatcher 25mm wide angle, Dobsonian mount, Apple iPad 5th generation, 3.3-mm f/2.4 lens, ISO 250, 1/17-second exposure

© Davy van der Hoeven (Netherlands – aged 10) – A Valley on the Moon…
Highly commended
Celestron C11 Schmidt Cassegrain telescope, Baader red filter, SkyWatcher NEQ6 mount, Imaging Resource DMK21 camera, 2,700-mm f/10 lens, 1/300-second exposure

Sir Patrick Moore Prize for Best Newcomer

© Tianhong Li (China) – Galaxy Curtain Call Performance
Winner
Nikon D810A camera, 35-mm f/2 lens; sky: ISO 1250, 16 x 60-second exposures, total 16 pictures; ground: ISO 640, 4 x 120-second exposures, total 4 pictures

Robotic scope

© Damian Peach (UK) – Two Comets with the Pleiades
Winner
Takahashi FSQ106 telescope at 106 mm, Paramount ME mount, SBIG STL-11000M camera, 530-mm f/5 lens, exposure: four LRGB frames, each frame 30 minutes each

from DIYPhotography.net -Hacking Photography, One Picture At A Time http://bit.ly/2PSy5kp
via IFTTT

Deep Synth combines a Game Boy and the THX sound



Do you love the THX Deep Note sound – that crazy sweep of timbres heard at the beginning of films? Do you wish you had it in a playable synth the size of a calculator? Deep Synth is for you.

First, Deep Note? Just to refresh your memory: (Turn it up!!)

Yeah, that.

Apart from being an all-time great in sound design, the Deep Note’s underlying synthesis approach was novel and interesting. And thanks to the power of new embedded processors, it’s totally possible to squeeze this onto a calculator.

Enter Eugene, Oregon-based professional developer Kernel Bob aka kbob. A low-level Linux coder by day, Bob got interested in making an audio demo for the 1Bitsy-1UP game console, a powerful modern embedded machine with the form factor of a classic Game Boy. (Unlike a Game Boy, you have a decent processor, color screen, USB, and SD card.)

The Deep Note is the mother of all audio demos. That sound is owned by THX, but the basic synthesis approach is not – think 32 voices drifting from a relatively random swarm into the seat-rocking final chord.
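If you want a hands-on feel for that swarm-to-chord idea, here is a minimal Python sketch of the general approach — it is not THX’s score or kbob’s firmware. The voice count comes from the description above; the D-based target chord, the glide curve, and every other parameter are illustrative assumptions.

```python
# Rough sketch of Deep Note-style synthesis (illustrative only, not THX's or
# kbob's code): many voices drift from a random swarm into a fixed chord.
import numpy as np
from scipy.io import wavfile

SR = 44100
DUR = 8.0                 # seconds
N_VOICES = 32             # the write-up mentions roughly 32 voices
t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)

# Assumed target chord: D spread over several octaves (36.71 Hz = D1).
targets = np.array([36.71 * 2 ** k for k in range(6) for _ in range(6)])[:N_VOICES]
rng = np.random.default_rng(0)
starts = rng.uniform(200.0, 400.0, N_VOICES)       # the random "swarm"

# Glide from the swarm to the chord over the first 60% of the clip, then hold.
glide = np.clip(t / (0.6 * DUR), 0.0, 1.0) ** 2
mix = np.zeros_like(t)
for f0, f1 in zip(starts, targets):
    freq = f0 + (f1 - f0) * glide                  # per-sample frequency
    phase = 2 * np.pi * np.cumsum(freq) / SR       # integrate frequency to phase
    voice = sum(np.sin(k * phase) / k for k in range(1, 8))  # cheap sawtooth-ish tone
    mix += voice / N_VOICES

mix /= np.max(np.abs(mix))                         # normalize
mix *= np.linspace(0.2, 1.0, t.size)               # slow crescendo
wavfile.write("deep_note_sketch.wav", SR, (mix * 0.8 * 32767).astype(np.int16))
```

Run it and you get a short WAV that starts as an unpitched cluster and converges on a chord — a crude cousin of the real thing, but enough to hear why the effect works.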

The results? Oh, only the most insane synthesizer of the year:

The behind-the-scenes discussion of how this was done is fascinating for anyone who loves synthesis, engineer or not. (Maybe you can enlighten Bob on this whole bit about the sawtooth oscillator in SuperCollider.)

Read the multi-part series on Deep Synth and sound on this handheld platform:

Deep Synth: Introduction

And to try messing about with Deep Note-style synthesis on your own in the free, multi-platform coding for musicians environment SuperCollider:

Recreating the THX Deep Note [earslap]

All of this is open hardware, open code, so if you are a coder, it might inspire your own projects. And meanwhile, as 1Bitsy-1UP matures, we may soon all have a cool handheld platform for our noisemaking endeavors. I can’t wait.

Thanks to Samantha Lüber for the tip!

Previously:

THX Just Remade the Deep Note Sound to be More Awesome

And we got to interview the sound’s creator (and talk to him about how he recreated it):

Q+A: How the THX Deep Note Creator Remade His Iconic Sound

from Create Digital Music http://bit.ly/2R9rRwQ
via IFTTT

Questions UX Designers should ask while designing



Ask and you shall know. Ask the correct questions and you shall know better!

Design is a deeply empathetic process, in which understanding the user’s troubles is the most important step. The fact of the matter is that we often end up assuming the problems users face instead of finding their actual problems, and those problems can only be uncovered by asking detailed questions about the product. To highlight the right research process and make it easier for all the designers out there, the article below by Garrett Kroll (Head of Product Design @ Spent), published on Medium, lists the pertinent questions each of us should be asking while designing.

Want to recruit a designer who is thorough with the design process? Post your requirement with YD Job Board to connect with some of the most talented designers on the planet.

Looking for an interesting job opportunity? Check out Yanko Design Job Board to find relevant job openings in the best design companies.


The ability to ask meaningful questions is a fundamental yet often overlooked skill in the UX Designer’s toolkit. I’ve begun to notice a clear correlation between the number of questions a designer asks throughout the process and the quality of the final design output.

It’s about much more than creating; it’s about understanding your problem so well that the solution is obvious.

In order to understand the challenge at hand, UX Designers must ask great questions at every stage of the process. I’ve cataloged a robust list of questions (100 to be exact) that I’ve found to be useful for projects spanning industries, devices, and personas. While by no means comprehensive, it should provide a framework for design thinking through different stages of a project.


Kickoff Meeting

In order to align the delivery team and stakeholders around the vision and project plan, the big questions need to be asked. Avoid jumping to solutions, instead focus on the underlying problems and insights that can give the team foundational knowledge to design from later.

  • What is the problem or need we are aiming to solve?
  • What does the product need to do?
  • What is the business opportunity? (e.g. acquisition, activation, retention, revenue, referral, etc.)
  • What are the Key Performance Indicators (KPIs)?
  • How else will we define success for this project?
  • How does this product fit into the overall strategy?
  • Who are the users or customers?
  • Why is this important to them?
  • Why do they care?
  • What are the users trying to do?
  • What are their pain points?
  • How can we reach users through this design process?
  • Are there any constraints (technological, business, etc.)?
  • How are we better than our competitors?
  • Are there any relevant products we can look at?
  • Who are the primary decision-makers on this project?
  • Does any relevant documentation exist (personas, user flows, etc.)?
  • Do brand guidelines exist?
  • Does a style guide exist?

Stakeholder Interviews

Further understand the business and market by speaking with individuals who have a vested interest in the organization and the project. Many of these questions can be asked during kickoffs, but if asked individually they can yield better answers.

  • What is your role in this project?
  • What is the one thing we must get right to make this project worth undertaking?
  • How will you, personally, define success for this project?
  • What is the role of this project in achieving that success?
  • What are the goals you need to achieve from this project?
  • What have you tried that has/hasn’t worked?
  • What went wrong in that case?
  • Who are the biggest competitors and what worries you about them?
  • How do you expect to differentiate this product?
  • Where do you want the product to be in the next year, 5 years?
  • What keeps you up at night with regards to your users?
  • What assumptions do you think you are making about your users?
  • What do you know for sure about your users?
  • What are the most common problems your users face?
  • What worries you about this project?


User Research

Avoid the risk and expense of creating something users don’t want by first understanding their goals and pain points. Answers to these questions can give you the all-important “why” behind user behavior. These are best supplemented with observational findings (what users say and do can be different) and analytics if they exist.

The Context

  • What does your typical weekday look like?
  • Tell me about your role at your company.
  • What are your daily responsibilities?
  • What are some of the apps and websites you use the most?

The Problem

  • How do you currently go about [problem/task]?
  • Are you looking for a solution or alternative for [problem/task]?
  • Tell me about the last time you tried to [problem/task].
  • What are you currently doing to make this [problem/task] easier?
  • Have you tried any workarounds to help you with this?
  • Have you tried any other products or tools?
  • If so, how did you hear about them?
  • What’s the most frustrating part about [problem/task]?
  • How often do you encounter/perform [problem/task]?
  • How long do you spend on [problem/task]?


User Testing

Validate your assumptions and improve the experience by watching real users interact with your prototype or product. While this is mostly about gathering qualitative feedback, there are opportunities to supplement these findings with quantitative answers (e.g. testing against success metrics).

First Impressions

  • What is your first reaction to this?
  • What is going through your mind as you look at this?
  • How does this compare to your expectations?
  • What can you do here?
  • What is this for?
  • Do you have any questions right now?
  • Why would someone use this?
  • How do you think this is going to help you?
  • What is the first thing you would do?

Task-Focused

  • If you wanted to perform [task], what would you do?
  • What would you expect to happen?
  • What parts of this were the most/least important for you?
  • How could we present the information in a more meaningful way?
  • Is there anything you would change/add/remove to make this better for you?
  • What was the hardest part about this?
  • Was there anything surprising or unexpected?
  • On a scale of 1–5, how [adjective] was this?

Summary

  • Would you use this today?
  • What might keep people from using this?
  • What is the most you would be willing to pay for this?
  • What, if anything, do you like or dislike?
  • If you had a magic wand, what would you change?
  • Does this feel like it was designed for you?
  • Is anything missing?
  • What adjectives would you use to describe this?
  • On a scale of 1–5, how likely or unlikely would you be to recommend this to a friend?
  • Since this isn’t finished, what would you like to see in the final version?


Design Reviews

Conducted with fellow designers or the larger project team, design reviews can ensure the “whys” behind design decisions align with user and business goals. Ask these questions to better understand how a designer arrived at their solution. Good design requires intentionality.

Overall

  • What part of this design are you looking for feedback on?
  • What constraints are you working within?

Interaction Design

  • What is the user trying to accomplish on this screen?
  • What problem is this solving?
  • How could this design fail?
  • How did you arrive at this solution?
  • What’s the simpler version of this?
  • Is there anything we can remove?
  • What assumptions are you making?
  • Why is that there?
  • Why is that shown at all?
  • Is that worth displaying by default?
  • Why is the screen organized this way?
  • Why is this a better solution than [established design pattern]?

Visual Design

  • What is your type hierarchy?
  • What UI patterns are you using?
  • What rules have you defined for these patterns?
  • Are there opportunities to be more consistent?
  • What are your margin and padding rules?
  • What rules have you defined for the color palette?


Stakeholder Reviews

Receive feedback from stakeholders that is clear, relevant, and helpful. They’re probably not experts in giving design feedback, so it’s your responsibility to ask questions that steer the feedback towards project goals and areas they are subject matter experts in.

  • Does this solve your users’ needs?
  • Does this effectively address [project goal(s)]?
  • Does this meet all functional requirements?
  • Does this effectively reflect the brand?
  • Why is [design request] important?

Parting Thoughts

  • Follow questions with a healthy dose of “why?” or “tell me more about that”.
  • Know what you don’t know.
  • Think of a product you love. Then think of the great questions the design team had to ask to arrive at that solution.

The original write up by Garrett Kroll (Head of Product Design @Spent) published on Medium can be found here.

YD has published the best of Industrial Design for over 15 years, so the designers you want are already on our network. YD Job Boards is our endeavor to connect recruiters with our super talented audience. To recruit now,  Post a Job with us!


 

from Yanko Design http://bit.ly/2R9wo25
via IFTTT

The future of photography is code


What’s in a camera? A lens, a shutter, a light-sensitive surface and, increasingly, a set of highly sophisticated algorithms. While the physical components are still improving bit by bit, Google, Samsung and Apple are increasingly investing in (and showcasing) improvements wrought entirely from code. Computational photography is the only real battleground now.

The reason for this shift is pretty simple: Cameras can’t get too much better than they are right now, or at least not without some rather extreme shifts in how they work. Here’s how smartphone makers hit the wall on photography, and how they were forced to jump over it.

Not enough buckets

An image sensor one might find in a digital camera

The sensors in our smartphone cameras are truly amazing things. The work that’s been done by the likes of Sony, OmniVision, Samsung and others to design and fabricate tiny yet sensitive and versatile chips is really pretty mind-blowing. For a photographer who’s watched the evolution of digital photography from the early days, the level of quality these microscopic sensors deliver is nothing short of astonishing.

But there’s no Moore’s Law for those sensors. Or rather, just as Moore’s Law is now running into quantum limits at sub-10-nanometer levels, camera sensors hit physical limits much earlier. Think about light hitting the sensor as rain falling on a bunch of buckets; you can place bigger buckets, but there are fewer of them; you can use smaller ones, but each catches less; you can make them square or stagger them or do all kinds of other tricks, but ultimately there are only so many raindrops, and no amount of bucket-rearranging can change that.

Sensors are getting better, yes, but not only is this pace too slow to keep consumers buying new phones year after year (imagine trying to sell a camera that’s 3 percent better), but phone manufacturers often use the same or similar camera stacks, so the improvements (like the recent switch to backside illumination) are shared amongst them. So no one is getting ahead on sensors alone.

Perhaps they could improve the lens? Not really. Lenses have arrived at a level of sophistication and perfection that is hard to improve on, especially at small scale. To say space is limited inside a smartphone’s camera stack is a major understatement — there’s hardly a square micron to spare. You might be able to improve them slightly as far as how much light passes through and how little distortion there is, but these are old problems that have been mostly optimized.

The only way to gather more light would be to increase the size of the lens, either by having it A: project outwards from the body; B: displace critical components within the body; or C: increase the thickness of the phone. Which of those options does Apple seem likely to find acceptable?

In retrospect it was inevitable that Apple (and Samsung, and Huawei, and others) would have to choose D: none of the above. If you can’t get more light, you just have to do more with the light you’ve got.

Isn’t all photography computational?

The broadest definition of computational photography includes just about any digital imaging at all. Unlike film, even the most basic digital camera requires computation to turn the light hitting the sensor into a usable image. And camera makers differ widely in the way they do this, producing different JPEG processing methods, RAW formats and color science.

For a long time there wasn’t much of interest on top of this basic layer, partly from a lack of processing power. Sure, there have been filters, and quick in-camera tweaks to improve contrast and color. But ultimately these just amount to automated dial-twiddling.

The first real computational photography features were arguably object identification and tracking for the purposes of autofocus. Face and eye tracking made it easier to capture people in complex lighting or poses, and object tracking made sports and action photography easier as the system adjusted its AF point to a target moving across the frame.

These were early examples of deriving metadata from the image and using it proactively, to improve that image or feeding forward to the next.

In DSLRs, autofocus accuracy and flexibility are marquee features, so this early use case made sense; but outside a few gimmicks, these “serious” cameras generally deployed computation in a fairly vanilla way. Faster image sensors meant faster sensor offloading and burst speeds, some extra cycles dedicated to color and detail preservation and so on. DSLRs weren’t being used for live video or augmented reality. And until fairly recently, the same was true of smartphone cameras, which were more like point and shoots than the all-purpose media tools we know them as today.

The limits of traditional imaging

Despite experimentation here and there and the occasional outlier, smartphone cameras are pretty much the same. They have to fit within a few millimeters of depth, which limits their optics to a few configurations. The size of the sensor is likewise limited — a DSLR might use an APS-C sensor 23 by 15 millimeters across, making an area of 345 mm2; the sensor in the iPhone XS, probably the largest and most advanced on the market right now, is 7 by 5.8 mm or so, for a total of 40.6 mm2.

Roughly speaking, it’s collecting an order of magnitude less light than a “normal” camera, but is expected to reconstruct a scene with roughly the same fidelity, colors and such — around the same number of megapixels, too. On its face this is sort of an impossible problem.

Improvements in the traditional sense help out — optical and electronic stabilization, for instance, make it possible to expose for longer without blurring, collecting more light. But these devices are still being asked to spin straw into gold.

Luckily, as I mentioned, everyone is pretty much in the same boat. Because of the fundamental limitations in play, there’s no way Apple or Samsung can reinvent the camera or come up with some crazy lens structure that puts them leagues ahead of the competition. They’ve all been given the same basic foundation.

All competition therefore comprises what these companies build on top of that foundation.

Image as stream

The key insight in computational photography is that an image coming from a digital camera’s sensor isn’t a snapshot, the way it is generally thought of. In traditional cameras the shutter opens and closes, exposing the light-sensitive medium for a fraction of a second. That’s not what digital cameras do, or at least not what they can do.

A camera’s sensor is constantly bombarded with light; rain is constantly falling on the field of buckets, to return to our metaphor, but when you’re not taking a picture, these buckets are bottomless and no one is checking their contents. But the rain is falling nevertheless.

To capture an image the camera system picks a point at which to start counting the raindrops, measuring the light that hits the sensor. Then it picks a point to stop. For the purposes of traditional photography, this enables nearly arbitrarily short shutter speeds, which isn’t much use to tiny sensors.

Why not just always be recording? Theoretically you could, but it would drain the battery and produce a lot of heat. Fortunately, in the last few years image processing chips have gotten efficient enough that they can, when the camera app is open, keep a certain duration of that stream — limited resolution captures of the last 60 frames, for instance. Sure, it costs a little battery, but it’s worth it.
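As a rough illustration of what keeping “the last 60 frames” can look like in code, here is a toy Python ring buffer. It is a sketch of the concept, not any phone maker’s pipeline; the class name, the 60-frame capacity, and the frame resolution are assumptions for the example.

```python
# Toy sketch of the "always recording" idea: keep only the most recent N
# frames in a fixed-size ring buffer so the camera can reach back in time
# the instant the shutter is pressed.
from collections import deque

import numpy as np

class FrameRingBuffer:
    """Holds the last `capacity` low-resolution frames."""

    def __init__(self, capacity: int = 60):
        self.frames = deque(maxlen=capacity)  # old frames drop off automatically

    def push(self, frame: np.ndarray) -> None:
        self.frames.append(frame)

    def snapshot(self) -> list:
        """Return the buffered frames, oldest first, for burst processing."""
        return list(self.frames)

# Simulate a sensor streaming 640x480 frames; only the last 60 survive.
buf = FrameRingBuffer(capacity=60)
for _ in range(300):
    buf.push(np.random.randint(0, 256, (480, 640), dtype=np.uint8))

recent = buf.snapshot()
print(len(recent))  # 60
```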

Access to the stream allows the camera to do all kinds of things. It adds context.

Context can mean a lot of things. It can be photographic elements like the lighting and distance to subject. But it can also be motion, objects, intention.

A simple example of context is what is commonly referred to as HDR, or high dynamic range imagery. This technique uses multiple images taken in a row with different exposures to more accurately capture areas of the image that might have been underexposed or overexposed in a single exposure. The context in this case is understanding which areas those are and how to intelligently combine the images together.

This can be accomplished with exposure bracketing, a very old photographic technique, but it can be accomplished instantly and without warning if the image stream is being manipulated to produce multiple exposure ranges all the time. That’s exactly what Google and Apple now do.
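To make that concrete, here is a minimal Python sketch of the kind of weighted merge described above. It is not Google’s or Apple’s pipeline; the hat-shaped weighting, the per-frame radiance estimate, and the tone-mapping step are simplified stand-ins chosen for illustration.

```python
# Minimal HDR-style merge: combine bracketed exposures with per-pixel weights
# that favor well-exposed pixels and discount blown-out or crushed ones.
import numpy as np

def merge_exposures(frames: list, exposure_times: list) -> np.ndarray:
    """frames: 8-bit images of the same scene shot at different exposure_times (seconds)."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for img, t in zip(frames, exposure_times):
        x = img.astype(np.float64) / 255.0
        w = 1.0 - np.abs(x - 0.5) * 2.0     # hat-shaped weight: trust mid-gray pixels most
        acc += w * (x / t)                  # estimate scene radiance for this frame
        weight_sum += w
    radiance = acc / np.maximum(weight_sum, 1e-6)
    out = radiance / (1.0 + radiance)       # simple global tone mapping
    return (out / out.max() * 255.0).astype(np.uint8)

# Usage with three synthetic brackets of the same (random) scene:
scene = np.random.rand(480, 640)
brackets = [np.clip(scene * t * 255, 0, 255).astype(np.uint8) for t in (0.5, 1.0, 2.0)]
hdr = merge_exposures(brackets, [0.5, 1.0, 2.0])
```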

Something more complex is of course the “portrait mode” and artificial background blur or bokeh that is becoming more and more common. Context here is not simply the distance of a face, but an understanding of what parts of the image constitute a particular physical object, and the exact contours of that object. This can be derived from motion in the stream, from stereo separation in multiple cameras, and from machine learning models that have been trained to identify and delineate human shapes.
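For contrast with what follows, here is a deliberately naive Python sketch of the segmentation-plus-blur shortcut — the kind of approach the article later distinguishes from Apple’s physically modeled bokeh. In a real phone the subject mask would come from stereo depth or a trained segmentation model; here a circular stand-in mask, the function name, and the blur parameters are assumptions for the example.

```python
# Naive "portrait mode": blur the background, keep the masked subject sharp.
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_bokeh(image: np.ndarray, subject_mask: np.ndarray, blur_sigma: float = 8.0) -> np.ndarray:
    """image: HxWx3 float array; subject_mask: HxW boolean mask of the subject."""
    blurred = gaussian_filter(image, sigma=(blur_sigma, blur_sigma, 0))
    # Feather the mask edge so the subject doesn't look cut out with scissors.
    soft = gaussian_filter(subject_mask.astype(np.float64), sigma=3.0)[..., None]
    return (soft * image + (1.0 - soft) * blurred).astype(image.dtype)

# Stand-in inputs: a random RGB frame and a circular "subject" mask.
h, w = 480, 640
image = np.random.rand(h, w, 3)
yy, xx = np.mgrid[0:h, 0:w]
mask = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) < 150 ** 2
composite = fake_bokeh(image, mask)
```

The single Gaussian blur here treats the whole background as one depth plane, which is exactly the kind of shortcut whose limits the next paragraphs discuss.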

These techniques are only possible, first, because the requisite imagery has been captured from the stream in the first place (an advance in image sensor and RAM speed), and second, because companies developed highly efficient algorithms to perform these calculations, trained on enormous data sets and immense amounts of computation time.

What’s important about these techniques, however, is not simply that they can be done, but that one company may do them better than the other. And this quality is entirely a function of the software engineering work and artistic oversight that goes into them.

DxOMark did a comparison of some early artificial bokeh systems; the results, however, were somewhat unsatisfying. It was less a question of which looked better, and more of whether they failed or succeeded in applying the effect. Computational photography is in such early days that it is enough for the feature to simply work to impress people. Like a dog walking on its hind legs, we are amazed that it occurs at all.

But Apple has pulled ahead with what some would say is an almost absurdly over-engineered solution to the bokeh problem. It didn’t just learn how to replicate the effect — it used the computing power it has at its disposal to create virtual physical models of the optical phenomenon that produces it. It’s like the difference between animating a bouncing ball and simulating realistic gravity and elastic material physics.

Why go to such lengths? Because Apple knows what is becoming clear to others: that it is absurd to worry about the limits of computational capability at all. There are limits to how well an optical phenomenon can be replicated if you are taking shortcuts like Gaussian blurring. There are no limits to how well it can be replicated if you simulate it at the level of the photon.

Similarly the idea of combining five, 10, or 100 images into a single HDR image seems absurd, but the truth is that in photography, more information is almost always better. If the cost of these computational acrobatics is negligible and the results measurable, why shouldn’t our devices be performing these calculations? In a few years they too will seem ordinary.

If the result is a better product, the computational power and engineering ability has been deployed with success; just as Leica or Canon might spend millions to eke fractional performance improvements out of a stable optical system like a $2,000 zoom lens, Apple and others are spending money where they can create value: not in glass, but in silicon.

Double vision

One trend that may appear to conflict with the computational photography narrative I’ve described is the advent of systems comprising multiple cameras.

This technique doesn’t add more light to the sensor — that would be prohibitively complex and expensive optically, and probably wouldn’t work anyway. But if you can free up a little space lengthwise (rather than depthwise, which we found impractical) you can put a whole separate camera right by the first that captures photos extremely similar to those taken by the first.

A mock-up of what a line of color iPhones could look like

Now, if all you want to do is re-enact Wayne’s World at an imperceptible scale (camera one, camera two… camera one, camera two…) that’s all you need. But no one actually wants to take two images simultaneously, a fraction of an inch apart.

These two cameras operate either independently (as wide-angle and zoom) or one is used to augment the other, forming a single system with multiple inputs.

The thing is that taking the data from one camera and using it to enhance the data from another is — you guessed it — extremely computationally intensive. It’s like the HDR problem of multiple exposures, except far more complex as the images aren’t taken with the same lens and sensor. It can be optimized, but that doesn’t make it easy.

So although adding a second camera is indeed a way to improve the imaging system by physical means, the possibility only exists because of the state of computational photography. And it is the quality of that computational imagery that results in a better photograph — or doesn’t. The Light camera with its 16 sensors and lenses is an example of an ambitious effort that simply didn’t produce better images, though it was using established computational photography techniques to harvest and winnow an even larger collection of images.

Light and code

The future of photography is computational, not optical. This is a massive shift in paradigm and one that every company that makes or uses cameras is currently grappling with. There will be repercussions in traditional cameras like SLRs (rapidly giving way to mirrorless systems), in phones, in embedded devices and everywhere that light is captured and turned into images.

Sometimes this means that the cameras we hear about will be much the same as last year’s, as far as megapixel counts, ISO ranges, f-numbers and so on. That’s okay. With some exceptions these have gotten as good as we can reasonably expect them to be: Glass isn’t getting any clearer, and our vision isn’t getting any more acute. The way light moves through our devices and eyeballs isn’t likely to change much.

What those devices do with that light, however, is changing at an incredible rate. This will produce features that sound ridiculous, or pseudoscience babble on stage, or drained batteries. That’s okay, too. Just as we have experimented with other parts of the camera for the last century and brought them to varying levels of perfection, we have moved onto a new, non-physical “part” which nonetheless has a very important effect on the quality and even possibility of the images we take.

from TechCrunch https://tcrn.ch/2O3pfOZ
via IFTTT

Creating the Photograph: Pauleth Ip’s “Dying of the Neon Light”





Creating the Photograph is an original series where photographers teach you about how they conceived an image, shot it, and edited it. The series has a heavy emphasis on teaching readers how to light. Want to be featured? Email chrisgampat[at]thephoblographer[dot]com.

As working photographers, oftentimes we’re essentially guns for hire, executing concepts assigned to us by art directors, companies, or private clients. We may have creative input, but ultimately, the concept still belongs to someone else. This is why I feel it’s important to pursue personal projects whenever possible between paying assignments, as they play an integral part in our growth as photographers. Personal projects allow us to exercise our own creativity and afford us opportunities to try new techniques and pursue creative visions without the burden of success. As the old adage goes, we learn more from our failures than our successes, so fail, and fail often, but fail on your own time and learn from your experiences.

This is why I started this personal project.

The Concept

For this image, which is part of an ongoing series I’ve been working on called “Dying of the Neon Light,” I drew a lot of inspiration from my childhood as well as my love for cinematography. Growing up, I spent my very early years in Hong Kong, bathed in the polychromatic rays cast by an endless sea of neon signs attached to what felt like every building in existence. Whenever someone thinks of the Pearl of the Orient, as the city is sometimes called, what often comes to mind first is this seemingly undying neon glow, which has become symbolic of the city itself. These days, however, the radiance from neon lights grows dimmer by the day across the city, facing inevitable obsolescence and gradually being replaced by more sophisticated and energy-efficient LED signage. This dying of the neon light is spreading around the world as well, with neon signage going the way of the dinosaurs and disappearing from metropolitan cities like Tokyo, New York, and Los Angeles.

I’ve always been a huge fan of films like Blade Runner and Ghost in the Shell, cinematic masterpieces that helped define the cyberpunk dystopian genre. Without question, the characteristic neon glow that was so commonplace in Hong Kong was a major influence on the overall cyberpunk dystopian aesthetic featured prominently in those films. As a photographer, I feel that lighting is the single most important tool in our arsenal when creating images, and I often draw inspiration from the works of renowned cinematographers like Robert Richardson, Dion Beebe, and Roger Deakins. Deakins, coincidentally, was the cinematographer for the Blade Runner sequel Blade Runner 2049, for which he won a Best Cinematography Oscar.

With this aesthetic in mind, I started thinking about how I could create images that would embody a sense of retro-futurism and capture the nostalgia of the last of the neon lights while they are still around. When I told my friend Chelsie, who is a freelance model, about the concept, she was on board right away.

 

The Gear

  • Sony A7RIII
  • Sony 85mm f1.4 G Master
  • Apple iPhone 7 Plus
  • Various crystals I’ve collected over the years
  • Blank CD-R

 

The Shoot

When Chelsie and I were throwing ideas around, I showed her some stills from Blade Runner and Blade Runner 2049, and luckily for me she already owned the outfit that she’s seen wearing in the image, so all we had to worry about was the location. The first place in New York City that I could think of with that characteristic look was Chinatown, so we hopped into my car and off to Chinatown we went. It had been a few years since I was last in the area, so I was shocked to see that even the neon lights in Chinatown were starting to disappear. It didn’t help that we had gotten to the area pretty late, so most of the neon signage that was left had already been turned off. I knew that to really capture that signature look given off by neon signage, I would have to rely heavily on ambient light, unlike my usual body of work, which features dramatic off-camera lighting. We were also doing this guerrilla style with just the two of us, in the middle of one of the hottest and most humid summers in New York City history, so I went back to basics and used very minimal gear.

The image was shot with my trusty A7RIII paired with the 85mm f1.4 G Master, along with my iPhone, a blank CD-R, and some glass crystals I had amassed over the years from trips to various light fixture stores. The reason I incorporated my phone, a blank CD-R, and the various crystals was that I wanted to introduce some random optical distortions into the images. I used my phone and the blank CD-R as reflective surfaces, while the crystals helped distort the light going into my camera. You never know what you’ll get when introducing optical distortions on the fly with these makeshift tools, but this randomness adds to the overall feel of the final image.

We wandered around Chinatown for a while until we finally discovered a small pocket of neon signage next to a hair salon, and just like that, we had our location. I framed up the shot in my camera, and asked Chelsie to pose next to the entrance of the hair salon so that the light from the signage would illuminate her, and tried to introduce some optical distortion into the image for added visual interest. What you see in the lower left corner of the frame is the light from neon signs above Chelsie’s head reflected back into the camera.

 

Post Production

Before

I try to get everything in camera whenever possible, so for this image, I did very minimal adjustments in post in Capture One. Since I shot this image with available light during the evening, I made some basic exposure and contrast adjustments, and raised the shadows slightly. I also enhanced the hue and luminance of the colors to bring out that signature neon glow. You can see the before and after below.

After

 

Credits

Photographer: Pauleth Ip

Model: Chelsie Brugger

 


from The Phoblographer http://bit.ly/2ScWzGx
via IFTTT

Britney Spears’ hit ‘…Baby One More Time’ turns 20 so you’re old as hell



Tuesday marks the 20th anniversary of Britney Spears’ iconic "…Baby One More Time," and while this may prompt you to cry for hours about the passage of time and what to do with your life, we recommend turning up the volume on this evergreen banger and just enjoying the sheer genius that has endured.

"..Baby One More Time" was written by Max Martin and debuted on cassette (!) Oct. 23, 1998. Spears herself was a teen at the time, a former Mouseketeer prodigy about to get insanely famous at a young age. 

The video hasn’t aged quite as well, but it contains what would become signature elements of Spears’ ’90s oeuvre: quality dance breaks, a loose narrative, and the extremely Britney "Oh baby, baby" riff throughout. It’s a potent dose of mighty nostalgia and a bop to boot.


from Mashable! http://bit.ly/2EIyPHx
via IFTTT