Gizmorama - July 26, 2017

Good Morning,

Water is helpful in so many ways, and now here's one more...scientists can produce more accurate 3D scans of complex objects, all thanks to H2O.

Learn about this and more interesting stories from the scientific community in today's issue.

Until Next Time,

P.S. Did you miss an issue? You can read every issue from the Gophercentral library of newsletters on our exhaustive archives page. Thousands of issues, all of your favorite publications in chronological order. You can read AND comment. Just click GopherArchives

*-- Water helps scientists scan 3D images of complex objects --*

Scientists have developed a new method for producing accurate 3D scans of complex objects. The method's key ingredient is water.

Most 3D scanning technologies rely exclusively on optical devices such as laser scanners and cameras. The data collected by these devices can be noisy and incomplete.

Cameras can't capture features outside their line of sight, resulting in imprecise scans. For example, 3D scanners struggle to render the shape of a miniature elephant model's rotund underside.

A team of researchers found the addition of some relatively simple arithmetic can bolster 3D scanning technologies. The scientists used fluid displacement to measure the volume of the 3D objects.

As an object is slowly dipped into a tub of water by a robotic arm, a computer model measures the changes in volume displacement, recreating thin slices of the object in 3D. By repeatedly dipping the object at various angles, the computer model can accurately recreate the object's geometry.
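The arithmetic behind the dipping step can be sketched roughly as follows. This is not the researchers' actual code, and the depth step and volume readings are hypothetical numbers: each change in displaced volume between two dip depths corresponds to one thin slice of the object, and dividing by the depth step gives that slice's average cross-sectional area.

```python
# Sketch of recovering slice cross-sections from fluid displacement.
# All numbers are hypothetical: cumulative displaced volume (cm^3)
# measured as the object is lowered in fixed depth steps (cm).
depth_step = 0.5  # cm per dip increment (assumed)
volumes = [0.0, 1.2, 2.8, 4.9, 6.1, 6.6]  # cumulative displaced volume

# The volume added between consecutive readings is one thin slice;
# dividing by the step height gives that slice's average cross-section.
slice_areas = [
    (v1 - v0) / depth_step
    for v0, v1 in zip(volumes, volumes[1:])
]
print(slice_areas)  # area (cm^2) of each slice, bottom to top
```

Repeating this at several dip angles yields cross-sections along different axes, which is what lets the computer model reconstruct geometry the cameras never see.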

Researchers used the dipping method to accurately scan several 3D models. They're scheduled to present their novel 3D-scanning method at SIGGRAPH 2017, a computer graphics conference being held later this summer in Los Angeles.

*-- Disney researchers are watching people watch movies --*

Scientists with Disney Research have developed a method for analyzing the facial expressions of movie viewers.

The new deep-learning algorithm is designed to analyze the full range of facial expressions offered by a diverse audience, but it learns by first watching a series of cues produced by a single face. Scientists dubbed the analysis technology "factorized variational autoencoders," or FVAEs.
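The "factorized" idea, separating who is reacting from what the shared reaction looks like, can be loosely illustrated with a plain low-rank matrix factorization. This is a much simpler stand-in for the FVAE model, and the "smile intensity" data below is entirely synthetic:

```python
import numpy as np

# Loose illustration of the factorization idea behind FVAEs (this is
# ordinary rank-1 SVD, NOT Disney's actual model). Synthetic data:
# rows = viewers, columns = time steps, values = a scalar "smile
# intensity" read off each face at each moment.
rng = np.random.default_rng(0)
shared_reaction = np.sin(np.linspace(0, 3 * np.pi, 50))  # audience-wide signal
viewer_gain = rng.uniform(0.5, 1.5, size=(20, 1))        # per-viewer scaling
reactions = viewer_gain * shared_reaction                 # 20 viewers x 50 steps

# A rank-1 SVD splits the matrix back into a per-viewer factor and a
# shared time-course factor, mirroring the viewer/stimulus separation.
u, s, vt = np.linalg.svd(reactions, full_matrices=False)
recovered_time_course = vt[0]            # shared reaction, up to sign/scale
recovered_viewer_factor = u[:, 0] * s[0]

# Because the data was built from one shared signal, the rank-1
# reconstruction matches the original matrix almost exactly.
reconstruction = np.outer(recovered_viewer_factor, recovered_time_course)
print(np.allclose(reconstruction, reactions))
```

The real FVAE is a deep generative model rather than an SVD, but the payoff is the same shape: a compact shared description of "the audience's reaction" plus per-viewer variations, learned without hand-labeled expressions.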

"The FVAEs were able to learn concepts such as smiling and laughing on their own," Zhiwei Deng, a doctoral student at Simon Fraser University and former Disney Research lab associate. "What's more, they were able to show how these facial expressions correlated with humorous scenes."

Researchers tested their algorithm on a few thousand audience members during viewings of several box office hits, including Big Hero 6, The Jungle Book and Star Wars: The Force Awakens. Four infrared cameras inside the theater recorded the facial expressions of the audience. The algorithm picked up 16 million facial cues.

"It's more data than a human is going to look through," said researcher Peter Carr. "That's where computers come in -- to summarize the data without losing important details."

The analysis software is designed to home in on faces exhibiting similar facial cues, which helps the algorithm develop an understanding of the stereotypical response to a film scene. This, in turn, helps it interpret the expressions of other viewers.

Researchers believe their algorithm could be used to analyze a range of subjects. For example, scientists suggest the technology could be used to analyze how different trees respond to wind.

"Once a model is learned, we can generate artificial data that looks realistic," said researcher Yisong Yue.

Scientists described their work this week at the IEEE Conference on Computer Vision and Pattern Recognition.

