Author: Elmer Cat
-
Benchmarking Deep Network and Human Performance in the Periphery
—
Anne Harrington, Vasha DuTell, and Ruth Rosenholtz, with collaborators in Bill Freeman’s lab and Toyota Research, have a new paper on a change-detection paradigm for comparing human and deep neural network object detection.
Paper: Object Detection in Deep Neural Networks Differs from Humans in the Periphery
-
Training Object Detection Deep Neural Networks on Mongrel Images
—
Anne Harrington, Vasha DuTell, and Ruth Rosenholtz, with collaborators in Bill Freeman’s lab and Toyota Research, have a new paper on using mongrels to evaluate and train deep neural networks.
Paper: Evaluating Peripheral Vision as an Input Transformation to Understand Object Detection Model Behavior
-
Learning Statistics for Peripheral and Texture Models
—
Christian Kovesdi, Vasha DuTell, Anne Harrington, and Ruth Rosenholtz, with collaborators in Bill Freeman’s lab, demonstrate a method for learning model statistics.
Paper: StatTexNet: Evaluating the Importance of Statistical Parameters for Pyramid-Based Texture and Peripheral Vision Models
-
Interpreting Deep Neural Networks Using Temporal Stability Measures
—
Anne Harrington, Vasha DuTell, and Ruth Rosenholtz, with collaborators in Bill Freeman’s lab and the Toyota Research Institute, have a new ICLR paper on temporal stability in neural networks.
Paper: Exploring perceptual straightness in learned visual representations
-
Creating Mongrel Images with GPU Acceleration
—
Vasha DuTell and Ruth Rosenholtz, with collaborators at NVIDIA, report on a new method for creating mongrel images.
Paper: Efficient dataflow modeling of peripheral encoding in the human visual system
-
Why vision works as well as it does, yet we are poor at the details
—
Ruth Rosenholtz has a new paper re-examining limited capacity and visual attention in light of the last decade’s work on peripheral vision.
Paper: Demystifying visual awareness: Peripheral encoding plus limited decision complexity resolve the paradox of rich visual experience and curious perceptual failures
-
Modeling peripheral vision
—
Recent experimental work appears to challenge popular “pooling” models of crowding. Ruth Rosenholtz, Dian Yu, and Shaiyan Keshvari examine the evidence.
Paper: Challenges to pooling models of crowding: Implications for visual mechanisms
-
Modeling visual crowding
—
Shaiyan Keshvari and Ruth Rosenholtz test a unifying account of visual crowding.
-
Paper accepted to IROS 2014
—
Rui and Wenzhen’s work on adapting the GelSight sensor for robotic touch has been accepted to IROS 2014. This work was done in collaboration with the Platt group at NEU and was covered by MIT News.
-
Taking a new look at subway map design
—
The Rosenholtz lab’s Texture Tiling Model was used to evaluate subway maps for the MBTA Map Redesign Contest. Check out the FastCompany Design article and the CSAIL news article. The story was also picked up by other outlets, including Smithsonian Magazine and The Dish. Here’s an older Science Daily article about our research.
-
Tactile sensing for manipulation
—
If robots are to perform everyday tasks in the real world, they will need sophisticated tactile sensing. Tactile data must be integrated into multi-sensory representations that support exploration, manipulation, and other tasks.
Workshop: Tactile sensing for manipulation (held July 15, 2017)
-
Giving robots a sense of touch
—
GelSight technology lets robots gauge objects’ hardness and manipulate small tools.
-
Fingertip sensor gives robot unprecedented dexterity
—
Armed with the GelSight sensor, a robot can grasp a freely hanging USB cable and plug it into a USB port.
-
GelSight — Portable, super-high-resolution 3-D imaging
—
A simple new imaging system could help manufacturers inspect their products, forensics experts identify weapons, and doctors identify cancers.
-
Artificial intelligence produces realistic sounds that fool humans
—