//**News... **//
\\
**Why vision works as well as it does, yet we are poor at the details**: Ruth Rosenholtz has a new paper re-examining limited capacity and visual attention in light of work on peripheral vision from the last decade. [[https://rdcu.be/b0EWf|Demystifying visual awareness: Peripheral encoding plus limited decision complexity resolve the paradox of rich visual experience and curious perceptual failures]].
**Modeling peripheral vision**: Recent experimental work appears to challenge popular "pooling" models of crowding. Ruth Rosenholtz, Dian Yu, and Shaiyan Keshvari examine the evidence. [[https://jov.arvojournals.org/article.aspx?articleid=2740058|Challenges to pooling models of crowding: Implications for visual mechanisms]].
**Modelling visual crowding**: Shaiyan Keshvari and Ruth Rosenholtz test [[http://jov.arvojournals.org/article.aspx?articleid=2498972|a unifying account of visual crowding]].
**Paper accepted to IROS 2014**: Rui and Wenzhen's work on adapting the [[http://www.gelsight.com|Gelsight]] sensor for robotic touch has been accepted to IROS 2014. This work was done in collaboration with the [[http://www.ccs.neu.edu/home/rplatt/|Platt]] group at NEU, and it was covered by [[http://newsoffice.mit.edu/2014/fingertip-sensor-gives-robot-dexterity-0919|MIT News]].
 