home [2017/10/19 22:19] (current) · elmer
</WRAP>
----
<WRAP column 20% people>
//**Faculty**//
  * [[:people:adelson|Edward Adelson]]
  * [[:people:rosenholtz|Ruth Rosenholtz]]
//**Postdoctoral Scholars**//

//**Graduate Students**//
  * [[people:wenzhen|Wenzhen Yuan]]

  * [[:people:kimo|Kimo Johnson]]
  * [[people:Aude Oliva]]
   
//**Administration**//
  * [[people:canfield | John-Elmer Canfield]]
//**Recent Alumni**//
  * [[people:beixiao|Bei Xiao]]
  * [[people:kehinger | Krista Ehinger]]
  * [[people:Xuetao Zhang]]
  * [[people:lavanya|Lavanya Sharan]]
  * [[people:derya|Derya Akkaynak]]
  * [[people:phillip|Phillip Isola]]
  * [[people:rui | Rui Li]]
//**[[people:alumni| All alumni]]**//
</WRAP>
<WRAP column 60%>
The Perceptual Science Group of the Department of Brain and Cognitive Sciences at MIT does research in human vision, machine vision, human-computer interaction, and touch sensing for robotics. Both the Adelson Lab and the Rosenholtz Lab are part of the Computer Science and Artificial Intelligence Lab (CSAIL), located in the Stata Center.\\
\\
----
//**News**//
<WRAP box>
<WRAP box left 50%>**[[https://sites.google.com/view/rss17ts/overview|Tactile sensing for manipulation]]**\\
If robots are to perform everyday tasks in the real world, they will need sophisticated tactile sensing. The tactile data must be integrated into multi-sensory representations that support exploration, manipulation, and other tasks.\\
</WRAP>
[[https://sites.google.com/view/rss17ts/overview|{{:rss17-1.png?250}}]]
//      (workshop held July 15, 2017)//\\
</WRAP>
----
<WRAP box>
<WRAP box right 50%>**[[http://news.mit.edu/2017/gelsight-robots-sense-touch-0605|Giving robots a sense of touch]]**\\
GelSight technology lets robots gauge objects’ hardness and manipulate small tools.\\
</WRAP>
{{youtube>small:BIW_jq3dOEE}}
</WRAP>
----
<WRAP box>
<WRAP box left 50%>**[[http://news.mit.edu/2014/fingertip-sensor-gives-robot-dexterity-0919|Fingertip sensor gives robot unprecedented dexterity]]**\\
Armed with the GelSight sensor, a robot can grasp a freely hanging USB cable and plug it into a USB port.\\
</WRAP>
{{youtube>small:w1EBdbe4Nes}}
</WRAP>
----
<WRAP box>
<WRAP box right 50%>**[[http://news.mit.edu/2011/tactile-imaging-gelsight-0809|GelSight — Portable, super-high-resolution 3-D imaging]]**\\
A simple new imaging system could help manufacturers inspect their products, forensics experts identify weapons, and doctors identify cancers.\\
</WRAP>
{{youtube>small:S7gXih4XS7A}}
</WRAP>
----
<WRAP box>
<WRAP box left 50%>**[[http://news.mit.edu/2016/artificial-intelligence-produces-realistic-sounds-0613|Artificial intelligence produces realistic sounds that fool humans]]**\\
A video-trained system from MIT’s Computer Science and Artificial Intelligence Lab could help robots understand how objects interact with the world.\\
</WRAP>
{{youtube>small:0FW99AQmMc8}}
</WRAP>
----
\\
**Peripheral vision, inference, and visual awareness**: An extended abstract is now available based on Ruth Rosenholtz's invited talk at the VSS 2017 symposium "The Role of Ensemble Statistics in the Visual Periphery": [[https://arxiv.org/abs/1706.02764|What modern vision science reveals about the awareness puzzle: Summary-statistic encoding plus decision limits underlie the richness of visual perception and its quirky failures]].
**Attention and limited capacity**: Ruth Rosenholtz has a new paper on what we have learned about attention by studying peripheral vision. This leads to a new conceptualization of limited capacity in vision and of the mechanisms for dealing with it: "[[publications:attentionhvei2017|Capacity limits and how the visual system copes with them]]."

**Modelling visual crowding**: Shaiyan and Ruth's work testing a unified account of visual crowding has been accepted to the [[http://jov.arvojournals.org/article.aspx?articleid=2498972|Journal of Vision]].

**Dr. Shaiyan Keshvari graduates!** Shaiyan defended his thesis, //At the Interface of Materials and Objects in Peripheral Vision//, on July 29th, 2016.

**Dr. Phillip Isola graduates!** Phil defended his thesis, //The Discovery of Perceptual Structure from Visual Co-occurrences in Space and Time//, on August 17th, 2015. He has just started as a postdoc with Alexei (Alyosha) Efros at UC Berkeley. Check out a [[:gallery:defenseparties|photo]] of Dr. Isola's celebratory reception, complete with detective costume.

**Dr. Rui Li graduates!** Rui defended his thesis, //Touching is Believing: Sensing and Analyzing Touch Information with GelSight//, on April 30th, 2015. He is now working on a startup called [[http://virtulus.com/|Virtulus]] in Cambridge. Here is a [[:gallery:defenseparties|photo]] from the post-defense reception.

**Paper accepted to IROS 2014**: Rui and Wenzhen's work on adapting the [[http://www.gelsight.com|GelSight]] sensor for robotic touch has been accepted to IROS 2014. This work was done in collaboration with the [[http://www.ccs.neu.edu/home/rplatt/|Platt]] group at NEU, and it was covered by [[http://newsoffice.mit.edu/2014/fingertip-sensor-gives-robot-dexterity-0919|MIT News]].
**Taking a new look at subway map design**: The Rosenholtz lab's Texture Tiling Model was used to evaluate subway maps for the MBTA Map Redesign Contest. Check out the [[http://www.fastcodesign.com/3020708/evidence/the-science-of-a-great-subway-map|FastCompany Design article]], [[http://blog.visual.ly/how-do-our-brains-process-infographics-mit-mongrel-shows-peripheral-vision-at-work/|Visual.ly article]], and the [[http://www.csail.mit.edu/node/2094|CSAIL news article]]. The news was also picked up by a couple of other media sources: [[http://blogs.smithsonianmag.com/smartnews/2013/11/how-much-of-a-subway-map-can-one-persons-brain-process/|Smithsonian Magazine]] and [[http://dish.andrewsullivan.com/2013/11/07/building-a-better-subway-map/|The Dish]]. Here's an older article about our research from [[http://www.sciencedaily.com/releases/2011/02/110202215339.htm|Science Daily]].
</WRAP>
<WRAP clear></WRAP>
 