<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
   <channel>
      <title>Visualization Lab Papers</title>
      <link>http://vis.berkeley.edu/</link>
      <description></description>
      <language>en</language>
      <copyright>Copyright 2016</copyright>
      <lastBuildDate>Sun, 17 Apr 2016 19:08:13 -0800</lastBuildDate>
      <generator>http://www.sixapart.com/movabletype/?v=3.34</generator>
      <docs>http://blogs.law.harvard.edu/tech/rss</docs> 

            <item>
         <title>VidCrit: Video-Based Asynchronous Video Review</title>
         <description><![CDATA[<h3 class="paper-authors-large">
<a href="http://www.amy-pavel.com">Amy Pavel (UC Berkeley)</a>, <a href="http://www.danbgoldman.com/">Dan B Goldman (Google)</a>, <a href="http://www.cs.berkeley.edu/~bjoern/">Björn Hartmann (UC Berkeley)</a>, <a href="http://vis.berkeley.edu/~maneesh/">Maneesh Agrawala (Stanford)</a></h3>

<h3 class="paper-header">Abstract</h3>
<p class="paper-para">Video production is a collaborative process in which stakeholders regularly review drafts of the edited video to indicate problems and offer suggestions for improvement. Although practitioners prefer in-person feedback, most reviews are conducted asynchronously via email due to scheduling and location constraints. The use of this impoverished medium is challenging for both providers and consumers of feedback. We introduce VidCrit, a system for providing asynchronous feedback on drafts of edited video that incorporates favorable qualities of an in-person review. This system consists of two separate interfaces: (1) A feedback recording interface captures reviewers' spoken comments, mouse interactions, hand gestures and other physical reactions. (2) A feedback viewing interface transcribes and segments the recorded review into topical comments so that the video author can browse the review by either text or timelines. Our system features novel methods to automatically segment a long review session into topical text comments, and to label such comments with additional contextual information. We interviewed practitioners to inform a set of design guidelines for giving and receiving feedback, and based our system's design on these guidelines. Video reviewers using our system preferred our feedback recording interface over email for providing feedback due to the reduction in time and effort. In a fixed amount of time, reviewers provided 10.9 (σ=5.09) more local comments than when using text. All video authors rated our feedback viewing interface preferable to receiving feedback via e-mail.
</p>

<p class="paper-image">
<img src="/papers/vidcrit/feedback-viewing-interface-v3.jpg"/>
</p>

<p class="paper-caption">The VidCrit interface consists of a direct navigation pane for navigating the feedback session using the webcam and source video timelines, and a segmented comments pane for reviewing transcribed and segmented critiques. The direct navigation pane features the source and webcam videos along with a title (A), a feedback session timeline (B), a source video timeline (C) and the source video transcript (D). The segmented comments pane features sorting, filtering and search options (E), along with a list of segmented comments (F).
</p>

<h3 class="paper-header">Research Paper</h3>
<p class="paper-para"><a href="https://people.eecs.berkeley.edu/~amypavel/vidcrit-final.pdf">PDF (10M)</a></p>

<h3 class="paper-header">Video</h3>
<p class="paper-para">
<iframe width="560" height="315" src="https://www.youtube.com/embed/Pp-jDTbzi_4" frameborder="0" allowfullscreen></iframe>
</p>

<div class="line"></div>

<div class="paper-title">VidCrit: Video-Based Asynchronous Video Review</div>
]]><![CDATA[<div class="paper-authors">
<a href="http://www.amy-pavel.com">Amy Pavel</a>, <a href="http://www.danbgoldman.com/">Dan B Goldman</a>, <a href="http://www.cs.berkeley.edu/~bjoern/">Björn Hartmann</a>, <a href="http://vis.berkeley.edu/~maneesh/">Maneesh Agrawala</a></div>
<div class="paper-venue">UIST 2016, October 2016, To Appear.</div>
<div class="paper-links">
<a href="https://people.eecs.berkeley.edu/~amypavel/vidcrit-final.pdf">PDF (10M)</a> | <a href="https://youtu.be/Pp-jDTbzi_4"> YouTube</a>
</div>

]]></description>
         <link>http://vis.berkeley.edu/papers/vidcrit/</link>
         <guid>http://vis.berkeley.edu/papers/vidcrit/</guid>
         <category>papers</category>
         <pubDate>Sun, 17 Apr 2016 19:08:13 -0800</pubDate>
      </item>
            <item>
         <title>QuickCut: An Interactive Tool for Editing Narrated Video</title>
         <description><![CDATA[<h3 class="paper-authors-large">
<a href="https://research.adobe.com/person/anh-truong/">Anh Truong</a>, <a href="http://www.floraine.org/">Floraine Berthouzoz</a>, <a href="https://research.adobe.com/person/wilmot-li/">Wilmot Li</a>, <a href="http://graphics.stanford.edu/~maneesh/">Maneesh Agrawala</a></h3>

<h3 class="paper-header">Abstract</h3>
<p class="paper-para">We present QuickCut, an interactive video editing tool designed to help authors efficiently edit narrated video. QuickCut takes an audio recording of the narration voiceover and a collection of raw video footage as input. Users then review the raw footage and provide spoken annotations describing the relevant actions and objects in the scene. QuickCut time-aligns a transcript of the annotations with the raw footage and a transcript of the narration to the voiceover. These aligned transcripts enable authors to quickly match story events in the narration with semantically relevant video segments and form alignment constraints between them. Given a set of such constraints, QuickCut applies dynamic programming optimization to choose frame-level cut points between the video segments while maintaining alignments with the narration and adhering to low-level film editing guidelines. We demonstrate QuickCut's effectiveness by using it to generate a variety of short (less than 2 minutes) narrated videos. Each result required between 14 and 52 minutes of user time to edit (i.e. between 8 and 31 minutes for each minute of output video), which is far less than typical authoring times with existing video editing workflows.</p>

<p class="paper-image">
<img width="650px" src="/papers/quickcut/real-teaser2.png"/>
</p>

<p class="paper-caption"></p>

<h3 class="paper-header">Research Paper</h3>
<p class="paper-para"><a href="#">PDF</a></p>

<h3 class="paper-header">Video</h3>
<p class="paper-para">
<iframe width="560" height="315" src="https://www.youtube.com/embed/ePsyQb2y0lc" frameborder="0" allowfullscreen></iframe>
</p>

<div class="line"></div>

<div class="paper-title">QuickCut: An Interactive Tool for Editing Narrated Video</div>
]]><![CDATA[<div class="paper-authors">
<a href="https://research.adobe.com/person/anh-truong/">Anh Truong</a>, <a href="http://www.floraine.org/">Floraine Berthouzoz</a>, <a href="https://research.adobe.com/person/wilmot-li/">Wilmot Li</a>, <a href="http://graphics.stanford.edu/~maneesh/">Maneesh Agrawala</a></div>
<div class="paper-venue">UIST 2016, October 2016, To Appear.</div>
<div class="paper-links">
<a href="#">PDF</a> | <a href="https://www.youtube.com/embed/ePsyQb2y0lc"> YouTube</a>
</div>
]]></description>
         <link>http://vis.berkeley.edu/papers/quickcut/</link>
         <guid>http://vis.berkeley.edu/papers/quickcut/</guid>
         <category>papers</category>
         <pubDate>Sun, 17 Apr 2016 19:08:13 -0800</pubDate>
      </item>
            <item>
         <title>SceneSkim: Searching and Browsing Movies Using Synchronized Captions, Scripts and Plot Summaries</title>
         <description><![CDATA[<h3 class="paper-authors-large">
<a href="http://www.amy-pavel.com">Amy Pavel (UC Berkeley)</a>, <a href="http://www.danbgoldman.com/">Dan B Goldman (Adobe Research)</a>, <a href="http://www.cs.berkeley.edu/~bjoern/">Björn Hartmann (UC Berkeley)</a>, <a href="http://vis.berkeley.edu/~maneesh/">Maneesh Agrawala (Stanford)</a></h3>

<h3 class="paper-header">Abstract</h3>
<p class="paper-para">Searching for scenes in movies is a time-consuming but crucial task for film studies scholars, film professionals, and new media artists. Our formative interviews reveal that such users search for a wide variety of entities — actions, props, dialogue phrases, character performances, locations — and they return to particular scenes they have seen in the past. Today, these users find relevant clips by watching the entire movie, scrubbing the video timeline, or navigating with opaque DVD chapter menus. We introduce SceneSkim, a tool for searching and browsing movies using synchronized captions, scripts and plot summaries. Our interface integrates information from different documents to allow expressive search at several levels of granularity: Captions provide access to accurate dialogue, scripts describe shot-by-shot actions and settings, and plot summaries contain high-level event descriptions. We propose new algorithms for finding word-level caption to script alignments, parsing text scripts, and aligning plot summaries to scripts. Film studies graduate students evaluating SceneSkim expressed enthusiasm about the usability of the proposed system for their research and teaching.</p>

<p class="paper-image">
<img src="/papers/sceneskim/interface_figure_teaserv5-01.png" width="700px" />
</p>

<p class="paper-caption">The SceneSkim interface consists of a search pane for finding clips matching a query and a movie pane for browsing within movies using synchronized documents. The search pane features a keyword search bar (A), search filters (B) and a search results view (C). The movie pane includes the synchronized summary, script, captions, and movie.</p>

<h3 class="paper-header">Research Paper</h3>
<p class="paper-para"><a href="/papers/sceneskim/sceneskim.pdf">PDF (4.6M)</a></p>

<h3 class="paper-header">Video</h3>
<p class="paper-para">
<iframe width="700" height="394" src="https://www.youtube-nocookie.com/embed/umvD6TGwciE?rel=0" frameborder="0" allowfullscreen></iframe>
</p>

<div class="line"></div>

<div class="paper-title">SceneSkim: Searching and Browsing Movies Using Synchronized Captions, Scripts and Plot Summaries</div>
]]><![CDATA[<div class="paper-authors">
<a href="http://www.amy-pavel.com">Amy Pavel</a>, <a href="http://www.danbgoldman.com/">Dan B Goldman</a>, <a href="http://www.cs.berkeley.edu/~bjoern/">Björn Hartmann</a>, <a href="http://vis.berkeley.edu/~maneesh/">Maneesh Agrawala</a></div>
<div class="paper-venue">UIST 2015, November 2015, To Appear.</div>
<div class="paper-links">
<a href="/papers/sceneskim/sceneskim.pdf">PDF (4.6M)</a> | <a href="https://youtu.be/umvD6TGwciEf"> YouTube</a>
</div>
]]></description>
         <link>http://vis.berkeley.edu/papers/sceneskim/</link>
         <guid>http://vis.berkeley.edu/papers/sceneskim/</guid>
         <category>papers</category>
         <pubDate>Fri, 17 Apr 2015 19:20:13 -0800</pubDate>
      </item>
            <item>
         <title>Capture-Time Feedback for Recording Scripted Narration</title>
         <description><![CDATA[<h3 class="paper-authors-large">
<a href="http://www.ssrubin.com">Steve Rubin</a>, <a href="http://www.floraine.org">Floraine Berthouzoz</a>, <a href="https://ccrma.stanford.edu/~gautham/Site/Gautham_J._Mysore.html">Gautham J. Mysore</a>, <a href="http://vis.berkeley.edu/~maneesh/">Maneesh Agrawala</a></h3>

<h3 class="paper-header">Abstract</h3>
<p class="paper-para">Well-performed audio narrations are a hallmark of captivating podcasts, explainer videos, radio stories, and movie trailers. To record these narrations, professional voiceover actors follow guidelines that describe how to use low-level vocal components---volume, pitch, timbre, and tempo---to deliver performances that emphasize important words while maintaining variety, flow and diction. Yet, these techniques are not well-known outside the professional voiceover community, especially among hobbyist producers looking to create their own narrations.  We present Narration Coach, an interface that assists novice users in recording scripted narrations. As a user records her narration, our system synchronizes the takes to her script, provides text feedback about how well she is meeting the expert voiceover guidelines, and resynthesizes her recordings to help her hear how she can speak better.</p>

<p class="paper-image">
<img alt="narrationcoach-interface.png" src="/papers/narrationcoach/narrationcoach-interface.png" width="700" height="308" />
</p>

<p class="paper-caption">Caption text goes here.
</p>

<h3 class="paper-header">Research Paper</h3>
<p class="paper-para"><a href="/papers/narrationcoach/narrationcoach.pdf">PDF (6.9M)</a></p>

<h3 class="paper-header">Results</h3>
<p class="paper-para"><a href="/papers/narrationcoach/results">Narrations recorded using our tool in a pilot study</a>
</p>

<h3 class="paper-header">Video</h3>
<p class="paper-para"><a href="/papers/narrationcoach/narrationcoach.mp4">MP4 (63M)</a>
</p>

<iframe width="700" height="394" src="https://www.youtube-nocookie.com/embed/EdfHTTLBk0A?rel=0" frameborder="0" allowfullscreen></iframe>


<div class="line"></div>

<div class="paper-title">Capture-Time Feedback for Recording Scripted Narration</div>
]]><![CDATA[<div class="paper-authors">
<a href="http://www.ssrubin.com">Steve Rubin</a>, <a href="http://www.floraine.org">Floraine Berthouzoz</a>, <a href="https://ccrma.stanford.edu/~gautham/Site/Gautham_J._Mysore.html">Gautham J. Mysore</a>, <a href="http://vis.berkeley.edu/~maneesh/">Maneesh Agrawala</a></div>
<div class="paper-venue">UIST 2015, November 2015, To Appear.</div>
<div class="paper-links">
<a href="/papers/narrationcoach/narrationcoach.pdf">PDF (6.9M)</a> | <a href="/papers/narrationcoach/narrationcoach.mp4">MP4 (62M)</a> | <a href="https://www.youtube.com/watch?v=EdfHTTLBk0A">YouTube</a> | <a href="/papers/narrationcoach/results">Results</a></div>
]]></description>
         <link>http://vis.berkeley.edu/papers/narrationcoach/</link>
         <guid>http://vis.berkeley.edu/papers/narrationcoach/</guid>
         <category>papers</category>
         <pubDate>Fri, 17 Apr 2015 19:08:30 -0800</pubDate>
      </item>
            <item>
         <title>Structuring, Aggregating, and Evaluating Crowdsourced Design Critique</title>
         <description><![CDATA[<h3 class="paper-authors-large">
<a href="http://kurtluther.com/">Kurt Luther</a>, <a href="">Jari-lee Tolentino</a>, <a href="">Wei Wu</a>, <a href="http://www.eecs.berkeley.edu/~amypavel/">Amy Pavel</a>, <a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a>, <a href="http://cs.illinois.edu/directory/profile/bpbailey">Brian Bailey</a>, <a href="http://www.cs.berkeley.edu/~bjoern/">Bjoern Hartmann</a>, <a href="http://www.cs.cmu.edu/~spdow/">Steven Dow</a></h3>

<h3 class="paper-header">Abstract</h3>
<p class="paper-para">Feedback is an important component of the design process, but gaining access to high-quality critique outside a classroom or firm is challenging. We present CrowdCrit, a web-based system that allows designers to receive design critiques from non-expert crowd workers. We evaluated CrowdCrit in three studies focusing on the designer’s experience and benefits of the critiques. In the first study, we compared crowd and expert critiques and found evidence that aggregated crowd critique approaches expert critique. In a second study, we found that designers who got crowd feedback perceived that it improved their design process. The third study showed that designers were enthusiastic about crowd critiques and used them to change their designs. We conclude with implications for the design of crowd feedback services.</p>

<p class="paper-image">
<img src="/papers/crowdcrit/crowdcrit.png"/>
</p>

<p class="paper-caption">CrowdCrit allows designers to submit preliminary designs to be critiqued by crowds and clients. The system then aggregates and visualizes the critiques for designers.
</p>

<h3 class="paper-header">Research Paper</h3>
<p class="paper-para"><a href="/papers/crowdcrit/luther-crowdcrit-cscw2015.pdf">PDF (10.4M)</a></p>

<div class="line"></div>

<div class="paper-title">Structuring, Aggregating, and Evaluating Crowdsourced Design Critique</div>
]]><![CDATA[<div class="paper-authors">
<a href="http://kurtluther.com/">Kurt Luther</a>, <a href="">Jari-lee Tolentino</a>, <a href="">Wei Wu</a>, <a href="http://www.eecs.berkeley.edu/~amypavel/">Amy Pavel</a>, <a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a>, <a href="http://cs.illinois.edu/directory/profile/bpbailey">Brian Bailey</a>, <a href="http://www.cs.berkeley.edu/~bjoern/">Bjoern Hartmann</a>, <a href="http://www.cs.cmu.edu/~spdow/">Steven Dow</a></div>
<div class="paper-venue">CSCW 2015, March 2015, To Appear.</div>
<div class="paper-links">
<a href="/papers/crowdcrit/luther-crowdcrit-cscw2015.pdf">PDF (10.4M)</a></div>
]]></description>
         <link>http://vis.berkeley.edu/papers/crowdcrit/</link>
         <guid>http://vis.berkeley.edu/papers/crowdcrit/</guid>
         <category>papers</category>
         <pubDate>Fri, 17 Apr 2015 19:08:13 -0800</pubDate>
      </item>
            <item>
         <title>Creating Works-Like Prototypes of Mechanical Objects</title>
         <description><![CDATA[<h3 class="paper-authors-large">
<a href="http://www0.cs.ucl.ac.uk/staff/b.koo/">Bongjin Koo</a>, <a href="http://www.adobe.com/technology/people/san-francisco/wilmot-li.html">Wilmot Li</a>, <a href="">JiaXian Yao</a>, <a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a>, <a href="http://www0.cs.ucl.ac.uk/staff/n.mitra/index.html">Niloy Mitra</a></h3>

<h3 class="paper-header">Abstract</h3>
<p class="paper-para">Designers often create physical works-like prototypes early in the product development cycle to explore possible mechanical architectures for a design. Yet, creating functional prototypes requires time and expertise, which discourages rapid design iterations. Designers must carefully specify part and joint parameters to ensure that parts move and fit and together in the intended manner. We present an interactive system that streamlines the process by allowing users to annotate rough 3D models with high-level functional relationships (e.g., part A fits inside part B). Based on these relationships, our system optimizes the model geometry to produce a working design. We demonstrate the versatility of our system by using it to design a variety of works-like prototypes.</p>

<p class="paper-image">
<img src="/papers/workslike/workslikeTeaser.png"/>
</p>

<p class="paper-caption">Creating works-like prototypes. Users start by creating a rough 3D model of a design and then specifying the desired functional relationships between parts (a). Our system optimizes part and joint parameters to generate a working model (b) that can be fabricated as a physical prototype (c).
</p>

<h3 class="paper-header">Research Paper</h3>
<p class="paper-para"><a href="/papers/workslike/klyam_worksLike_sigga14.pdf">PDF (10.3M)</a></p>

<h3 class="paper-header">Video</h3>
<p class="paper-para"><a href="/papers/workslike/FinalSmall2.mov">MOV (104M)</a></p>

<iframe width="640" height="480" src="//www.youtube.com/embed/3PCnhINL42Q" frameborder="0" allowfullscreen></iframe>
<div class="line"></div>

<div class="paper-title">Creating Works-Like Prototypes of Mechanical Objects</div>
]]><![CDATA[<div class="paper-authors">
<a href="http://www0.cs.ucl.ac.uk/staff/b.koo/">Bongjin Koo</a>, <a href="http://www.adobe.com/technology/people/san-francisco/wilmot-li.html">Wilmot Li</a>, <a href="">JiaXian Yao</a>, <a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a>, <a href="http://www0.cs.ucl.ac.uk/staff/n.mitra/index.html">Niloy Mitra</a></div>
<div class="paper-venue">SIGGRAPH Asia 2014, pp. 217:1-217:9.</div>
<div class="paper-links">
<a href="/papers/workslike/klyam_worksLike_sigga14.pdf">PDF (10.3M)</a>| <a href="/papers/workslike/FinalSmall2.mov">MOV (104M)</a> | <a href="http://youtu.be/3PCnhINL42Q">YouTube</a></div>
]]></description>
         <link>http://vis.berkeley.edu/papers/workslike/</link>
         <guid>http://vis.berkeley.edu/papers/workslike/</guid>
         <category>papers</category>
         <pubDate>Thu, 17 Apr 2014 19:45:13 -0800</pubDate>
      </item>
            <item>
         <title>City Forensics: Using Visual Elements to Predict Non-Visual City Attributes</title>
         <description><![CDATA[<h3 class="paper-authors-large">
<a href="http://www.eecs.berkeley.edu/~sarietta">Sean Arietta</a>, 
<a href="http://www.eecs.berkeley.edu/~efros">Alexei A. Efros</a>, 
<a href="http://cseweb.ucsd.edu/~ravir/">Ravi Ramamoorthi</a>, 
<a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a>
</h3>

<h3 class="paper-header">Abstract</h3>
<p class="paper-para">We present a method for automatically identifying and validating predictive relationships between the visual appearance of a city and its non-visual attributes (e.g. crime statistics, housing prices, population density etc.). Given a set of street-level images and (location, city-attribute-value) pairs of measurements, we first identify visual elements in the images that are discriminative of the attribute. We then train a predictor by learning a set of weights over these elements using non-linear Support Vector Regression. To perform these operations efficiently, we implement a scalable distributed processing framework that speeds up the main computational bottleneck (extracting visual elements) by an order of magnitude. This speedup allows us to investigate a variety of city attributes across 6 different American cities. We find that indeed there is a predictive relationship between visual elements and a number of city attributes including violent crime rates, theft rates, housing prices, population density, tree presence, graffiti presence, and the perception of danger. We also test human performance for predicting theft based on street-level images and show that our predictor outperforms this baseline with 33% higher accuracy on average. Finally, we present three prototype applications that use our system to (1) define the visual boundary of city neighborhoods, (2) generate walking directions that avoid or seek out exposure to city attributes, and (3) validate user-specified visual elements for prediction.
</p>

<p class="paper-image">
<img src="/papers/cityforensics/teaser650.jpg"/>
</p>

<p class="paper-caption">The violent crime rate in San Francisco is an example of a non-visual city attribute that is likely to have a strong relationship to visual
appearance. Our method automatically computes a predictor that models this relationship, allowing us to predict violent crime rates from streetlevel
images of the city. Across the city our predictor achieves 73% accuracy compared to ground truth. (columns 1 and 2, heatmaps run from
red indicating a high violent crime rate to blue indicating a low violent crime rate). Specifically, our predictor models the relationship between
visual elements (column 3), including fire escapes on fronts of buildings, high-density apartment windows, dilapidated convenience store signs,
and unique roof style, relate to increased violent crime rates. Our predictor also identifies street-level images from San Francisco that have an
unsafe visual appearance (column 4). Detections of visual elements are outlined in color.</p>

<h3 class="paper-header">Research Paper</h3>
<p class="paper-para"><a href="/papers/cityforensics/paper.pdf">PDF (12M)</a></p>
<h3 class="paper-header">Supplemental Material</h3>
<p class="paper-para"><a href="/papers/cityforensics/supplemental.pdf">PDF (27M)</a></p>
<div class="line"></div>

<div class="paper-title">City Forensics: Using Visual Elements to Predict Non-Visual City Attributes</div>]]><![CDATA[<div class="paper-authors">
<a href="http://www.eecs.berkeley.edu/~sarietta">Sean Arietta</a>, 
<a href="http://www.eecs.berkeley.edu/~efros">Alexei A. Efros</a>, 
<a href="http://cseweb.ucsd.edu/~ravir/">Ravi Ramamoorthi</a>, 
<a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a>
</div>

<div class="paper-venue">IEEE Transactions on Visualization and Computer Graphics [TVCG 2014]. pp. 2624-2633. <b>Honorable Mention for Best Paper </b></div>
<div class="paper-links">
<a href="/papers/cityforensics/paper.pdf">PDF (12M)</a> |
<a href="/papers/cityforensics/supplemental.pdf">PDF (27M)</a>
</div>]]></description>
         <link>http://vis.berkeley.edu/papers/cityforensics/</link>
         <guid>http://vis.berkeley.edu/papers/cityforensics/</guid>
         <category>papers</category>
         <pubDate>Thu, 17 Apr 2014 19:40:13 -0800</pubDate>
      </item>
            <item>
         <title>Deconstructing and Restyling D3 Visualizations</title>
         <description><![CDATA[<h3 class="paper-authors-large">
<a href="http://eecs.berkeley.edu/~jharper/">Jonathan Harper</a>, 
<a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a>
</h3>

<h3 class="paper-header">Abstract</h3>
<p class="paper-para">
The D3 JavaScript library has become a ubiquitous tool for developing visualizations on the Web. Yet, once a D3 visualization is published online, its visual style is difficult to change. We present a pair of tools for deconstructing and restyling existing D3 visualizations. Our deconstruction tool analyzes a D3 visualization to extract the data, the marks and the mappings between them. Our restyling tool lets users modify the visual attributes of the marks as well as the mappings from the data to these attributes. Together our tools allow users to easily modify D3 visualizations without examining the underlying code, and we show how they can be used to deconstruct and restyle a variety of D3 visualizations.
</p>

<p class="paper-image">
<img src="/papers/d3decon/d3decon.png"/>
</p>

<p class="paper-caption">
The mappings from a bar chart built using D3 before (left column) and after (right column) deconstruction and restyling with our tools. Deconstructed mappings are shown to the right of the original visualization, and the mappings after restyling are shown to the right of the restyled result along with any unmapped attributes that were changed in the restyling. We use the arrow notation to indicate linear and categorical mappings. Mappings removed and added during restyling are highlighted in red and green, respectively. Changed mappings and attributes are highlighted in blue.
<br /><br />
The original bar chart shows the 20 countries with the highest unemployment rates sorted by unemployment rate along the y-axis. We restyle the chart into a dot plot. Original visualization by Leon du Toit (<a href="http://bl.ocks.org/leondutoit/6436923/">http://bl.ocks.org/leondutoit/6436923/</a>).
</p>

<h3 class="paper-header">Research Paper</h3>
<p class="paper-para"><a href="/papers/d3decon/d3decon.pdf">PDF (8.6M)</a></p>

<h3 class="paper-header">Additional Materials</h3>
<p class="paper-para"><a href="http://ucbvislab.github.io/d3-deconstructor/">D3 Deconstructor Chrome extension with source code</a></p>

<h3 class="paper-header">Preview Video</h3>
<p class="paper-para"><a href="/papers/d3decon/d3decon.mp4">MP4 (4.3M)</a></p>

<div class="line"></div>

<div class="paper-title">Deconstructing and Restyling D3 Visualizations</div>
]]><![CDATA[<div class="paper-authors">
<a href="http://eecs.berkeley.edu/~jharper/">Jonathan Harper</a>, 
<a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a>
</div>

<div class="paper-venue">UIST 2014, October 2014. pp. 253-262.</div>
<div class="paper-links">
<a href="/papers/d3decon/d3decon.pdf">PDF (8.6M)</a> |
<a href="/papers/d3decon/d3decon.mp4">MP4 (4.3M)</a>|
<a href="http://ucbvislab.github.io/d3-deconstructor/">Chrome Extension</a>
</div>
]]></description>
         <link>http://vis.berkeley.edu/papers/d3decon/</link>
         <guid>http://vis.berkeley.edu/papers/d3decon/</guid>
         <category>papers</category>
         <pubDate>Thu, 17 Apr 2014 19:38:52 -0800</pubDate>
      </item>
            <item>
         <title>Video Digests: A Browsable, Skimmable Format for Informational Lecture Videos</title>
         <description><![CDATA[<h3 class="paper-authors-large">
<a href="http://www.amy-pavel.com">Amy Pavel</a>, 
<a href="http://obphio.us/">Colorado Reed</a>,
<a href="http://www.cs.berkeley.edu/~bjoern/">Bjoern Hartmann</a>,
<a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a>
</h3>

<h3 class="paper-header">Abstract</h3>
<p class="paper-para">
Increasingly, authors are publishing long informational talks, lectures, and distance-learning videos online. However, it is difficult to browse and skim the content of such videos using current timeline-based video players. Video digests are a new format for informational videos that affords browsing and skimming by segmenting videos into a chapter/section structure and providing short text summaries and thumbnails for each section. Viewers can navigate by reading the summaries and clicking on sections to access the corresponding point in the video. We present a set of tools to help authors create such digests using transcript-based interactions. With our tools, authors can manually create a video digest from scratch, or they can automatically generate a digest by applying a combination of algorithmic and crowdsourcing techniques and then manually refine it as needed. Feedback from first-time users suggests that our transcript-based authoring tools and automated techniques greatly facilitate video digest creation. In an evaluative crowdsourced study we find that, given a short viewing time, video digests support browsing and skimming better than timeline-based or transcript-based video players.
</p>

<p class="paper-image">
<img src="/papers/videodigests/video-digest-browser.png"/>
</p>

<p class="paper-caption">
A video digest affords browsing and skimming through a chapter/section organization of the video content. The chapters are topically coherent segments of the video that contain major themes in the presentation. Each chapter is further subdivided into a set of sections that each provide a brief text summary of the corresponding video segment as well as a representative keyframe image. Clicking within a section plays the video starting at the beginning of the corresponding video segment.
</p>

<h3 class="paper-header">Research Paper</h3>
<p class="paper-para">
<a href="/papers/videodigests/videodigests.pdf">PDF (49M)</a> |
<a href="/papers/videodigests/videodigests_small.pdf">PDF (11M)</a>
</p>

<!-- <h3 class="paper-header">Video</h3>
<p class="paper-para"><a href="/papers/videodigests/videodigests.mp4">MP4 (32.1M)</a></p> -->

<h3 class="paper-header">Results</h3>
<p class="paper-para"><a href="http://vis.berkeley.edu/videodigests">Video digests generated with our system</a></p>
<div class="line"></div>

<div class="paper-title">Video Digests: A Browsable, Skimmable Format for Informational Lecture Videos</div>
]]><![CDATA[<div class="paper-authors">
<a href="http://www.amy-pavel.com">Amy Pavel</a>, 
<a href="http://obphio.us/">Colorado Reed</a>,
<a href="http://www.cs.berkeley.edu/~bjoern/">Bjoern Hartmann</a>,
<a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a>
</div>

<div class="paper-venue">UIST 2014, October 2014. pp. 573-582.</div>
<div class="paper-links">
<a href="/papers/videodigests/videodigests.pdf">PDF (49M)</a> |
<a href="/papers/videodigests/videodigests_small.pdf">PDF (11M)</a> |
<!-- <a href="/papers/videodigests/videodigests.mp4">MP4 (30.6M)</a> | -->
<a href="http://vis.berkeley.edu/videodigests">Results</a>
</div>
]]></description>
         <link>http://vis.berkeley.edu/papers/videodigests/</link>
         <guid>http://vis.berkeley.edu/papers/videodigests/</guid>
         <category>papers</category>
         <pubDate>Thu, 17 Apr 2014 19:31:30 -0800</pubDate>
      </item>
            <item>
         <title>Generating Emotionally Relevant Musical Scores for Audio Stories</title>
         <description><![CDATA[<h3 class="paper-authors-large">
<a href="http://ssrubin.com">Steve Rubin</a>, 
<a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a>
</h3>

<h3 class="paper-header">Abstract</h3>
<p class="paper-para">
Highly-produced audio stories often include musical scores that reflect the emotions of the speech. Yet, creating effective musical scores requires deep expertise in sound production and is time-consuming even for experts. We present a system and algorithm for re-sequencing music tracks to generate emotionally relevant music scores for audio stories. The user provides a speech track and music tracks and our system gathers emotion labels on the speech through hand-labeling, crowdsourcing, and automatic methods. We develop a constraint-based dynamic programming algorithm that uses these emotion labels to generate emotionally relevant musical scores. We demonstrate the effectiveness of our algorithm by generating 20 musical scores for audio stories and showing that crowd workers rank their overall quality significantly higher than stories without music.
</p>

<p class="paper-image">
<img src="/papers/emotionscores/emotionscores.gif"/>
</p>

<p class="paper-caption">
Our algorithm re-sequences the beats (circles) of the input music (bottom row) to match the emotions of the speech (top row). Our algorithm inserts pauses in speech and music, and makes musical transitions that were not in the original music in order to meet these constraints.
</p>

<h3 class="paper-header">Research Paper</h3>
<p class="paper-para"><a href="/papers/emotionscores/emotionscores.pdf">PDF (3.1M)</a></p>

<h3 class="paper-header">Results</h3>
<p class="paper-para"><a href="/papers/emotionscores/results">Musical scores generated by our system</a></p>

<h3 class="paper-header">Video</h3>
<p class="paper-para"><a href="/papers/emotionscores/emotionscores.mp4">MP4 (31.9M)</a></p>

<iframe width="752" height="423" src="//www.youtube.com/embed/hrHprLYDkN4?rel=0" frameborder="0" allowfullscreen></iframe>

<div class="line"></div>

<div class="paper-title">Generating Emotionally Relevant Musical Scores for Audio Stories</div>
]]><![CDATA[<div class="paper-authors">
<a href="http://ssrubin.com">Steve Rubin</a>, 
<a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a>
</div>

<div class="paper-venue">UIST 2014, October 2014. pp. 439-448.</div>
<div class="paper-links">
<a href="/papers/emotionscores/emotionscores.pdf">PDF (3.1M)</a> |
<a href="/papers/emotionscores/emotionscores.mp4">MP4 (31.9M)</a> |
<a href="//www.youtube.com/watch?v=hrHprLYDkN4">YouTube</a> |
<a href="/papers/emotionscores/results">Results</a>
</div>
]]></description>
         <link>http://vis.berkeley.edu/papers/emotionscores/</link>
         <guid>http://vis.berkeley.edu/papers/emotionscores/</guid>
         <category>papers</category>
         <pubDate>Thu, 17 Apr 2014 19:30:13 -0800</pubDate>
      </item>
            <item>
         <title>Vectorising Bitmaps into Semi-Transparent Gradient Layers</title>
         <description><![CDATA[<h3 class="paper-authors-large">
<a href="http://richardt.name/">Christian Richardt</a>, <a href="http://www.jorg3.com/">Jorge Lopez-Moreno</a>, <a href="http://www-sop.inria.fr/members/Adrien.Bousseau/">Adrien Bousseau</a>, <a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a>, <a href="http://www-sop.inria.fr/reves/George.Drettakis">George Drettakis</a></h3>

<h3 class="paper-header">Abstract</h3>
<p class="paper-para">We present an interactive approach for decompositing bitmap drawings and studio photographs into opaque and semi-transparent vector layers. Semi-transparent layers are especially challenging to extract, since they require the inversion of the non-linear compositing equation. We make this problem tractable by exploiting the parametric nature of vector gradients, jointly separating and vectorising semi-transparent regions. Specifically, we constrain the foreground colours to vary according to linear or radial parametric gradients, restricting the number of unknowns and allowing our system to efficiently solve for an editable semi-transparent foreground. We propose a progressive workflow, where the user successively selects a semi-transparent or opaque region in the bitmap, which our algorithm separates as a foreground vector gradient and a background bitmap layer. The user can choose to decompose the background further or vectorise it as an opaque layer. The resulting layered vector representation allows a variety of edits, such as modifying the shape of highlights, adding texture to an object or changing its diffuse colour.</p>

<p class="paper-image">
<img src="/papers/layeredVectors/layeredTease.jpg"/>
</p>

<p class="paper-caption">Our interactive vectorisation technique lets users vectorise an input bitmap (a) into a stack of opaque and semi-transparent vector layers composed of linear or radial colour gradients (b). Users can manipulate the resulting layers using standard tools to quickly produce new looks (c). We outline semi-transparent layers for visualisation; these edges are not part of our result. We rasterised figures to avoid problems with transparency in some PDF viewers. See supplemental material for vector graphics.</p>

<h3 class="paper-header">Research Paper</h3>
<p class="paper-para"><a href="/papers/layeredVectors/LayeredImageVectorisation-paper.pdf">PDF (6.6M)</a></p>
<h3 class="paper-header">Video</h3>
<p class="paper-para"><a href="/papers/layeredVectors/LayeredImageVectorisation-video.mp4">MP4 (69.4M)</a></p>
<div style="position: relative; padding-bottom: 55%; padding-top: 15px; height: 0;"><iframe width="640" height="360" frameborder="0" src="http://player.vimeo.com/video/96188947?title=0&amp;byline=0&amp;portrait=0&amp;color=ffffff"></iframe></div>
<div class="line"></div>

<div class="paper-title">Vectorising Bitmaps into Semi-Transparent Gradient Layers</div>
]]><![CDATA[<div class="paper-authors">
<a href="http://richardt.name/">Christian Richardt</a>, <a href="http://www.jorg3.com/">Jorge Lopez-Moreno</a>, <a href="http://www-sop.inria.fr/members/Adrien.Bousseau/">Adrien Bousseau</a>, <a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a>, <a href="http://www-sop.inria.fr/reves/George.Drettakis">George Drettakis</a></div>
<div class="paper-venue">Computer Graphics Forum 33(4) [EGSR 2014]. pp. 11-19.</div>
<div class="paper-links">
<a href="/papers/layeredVectors/LayeredImageVectorisation-paper.pdf">PDF (6.6M)</a> | <a href="/papers/layeredVectors/LayeredImageVectorisation-video.mp4">MP4 (69.4M)</a> | <a href="http://vimeo.com/96188947">Vimeo</a></div>
]]></description>
         <link>http://vis.berkeley.edu/papers/layeredVectors/</link>
         <guid>http://vis.berkeley.edu/papers/layeredVectors/</guid>
         <category>papers</category>
         <pubDate>Thu, 17 Apr 2014 19:20:13 -0800</pubDate>
      </item>
            <item>
         <title>User-Assisted Video Stabilization</title>
         <description><![CDATA[<h3 class="paper-authors-large">
<a href="http://www.eecs.berkeley.edu/~bjiamin/">Jiamin Bai</a>, <a href="http://www.agarwala.org/">Aseem Agarwala</a>, <a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a>, <a href="http://www.cs.berkeley.edu/~ravir/">Ravi Ramamoorthi</a></h3>

<h3 class="paper-header">Abstract</h3>
<p class="paper-para">We present a user-assisted video stabilization algorithm that is able to stabilize challenging videos when state-of-the-art automatic algorithms fail to generate a satisfactory result. Current methods do not give the user any control over the look of the final result. Users either have to accept the stabilized result as is, or discard it should the stabilization fail to generate a smooth output. Our system introduces two new modes of interaction that allow the user to improve the unsatisfactory stabilized video. First, we cluster tracks and visualize them on the warped video. The user ensures that appropriate tracks are selected by clicking on track clusters to include or exclude them. Second, the user can directly specify how regions in the output video should look by drawing quadrilaterals to select and deform parts of the frame. These user-provided deformations reduce undesirable distortions in the video. Our algorithm then computes a stabilized video using the user-selected tracks, while respecting the user-modified regions. The process of interactively removing user-identified artifacts can sometimes introduce new ones, though in most cases there is a net improvement. We demonstrate the effectiveness of our system with a variety of challenging hand held videos.</p>

<p class="paper-image">
<img src="/papers/userStabilization/egsrTease.jpg"/>
</p>

<p class="paper-caption">Automatic video stabilization using the state-of-the-art is unsatisfactory as shown in a) as the background and subjects are heavily skewed. We visualize clusters of tracks used for stabilization c) and the user removes tracks on dynamic objects d) using mouse clicks. Tracks that are not used for the final rewarp are drawn in grey. The green outline in e) and f) shows the original frame boundaries. The distortion of the frame in e) is removed by having the user draw a quadrilateral (white lines) and its desired transformation shown in f). The new track selection and user-drawn transformations are used to re-stabilize the video to obtain the final result as shown in b). Notice that the background is rectified and that the subjects are no longer distorted.</p>

<h3 class="paper-header">Research Paper</h3>
<p class="paper-para"><a href="/papers/userStabilization/egsr.pdf">PDF (47.7M)</a></p>
<h3 class="paper-header">Video</h3>
<p class="paper-para"><a href="/papers/userStabilization/egsr_video.mp4">MP4 (83.1M)</a></p>

<div class="line"></div>

<div class="paper-title">User-Assisted Video Stabilization</div>
]]><![CDATA[<div class="paper-authors">
<a href="http://www.eecs.berkeley.edu/~bjiamin/">Jiamin Bai</a>, <a href="http://www.agarwala.org/">Aseem Agarwala</a>, <a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a>, <a href="http://www.cs.berkeley.edu/~ravir/">Ravi Ramamoorthi</a></div>
<div class="paper-venue">Computer Graphics Forum 33(4) [EGSR 2014]. pp. 61-70.</div>
<div class="paper-links">
<a href="/papers/userStabilization/egsr.pdf">PDF (47.7M)</a> | <a href="/papers/userStabilization/egsr_video.mp4">MP4 (83.1M)</a></div>
]]></description>
         <link>http://vis.berkeley.edu/papers/userStabilization/</link>
         <guid>http://vis.berkeley.edu/papers/userStabilization/</guid>
         <category>papers</category>
         <pubDate>Thu, 17 Apr 2014 19:18:13 -0800</pubDate>
      </item>
            <item>
         <title>Extracting References Between Text and Charts via Crowdsourcing</title>
         <description><![CDATA[<h3 class="paper-authors-large">
<a href="http://www.eecs.berkeley.edu/~nkong/">Nicholas Kong</a>, <a href="http://people.ischool.berkeley.edu/~hearst/">Marti A. Hearst</a>, <a href="http://vis.berkeley.edu/~maneesh/">Maneesh Agrawala</a></h3>

<h3 class="paper-header">Abstract</h3>
<p class="paper-para">News articles, reports, blog posts and academic papers often include
graphical charts that serve to visually reinforce arguments presented
in the text. To help readers better understand the relation between the text and
the chart, we present a crowdsourcing pipeline to extract the
references between them. Specifically, we give crowd workers
paragraph-chart pairs and ask them to select text phrases as well as
the corresponding visual marks in the chart.
We then apply automated clustering and merging techniques to unify the 
references generated by multiple workers into a single set.
Comparing the crowdsourced references to a set of gold standard
references using a distance measure based on the F1 score, we find
that the average distance between the raw set of references produced
by a single worker and the gold standard is 0.54 (out of a
max of 1.0). When we apply clustering and merging techniques, the
average distance between the unified set of references and the gold
standard reduces to 0.39, an improvement of 27%.
We conclude with an interactive document viewing application that uses the
extracted references; readers can select phrases in the
text and the system highlights the related marks in the chart.</p>

<p class="paper-image">
<img src="/papers/textref/application_results_descriptor-02.png"/>
</p>

<p class="paper-caption">Examples of references extracted by our system, as shown in our interactive document viewing application. The user can select text (yellow background) and the application highlights the corresponding visual marks in the chart (fully saturated bars). The application also places red underlines beneath related phrases.</p>

<h3 class="paper-header">Research Paper</h3>
<p class="paper-para"><a href="/papers/textref/textref-paper.pdf">PDF (16.6M)</a> | <a href="/papers/textref/supplemental">Interactive Document Viewer</a></p>

<h3 class="paper-header">Video</h3>
<p class="paper-para"><a href="/papers/textref/textref_preview.mp4">MP4 (4.0M)</a></p>
<object width="425" height="344"><param name="movie" value="http://www.youtube.com/v/BWwsUkqPEUo&hl=en_US&fs=1&rel=0"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/BWwsUkqPEUo&hl=en_US&fs=1&rel=0" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object>

<div class="line"></div>

<div class="paper-title">Extracting References Between Text and Charts via Crowdsourcing</div>
]]><![CDATA[<div class="paper-authors">
<a href="http://www.eecs.berkeley.edu/~nkong/">Nicholas Kong</a>, <a href="http://people.ischool.berkeley.edu/~hearst/">Marti A. Hearst</a>, <a href="http://vis.berkeley.edu/~maneesh/">Maneesh Agrawala</a></div>
<div class="paper-venue">ACM Human Factors in Computing Systems (CHI), 2014, pp. 31-40.</div>
<div class="paper-links">
<a href="/papers/textref/textref-paper.pdf">PDF (16.6M)</a> | <a href="/papers/textref/textref_preview.mp4">MP4 (4.0M)</a> | <a href="/papers/textref/supplemental">Interactive Document Viewer</a></div>
]]></description>
         <link>http://vis.berkeley.edu/papers/textref/</link>
         <guid>http://vis.berkeley.edu/papers/textref/</guid>
         <category>papers</category>
         <pubDate>Thu, 17 Apr 2014 19:08:13 -0800</pubDate>
      </item>
            <item>
         <title>MotionMontage: A System to Annotate and Composite Motion Takes for 3D Animations</title>
         <description><![CDATA[<h3 class="paper-authors-large">
<a href="">Ankit Gupta</a>, <a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a>, <a href="http://homes.cs.washington.edu/~curless/">Brian Curless</a>, <a href="http://research.microsoft.com/en-us/um/people/cohen/">Michael Cohen</a></h3>

<h3 class="paper-header">Abstract</h3>
<p class="paper-para">We present MotionMontage, a system for recording multiple motion takes of a rigid virtual object and compositing them together into a montage. Our system incorporates a Kinect-based performance capture setup that allows animators to create 3D animations by tracking the motion of a rigid physical object and mapping it in realtime onto a virtual object. The animator then temporally annotates the best parts of each take. MotionMontage merges the annotated motions into a single composite montage using a combination of dynamic time warping and optimization of a Semi-Markov Conditional Random Field. Our system also supports the creation of layered animations in which multiple objects are moving at the same time. To aid the animator in coordinating the motions of the objects we provide spatial markers which indicate the positions of previously recorded objects at user-specified points in time. We perform a user study to evaluate the perceived quality of the montages created with our system and find that viewers (including both the original animators and new viewers) generally prefer the animation montage to any individual take.
</p>

<p class="paper-image">
<img src="/papers/motionMontage/MM.png"/>
</p>

<p class="paper-caption">A user working with the MotionMontage system to record multiple takes of a 3D animation.
</p>

<h3 class="paper-header">Research Paper</h3>
<p class="paper-para"><a href="/papers/motionMontage/GuptaMotionMontage.pdf">PDF (5.3M)</a></p>

<h3 class="paper-header">Video</h3>
<p class="paper-para"><a href="/papers/motionMontage/MotionMontageOverview.mp4">MP4 (83.4M)</a></p>

<iframe width="640" height="360" src="//www.youtube.com/embed/i_0VrORFy4Y" frameborder="0" allowfullscreen></iframe>

<h3 class="paper-header">Other Videos</h3>
<p class="paper-para"><a href="/papers/motionMontage/MotionMontageTeaser.mp4">Teaser Video MP4 (4.2M)</a></p>
<p class="paper-para"><a href="/papers/motionMontage/MotionMontageComparisonMotionGraphs.mp4">Motion Graphs Comparison MP4 (8.3M)</a></p>

<div class="line"></div>

<div class="paper-title">MotionMontage: A System to Annotate and Composite Motion Takes for 3D Animations</div>
]]><![CDATA[<div class="paper-authors">
<a href="">Ankit Gupta</a>, <a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a>, <a href="http://homes.cs.washington.edu/~curless/">Brian Curless</a>, <a href="http://research.microsoft.com/en-us/um/people/cohen/">Michael Cohen</a></div>
<div class="paper-venue">ACM Human Factors in Computing Systems (CHI), 2014, pp. 2017-2026.</div>
<div class="paper-links">
<a href="/papers/motionMontage/GuptaMotionMontage.pdf">PDF (5.3M)</a> | <a href="/papers/motionMontage/MotionMontageOverview.mp4">MP4 (83.4M)</a> | <a href="http://youtu.be/i_0VrORFy4Y">YouTube</a> |<a 
href="/papers/motionMontage/MotionMontageTeaser.mp4">Teaser MP4 (4.2M)</a> | <a href="/papers/motionMontage/MotionMontageComparisonMotionGraphs.mp4">Motion Graphs Comparison MP4 (8.3M)</a></div>
]]></description>
         <link>http://vis.berkeley.edu/papers/motionMontage/</link>
         <guid>http://vis.berkeley.edu/papers/motionMontage/</guid>
         <category>papers</category>
         <pubDate>Thu, 17 Apr 2014 19:08:11 -0800</pubDate>
      </item>
            <item>
         <title>History Assisted View Authoring for 3D Models</title>
         <description><![CDATA[<h3 class="paper-authors-large">
<a href="http://ht-timchen.org/">Hsiang-Ting (Tim) Chen</a>, <a href="http://www.tovigrossman.com/">Tovi Grossman</a>, <a href="http://www.liyiwei.org/">Li-Yi Wei</a>, <a href="http://www.dgp.toronto.edu/~rms/">Ryan Schmidt</a>, <a href="http://www.cs.berkeley.edu/~bjoern/">Björn Hartmann</a>, <a href="http://www.autodeskresearch.com/people/george">George Fitzmaurice</a>, <a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a></h3>

<h3 class="paper-header">Abstract</h3>
<p class="paper-para">3D modelers often wish to showcase their models for sharing or review purposes. This may consist of generating static viewpoints of the model or authoring animated fly-throughs. Manually creating such views is often tedious and few automatic methods are designed to interactively assist the modelers with the view authoring process. We present a view authoring assistance system that supports the creation of informative view points, view paths, and view surfaces, allowing modelers to author the interactive navigation experience of a model. The key concept of our implementation is to analyze the model's workflow history, to infer important regions of the model and representative viewpoints of those areas. An evaluation indicated that the viewpoints generated by our algorithm are comparable to those manually selected by the modeler. In addition, participants of a user study found our system easy to use and effective for authoring viewpoint summaries.</p>

<p class="paper-image">
<img src="/papers/historyAssist/historyAssitedFig1.png"/>
</p>

<p class="paper-caption">Our view authoring environment integrated in a 3D modeling software: (A) the main modeling  window, (B) the authoring panel visualizing authored views in  the spatial context, and (C) the navigation panel showing the authored views in the temporal sequence. 
</p>

<h3 class="paper-header">Research Paper</h3>
<p class="paper-para"><a href="/papers/historyAssist/historyassisted3D.pdf">PDF (2.3M)</a></p>
<h3 class="paper-header">Video</h3>
<p class="paper-para"><a href="/papers/historyAssist/historyassisted3D.mov">MOV (47.9M)</a></p>

<div class="line"></div>

<div class="paper-title">History Assisted View Authoring for 3D Models</div>
]]><![CDATA[<div class="paper-authors">
<a href="http://ht-timchen.org/">Hsiang-Ting (Tim) Chen</a>, <a href="http://www.tovigrossman.com/">Tovi Grossman</a>, <a href="http://www.liyiwei.org/">Li-Yi Wei</a>, <a href="http://www.dgp.toronto.edu/~rms/">Ryan Schmidt</a>, <a href="http://www.cs.berkeley.edu/~bjoern/">Björn Hartmann</a>, <a href="http://www.autodeskresearch.com/people/george">George Fitzmaurice</a>, <a href="http://vis.berkeley.edu/~maneesh">Maneesh Agrawala</a></div>
<div class="paper-venue">ACM Human Factors in Computing Systems (CHI), 2014, pp. 2027-2036.</div>
<div class="paper-links">
<a href="/papers/historyAssist/historyassisted3D.pdf">PDF (2.3M)</a> | <a href="/papers/historyAssist/historyassisted3D.mov">MOV (47.9M)</a></div>
]]></description>
         <link>http://vis.berkeley.edu/papers/historyAssist/</link>
         <guid>http://vis.berkeley.edu/papers/historyAssist/</guid>
         <category>papers</category>
         <pubDate>Thu, 17 Apr 2014 19:00:13 -0800</pubDate>
      </item>
      
   </channel>
</rss>
