Google
Principal Scientist and Director
Snap Inc. Feb 2017 - Jan 2019
Director of Research
Adobe Oct 2005 - Feb 2017
Vice President and Fellow
Microsoft Jan 1999 - Oct 2005
Senior Researcher
Cornell University Aug 1991 - Aug 1992
Visiting Assistant Professor
Education:
Stanford University 1986 - 1991
Doctor of Philosophy (PhD), Computer Science
Brown University 1979 - 1983
Skills:
Algorithms, Software Engineering, Computer Science, C++, Software Development, User Experience, Software Design, C Programming
US Patents
Video-Based Rendering With User-Controlled Movement
Richard S. Szeliski - Bellevue WA David Salesin - Seattle WA Arno Schödl - Berlin, DE
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G06T 15/70
US Classification:
345/473, 345/474, 345/422, 345/421, 345/723, 348/700
Abstract:
A system and process for generating a video animation from the frames of a video sprite with user-controlled motion is presented. An object is extracted from the frames of an input video and processed to generate a new video sequence or video sprite of that object. In addition, the translation velocity of the object for each frame is computed and associated with each frame in the newly generated video sprite. The user specifies a desired path for the object featured in the video sprite to follow in the video animation. Frames of the video sprite showing the object of interest are selected and inserted in a background image, or frame of a background video, along the prescribed path. The video sprite frames are selected by comparing a last-selected frame to the other video sprite frames, and selecting a video sprite frame that is identified in the comparison as corresponding to an acceptable transition from the last-selected frame. Each newly selected video sprite frame is inserted at a point along the prescribed path dictated by the velocity associated with the object in the last-inserted frame.
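The selection-and-placement loop in the abstract above can be sketched as a greedy procedure: pick the next sprite frame by comparing candidates to the frame that would naturally follow the last-selected one, then advance the object's position by the last frame's velocity. This is a minimal illustration, not the patent's method; the function names, the grayscale NumPy frames, and the per-frame 2D velocities are all assumptions.

```python
import numpy as np

def synthesize_path_animation(frames, velocities, n_out):
    """Greedy sketch: from the last-selected sprite frame, pick the
    frame most similar to its natural successor (an 'acceptable
    transition'), then advance the object's position along the path
    by the last frame's velocity."""
    flat = frames.reshape(len(frames), -1).astype(float)
    order = [0]                      # start from the first sprite frame
    pos = np.zeros(2)
    positions = []
    for _ in range(n_out - 1):
        last = order[-1]
        nxt = (last + 1) % len(frames)
        # Distance of every frame from the natural successor of `last`.
        d = ((flat - flat[nxt]) ** 2).sum(-1)
        d[last] = np.inf             # don't freeze on the same frame
        order.append(int(np.argmin(d)))
        pos = pos + velocities[last]
        positions.append(pos.copy())
    return order, positions
```

With a short clip whose frames are all distinct, the greedy rule simply plays the clip forward while the object tracks the accumulated velocities.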
System And Process For Generating 3D Video Textures Using Video-Based Rendering Techniques
Abstract:
A system and process for generating a 3D video animation of an object, referred to as a 3D Video Texture, is presented. The 3D Video Texture is constructed by first simultaneously videotaping an object from two or more cameras positioned at different locations. Video from one of the cameras is used to extract, analyze, and synthesize a video sprite of the object of interest. In addition, the first contemporaneous frames captured by at least two of the cameras are used to estimate a 3D depth map of the scene. The background of the scene contained within the depth map is then masked out using a clear shot of the scene background taken before filming of the object began, leaving just the object. To generate each new frame in the 3D video animation, the extracted region making up a "frame" of the video sprite is mapped onto the previously generated 3D surface. The resulting image is rendered from a novel viewpoint and then combined with a flat image of the background that has been warped to the correct location.
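The background-masking step described above can be illustrated with a simple rule: keep pixels that are both closer to the camera than a depth cutoff and different from a clean background plate shot before filming. A minimal sketch, assuming grayscale NumPy images and illustrative thresholds; none of these names come from the patent.

```python
import numpy as np

def extract_object(frame, background, depth, depth_thresh, diff_thresh=30):
    """Hypothetical sketch: a pixel belongs to the object when its
    estimated depth is below a cutoff AND it differs noticeably from
    the clean background plate; everything else is masked out."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    mask = (depth < depth_thresh) & (diff > diff_thresh)
    out = np.zeros_like(frame)
    out[mask] = frame[mask]          # object pixels survive; rest are zeroed
    return out, mask
```

Combining the depth test with the background-difference test makes the matte robust to background pixels that happen to be near the object in depth, or to shadows that change appearance without changing depth.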
David H. Salesin - Seattle WA Charles E. Jacobs - Issaquah WA Adam Finkelstein - Princeton NJ
Assignee:
University of Washington - Seattle WA
International Classification:
H04N 5/783
US Classification:
386/68, 386/111, 375/240.08
Abstract:
A representation for encoding time varying image data that allows for varying spatial and temporal resolutions in different parts of a video sequence. The representation, called multiresolution video, is based on a sparse, hierarchical encoding of the video data as multiple streams. Operations are defined for creating, viewing, and editing multiresolution video sequences. These operations support a variety of applications, including multiresolution playback, motion-blurred "fast forward" and "reverse," constant speed display, enhanced video shuttling or searching, and "video clip-art" editing and compositing. The multiresolution representation requires little storage overhead, and the algorithms using the representation are both simple and efficient.
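The temporal half of the hierarchical encoding described above can be sketched as a pyramid in which each coarser level halves the frame rate by averaging adjacent frame pairs; the averaged frames double as the motion-blurred "fast forward" imagery. This is only an illustration: the full representation also builds spatial pyramids and stores sparse residuals, both omitted here, and the function name is an assumption.

```python
import numpy as np

def temporal_pyramid(frames, levels):
    """Sketch of a temporal multiresolution encoding: each coarser
    level halves the frame rate by averaging adjacent frame pairs.
    (Spatial pyramids and sparse residual storage are omitted.)"""
    pyramid = [np.asarray(frames, dtype=float)]
    for _ in range(levels - 1):
        f = pyramid[-1]
        n = (len(f) // 2) * 2        # drop an odd trailing frame
        pyramid.append((f[0:n:2] + f[1:n:2]) / 2.0)
    return pyramid
```

Playing level k at the base frame rate yields 2^k-times "fast forward" with the motion blur already baked in by the averaging.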
Richard S. Szeliski - Bellevue WA David Salesin - Seattle WA Arno Schödl - Berlin, DE
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G06T 15/70
US Classification:
345/475, 345/723, 348/700
Abstract:
A system and process for generating a new video sequence from frames taken from an input video clip. Generally, this involves computing a similarity value between each of the frames of the input video clip and each of the other frames. For each frame, the similarity values associated therewith are analyzed to identify potentially acceptable transitions between it and the remaining frames. A transition is considered acceptable if it would appear smooth to a person viewing a video containing the frames, or at least if the transition is one of the best available. A new video sequence is then synthesized using the identified transitions to specify an order in which the frames associated with these transitions are to be played. Finally, the new video sequence is rendered by playing the frames of the input video clip in the order specified in the synthesizing procedure. This rendering procedure can include a smoothing action in which those transitions that were deemed acceptable, but would not appear smooth to a viewer, are smoothed to lessen the discontinuity.
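The similarity-and-transition analysis in the abstract above can be sketched directly: compute a pairwise distance between frames, then treat a jump from frame i to frame j as acceptable when frame j resembles frame i+1, the frame that would naturally follow i. A minimal sketch assuming grayscale NumPy frames; the threshold and function names are illustrative, not from the patent.

```python
import numpy as np

def frame_distances(frames):
    # frames: NumPy array of shape (N, H, W), grayscale for simplicity.
    flat = frames.reshape(len(frames), -1).astype(float)
    # L2 distance between every pair of frames.
    d2 = ((flat[:, None, :] - flat[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2)

def acceptable_transitions(frames, threshold):
    # Jumping from frame i to frame j looks smooth when frame j
    # resembles frame i + 1, the frame that would naturally follow i.
    D = frame_distances(frames)
    n = len(frames)
    return [(i, j)
            for i in range(n - 1)
            for j in range(n)
            if j != i + 1 and D[i + 1, j] <= threshold]
```

The resulting transition list is what the synthesis step consumes: any walk through the frames that only follows natural successors or acceptable transitions plays back as a seamless, endlessly varying sequence.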
David Salesin - Seattle WA Geraldine Wade - Redmond WA Douglas E. Zongker - Seattle WA
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G06T 1100
US Classification:
345/469, 345/467, 345/468, 345/471, 345/619
Abstract:
Methods and systems for automatically hinting fonts, particularly TrueType fonts, by transferring hints from one font to another are described. In one embodiment, a character or glyph (i.e., a source character) from a first font is selected and provides hints that are to be transferred to a character or glyph of a second font (i.e., a target character). The hints comprise statements defined in terms of control points or knots that define the shape or appearance of a character. A match is found between individual control points on the different characters and then used as the basis for transferring the hints. In one embodiment, hints are transferred by modifying values in a control value table (CVT) that contains entries that are used to constrain the control points of the source character. The CVT values are modified so that they now constrain corresponding control points in the target character.
In one embodiment, a font-hinting system is configured to select a first TrueType font that has been hinted with hints that define constraints between control points associated with individual characters of the font. Individual characters of a second TrueType font that correspond to individual characters of the first TrueType font are identified. The second TrueType font is different from the first TrueType font and individual characters of the second TrueType font are unhinted. Hints are transferred from characters of the first TrueType font to individual corresponding characters of the second TrueType font, and a hint is discarded where it appears inappropriate for a character of the second TrueType font. Further, the system maintains indicia of a discarded hint to indicate where a hint has been discarded.
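The transfer-and-discard flow described above can be sketched as: match each source control point to its nearest target control point, rewrite each hint's point indices through that mapping, and discard (but record) any hint that touches an unmatched point. This is a hypothetical illustration; real TrueType hints are bytecode instructions, and the tuple hint format, tolerance, and function name here are assumptions.

```python
import math

def transfer_hints(source_points, target_points, hints, tol=1.0):
    # Match each source control point to its nearest target control
    # point, accepting the match only within a tolerance.
    mapping = {}
    for i, (sx, sy) in enumerate(source_points):
        dists = [math.hypot(sx - tx, sy - ty) for tx, ty in target_points]
        j = min(range(len(dists)), key=dists.__getitem__)
        if dists[j] <= tol:
            mapping[i] = j
    # Rewrite each hint's point indices; hints touching an unmatched
    # point are discarded, and a record of them is kept (mirroring the
    # patent's "indicia of a discarded hint").
    transferred, discarded = [], []
    for op, pts in hints:            # hint = (opcode, point indices)
        if all(p in mapping for p in pts):
            transferred.append((op, tuple(mapping[p] for p in pts)))
        else:
            discarded.append((op, pts))
    return transferred, discarded
```

Keeping the discard record lets a later pass (or a human) revisit exactly which constraints the target glyph lost in the transfer.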
David Salesin - Seattle WA, US Geraldine Wade - Redmond WA, US Douglas E. Zongker - Seattle WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G06T 11/00
US Classification:
345/469, 345/471, 345/619
Abstract:
In one embodiment, a system for providing a hinted TrueType font is configured to provide a source character from a fully hinted TrueType font from which hints are to be transferred. The source character has multiple control points that are constrained by the hints. A target character is provided from a TrueType font to which hints from the source character are to be transferred. The target character has control points that will be constrained by the transferred hints. Hints associated with the source character and that refer to control points on the source character are transferred to hints associated with the target character and that refer to control points on the target character.
David Salesin - Seattle WA, US Geraldine Wade - Redmond WA, US Douglas E. Zongker - Seattle WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G06T 11/00
US Classification:
345/469, 345/471, 345/472, 345/619
Abstract:
Methods and systems for hinting fonts are described. In one embodiment, a system for providing a hinted font is configured to define hints for a glyph of a first font. The hints are defined by one or more statements that contain multiple values that define constraints for the glyph. At least one of the values references a table entry that corresponds to a table value used to constrain the glyph. An association is established between the glyph of the first font and a glyph of a second font. The second font is different from the first font. One or more statements are translated so that the statement(s) now pertain to and define constraints for the glyph of the second font.