OptumRx at UnitedHealth Group
Director, Business Architecture
Kaiser Permanente Jan 2013 - Jul 2013
Program Manager
Kaiser Permanente Nov 2012 - Dec 2012
Operational Processes Consultant
UnitedHealth Group Mar 2011 - Oct 2012
Enterprise Architect
Kaiser Permanente Aug 2010 - Dec 2010
Executive IT Consultant
Education:
Technical University of Sofia 1972 - 1977
Master of Science, Electrical Engineering
American School, Guadalajara, Jalisco, Mexico
Skills:
IT Strategy, Integration, Business Analysis, Process Improvement, Business Process, Enterprise Architecture, Project Management, Business Requirements, Enterprise Software, Strategy, Management, Business Process Improvement, Business Architecture, Solution Architecture, Databases, Software Documentation, Management Consulting, Oracle, Leadership, Business Integration, Risk Management, CRM, Software Project Management, Agile Methodologies
Snap Inc.
3D Software Engineer
DreamWorks Animation Nov 2015 - Sep 2016
Motion Capture Software Engineer
Blizzard Entertainment Aug 1, 2013 - Nov 30, 2015
Software Engineer
DWA Investments Inc Nov 2011 - Aug 2013
Software Engineer
DreamWorks Animation Jun 2008 - Sep 2008
Research and Development Intern
Education:
UC Irvine 2008 - 2010
Master of Science, Computer Science
UC Irvine 2004 - 2008
Bachelor of Science, Computer Science
Aliso Niguel High School
Skills:
Computer Graphics, OpenGL, Python, C++, Qt, Rendering, Shaders, Programming, Linux, GLSL, Lua, Objective-C, Software Engineering, Scripting, Lighting, Computer Vision, OS X, Xcode, Git, Game Development, iOS Development, Object-Oriented Design, Image Processing
- Santa Monica CA, US Samuel Edward Hare - Los Angeles CA, US Maxim Maximov Lazarov - Culver City CA, US Tony Mathew - Los Angeles CA, US Andrew James McPhee - Culver City CA, US Daniel Moreno - Los Angeles CA, US Wentao Shang - Los Angeles CA, US
International Classification:
H04N 5/262 H04L 51/046 H04N 5/272 H04L 51/10
Abstract:
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program, and a method for performing operations comprising: receiving, by a messaging application, an image from a camera of a user device; receiving input that selects a user-customizable effects option for activating a user-customizable effects mode; in response to receiving the input, displaying an array of a plurality of effect options together with the image proximate to the user-customizable effects option; and applying a first effect associated with a first effect option of the plurality of effect options to the image.
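The option-selection flow this abstract describes amounts to mapping a chosen effect option onto an image transform. A minimal sketch of such a dispatch, assuming hypothetical effect names and NumPy image arrays (none of this reflects Snap's actual implementation):

```python
import numpy as np

# Hypothetical effect options; each id maps to an image transform.
EFFECT_OPTIONS = {
    "grayscale": lambda img: img.mean(axis=-1, keepdims=True).repeat(3, axis=-1),
    "invert": lambda img: 255 - img,
}

def apply_selected_effect(image, option_id):
    """Apply the effect chosen from the displayed array of options."""
    return EFFECT_OPTIONS[option_id](image).astype(image.dtype)
```

A real messaging client would wire `option_id` to the tapped UI element and run the transform on the live camera frame.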
- Santa Monica CA, US Samuel Edward Hare - Los Angeles CA, US Maxim Maximov Lazarov - Culver City CA, US Tony Mathew - Los Angeles CA, US Andrew James McPhee - Culver City CA, US Daniel Moreno - Los Angeles CA, US Dhritiman Sagar - Marina del Rey CA, US Wentao Shang - Los Angeles CA, US
International Classification:
G06T 19/00 G06T 7/194 G06T 7/50 H04L 67/131
Abstract:
The subject technology receives image data and depth data. The subject technology selects an augmented reality content generator corresponding to a three-dimensional (3D) effect. The subject technology applies the 3D effect to the image data and the depth data based at least in part on the selected augmented reality content generator. The subject technology generates, using a processor, a message including information related to the applied 3D effect, the image data, and the depth data.
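As one illustration of applying a 3D effect to paired image and depth data, here is a toy depth-driven parallax warp in NumPy. The function name, the shift parameter, and the warp itself are assumptions for illustration; the patent does not specify this particular effect:

```python
import numpy as np

def apply_parallax_effect(image, depth, shift_px=8):
    """Toy 3D effect: shift each pixel horizontally in proportion to
    its normalized depth, producing a crude parallax warp."""
    h, w = depth.shape
    # Normalize depth to [0, 1]; nearer pixels (smaller values) shift more.
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    out = np.zeros_like(image)
    xs = np.arange(w)
    for y in range(h):
        offset = (shift_px * (1.0 - d[y])).astype(int)
        src = np.clip(xs - offset, 0, w - 1)  # sample shifted columns
        out[y] = image[y, src]
    return out
```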
Rendering 3D Captions Within Real-World Environments
- Santa Monica CA, US Samuel Edward Hare - Los Angeles CA, US Maxim Maximov Lazarov - Culver City CA, US Tony Mathew - Los Angeles CA, US Andrew James McPhee - Culver City CA, US Daniel Moreno - Los Angeles CA, US Wentao Shang - Los Angeles CA, US
International Classification:
H04L 5/00 H04L 27/18
Abstract:
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program, and a method for rendering three-dimensional (3D) captions in real-world environments depicted in image content. An editing interface is displayed on a client device. The editing interface includes an input component displayed with a view of a camera feed. A first input comprising one or more text characters is received. In response to receiving the first input, a two-dimensional (2D) representation of the one or more text characters is displayed. In response to detecting a second input, a preview interface is displayed. Within the preview interface, a 3D caption based on the one or more text characters is rendered at a position in a 3D space captured within the camera feed. A message is generated that includes the 3D caption rendered at the position in the 3D space captured within the camera feed.
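Rendering a caption "at a position in a 3D space captured within the camera feed" ultimately requires projecting that 3D anchor into image coordinates. A minimal pinhole-camera sketch (the intrinsics fx, fy, cx, cy and the function name are assumptions; a real renderer would also handle camera rotation, translation, and occlusion):

```python
def project_point(point_3d, fx, fy, cx, cy):
    """Project a 3D anchor point (camera coordinates, z > 0)
    to 2D pixel coordinates with a pinhole camera model."""
    x, y, z = point_3d
    if z <= 0:
        raise ValueError("anchor must be in front of the camera")
    return (fx * x / z + cx, fy * y / z + cy)
```

A caption drawn at the returned pixel position tracks its anchor as the camera moves, which is the essence of anchoring 3D text in a camera feed.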
Augmented Reality Content Generators Including 3D Data In A Messaging System
- Santa Monica CA, US Samuel Edward Hare - Los Angeles CA, US Maxim Maximov Lazarov - Culver City CA, US Tony Mathew - Los Angeles CA, US Andrew James McPhee - Culver City CA, US Daniel Moreno - Los Angeles CA, US Wentao Shang - Los Angeles CA, US
Abstract:
The subject technology selects a set of augmented reality content generators from available augmented reality content generators, the selected set comprising at least one augmented reality content generator for applying a three-dimensional (3D) effect. The subject technology causes display of a carousel interface including selectable graphical items, each corresponding to a respective augmented reality content generator. The subject technology receives a selection of a first selectable graphical item, the first selectable graphical item including a first augmented reality content generator for applying a first 3D effect. The subject technology applies, to first image data and first depth data, the first augmented reality content generator corresponding to the selected first selectable graphical item. The subject technology generates a message including the first augmented reality content generator as applied to the first image data and the first depth data.
- Santa Monica CA, US Samuel Edward Hare - Los Angeles CA, US Maxim Maximov Lazarov - Culver City CA, US Tony Mathew - Los Angeles CA, US Andrew James McPhee - Culver City CA, US Wentao Shang - Los Angeles CA, US
Abstract:
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for rendering virtual modifications to real-world environments depicted in image content. A reference surface is detected in a three-dimensional (3D) space captured within a camera feed produced by a camera of a computing device. An image mask is applied to the reference surface within the 3D space captured within the camera feed. A visual effect is applied to the image mask corresponding to the reference surface in the 3D space. The application of the visual effect to the image mask causes a modified surface to be rendered in presenting the camera feed on a display of the computing device.
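The mask-then-modify idea in this abstract can be sketched as blending a visual effect only where the detected surface's mask is set. Surface detection itself is omitted here, and the tint effect, names, and parameters are illustrative assumptions:

```python
import numpy as np

def apply_surface_effect(frame, surface_mask, tint=(0, 120, 255), alpha=0.5):
    """Blend a tint over the pixels covered by the surface mask,
    leaving the rest of the camera frame untouched."""
    out = frame.astype(np.float32).copy()
    m = surface_mask.astype(bool)
    # Alpha-blend the tint into masked pixels only.
    out[m] = (1.0 - alpha) * out[m] + alpha * np.array(tint, dtype=np.float32)
    return out.astype(frame.dtype)
```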
- Santa Monica CA, US Samuel Edward Hare - Los Angeles CA, US Maxim Maximov Lazarov - Culver City CA, US Tony Mathew - Los Angeles CA, US Andrew James McPhee - Culver City CA, US Daniel Moreno - Los Angeles CA, US Dhritiman Sagar - Marina del Rey CA, US Wentao Shang - Los Angeles CA, US
Abstract:
The subject technology receives, at a client device, a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator including a 3D effect. The subject technology applies, to image data and depth data, the 3D effect based at least in part on the augmented reality content generator. As part of applying the 3D effect, the subject technology generates a depth map using at least the depth data, generates a segmentation mask based at least on the image data, and performs background inpainting and blurring of the image data using at least the segmentation mask to generate background inpainted image data. The subject technology generates a 3D message based at least in part on the applied 3D effect.
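A heavily simplified sketch of the segment-then-treat-the-background step: here the inpainting-and-blurring stage is reduced to a separable box blur, depth-driven effects are omitted entirely, and all names and parameters are assumptions rather than the patented method:

```python
import numpy as np

def composite_subject_over_blur(image, subject_mask, blur_radius=2):
    """Blur the frame, then composite the segmented subject (sharp,
    from the original) over the blurred background."""
    k = 2 * blur_radius + 1
    kernel = np.ones(k) / k
    blurred = image.astype(np.float32)
    for axis in (0, 1):  # separable box blur over rows then columns
        blurred = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode="same"), axis, blurred)
    m = subject_mask.astype(bool)[..., None]
    return np.where(m, image.astype(np.float32), blurred).astype(image.dtype)
```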
Providing 3D Data For Messages In A Messaging System
- Santa Monica CA, US Samuel Edward Hare - Los Angeles CA, US Maxim Maximov Lazarov - Culver City CA, US Tony Mathew - Los Angeles CA, US Andrew James McPhee - Culver City CA, US Daniel Moreno - Los Angeles CA, US Dhritiman Sagar - Marina del Rey CA, US Wentao Shang - Los Angeles CA, US
International Classification:
G06T 19/00 G06F 3/04842 G06T 7/50 H04L 51/42
Abstract:
The subject technology generates depth data using a machine learning model based at least in part on captured image data from at least one camera of a client device. The subject technology applies, to the captured image data and the generated depth data, a 3D effect based at least in part on an augmented reality content generator. The subject technology generates a depth map using at least the depth data. The subject technology generates a packed depth map based at least in part on the depth map: it converts a single-channel floating point texture to a raw depth map and generates multiple channels based at least in part on the raw depth map. The subject technology generates a segmentation mask based at least on the captured image data. The subject technology performs background inpainting and blurring of the captured image data using at least the segmentation mask to generate background inpainted image data.
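Converting a single-channel floating-point texture into multiple 8-bit channels ("packing" the depth map) is a standard trick for carrying high-precision depth in ordinary image formats. One plausible sketch, where the 16-bit high-byte/low-byte split is an assumption rather than the encoding the patent specifies:

```python
import numpy as np

def pack_depth(depth):
    """Pack a float depth map with values in [0, 1] into two
    8-bit channels (high byte, low byte of a 16-bit quantization)."""
    q = np.clip(depth, 0.0, 1.0 - 1e-6)
    as_u16 = (q * 65535.0).astype(np.uint16)
    return np.stack([(as_u16 >> 8).astype(np.uint8),
                     (as_u16 & 0xFF).astype(np.uint8)], axis=-1)

def unpack_depth(packed):
    """Recover the float depth map from the two packed channels."""
    as_u16 = (packed[..., 0].astype(np.uint16) << 8) | packed[..., 1]
    return as_u16.astype(np.float32) / 65535.0
```

The round trip loses at most one quantization step (about 1/65535), which is why packing beats storing depth in a single 8-bit channel.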
Beautification Techniques For 3D Data In A Messaging System
- Santa Monica CA, US Samuel Edward Hare - Los Angeles CA, US Maxim Maximov Lazarov - Culver City CA, US Tony Mathew - Los Angeles CA, US Andrew James McPhee - Culver City CA, US Daniel Moreno - Los Angeles CA, US Dhritiman Sagar - Marina del Rey CA, US Wentao Shang - Los Angeles CA, US
Abstract:
The subject technology receives a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator for applying a 3D effect that includes at least one beautification operation. The subject technology captures image data and depth data using a camera. The subject technology applies, to the image data and the depth data, the 3D effect, the beautification operation being performed as part of applying the 3D effect. The subject technology generates a 3D message based at least in part on the applied 3D effect, and renders a view of the 3D message based at least in part on the applied 3D effect including the at least one beautification operation.
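As a toy illustration of a beautification operation, the sketch below softens only a masked face region by blending each masked pixel toward its 3x3 local mean. Real beautification pipelines are far more involved; the function name, the mask, and the smoothing strength are assumptions:

```python
import numpy as np

def beautify(image, face_mask, strength=0.6):
    """Blend masked pixels toward their 3x3 local mean, a mild
    skin-softening-style smoothing; unmasked pixels are unchanged."""
    img = image.astype(np.float32)
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    h, w = img.shape[:2]
    # 3x3 local mean via shifted views of the edge-padded frame.
    local_mean = sum(padded[dy:dy + h, dx:dx + w]
                     for dy in range(3) for dx in range(3)) / 9.0
    m = face_mask.astype(bool)[..., None]
    out = np.where(m, (1.0 - strength) * img + strength * local_mean, img)
    return out.astype(image.dtype)
```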