Arizona State University 2002 - 2007
Bachelor of Arts, Psychology, Business
Calabasas High School 1998 - 2002
Skills:
Social Media Marketing, Public Relations, Sales, CRM, Start-ups, Integrated Marketing, Entrepreneurship, Dogs, Leadership, Account Management, Management, Social Networking, Marketing Strategy, New Business Development, E-Commerce, Online Marketing, Fundraising, Sales Management, Advertising, Digital Marketing, Training, Online Advertising, Digital Media, Sponsorship, Strategic Planning, Brand Management, User Interface Design, Online Research, Business Development, Advertising Sales, Small Business, SEO, Marketing Communications, Strategy, Email Marketing, Lead Generation, Customer Service, Deal Closure, Mobile Technology, User Experience, Corporate Communications, Facebook, Sales Presentations, Team Building, Presenter, Seed Capital, Supply Chain Management, Investments, Pets, Pet Sitting, Dog Walking
Daniel A. Birnbaum - Los Angeles CA, US Jason T. Meltzer - Los Angeles CA, US
Assignee:
Sightcine Inc. - Los Angeles CA
International Classification:
H04N 9/31
US Classification:
348/53, 348/55, 348/598
Abstract:
A disclosed projection system includes a display that renders a video representing a sequence of original images each having a corresponding frame interval, and one or more viewing device(s). During each frame interval, multiple subimages are displayed that, in some cases, average together to approximate an original image corresponding to that frame interval. The viewing device(s) attenuate each of the subimages by a respective coefficient to synthesize a target image for each frame interval. The system may include additional viewing device(s) that apply attenuation coefficients to the subimages to synthesize a second, different target image for each frame interval. A described projection method includes displaying multiple subimages in each frame interval, and transmitting attenuation coefficients to the viewing device(s). A disclosed movie customization system includes software that causes processor(s) to process each of multiple original video images to determine the corresponding subimages and weight coefficients.
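A minimal numpy sketch of the synthesis described above, assuming the display emits K subimages per frame interval and each viewing device applies its own attenuation coefficients so that the time-averaged result approximates that viewer's target image (function names and shapes are illustrative, not from the patent):

import numpy as np

def perceived_image(subimages, coefficients):
    # subimages: array of shape (K, H, W), the K subimages shown in one frame interval
    # coefficients: length-K attenuation coefficients applied by one viewing device
    weights = np.asarray(coefficients, dtype=float).reshape(-1, 1, 1)
    # The viewer perceives the time-average of the attenuated subimages.
    return (weights * subimages).mean(axis=0)

# Two viewers share the same displayed subimages but synthesize different targets.
rng = np.random.default_rng(0)
subimages = rng.random((4, 8, 8))                       # K = 4 subimages, 8x8 pixels
viewer_a = perceived_image(subimages, [1.0, 0.0, 1.0, 0.0])
viewer_b = perceived_image(subimages, [0.0, 1.0, 0.0, 1.0])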
Simultaneous Localization And Mapping Using Multiple View Feature Descriptors
Rakesh Gupta - Mountain View CA, US Ming-Hsuan Yang - Mountain View CA, US Jason Meltzer - Los Angeles CA, US
International Classification:
G06K 9/00 G06K 9/62 G06K 9/46
US Classification:
382/103, 382/159, 382/190
Abstract:
Simultaneous localization and mapping (SLAM) utilizes multiple view feature descriptors to robustly determine location despite appearance changes that would stifle conventional systems. A SLAM algorithm generates a feature descriptor for a scene from different perspectives using kernel principal component analysis (KPCA). When the SLAM module subsequently receives a recognition image after a wide baseline change, it can refer to correspondences from the feature descriptor to continue map building and/or determine location. Appearance variations can result from, for example, a change in illumination, partial occlusion, a change in scale, a change in orientation, change in distance, warping, and the like. After an appearance variation, a structure-from-motion module uses feature descriptors to reorient itself and continue map building using an extended Kalman Filter. Through the use of a database of comprehensive feature descriptors, the SLAM module is also able to refine a position estimation despite appearance variations.
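As a loose illustration of the multi-view descriptor idea (not the patented algorithm itself), the sketch below builds a descriptor for one landmark from patches seen at several perspectives using kernel PCA, then scores a query patch taken after a baseline change; all names and parameters are assumptions:

import numpy as np
from sklearn.decomposition import KernelPCA

def build_descriptor(views, n_components=4):
    # views: list of image patches of one landmark seen from different perspectives
    X = np.stack([v.ravel() for v in views]).astype(float)
    kpca = KernelPCA(n_components=n_components, kernel="rbf")
    embedded = kpca.fit_transform(X)
    # Use the KPCA model plus the mean embedding as the multi-view descriptor.
    return kpca, embedded.mean(axis=0)

def match_score(kpca, descriptor, query_patch):
    # Lower score = the query patch (e.g. after a wide baseline change) looks
    # more like the landmark summarized by the descriptor.
    q = kpca.transform(query_patch.ravel().astype(float)[None, :])[0]
    return np.linalg.norm(q - descriptor)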
Systems And Methods For Performing Occlusion Detection
- Bedford MA, US Jason Meltzer - Pasadena CA, US Marc Barnada Rius - La Cellera de Ter, ES
International Classification:
B25J 9/16 G05D 1/02
Abstract:
The present invention provides a mobile robot configured to navigate an operating environment. The robot includes a machine vision system comprising a camera that captures images of the operating environment; using the captured images, the robot detects the presence of an occlusion obstructing a portion of the camera's field of view, generates a notification when such an occlusion is detected, and maintains occlusion detection data describing the occluded and unobstructed portions of the images used by the SLAM application.
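One hypothetical way the behavior described in this abstract could look in code: image regions that stay essentially constant while the robot moves are flagged as a possible lens occlusion, a per-pixel mask is maintained for the SLAM front end, and a notification is raised when the occluded fraction grows too large. Thresholds, decay rates, and class names below are assumptions, not the patented method:

import numpy as np

class OcclusionDetector:
    def __init__(self, change_threshold=2.0, occluded_fraction=0.15):
        self.change_threshold = change_threshold      # min average change for a "clear" pixel
        self.occluded_fraction = occluded_fraction    # notify above this fraction of the image
        self.accumulated_change = None

    def update(self, prev_frame, frame, robot_is_moving):
        # prev_frame, frame: consecutive grayscale images as 2-D arrays
        diff = np.abs(frame.astype(float) - prev_frame.astype(float))
        if self.accumulated_change is None:
            # Start optimistic (nothing occluded) until evidence accumulates.
            self.accumulated_change = np.full_like(diff, 2.0 * self.change_threshold)
        if robot_is_moving:
            # While moving, clear pixels keep changing; a persistently static
            # region suggests something stuck on or in front of the lens.
            self.accumulated_change = 0.9 * self.accumulated_change + 0.1 * diff
        occlusion_mask = self.accumulated_change < self.change_threshold
        notify = occlusion_mask.mean() > self.occluded_fraction
        return occlusion_mask, notify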
Systems And Methods For Performing Occlusion Detection
- Bedford MA, US Jason Meltzer - Pasadena CA, US Marc Barnada Rius - La Cellera de Ter, ES
International Classification:
B25J 9/16 G05D 1/02
Abstract:
The present invention provides a mobile robot configured to navigate an operating environment. The robot includes a machine vision system comprising a camera that captures images of the operating environment; using the captured images, the robot detects the presence of an occlusion obstructing a portion of the camera's field of view, generates a notification when such an occlusion is detected, and maintains occlusion detection data describing the occluded and unobstructed portions of the images used by the SLAM application.
Robot Management Systems For Determining Docking Station Pose Including Mobile Robots And Methods Using Same
- Bedford MA, US Jason Meltzer - Los Angeles CA, US Jens-Steffen Gutmann - Cupertino CA, US Vazgen Karapetyan - Pasadena CA, US Mario E. Munich - La Canada CA, US
A mobile robot system is provided that includes a docking station having at least two pose-defining fiducial markers. The pose-defining fiducial markers have a predetermined spatial relationship with respect to one another and/or to a reference point on the docking station, such that a docking path to the docking station can be determined from one or more observations of the at least two pose-defining fiducial markers. A mobile robot in the system includes a pose sensor assembly. A controller is located on the robot's chassis and is configured to analyze an output signal from the pose sensor assembly. The controller is configured to determine a docking station pose, to locate that pose on a map of the surface traversed by the mobile robot, and to path-plan a docking trajectory.
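A hedged two-dimensional sketch of recovering a docking-station pose from observations of two fiducial markers with a known spatial relationship, in the spirit of the abstract above; the marker geometry and frame conventions are assumptions for illustration:

import numpy as np

def dock_pose_from_markers(marker_left, marker_right):
    # marker_left, marker_right: (x, y) positions of the two fiducial markers as
    # observed by the robot's pose sensor assembly, assumed symmetric about the
    # docking station's centre line.
    m_l = np.asarray(marker_left, dtype=float)
    m_r = np.asarray(marker_right, dtype=float)
    centre = (m_l + m_r) / 2.0            # reference point on the docking station
    baseline = m_r - m_l                  # vector between the two markers
    # The dock faces along a normal to the marker baseline (sign convention assumed).
    heading = np.arctan2(baseline[0], -baseline[1])
    return centre, heading

# Example observation in the robot frame: dock roughly 2 m ahead, slightly rotated.
centre, heading = dock_pose_from_markers((1.8, 0.3), (2.0, -0.1))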
Method For Object Localization And Pose Estimation For An Object Of Interest
- Detroit MI, US Jason Meltzer - Los Angeles CA, US Jiejun Xu - Chino CA, US Zhichao Chen - Woodland Hills CA, US Rashmi N. Sundareswara - Los Angeles CA, US David W. Payton - Calabasas CA, US Ryan M. Uhlenbrock - Los Angeles CA, US Leandro G. Barajas - Harvest AL, US Kyungnam Kim - Oak Park CA, US
A method for localizing and estimating a pose of a known object in a field of view of a vision system is described, and includes developing a processor-based model of the known object, capturing a bitmap image file including an image of the field of view including the known object, extracting features from the bitmap image file, matching the extracted features with features associated with the model of the known object, localizing an object in the bitmap image file based upon the extracted features, clustering the extracted features of the localized object, merging the clustered extracted features, detecting the known object in the field of view based upon a comparison of the merged clustered extracted features and the processor-based model of the known object, and estimating a pose of the detected known object in the field of view based upon the detecting of the known object.
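A rough pipeline sketch mirroring the listed steps (feature extraction, matching against a model, clustering/merging approximated here by RANSAC, pose estimation), built from off-the-shelf OpenCV primitives rather than the patented method; the model format and camera matrix are assumptions:

import numpy as np
import cv2

def estimate_object_pose(image, model_descriptors, model_points_3d, camera_matrix):
    # Returns (rotation_vector, translation_vector) of the known object, or None.
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(image, None)
    if descriptors is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descriptors, model_descriptors)
    if len(matches) < 6:
        return None

    image_pts = np.float32([keypoints[m.queryIdx].pt for m in matches])
    object_pts = np.float32([model_points_3d[m.trainIdx] for m in matches])

    # RANSAC-based PnP stands in here for the clustering, merging and detection
    # stages listed in the abstract.
    ok, rvec, tvec, _inliers = cv2.solvePnPRansac(object_pts, image_pts, camera_matrix, None)
    return (rvec, tvec) if ok else None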
Robot Management Systems For Determining Docking Station Pose Including Mobile Robots And Methods Using Same
- Bedford MA, US Jason Meltzer - Los Angeles CA, US Jens-Steffen Gutmann - Cupertino CA, US Vazgen Karapetyan - Pasadena CA, US Mario E. Munich - La Canada CA, US
A mobile robot system is provided that includes a docking station having at least two pose-defining fiducial markers. The pose-defining fiducial markers have a predetermined spatial relationship with respect to one another and/or to a reference point on the docking station, such that a docking path to the docking station can be determined from one or more observations of the at least two pose-defining fiducial markers. A mobile robot in the system includes a pose sensor assembly. A controller is located on the robot's chassis and is configured to analyze an output signal from the pose sensor assembly. The controller is configured to determine a docking station pose, to locate that pose on a map of the surface traversed by the mobile robot, and to path-plan a docking trajectory.
Systems And Methods For Performing Occlusion Detection
- Bedford MA, US Jason Meltzer - Pasadena CA, US Marc Barnada Rius - La Cellera de Ter, ES
International Classification:
B25J 9/16 B25J 9/00
Abstract:
The present invention provides a mobile robot configured to navigate an operating environment. The robot includes a machine vision system comprising a camera that captures images of the operating environment; using the captured images, the robot detects the presence of an occlusion obstructing a portion of the camera's field of view, generates a notification when such an occlusion is detected, and maintains occlusion detection data describing the occluded and unobstructed portions of the images used by the SLAM application.