Daniel A. Birnbaum - Los Angeles CA, US Jason T. Meltzer - Los Angeles CA, US
Assignee:
Sightcine Inc. - Los Angeles CA
International Classification:
H04N 9/31
US Classification:
348/53, 348/55, 348/598
Abstract:
A disclosed projection system includes a display that renders a video representing a sequence of original images, each having a corresponding frame interval, and one or more viewing devices. During each frame interval, multiple subimages are displayed that, in some cases, average together to approximate the original image corresponding to that frame interval. The viewing devices attenuate each of the subimages by a respective coefficient to synthesize a target image for each frame interval. The system may include additional viewing devices that apply attenuation coefficients to the subimages to synthesize a second, different target image for each frame interval. A described projection method includes displaying multiple subimages in each frame interval and transmitting attenuation coefficients to the viewing devices. A disclosed movie customization system includes software that causes one or more processors to process each of multiple original video images to determine the corresponding subimages and weight coefficients.
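Purely as an illustration of the weighted-subimage idea in this abstract (not the patented implementation), the following NumPy sketch shows how per-subimage attenuation coefficients applied by a hypothetical viewing device can recover a chosen target image from subimages that otherwise average together; every array name and value here is made up.

    # Minimal sketch (assumptions only): within one frame interval the display
    # shows several subimages, and a viewing device weights each subimage by an
    # attenuation coefficient so the weighted sum matches a target image.
    import numpy as np

    rng = np.random.default_rng(0)

    # Two hypothetical target images (e.g. two viewers see different content).
    target_a = rng.random((4, 4))
    target_b = rng.random((4, 4))

    # Hypothetical subimages shown within a single frame interval: here simply
    # the two targets plus their average.
    subimages = np.stack([target_a, target_b, (target_a + target_b) / 2.0])

    # Attenuation coefficients applied by each viewing device (one per subimage).
    coeffs_a = np.array([1.0, 0.0, 0.0])  # device A passes only the first subimage
    coeffs_b = np.array([0.0, 1.0, 0.0])  # device B passes only the second

    def synthesize(subs, coeffs):
        """Weighted combination of subimages as seen through a viewing device."""
        return np.tensordot(coeffs, subs, axes=1)

    print(np.allclose(synthesize(subimages, coeffs_a), target_a))  # True
    print(np.allclose(synthesize(subimages, coeffs_b), target_b))  # True
    # Without a viewing device the eye roughly averages the subimages over the
    # frame interval, which in this toy case equals the average of the targets.
    print(np.allclose(subimages.mean(axis=0), (target_a + target_b) / 2.0))  # True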
Supplier Invoice Reconciliation And Payment Using Event Driven Platform
- New York NY, US Mary Catherine Callahan - Phoenix AZ, US Mohnish Gorantla - New York NY, US Sachin D. Jadhav - Phoenix AZ, US Christine A. Knorr - Parks AZ, US Jason Meltzer - Scottsdale AZ, US Dorothy Mills - Walnut Creek CA, US Amar Petla - New York NY, US Anupam Seth - Urbana IL, US Rahul Shaurya - Phoenix AZ, US Silajit Singh - Scottsdale AZ, US Urvashi Tyagi - Short Hills NJ, US
Assignee:
AMERICAN EXPRESS TRAVEL RELATED SERVICES COMPANY, INC. - New York NY
International Classification:
G06Q 30/04 G06Q 10/08 G06Q 30/06
Abstract:
A system for automated supplier invoice reconciliation is disclosed. The system may receive an order confirmation associated with a purchase order (PO) from a supplier system. The system may receive the PO associated with the order confirmation from a buyer system. The system may receive a first invoice associated with the PO and the order confirmation from the supplier system. The system may reconcile the PO, the first invoice, and the order confirmation to generate a second invoice. The system may pass the second invoice to the buyer system.
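As a rough sketch of the three-way match this abstract describes, the following Python snippet reconciles a purchase order, an order confirmation, and a supplier invoice into a corrected "second" invoice; the field names and the matching rule are illustrative assumptions, not the patented event-driven platform.

    # Sketch of a three-way reconciliation: match PO, order confirmation, and the
    # supplier's first invoice, then emit a corrected second invoice for the buyer.
    from dataclasses import dataclass

    @dataclass
    class Document:
        po_number: str
        quantity: int
        unit_price: float

    def reconcile(po: Document, confirmation: Document, invoice: Document) -> Document:
        """Return a second invoice limited to what the PO and confirmation support."""
        if not (po.po_number == confirmation.po_number == invoice.po_number):
            raise ValueError("documents reference different purchase orders")
        # Pay for no more than was ordered and confirmed, at the agreed PO price.
        quantity = min(po.quantity, confirmation.quantity, invoice.quantity)
        return Document(po.po_number, quantity, po.unit_price)

    po = Document("PO-1001", quantity=10, unit_price=5.00)
    confirmation = Document("PO-1001", quantity=10, unit_price=5.00)
    invoice = Document("PO-1001", quantity=12, unit_price=5.50)  # over-billed

    second_invoice = reconcile(po, confirmation, invoice)
    print(second_invoice)  # Document(po_number='PO-1001', quantity=10, unit_price=5.0)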
Systems And Methods For Performing Occlusion Detection
- Bedford MA, US Jason Meltzer - Pasadena CA, US Marc Barnada Rius - La Cellera de Ter, ES
International Classification:
B25J 9/16 G05D 1/02
Abstract:
The present invention provides a mobile robot configured to navigate an operating environment. The robot includes a machine vision system comprising a camera that captures images of the operating environment. The machine vision system detects the presence of an occlusion obstructing a portion of the camera's field of view based on the captured images, generates a notification when such an occlusion is detected, and maintains occlusion detection data describing the occluded and unobstructed portions of the images used by the SLAM application.
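One plausible, purely illustrative way to flag an occlusion of the kind this abstract describes is to look for image regions that stay nearly constant and featureless across consecutive frames; the grid scheme and thresholds below are assumptions, not the claimed method.

    # Illustrative sketch: a block of the image that shows almost no temporal
    # change and almost no texture across frames is flagged as possibly occluded.
    import numpy as np

    def occluded_blocks(frames, grid=(4, 4), motion_thresh=1.0, texture_thresh=1.0):
        """Return a boolean grid marking blocks with almost no change and no texture."""
        frames = np.asarray(frames, dtype=float)           # (T, H, W) grayscale
        t, h, w = frames.shape
        gh, gw = grid
        bh, bw = h // gh, w // gw
        mask = np.zeros(grid, dtype=bool)
        for i in range(gh):
            for j in range(gw):
                block = frames[:, i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                change = np.abs(np.diff(block, axis=0)).mean()   # temporal change
                texture = block[-1].std()                        # spatial variation
                mask[i, j] = change < motion_thresh and texture < texture_thresh
        return mask

    rng = np.random.default_rng(1)
    frames = rng.integers(0, 255, size=(5, 64, 64)).astype(float)
    frames[:, :16, :16] = 10.0      # top-left corner covered by a constant smudge
    print(occluded_blocks(frames))  # True only in the top-left block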
Systems And Methods For Performing Occlusion Detection
- Bedford MA, US Jason Meltzer - Pasadena CA, US Marc Barnada Rius - La Cellera de Ter, ES
International Classification:
B25J 9/16 G05D 1/02
Abstract:
The present invention provides a mobile robot configured to navigate an operating environment. The robot includes a machine vision system comprising a camera that captures images of the operating environment. The machine vision system detects the presence of an occlusion obstructing a portion of the camera's field of view based on the captured images, generates a notification when such an occlusion is detected, and maintains occlusion detection data describing the occluded and unobstructed portions of the images used by the SLAM application.
Robot Management Systems For Determining Docking Station Pose Including Mobile Robots And Methods Using Same
- Bedford MA, US Jason Meltzer - Los Angeles CA, US Jens-Steffen Gutmann - Cupertino CA, US Vazgen Karapetyan - Pasadena CA, US Mario E. Munich - La Canada CA, US
Abstract:
A mobile robot system is provided that includes a docking station having at least two pose-defining fiducial markers. The pose-defining fiducial markers have a predetermined spatial relationship with respect to one another and/or to a reference point on the docking station such that a docking path to the base station can be determined from one or more observations of the at least two pose-defining fiducial markers. A mobile robot in the system includes a pose sensor assembly. A controller is located on the chassis and is configured to analyze an output signal from the pose sensor assembly. The controller is configured to determine a docking station pose, to locate the docking station pose on a map of a surface traversed by the mobile robot and to path plan a docking trajectory.
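The geometric core of this abstract, recovering a dock pose from observations of two fiducial markers with known spacing, can be sketched in 2-D as follows; the marker ordering convention and the approach offset are illustrative assumptions, not the claimed implementation.

    # Rough 2-D sketch: from the observed map positions of two dock fiducials,
    # recover the dock's center, heading, and a waypoint on a docking approach path.
    import numpy as np

    def dock_pose_from_fiducials(marker_a, marker_b, approach_dist=0.5):
        """Estimate dock center, heading, and an approach waypoint in the map frame."""
        a = np.asarray(marker_a, dtype=float)
        b = np.asarray(marker_b, dtype=float)
        center = (a + b) / 2.0
        along = b - a                                     # direction along the dock face
        # Outward normal of the dock face: 'along' rotated 90 deg counter-clockwise
        # (which side is "out" depends on the marker ordering convention assumed here).
        normal = np.array([-along[1], along[0]])
        normal /= np.linalg.norm(normal)
        heading = np.arctan2(normal[1], normal[0])        # dock orientation in the map frame
        approach_point = center + approach_dist * normal  # waypoint to begin docking from
        return center, heading, approach_point

    # Markers observed 0.2 m apart; with this ordering the dock faces +x.
    center, heading, approach = dock_pose_from_fiducials((2.0, 1.1), (2.0, 0.9))
    print(center, np.degrees(heading), approach)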
Method For Object Localization And Pose Estimation For An Object Of Interest
- Detroit MI, US Jason Meltzer - Los Angeles CA, US Jiejun Xu - Chino CA, US Zhichao Chen - Woodland Hills CA, US Rashmi N. Sundareswara - Los Angeles CA, US David W. Payton - Calabasas CA, US Ryan M. Uhlenbrock - Los Angeles CA, US Leandro G. Barajas - Harvest AL, US Kyungnam Kim - Oak Park CA, US
Abstract:
A method for localizing and estimating a pose of a known object in a field of view of a vision system is described, and includes developing a processor-based model of the known object, capturing a bitmap image file including an image of the field of view including the known object, extracting features from the bitmap image file, matching the extracted features with features associated with the model of the known object, localizing an object in the bitmap image file based upon the extracted features, clustering the extracted features of the localized object, merging the clustered extracted features, detecting the known object in the field of view based upon a comparison of the merged clustered extracted features and the processor-based model of the known object, and estimating a pose of the detected known object in the field of view based upon the detecting of the known object.
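A heavily condensed, synthetic-data sketch of the localization and pose-estimation steps listed above might look like the following, using the matched-feature centroid as a stand-in for clustering and a Procrustes/Kabsch fit as a stand-in for the claimed pose estimator; everything here is an assumption for illustration.

    # Sketch with synthetic data: "cluster" matched feature locations to localize
    # a candidate object, then estimate a 2-D pose (rotation + translation)
    # aligning model features to image features with a least-squares Kabsch fit.
    import numpy as np

    rng = np.random.default_rng(2)

    # "Model" feature locations for the known object (object frame, arbitrary units).
    model_pts = rng.random((20, 2)) * 10.0

    # Synthetic "matched" image features: model rotated 30 deg, shifted, plus noise.
    angle = np.radians(30.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle)],
                       [np.sin(angle),  np.cos(angle)]])
    image_pts = model_pts @ R_true.T + np.array([50.0, 80.0]) + rng.normal(0, 0.05, (20, 2))

    # Localize: treat the matches as one cluster and use its centroid.
    centroid = image_pts.mean(axis=0)
    print("object localized near", centroid)

    # Pose estimation: least-squares rotation + translation (Kabsch in 2-D).
    mc = model_pts - model_pts.mean(axis=0)
    ic = image_pts - centroid
    U, _, Vt = np.linalg.svd(mc.T @ ic)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R_est = Vt.T @ np.diag([1.0, d]) @ U.T
    t_est = centroid - model_pts.mean(axis=0) @ R_est.T
    print("estimated rotation (deg):", np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0])))
    print("estimated translation:", t_est)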
Systems And Methods For Performing Occlusion Detection
- Bedford MA, US Jason Meltzer - Pasadena CA, US Marc Barnada Rius - La Cellera de Ter, ES
International Classification:
B25J 9/16 B25J 9/00
Abstract:
The present invention provides a mobile robot configured to navigate an operating environment. The robot includes a machine vision system comprising a camera that captures images of the operating environment. The machine vision system detects the presence of an occlusion obstructing a portion of the camera's field of view based on the captured images, generates a notification when such an occlusion is detected, and maintains occlusion detection data describing the occluded and unobstructed portions of the images used by the SLAM application.