Autonomous Machine Navigation And Training Using Vision System
Inventors:
- Bloomington MN, US
- Jason Thomas Kraft - Stillwater MN, US
- Ryan Douglas Ingvalson - Loretto MN, US
- David Arthur LaRose - Pittsburgh PA, US
- Zachary Irvin Parker - Pittsburgh PA, US
- Adam Richard Williams - Mountain View CA, US
- Stephen Paul Elizondo Landers - Pittsburgh PA, US
- Michael Jason Ramsay - Verona PA, US
- Brian Daniel Beyer - Pittsburgh PA, US
International Classification:
G05D 1/02 G05D 1/00 G01C 21/32
Abstract:
Autonomous machine navigation techniques may generate a three-dimensional point cloud that represents at least a work region based on feature data and matching data. Pose data representing poses of an autonomous machine may be generated and associated with points of the three-dimensional point cloud. A boundary may be determined using the pose data for subsequent navigation of the autonomous machine in the work region. Non-vision-based sensor data may be used to determine a pose, which may then be updated based on the vision-based pose data. The autonomous machine may be navigated within the boundary of the work region based on the updated pose. The three-dimensional point cloud may be generated based on data captured during a touring phase, and boundaries may be generated based on data captured during a mapping phase.
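To make the described flow concrete, below is a minimal sketch, in Python, of the overall idea: a boundary is derived from poses recorded while the machine tours the work region, and at run time a non-vision (odometry) pose is corrected with a vision-based pose before checking containment. All names and the simple blending filter are illustrative assumptions, not the patent's actual implementation.

```python
# Illustrative sketch only; class/function names and the complementary
# filter are assumptions, not the patent's actual method.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Pose:
    x: float
    y: float
    heading: float  # radians

def boundary_from_poses(poses: List[Pose]) -> List[Tuple[float, float]]:
    """Treat (x, y) positions recorded during the touring/mapping phases
    as the vertices of the work-region boundary polygon."""
    return [(p.x, p.y) for p in poses]

def inside_boundary(x: float, y: float,
                    polygon: List[Tuple[float, float]]) -> bool:
    """Standard ray-casting point-in-polygon test."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def fuse_pose(odometry: Pose, vision: Pose, alpha: float = 0.3) -> Pose:
    """Blend the non-vision (odometry) pose with the vision-based pose.
    A simple complementary filter stands in for whatever estimator is
    actually used (e.g., a Kalman filter)."""
    return Pose(
        x=(1 - alpha) * odometry.x + alpha * vision.x,
        y=(1 - alpha) * odometry.y + alpha * vision.y,
        heading=(1 - alpha) * odometry.heading + alpha * vision.heading,
    )

if __name__ == "__main__":
    # Poses recorded while touring the perimeter of the work region.
    tour = [Pose(0, 0, 0.0), Pose(10, 0, 0.0),
            Pose(10, 8, 1.57), Pose(0, 8, 3.14)]
    boundary = boundary_from_poses(tour)

    # During operation: propagate with odometry, correct with vision.
    odo = Pose(4.2, 3.9, 0.10)
    vis = Pose(4.0, 4.0, 0.12)
    pose = fuse_pose(odo, vis)
    print("inside work region:", inside_boundary(pose.x, pose.y, boundary))
```

In practice the vision-based pose would come from matching camera features against the three-dimensional point cloud built during the touring phase, rather than being supplied directly as in this toy example.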