An integrated circuit for controlling a DC motor is disclosed. The integrated circuit includes at least one digital position and speed circuit (DPS) for providing measurements of speed, position, and direction of the motor, the DPS being in signal communication with the motor for receiving a pair of signals having a quadrature relationship; at least one programmable gain amplifier (PGA) electrically coupled to the motor, the PGA being configured to receive a feedback signal indicative of current flowing through the motor and to apply a second signal to the motor for adjusting the speed of the motor; at least two analog-to-digital converters (A/D), one A/D being used to quantize the output of the PGA for an off-chip processor and another A/D being used to provide a motor reference position from an analog sensor, such as a potentiometer; and at least two digital-to-analog converters (D/A), one D/A being used to set the motor voltage and another D/A being used to set the motor current limit. The integrated circuit can be incorporated into a larger motor control loop which further includes a summing amplifier for providing the feedback signal to the motor that is indicative of current flowing through the motor; a buffer amplifier for sensing the output current of the motor; and a processor for providing control signals to the monolithic module and for receiving the measurements of speed, position, and direction of the motor.
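As a rough illustration of the control loop this abstract describes, the sketch below (Python; the FakeBus interface, channel names, and gain values are hypothetical stand-ins, not the disclosed hardware interface) shows how an off-chip processor might tie the DPS, PGA, A/D, and D/A blocks together:

class FakeBus:
    """Hypothetical stand-in for the IC's register interface."""
    def read_dps(self):
        return 100.0, 5.0, +1                          # position, speed, direction
    def read_adc(self, channel):
        return {"REF_POSITION": 120.0, "PGA_OUT": 0.8}[channel]
    def write_dac(self, channel, value):
        print(f"DAC {channel} <- {value:.3f}")

class MotorControlLoop:
    def __init__(self, bus, kp=0.05, current_limit=1.2):
        self.bus, self.kp = bus, kp                    # illustrative proportional gain
        bus.write_dac("CURRENT_LIMIT", current_limit)  # second D/A: current limit

    def step(self):
        position, speed, direction = self.bus.read_dps()  # quadrature-derived
        reference = self.bus.read_adc("REF_POSITION")     # second A/D: potentiometer
        motor_current = self.bus.read_adc("PGA_OUT")      # first A/D: PGA output
        # Proportional command on position error; a real controller would also
        # use motor_current and speed for torque limiting and damping.
        self.bus.write_dac("MOTOR_VOLTAGE", self.kp * (reference - position))
        return position, speed, direction, motor_current

loop = MotorControlLoop(FakeBus())
loop.step()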
Interactive User Interfaces For Robotic Minimally Invasive Surgical Systems
Simon P. DiMaio - Sunnyvale CA, US; Christopher J. Hasser - Los Altos CA, US; Russell H. Taylor - Severna Park MD, US; David Q. Larkin - Menlo Park CA, US; Peter Kazanzides - Towson MD, US; Anton Deguet - Baltimore MD, US; Joshua Leven - Astoria NY, US
Assignee:
Intuitive Surgical Operations, Inc. - Sunnyvale CA; Johns Hopkins University - Baltimore MD
International Classification:
A61B 1/04
US Classification:
600/111, 600/101, 600/166, 348/211.3, 348/211.4
Abstract:
In one embodiment of the invention, a method for a minimally invasive surgical system is disclosed. The method includes capturing and displaying camera images of a surgical site on at least one display device at a surgeon console; switching out of a following mode and into a masters-as-mice (MaM) mode; overlaying a graphical user interface (GUI) including an interactive graphical object onto the camera images; and rendering a pointer within the camera images for user interactive control. In the following mode, the input devices of the surgeon console may couple motion into surgical instruments. In the MaM mode, the input devices interact with the GUI and interactive graphical objects. The pointer is manipulated in three dimensions by input devices having at least three degrees of freedom. Interactive graphical objects are related to physical objects in the surgical site or a function thereof and are manipulatable by the input devices.
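To make the mode switch concrete, here is a minimal sketch (Python; all class and method names are hypothetical) of how following mode and masters-as-mice mode might route the same 3-DOF master input differently:

FOLLOWING, MAM = "following", "mam"

class SurgeonConsole:
    def __init__(self):
        self.mode = FOLLOWING
        self.pointer = [0.0, 0.0, 0.0]   # rendered within the camera images
        self.objects = []                # interactive graphical objects in the GUI

    def toggle_mode(self):
        self.mode = MAM if self.mode == FOLLOWING else FOLLOWING

    def on_master_motion(self, delta_xyz):
        if self.mode == FOLLOWING:
            self.drive_instruments(delta_xyz)   # couple motion into instruments
        else:
            # MaM: manipulate the pointer in three dimensions instead
            self.pointer = [p + d for p, d in zip(self.pointer, delta_xyz)]
            for obj in self.objects:
                obj.hit_test(self.pointer)

    def drive_instruments(self, delta_xyz):
        pass  # placeholder for the teleoperation path

console = SurgeonConsole()
console.toggle_mode()                         # leave following, enter MaM
console.on_master_motion((0.01, 0.0, -0.02))
print(console.pointer)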
Interactive User Interfaces For Robotic Minimally Invasive Surgical Systems
Russell H. Taylor - Severna Park MD, US; David Q. Larkin - Menlo Park CA, US; Peter Kazanzides - Towson MD, US; Anton Deguet - Baltimore MD, US; Balazs Peter Vagvolgyi - Baltimore MD, US; Joshua Leven - Astoria NY, US
Assignee:
The Johns Hopkins University c/o Johns Hopkins Technology Transfer - Baltimore MD; Intuitive Surgical Operations, Inc. - Sunnyvale CA
International Classification:
A61B 19/00; A61B 1/00
US Classification:
600/111, 600/166
Abstract:
In one embodiment of the invention, a method for a minimally invasive surgical system is disclosed. The method includes capturing and displaying camera images of a surgical site on at least one display device at a surgeon console; switching out of a following mode and into a masters-as-mice (MaM) mode; overlaying a graphical user interface (GUI) including an interactive graphical object onto the camera images; and rendering a pointer within the camera images for user interactive control. In the following mode, the input devices of the surgeon console may couple motion into surgical instruments. In the MaM mode, the input devices interact with the GUI and interactive graphical objects. The pointer is manipulated in three dimensions by input devices having at least three degrees of freedom. Interactive graphical objects are related to physical objects in the surgical site or a function thereof and are manipulatable by the input devices.
System And Method For Cavity Generation For Surgical Planning And Initial Placement Of A Bone Prosthesis
Alind Sahay - Sacramento CA; Brent Mittelstadt - Placerville CA; Willie Williamson - Roseville CA; Joel Zuhars - Sacramento CA; Peter Kazanzides - Sacramento CA
Assignee:
Integrated Surgical Systems, Inc. - Sacramento CA
International Classification:
A61F 2/28
US Classification:
623/16
Abstract:
Methods, systems and apparatus for planning the position of a prosthesis in a long bone in orthopaedic surgical procedures, such as hip replacement surgery, knee replacement surgery, long bone osteotomies, and the like. A bone model is generated from a scanned image of a bone, a prosthesis model is selected from a library of prosthesis models and then a cavity model is formed based on the prosthesis model and/or the bone model. The cavity model may then be positioned over the bone model, either interactively by the surgeon or automatically through an algorithm based on clinical parameters, to determine a reasonable location for implantation of a prosthesis within the bone. The cavity model allows the surgeon to optimize placement of the implant within the bone, and it provides important clinical information to the surgeon, such as areas in which press fits are provided, extension areas for possible subsidence and access areas for allowing the surgeon to insert the implant into the cavity.
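As a loose illustration of the automatic-placement path (the voxel representation, scoring terms, and weights below are assumptions, not the disclosed algorithm), a sketch might search candidate positions of the cavity model within the bone model and score each against clinical criteria:

import numpy as np

def place_cavity(bone_voxels, cavity_voxels, candidate_offsets):
    """bone_voxels / cavity_voxels: boolean 3D occupancy grids of equal shape."""
    best_offset, best_score = None, -np.inf
    for dz in candidate_offsets:                 # e.g. translations along the canal
        # np.roll wraps at the edges; a real implementation would shift
        # without wraparound and search over full 6-DOF poses.
        shifted = np.roll(cavity_voxels, dz, axis=2)
        press_fit = np.logical_and(shifted, bone_voxels).sum()   # bone contact proxy
        breach = np.logical_and(shifted, ~bone_voxels).sum()     # cavity outside bone
        score = press_fit - 10.0 * breach        # illustrative weighting
        if score > best_score:
            best_offset, best_score = dz, score
    return best_offset, best_score

bone = np.zeros((8, 8, 8), dtype=bool); bone[2:6, 2:6, :] = True
cavity = np.zeros_like(bone); cavity[3:5, 3:5, 0:4] = True
print(place_cavity(bone, cavity, range(0, 4)))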
Methods And Apparatus For Registering Ct-Scan Data To Multiple Fluoroscopic Images
Andre Pierre Gueziec - Mamaroneck NY; Peter Kazanzides - Sacramento CA; Russell H. Taylor - Severna Park MD
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
A61B 6/00
US Classification:
600/425
Abstract:
A method and system are disclosed for registering two dimensional fluoroscopic images with a three dimensional model of a surgical tissue of interest. The method includes steps of: (a) generating, from CT or MRI data, a three dimensional model of a surgical tissue of interest; (b) obtaining at least two, two dimensional, preferably fluoroscopic, x-ray images representing at least two views of the surgical tissue of interest, the images containing radio-opaque markers for associating an image coordinate system with a surgical (robot) coordinate system; (c) detecting the presence of contours of the surgical tissue of interest in each of the at least two views; (d) deriving bundles of three dimensional lines that pass through the detected contours; and (e) iteratively matching three dimensional points on three dimensional silhouette curves obtained from the three dimensional model with the bundles of three dimensional lines until the three dimensional model is registered within the surgical coordinate system to a predetermined level of accuracy. The step of iteratively matching includes steps of: defining a distance between surfaces of the model and a beam of three dimensional lines that approach the surfaces; and finding a pose of the surfaces that minimizes the distance to the lines using, preferably, a statistically robust method, thereby providing a desired registration between a surgical robot and a preoperative treatment plan.
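The iterative matching step lends itself to a short numerical sketch. The following (Python/numpy) is an ICP-style stand-in that pairs each silhouette point with its closest point on a line from the bundle and fits a rigid transform via Kabsch/SVD; it illustrates the point-to-line minimization, not the patent's statistically robust estimator:

import numpy as np

def closest_point_on_line(p, origin, direction):
    d = direction / np.linalg.norm(direction)
    return origin + np.dot(p - origin, d) * d

def kabsch(src, dst):
    """Rigid transform (R, t) aligning src onto dst (row-vector points)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def register(points, lines, iters=50):
    """points: (N, 3) silhouette points; lines: list of (origin, direction)."""
    pts = points.copy()
    for _ in range(iters):
        targets = np.array([closest_point_on_line(p, o, d)
                            for p, (o, d) in zip(pts, lines)])
        R, t = kabsch(pts, targets)              # pose minimizing point-line distance
        pts = pts @ R.T + t
    return pts

pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
lines = [(p + np.array([0.1, 0.2, 0.0]), np.array([0., 0., 1.])) for p in pts]
print(register(pts, lines, iters=10))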
Calibration System And Method To Align A 3D Virtual Scene And A 3D Real World For A Stereoscopic Head-Mounted Display
A calibration platform may obtain measurements for aligning a real-world coordinate system and a display coordinate system. For example, the calibration platform may display, via an optical see-through head-mounted display (OST-HMD), a three-dimensional virtual object and receive, from a positional tracking device, information that relates to a current pose of a three-dimensional real-world object to be aligned with the three-dimensional virtual object. The calibration platform may record a three-dimensional position of a plurality of points on the three-dimensional real-world object, based on the current pose of the real-world object and an indication that the plurality of points on the real-world object respectively correspond with a plurality of points on the three-dimensional virtual object. Accordingly, based on the obtained measurements, the calibration platform may generate a transformation function to provide a mapping between three-dimensional points in the real-world coordinate system and three-dimensional points in the display coordinate system.
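One way to realize the final transformation function from such point pairs is an ordinary least-squares fit. The sketch below (Python/numpy) assumes an affine model, which the abstract does not commit to; it maps tracked real-world points into display coordinates:

import numpy as np

def fit_affine(world_pts, display_pts):
    """world_pts, display_pts: (N, 3) corresponding points, N >= 4."""
    N = world_pts.shape[0]
    A = np.hstack([world_pts, np.ones((N, 1))])   # homogeneous coordinates
    # Solve A @ M ~= display_pts for the (4, 3) affine matrix M.
    M, *_ = np.linalg.lstsq(A, display_pts, rcond=None)
    def transform(p):
        return np.append(p, 1.0) @ M
    return transform

# Collect pairs as the user aligns the real object with the virtual one,
# then map any tracked real-world point into display coordinates.
world = np.random.rand(8, 3)
display = world * 2.0 + np.array([0.1, -0.2, 0.3])   # synthetic mapping
transform = fit_affine(world, display)
print(transform(np.array([0.5, 0.5, 0.5])))          # ~ [1.1, 0.8, 1.3]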
Overlaying Augmented Reality (AR) Content Within An AR Headset Coupled To A Magnifying Loupe
A computer-implemented method is disclosed for displaying augmented reality (AR) content within an AR device coupled to one or more loupe lenses. The method comprises: obtaining calibration parameters defining a magnified display portion within a display of the AR device, wherein the magnified display portion corresponds to boundaries encompassing the one or more loupe lenses; receiving the AR content for display within the AR device; and rendering the AR content within the display, wherein rendering the AR content comprises: identifying a magnified portion of the AR content to be displayed within the magnified display portion, and rendering the magnified portion of the AR content within the magnified display portion.
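As an illustration of the rendering split this method describes (the region geometry, magnification handling, and draw call below are all hypothetical), a sketch in Python might look like:

from dataclasses import dataclass

@dataclass
class MagnifiedRegion:
    x0: int                # boundaries from the calibration parameters
    y0: int
    x1: int
    y1: int
    magnification: float   # loupe optical power

    def contains(self, x, y):
        return self.x0 <= x < self.x1 and self.y0 <= y < self.y1

def draw(item, x, y):
    print(f"draw {item} at ({x:.1f}, {y:.1f})")   # placeholder renderer

def render(content, region):
    """content: list of (x, y, drawable) items in display coordinates."""
    for x, y, item in content:
        if region.contains(x, y):
            cx = (region.x0 + region.x1) / 2
            cy = (region.y0 + region.y1) / 2
            # Scale positions about the region centre to match the loupe view.
            draw(item, cx + (x - cx) * region.magnification,
                       cy + (y - cy) * region.magnification)
        else:
            draw(item, x, y)

region = MagnifiedRegion(40, 30, 80, 60, magnification=2.5)
render([(50, 40, "suture_overlay"), (10, 10, "status_text")], region)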
YouTube
Miniature Multi Leaf Collimator (JHU)
JHU Senior Design project. PROJECT MR E Miniature Radiation Escutcheon...