Shankar Thagadur Shivappa

age ~43

from San Diego, CA

Also known as:
  • Shankar T Shivappa
  • Shivappa Shankar Thagadur
  • Shivappa S Thagadur
  • Shivappa Shankar
  • Shankar A

Shankar Shivappa Phones & Addresses

  • 5049 Frink Ave, San Diego, CA 92117
  • Guatay, CA
  • Beaverton, OR
  • Tualatin, OR
  • Issaquah, WA
  • La Jolla, CA

Work

  • Company:
    Digimarc
    since Jul 2010
  • Position:
    Sr. R&D Engineer

Education

  • Degree:
    Ph.D
  • School / High School:
    University of California, San Diego
    2004 to 2010
  • Specialities:
    Signal and Image Processing

Skills

Algorithms • Computer Vision • Digital Signal Processing • Machine Learning • Matlab • Signal Processing • Research

Languages

Hindi • Kannada

Industries

Wireless

Resumes


Staff Engineer at Qualcomm

Location:
5049 Frink Ave, San Diego, CA 92117
Industry:
Wireless
Work:
Digimarc since Jul 2010
Sr. R&D Engineer

Oregon Zoo 2010 - 2013
Zooguide - Conservation crew

Microsoft Research Jun 2009 - Sep 2009
Research Intern

AT&T Labs - Research Jun 2005 - Sep 2005
Research Intern

Indian Institute of Technology Jun 2004 - May 2005
Teaching Assistant
Education:
University of California, San Diego 2004 - 2010
Ph.D., Signal and Image Processing
Indian Institute of Technology, Madras
Master of Technology, Communication Systems
Indian Institute of Technology, Madras
Bachelor of Technology, Electrical Engineering
Skills:
Algorithms
Computer Vision
Digital Signal Processing
Machine Learning
Matlab
Signal Processing
Research
Languages:
Hindi
Kannada

US Patents

  • Audio Localization Using Audio Signal Encoding And Recognition

  • US Patent:
    20120214544, Aug 23, 2012
  • Filed:
    Feb 23, 2011
  • Appl. No.:
    13/033372
  • Inventors:
    Shankar Thagadur Shivappa - Beaverton OR, US
    Tony F. Rodriguez - Portland OR, US
  • International Classification:
    H04R 3/00
    H04M 1/00
    G10L 19/00
    H04R 29/00
  • US Classification:
    455/556.1, 381/92, 381/58, 704/500
  • Abstract:
    A positioning network comprises an array of signal sources that transmit signals with unique characteristics that are detectable in signals captured through a sensor on a mobile device, such as a microphone of a mobile phone handset. Through signal processing of the captured signal, the positioning system distinguishes these characteristics to identify distinct sources and their corresponding coordinates. A position calculator takes these coordinates together with other attributes derived from the received signals from distinct sources, such as time of arrival or signal strength, to calculate coordinates of the mobile device. A layered protocol is used to introduce distinguishing characteristics in the source signals. This approach enables the use of low cost components to integrate a positioning network on equipment used for other functions, such as audio playback equipment at shopping malls and other venues where location based services are desired.
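
    The abstract above reduces to a multilateration step: once the identified sources' coordinates and time-of-arrival-derived ranges are known, the device position is the point that best fits those ranges. Purely as an illustrative sketch (not the patented method; the function name estimate_position and the Gauss-Newton choice are assumptions), that fit might look like:

    ```python
    # Hedged sketch: range-based positioning from known source coordinates,
    # e.g. ranges obtained as time-of-arrival times the speed of sound.
    import numpy as np

    def estimate_position(source_xy, ranges, iters=20):
        """source_xy: (N, 2) known source coordinates; ranges: (N,) measured distances."""
        pos = source_xy.mean(axis=0)            # start from the centroid of the sources
        for _ in range(iters):
            diff = pos - source_xy              # (N, 2) vectors from each source to the guess
            dist = np.linalg.norm(diff, axis=1)
            residual = dist - ranges            # predicted range minus measured range
            jacobian = diff / dist[:, None]     # d(dist_i)/d(pos)
            step, *_ = np.linalg.lstsq(jacobian, residual, rcond=None)
            pos = pos - step                    # Gauss-Newton update
        return pos

    # Example: three loudspeakers at known coordinates.
    sources = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0]])
    ranges = np.linalg.norm(sources - np.array([4.0, 3.0]), axis=1)
    print(estimate_position(sources, ranges))   # approximately [4. 3.]
    ```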
  • Mobile Device Indoor Navigation

  • US Patent:
    20120214515, Aug 23, 2012
  • Filed:
    Aug 1, 2011
  • Appl. No.:
    13/195715
  • Inventors:
    Bruce L. Davis - Lake Oswego OR, US
    Tony F. Rodriguez - Portland OR, US
    Shankar Thagadur Shivappa - Beaverton OR, US
  • International Classification:
    H04W 4/02
  • US Classification:
    455/456.3, 455/456.6
  • Abstract:
    A method for indoor navigation in a venue derives positioning of a mobile device based on sounds captured by the microphone of the mobile device from the ambient environment. It is particularly suited to operate on smartphones, where the sounds are captured using a microphone that captures sounds in the frequency range of human hearing. The method determines a position of the mobile device in the venue based on identification of the audio signal, monitors the position of the mobile device, and generates a position based alert on an output device of the mobile device when the position of the mobile device is within a pre-determined position associated with the position based alert.
  • Directional Audio Generation With Multiple Arrangements Of Sound Sources

  • US Patent:
    20220386059, Dec 1, 2022
  • Filed:
    May 27, 2021
  • Appl. No.:
    17/332813
  • Inventors:
    - San Diego CA, US
    Shankar THAGADUR SHIVAPPA - San Diego CA, US
  • International Classification:
    H04S 7/00
    H04R 1/22
  • Abstract:
    A device includes a memory configured to store instructions. The device also includes a processor configured to execute the instructions to obtain spatial audio data representing audio from one or more sound sources. The processor is also configured to execute the instructions to generate first directional audio data based on the spatial audio data. The first directional audio data corresponds to a first arrangement of the one or more sound sources relative to an audio output device. The processor is further configured to generate second directional audio data based on the spatial audio data. The second directional audio data corresponds to a second arrangement of the one or more sound sources relative to the audio output device. The second arrangement is distinct from the first arrangement. The processor is also configured to generate an output stream based on the first directional audio data and the second directional audio data.
  • Signalling Of Audio Effect Metadata In A Bitstream

  • US Patent:
    20220386060, Dec 1, 2022
  • Filed:
    Oct 29, 2020
  • Appl. No.:
    17/755578
  • Inventors:
    - San Diego CA, US
    Shankar THAGADUR SHIVAPPA - San Diego CA, US
    Jason FILOS - San Diego CA, US
    Siddhartha Goutham SWAMINATHAN - San Diego CA, US
    Ferdinando OLIVIERI - San Diego CA, US
  • International Classification:
    H04S 7/00
  • Abstract:
    Methods, systems, computer-readable media, and apparatuses for manipulating a soundfield are presented. Some configurations include receiving a bitstream that comprises metadata and a soundfield description; parsing the metadata to obtain an effect identifier and at least one effect parameter value; and applying, to the soundfield description, an effect identified by the effect identifier. The applying may include using the at least one effect parameter value to apply the identified effect to the soundfield description.
  • Multi-Mode Audio Recognition And Auxiliary Data Encoding And Decoding

  • US Patent:
    20220335959, Oct 20, 2022
  • Filed:
    Nov 22, 2021
  • Appl. No.:
    17/532884
  • Inventors:
    - Beaverton OR, US
    Brett A. Bradley - Portland OR, US
    Yang Bai - Beaverton OR, US
    Shankar Thagadur Shivappa - San Diego CA, US
    Ajith Kamath - Beaverton OR, US
    Aparna Gurijala - Port Coquitlam, CA
    Tomas Filler - Tigard OR, US
    David A. Cushman - McMinnville OR, US
  • International Classification:
    G10L 19/018
    G10L 19/02
  • Abstract:
    Audio signal processing enhances audio watermark embedding and detecting processes. Audio signal processes include audio classification and adapting watermark embedding and detecting based on classification. Advances in audio watermark design include adaptive watermark signal structure data protocols, perceptual models, and insertion methods. Perceptual and robustness evaluation is integrated into audio watermark embedding to optimize audio quality relative to the original signal, and to optimize robustness or data capacity. These methods are applied to audio segments in audio embedder and detector configurations to support real time operation. Feature extraction and matching are also used to adapt audio watermark embedding and detecting.
  • Systems And Methods Of Handling Speech Audio Stream Interruptions

  • US Patent:
    20220246133, Aug 4, 2022
  • Filed:
    Feb 3, 2021
  • Appl. No.:
    17/166250
  • Inventors:
    - San Diego CA, US
    Reid WESTBURG - Del Mar CA, US
    Shankar THAGADUR SHIVAPPA - San Diego CA, US
  • International Classification:
    G10L 13/08
    G10L 13/027
    H04N 7/15
    H04N 7/14
  • Abstract:
    A device for communication includes one or more processors configured to receive, during an online meeting, a speech audio stream representing speech of a first user. The one or more processors are also configured to receive a text stream representing the speech of the first user. The one or more processors are further configured to selectively generate an output based on the text stream in response to an interruption in the speech audio stream.
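
    The selective text fallback described above is, at its core, a stream-selection policy. As a hedged sketch only (select_output and AUDIO_GAP_S are invented names, not from the patent), it could be modeled as:

    ```python
    # Hedged sketch: choose between the speech audio stream and the parallel
    # text stream, falling back to text when the audio is interrupted.
    from typing import Optional, Tuple

    AUDIO_GAP_S = 0.5   # assumed gap length that counts as an interruption

    def select_output(audio_packet: Optional[bytes], seconds_since_audio: float,
                      caption: Optional[str]) -> Tuple[str, object]:
        """Return ('audio', packet) normally, or ('text', caption) during an interruption."""
        if audio_packet is not None:
            return ("audio", audio_packet)                # speech audio stream is healthy
        if seconds_since_audio >= AUDIO_GAP_S and caption is not None:
            return ("text", caption)                      # fall back to the parallel text stream
        return ("silence", None)                          # brief gap: keep waiting

    print(select_output(b"\x00\x01", 0.0, "hello"))       # ('audio', b'\x00\x01')
    print(select_output(None, 0.8, "hello"))              # ('text', 'hello')
    ```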
  • Spatial Audio Wind Noise Detection

  • US Patent:
    20220199100, Jun 23, 2022
  • Filed:
    Dec 21, 2020
  • Appl. No.:
    17/128544
  • Inventors:
    - San Diego CA, US
    Hannes PESSENTHEINER - Graz, AT
    Shuhua ZHANG - San Diego CA, US
    Sanghyun CHI - San Diego CA, US
    Erik VISSER - San Diego CA, US
    Shankar THAGADUR SHIVAPPA - San Diego CA, US
  • International Classification:
    G10L 21/0232
    H04R 1/40
    H04R 3/00
    H04S 7/00
    H04S 3/00
    G10L 25/51
    G10L 21/0324
  • Abstract:
    A device includes one or more processors configured to obtain audio signals representing sound captured by at least three microphones and determine spatial audio data based on the audio signals. The one or more processors are further configured to determine a metric indicative of wind noise in the audio signals. The metric is based on a comparison of a first value and a second value. The first value corresponds to an aggregate signal based on the spatial audio data, and the second value corresponds to a differential signal based on the spatial audio data.
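
    The metric in this abstract compares an aggregate signal against a differential signal derived from the spatial audio data. A toy illustration of why that comparison separates wind from coherent sound (not Qualcomm's algorithm; the two-microphone simplification and the name wind_noise_metric are assumptions):

    ```python
    # Hedged sketch: coherent acoustic sound adds up in the sum (aggregate) channel,
    # while uncorrelated wind turbulence leaves the sum and difference energies
    # comparable, so a larger ratio suggests wind.
    import numpy as np

    def wind_noise_metric(mic_a, mic_b, eps=1e-12):
        aggregate = mic_a + mic_b            # "first value": aggregate signal
        differential = mic_a - mic_b         # "second value": differential signal
        return np.sum(differential**2) / (np.sum(aggregate**2) + eps)

    rng = np.random.default_rng(0)
    speech = rng.standard_normal(16000)                  # coherent source seen by both mics
    wind_a, wind_b = rng.standard_normal((2, 16000))     # uncorrelated turbulence per mic

    print(wind_noise_metric(speech, speech))                     # ~0: no wind
    print(wind_noise_metric(speech + wind_a, speech + wind_b))   # noticeably larger: wind present
    ```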
  • Spatial Audio Zoom

  • US Patent:
    20220201395, Jun 23, 2022
  • Filed:
    Dec 18, 2020
  • Appl. No.:
    17/127421
  • Inventors:
    - San Diego CA, US
    Vasudev NAYAK - San Diego CA, US
    Shankar THAGADUR SHIVAPPA - San Diego CA, US
    Isaac Garcia MUNOZ - San Diego CA, US
    Sanghyun CHI - San Diego CA, US
    Erik VISSER - San Diego CA, US
  • International Classification:
    H04R 3/00
    H03G 3/30
    G02B 27/00
    G06K 9/00
  • Abstract:
    In an aspect, a lens is zoomed in to create a zoomed lens. Lens data associated with the lens includes a direction of the lens relative to an object in a field-of-view of the zoomed lens and a magnification of the object resulting from the zoomed lens. An array of microphones captures audio signals including audio produced by the object and interference produced by other objects. The audio signals are processed to identify a directional component associated with the audio produced by the object and three orthogonal components associated with the interference produced by the other objects. Stereo beamforming is used to increase a magnitude of the directional component (relative to the interference) while retaining a binaural nature of the audio signals. The increase in magnitude of the directional component is based on an amount of the magnification provided by the zoomed lens to the object.
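
    The zoom-to-gain coupling described above can be illustrated with a very small sketch (not the patented implementation; audio_zoom, the 12 dB cap, and the simple target/residual split are assumptions): the beamformed target component is boosted in proportion to the lens magnification and mixed back with left/right residuals so binaural cues survive.

    ```python
    # Hedged sketch: boost the beamformed target by a gain tied to the zoom
    # magnification, then mix it with the per-ear residual (interference) signals.
    import numpy as np

    def audio_zoom(target, residual_left, residual_right, magnification, max_gain_db=12.0):
        """target: beamformed signal toward the zoomed object; residual_*: remaining scene."""
        gain_db = min(20.0 * np.log10(max(magnification, 1.0)), max_gain_db)
        gain = 10.0 ** (gain_db / 20.0)
        left = gain * target + residual_left      # boost the object the lens is zoomed on
        right = gain * target + residual_right    # residuals stay distinct, preserving binaural cues
        return np.stack([left, right])

    # Example: 2x optical zoom roughly doubles the target's amplitude (+6 dB).
    t = np.ones(4)
    silence = np.zeros(4)
    print(audio_zoom(t, silence, silence, magnification=2.0)[0][:2])   # ~[2. 2.]
    ```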

Googleplus


Shankar Shivappa

Lived:
Portland, OR
Work:
Digimarc - Sr. R&D Engineer
Education:
University of California, San Diego
