Google
Principal Software Engineer
Center for Intelligent Information Retrieval Sep 2003 - Dec 2007
Research Assistant
Ampersandbox Jan 2003 - Dec 2006
President
Symantec 1997 - 2002
Senior Software Engineer
Genx Software Sep 1, 1996 - Jun 1, 1997
Programmer
Education:
University of Massachusetts Amherst 2003 - 2007
Master of Science and Doctor of Philosophy, Computer Science
California Polytechnic State University - San Luis Obispo 1998 - 2002
Bachelor of Science
Morro Bay High School
California Polytechnic State University - San Luis Obispo
Uc Santa Barbara
Skills:
C++, C, Python, Algorithms, MapReduce, Information Retrieval, Machine Learning, Distributed Systems, Software Engineering, Software Design, Java
Interests:
Facebook, Web Search, NPR, Imogen Heap, What Does It Feel Like To X, Cycling, Mexican Food, Acoustic Guitars, Mandolin, Deep Learning, In-N-Out Burger (Fast Food Chain), California, Why Is X So Popular, Google, Graphic Design, The New Yorker, Software Engineering, Pig (Software)
Google since Feb 2008
Staff Software Engineer
Center for Intelligent Information Retrieval Sep 2003 - Dec 2007
Research Assistant
Ampersandbox Jan 2003 - Dec 2006
President
Veritas Software 1997 - 2002
Senior Software Engineer
genX Software 1996 - 1997
Programmer
Education:
University of Massachusetts, Amherst 2003 - 2007
University of Massachusetts, Amherst 2003 - 2005
California Polytechnic State University-San Luis Obispo 1998 - 2002
Morro Bay High School
Skills:
C++, C, Python, Algorithms, MapReduce, Information Retrieval
Name / Title: Trevor Strohman, CTO
Company / Classification: Veritas Software Corp, Computer and Software Stores
Phones & Addresses: 708 Fiero Ln STE 5, San Luis Obispo, CA 93401; (805) 782-4400, (805) 782-4340
Us Patents
Predictive Searching And Associated Cache Management
Robert M. Wyman - New York NY, US; Trevor Strohman - Sunnyvale CA, US; Paul Haahr - San Francisco CA, US; Laramie Leavitt - Kirkland WA, US; John Sarapata - New York NY, US
Assignee:
Google Inc. - Mountain View CA
International Classification:
G06F 17/30
US Classification:
707/759, 707/769
Abstract:
A computer system, including instructions stored on a computer-readable medium, may include a query manager configured to manage a query corpus including at least one predictive query, and a document manager configured to receive a plurality of documents from at least one document source, and configured to manage a document corpus including at least one document obtained from the at least one document source. The computer system also may include a predictive result manager configured to associate the at least one document with the at least one predictive query to obtain a predictive search result, and configured to update a predictive cache using the predictive search result, and may include a search engine configured to access the predictive cache to associate a received query with the predictive search result, and configured to provide the predictive search result as a search result of the received query, the search result including the at least one document.
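The abstract above describes an architecture in which predictive queries are matched against incoming documents ahead of time, so that a later live query can be answered from a pre-computed cache. The sketch below is a minimal, hypothetical Python illustration of that flow; the class and method names, and the naive term-matching, are assumptions for illustration, not the patent's actual implementation.

# Minimal sketch of a predictive-search cache (hypothetical names and
# matching logic, not the patent's actual implementation).

class PredictiveCache:
    """Maps a predictive query to the documents pre-associated with it."""

    def __init__(self):
        self._results = {}  # query text -> list of matching documents

    def update(self, query, document):
        self._results.setdefault(query, []).append(document)

    def lookup(self, query):
        return self._results.get(query)


def associate(predictive_queries, document, cache):
    """Predictive result manager: match a newly received document against
    every predictive query and update the cache for each hit."""
    for query in predictive_queries:
        # Naive match: every query term must appear in the document.
        if all(term in document.lower().split() for term in query.lower().split()):
            cache.update(query, document)


def search(query, cache):
    """Search engine: answer a received query from the predictive cache
    when possible; fall back to a (stubbed) full search otherwise."""
    hit = cache.lookup(query)
    return hit if hit is not None else ["<full index search not shown>"]


if __name__ == "__main__":
    cache = PredictiveCache()
    queries = ["earthquake news", "python release"]
    for doc in ["Breaking earthquake news from the coast",
                "New python release announced today"]:
        associate(queries, doc, cache)
    print(search("earthquake news", cache))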
Methods, systems, and apparatus, including computer program products, for serving advertisements responsive to partial queries. In an aspect, a method includes receiving stem bids for word stems, each stem bid being a bid for a corresponding word stem and corresponding to a price an advertiser pays for display of an advertisement targeted to the corresponding word stem, and wherein the targeting to the corresponding word stem is independent of keyword targeting; receiving a query stem from a client device; in response to receiving the query stem: identifying word stems that match the query stem, providing the corresponding stem bids of the matching word stems as bids to an advertisement auction for advertisement slots for displaying advertisements, and receiving selected advertisements that are determined to have won an advertisement slot in the auction; and providing the selected advertisements for display in the advertisement slots on the client device.
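This abstract describes bidding on word stems rather than full keywords, with matching stem bids entered into an ad auction as a partial query is typed. Below is a rough, hypothetical sketch of that matching-and-auction step; the toy stemmer, the bid table, and the highest-bid-wins auction are simplified stand-ins, not the patent's method.

# Hypothetical sketch of stem-bid matching for a partial query; the
# stemmer and auction are simplified stand-ins, not the patent's method.

STEM_BIDS = {
    "run": [("RunningShoesCo", 0.40), ("MarathonApp", 0.25)],
    "bik": [("BikeShop", 0.30)],
}


def stem(word):
    # Toy stemmer: first three characters stand in for a real stemming step.
    return word[:3].lower()


def ads_for_partial_query(partial_query, slots=1):
    """Match the query stem against stem bids and run a simple
    highest-bid-wins auction for the available ad slots."""
    query_stem = stem(partial_query)
    bids = STEM_BIDS.get(query_stem, [])
    # Auction: order candidate ads by bid price and fill the slots.
    winners = sorted(bids, key=lambda b: b[1], reverse=True)[:slots]
    return [advertiser for advertiser, _ in winners]


if __name__ == "__main__":
    print(ads_for_partial_query("runn"))  # partial query typed so far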
Adam Sadovsky - Mountain View CA, US; Paul Haahr - San Francisco CA, US; Trevor Strohman - Sunnyvale CA, US; Per Bjornsson - Sunnyvale CA, US; Jun Xu - Sunnyvale CA, US; Gabriel Schine - San Francisco CA, US; Jay Shrauner - San Francisco CA, US
Assignee:
Google Inc. - Mountain View CA
International Classification:
G06F 7/00
US Classification:
707/737, 707/706, 707/707, 707/711, 707/741
Abstract:
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for evaluating resource selection processes. One method includes receiving test queries and generating a first group of resources corresponding to a first automated resource selection process and generating a second group of resources corresponding to a second automated resource selection process for each query. Another method includes generating a query results table for use in generating the groups of resources. The query results table maps queries to resources matched to the queries, and maps each resource to a respective score for the resource and the query, and one or more index selection signals for the resource.
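The evaluation method above hinges on a query results table that maps each test query to candidate resources, each with a score and index-selection signals, so that two automated selection processes can be compared on the same queries. A small, hypothetical sketch of that comparison (the table contents and the two selection policies are illustrative only):

# Hypothetical sketch of comparing two resource-selection processes over a
# shared query results table; structure and scoring are illustrative only.

# query -> {resource: (score, index_selection_signals)}
QUERY_RESULTS_TABLE = {
    "jaguar speed": {
        "wiki/Jaguar": (0.92, {"clicks": 120, "freshness": 0.3}),
        "cars.example/jaguar": (0.85, {"clicks": 300, "freshness": 0.7}),
    },
}


def select_by_score(query, table, k=1):
    """First selection process: take the top-k resources by score."""
    rows = table.get(query, {})
    return sorted(rows, key=lambda r: rows[r][0], reverse=True)[:k]


def select_by_clicks(query, table, k=1):
    """Second selection process: take the top-k resources by click signal."""
    rows = table.get(query, {})
    return sorted(rows, key=lambda r: rows[r][1]["clicks"], reverse=True)[:k]


def compare(test_queries, table):
    """For each test query, generate a group of resources from each process
    so the two groups can be evaluated side by side."""
    return {
        q: {"process_a": select_by_score(q, table),
            "process_b": select_by_clicks(q, table)}
        for q in test_queries
    }


if __name__ == "__main__":
    print(compare(["jaguar speed"], QUERY_RESULTS_TABLE))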
Methods And Systems For Reducing Latency In Automated Assistant Interactions
- Mountain View CA, US; Rafael Goldfarb - Hadera, IL; Dekel Auster - Tel Aviv, IL; Dan Rasin - Givatayim, IL; Michael Andrew Goodman - Oakland CA, US; Trevor Strohman - Sunnyvale CA, US; Nino Tasca - San Francisco CA, US; Valerie Nygaard - Saratoga CA, US; Jaclyn Konzelmann - Mountain View CA, US
Implementations described herein relate to reducing latency in automated assistant interactions. In some implementations, a client device can receive audio data that captures a spoken utterance of a user. The audio data can be processed to determine an assistant command to be performed by an automated assistant. The assistant command can be processed, using a latency prediction model, to generate a predicted latency to fulfill the assistant command. Further, the client device (or the automated assistant) can determine, based on the predicted latency, whether to audibly render pre-cached content for presentation to the user prior to audibly rendering content that is responsive to the spoken utterance. The pre-cached content can be tailored to the assistant command and audibly rendered for presentation to the user while the content is being obtained, and the content can be audibly rendered for presentation to the user subsequent to the pre-cached content.
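The key decision described above is whether predicted fulfillment latency is high enough to justify playing tailored pre-cached content while the real response is fetched. A minimal, hypothetical decision sketch follows; the latency "model" is a stub lookup and the threshold is an assumed tuning value, not the patent's trained latency prediction model.

# Hypothetical sketch of the latency-based decision; the "model" is a stub
# lookup, not the trained latency prediction model described in the patent.

import time

PRECACHED = {
    "smart_home": "Okay, turning that on now.",
    "web_lookup": "Sure, let me check on that.",
}

# Stub latency prediction model: seconds expected to fulfill each command type.
PREDICTED_LATENCY = {"smart_home": 2.5, "web_lookup": 0.4}

LATENCY_THRESHOLD = 1.0  # seconds; assumed tuning value


def respond(command_type, fulfill):
    """Render pre-cached content first when predicted latency is high,
    then render the responsive content once fulfillment completes."""
    if PREDICTED_LATENCY.get(command_type, 0.0) > LATENCY_THRESHOLD:
        print("assistant:", PRECACHED[command_type])  # tailored filler
    print("assistant:", fulfill())                    # responsive content


if __name__ == "__main__":
    def slow_fulfillment():
        time.sleep(0.1)  # stand-in for a slow smart-home round trip
        return "The living room lights are on."

    respond("smart_home", slow_fulfillment)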
Efficient Streaming Non-Recurrent On-Device End-To-End Model
- Mountain View CA, US; Arun Narayanan - Milpitas CA, US; Rami Botros - Mountain View CA, US; Ehsan Variani - Mountain View CA, US; Cyrill Allauzen - Mountain View CA, US; David Rybach - Aachen, DE; Trevor Strohman - Mountain View CA, US
Assignee:
Google LLC - Mountain View CA
International Classification:
G10L 15/06 G10L 15/02 G10L 15/30 G10L 15/22
Abstract:
An ASR model includes a first encoder configured to receive a sequence of acoustic frames and generate a first higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The ASR model also includes a second encoder configured to receive the first higher order feature representation generated by the first encoder at each of a plurality of output steps and generate a second higher order feature representation for a corresponding first higher order feature representation. The ASR model also includes a decoder configured to receive the second higher order feature representation generated by the second encoder at each of the plurality of output steps and generate a first probability distribution over possible speech recognition hypotheses. The ASR model also includes a language model configured to receive the first probability distribution over possible speech recognition hypotheses and generate a rescored probability distribution.
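The abstract describes a cascaded model: a first (streaming) encoder feeds a second encoder, a decoder produces a first distribution over hypotheses, and a language model rescores it. The sketch below wires up that data flow with toy numpy layers; the dimensions, random weights, and uniform language model are illustrative assumptions, not the actual model.

# Toy numpy sketch of the cascaded-encoder data flow (first encoder ->
# second encoder -> decoder -> LM rescoring). All weights are random and
# illustrative; this is not the actual ASR model.

import numpy as np

rng = np.random.default_rng(0)
FEAT, HID, VOCAB = 80, 64, 32  # acoustic feature dim, hidden dim, vocab size

W_enc1 = rng.standard_normal((FEAT, HID)) * 0.1     # first (causal) encoder
W_enc2 = rng.standard_normal((HID, HID)) * 0.1      # second encoder
W_dec = rng.standard_normal((HID, VOCAB)) * 0.1     # decoder projection
lm_log_probs = np.log(np.full(VOCAB, 1.0 / VOCAB))  # stub language model


def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


def recognize(acoustic_frames, lm_weight=0.3):
    """Run the cascade for each output step and rescore with the LM."""
    h1 = np.tanh(acoustic_frames @ W_enc1)   # first higher-order features
    h2 = np.tanh(h1 @ W_enc2)                # second higher-order features
    first_dist = softmax(h2 @ W_dec)         # first probability distribution
    # LM rescoring: interpolate log-probabilities with the language model.
    rescored = softmax(np.log(first_dist + 1e-9) + lm_weight * lm_log_probs)
    return rescored.argmax(axis=-1)          # best token per output step


if __name__ == "__main__":
    frames = rng.standard_normal((10, FEAT))  # 10 acoustic frames
    print(recognize(frames))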
A computer-implemented method includes receiving audio data that corresponds to an utterance spoken by a user and captured by a user device. The method also includes processing the audio data to determine a candidate transcription that includes a sequence of tokens for the spoken utterance. For each token in the sequence of tokens, the method includes determining a token embedding for the corresponding token, determining an n-gram token embedding for a previous sequence of n-gram tokens, and concatenating the token embedding and the n-gram token embedding to generate a concatenated output for the corresponding token. The method also includes rescoring the candidate transcription for the spoken utterance by processing the concatenated output generated for each corresponding token in the sequence of tokens.
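The rescoring method above concatenates, for each token in a candidate transcription, the token's own embedding with an embedding of the previous n-gram context. A hypothetical numpy sketch of that concatenation step follows; the embedding tables, the averaged n-gram context, and the linear scoring head are random placeholders, not the patent's trained model.

# Hypothetical sketch of per-token / n-gram embedding concatenation for
# rescoring a candidate transcription; embeddings and scorer are random
# placeholders, not the patent's trained model.

import numpy as np

rng = np.random.default_rng(0)
VOCAB, EMB, N = 100, 16, 3  # vocab size, embedding dim, n-gram order

token_emb = rng.standard_normal((VOCAB, EMB)) * 0.1  # token embeddings
ngram_emb = rng.standard_normal((VOCAB, EMB)) * 0.1  # embeddings for n-gram context
scorer = rng.standard_normal(2 * EMB) * 0.1          # toy rescoring head


def rescore(token_ids):
    """Return a rescored score for one candidate transcription."""
    score = 0.0
    for i, tok in enumerate(token_ids):
        e_tok = token_emb[tok]
        # Embed the previous n-gram by averaging its tokens' embeddings.
        context = token_ids[max(0, i - N):i]
        e_ngram = (ngram_emb[context].mean(axis=0)
                   if context else np.zeros(EMB))
        concatenated = np.concatenate([e_tok, e_ngram])  # per-token output
        score += float(concatenated @ scorer)
    return score


if __name__ == "__main__":
    candidate = [5, 17, 42, 8]  # token ids of one candidate transcription
    print(rescore(candidate))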
- Mountain View CA, US; Ruoming Pang - New York NY, US; David Rybach - Mountain View CA, US; Yanzhang He - Palo Alto CA, US; Rohit Prabhavalkar - Mountain View CA, US; Wei Li - Fremont CA, US; Mirkó Visontai - Mountain View CA, US; Qiao Liang - Redwood City CA, US; Trevor Strohman - Sunnyvale CA, US; Yonghui Wu - Fremont CA, US; Ian C. McGraw - Menlo Park CA, US; Chung-Cheng Chiu - Sunnyvale CA, US
International Classification:
G10L 15/16 G10L 15/32 G10L 15/05
Abstract:
Two-pass automatic speech recognition (ASR) models can be used to perform streaming on-device ASR to generate a text representation of an utterance captured in audio data. Various implementations include a first-pass portion of the ASR model used to generate streaming candidate recognition(s) of an utterance captured in audio data. For example, the first-pass portion can include a recurrent neural network transformer (RNN-T) decoder. Various implementations include a second-pass portion of the ASR model used to revise the streaming candidate recognition(s) of the utterance and generate a text representation of the utterance. For example, the second-pass portion can include a listen attend spell (LAS) decoder. Various implementations include a shared encoder shared between the RNN-T decoder and the LAS decoder.
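The two-pass design above shares one encoder between a streaming first-pass decoder and a second-pass decoder that revises the streaming hypothesis. A schematic, hypothetical sketch of that control flow follows; the "decoders" are trivial linear stand-ins for RNN-T and LAS, purely to show where each pass runs, not the actual model.

# Schematic sketch of a two-pass ASR control flow with a shared encoder.
# The "decoders" below are trivial stand-ins for RNN-T and LAS; this is
# not the actual model.

import numpy as np

rng = np.random.default_rng(0)
FEAT, HID, VOCAB = 80, 64, 30

W_shared = rng.standard_normal((FEAT, HID)) * 0.1    # shared encoder
W_stream = rng.standard_normal((HID, VOCAB)) * 0.1   # first pass ("RNN-T")
W_revise = rng.standard_normal((HID, VOCAB)) * 0.1   # second pass ("LAS")


def first_pass(frame):
    """Streaming pass: emit a candidate token as each frame arrives."""
    h = np.tanh(frame @ W_shared)
    return h, int((h @ W_stream).argmax())


def second_pass(encoded_frames):
    """Non-streaming pass: revise the hypothesis using all encoder output."""
    logits = np.stack(encoded_frames) @ W_revise
    return logits.argmax(axis=-1).tolist()


if __name__ == "__main__":
    utterance = rng.standard_normal((6, FEAT))   # 6 acoustic frames
    encoded, streaming_hyp = [], []
    for frame in utterance:                      # first pass, frame by frame
        h, token = first_pass(frame)
        encoded.append(h)
        streaming_hyp.append(token)
    final_hyp = second_pass(encoded)             # second pass over all frames
    print("streaming:", streaming_hyp)
    print("final:    ", final_hyp)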
- Mountain View CA, US; Rami Botros - Mountain View CA, US; Anmol Gulati - Mountain View CA, US; Krzysztof Choromanski - Mountain View CA, US; Ruoming Pang - New York NY, US; Trevor Strohman - Mountain View CA, US; Weiran Wang - Mountain View CA, US; Jiahui Yu - Mountain View CA, US
Assignee:
Google LLC - Mountain View CA
International Classification:
G10L 15/16 G10L 15/22 G10L 15/06
Abstract:
A computer-implemented method includes receiving a sequence of acoustic frames as input to an automatic speech recognition (ASR) model. Here, the ASR model includes a causal encoder and a decoder. The method also includes generating, by the causal encoder, a first higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The method also includes generating, by the decoder, a first probability distribution over possible speech recognition hypotheses. Here, the causal encoder includes a stack of causal encoder layers each including a Recurrent Neural Network (RNN) Attention-Performer module that applies linear attention.
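The abstract's causal encoder layers use a linear-attention (Performer-style) module. The snippet below sketches, in numpy, one simplified causal linear-attention step using a positive feature map and running prefix sums; the feature map and dimensions are illustrative assumptions, not the model's actual kernel.

# Simplified causal linear attention (Performer-style) in numpy: queries and
# keys pass through a positive feature map, and causality is enforced with
# running prefix sums. Feature map and sizes are illustrative only.

import numpy as np

rng = np.random.default_rng(0)
T, D = 8, 16  # sequence length, model dim


def feature_map(x):
    # Positive feature map standing in for the Performer's random features.
    return np.exp(x - x.max(axis=-1, keepdims=True))


def causal_linear_attention(q, k, v):
    """Each position attends only to itself and earlier positions, with
    constant state per step via running sums of outer(phi(k), v) and phi(k)."""
    phi_q, phi_k = feature_map(q), feature_map(k)
    kv_sum = np.zeros((D, D))   # running sum of outer(phi_k_t, v_t)
    k_sum = np.zeros(D)         # running sum of phi_k_t
    out = np.zeros_like(v)
    for t in range(q.shape[0]):
        kv_sum += np.outer(phi_k[t], v[t])
        k_sum += phi_k[t]
        out[t] = (phi_q[t] @ kv_sum) / (phi_q[t] @ k_sum + 1e-6)
    return out


if __name__ == "__main__":
    x = rng.standard_normal((T, D))
    print(causal_linear_attention(x, x, x).shape)  # (8, 16)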