Gary W. Tripp - Bainbridge Island WA Michael D. Meadway - Snohomish WA Claude E. Duguay - Seattle WA
Assignee:
Arcessa, Inc. - Seattle WA
International Classification:
G06F 13/00
US Classification:
709202, 709217, 709223, 707104
Abstract:
A search engine utilizes a bottom-up approach to index the content of a network instead of relying on a top-down approach as used by conventional search engines. The network being indexed may be any network, including the global computer network, Internet, and the world wide web. Instead of using a central site including spidering software to recursively search all linked web pages and generate a search index of the Internet, independent distributed components or agents are located at each web site and report meta data about objects at the web site to the central server. A central catalog of object references is compiled on the central site from the meta data reported from each web site. One or more brochure files may also be created and stored on each web site to provide conceptual or non-keyword data about the site, such as demographics and categorization information. This conceptual information is then utilized in constructing the central catalog so that more accurate search results may be generated for search queries applied to the catalog.
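The bottom-up scheme the abstract describes can be sketched as follows. All class names, fields, and the data shapes are hypothetical illustrations, not taken from the patent: a site-local agent reports per-object meta data plus brochure-style conceptual data, and a central catalog merges the reports and uses the conceptual data to narrow keyword searches.

```python
# Hypothetical sketch: agents at each web site report meta data to a
# central site, which compiles a catalog of object references.
from dataclasses import dataclass, field


@dataclass
class ObjectMeta:
    url: str
    keywords: list
    # Brochure-style conceptual (non-keyword) data about the hosting
    # site, e.g. demographics and categorization information.
    brochure: dict = field(default_factory=dict)


class SiteAgent:
    """Runs at a web site; gathers meta data locally instead of being spidered."""

    def __init__(self, site, objects, brochure):
        self.site = site
        self.objects = objects      # {url: [keywords]} -- illustrative shape
        self.brochure = brochure    # site-level conceptual data

    def report(self):
        return [ObjectMeta(url, kws, self.brochure)
                for url, kws in self.objects.items()]


class CentralCatalog:
    """Central site: compiles object references from all agents' reports."""

    def __init__(self):
        self.entries = []

    def ingest(self, report):
        self.entries.extend(report)

    def search(self, term, category=None):
        # Conceptual brochure data refines results beyond keyword matching.
        return [e.url for e in self.entries
                if term in e.keywords
                and (category is None or e.brochure.get("category") == category)]
```

Under this sketch, a query constrained by brochure category returns only sites whose self-reported categorization matches, which is how the conceptual data can sharpen results relative to pure keyword search.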
Michael D. Meadway - Snohomish WA Gary W. Tripp - Bainbridge Island WA
Assignee:
Arcessa, Inc. - Kirkland WA
International Classification:
G06F 13/00
US Classification:
709219, 709203, 709329, 707 10
Abstract:
A service on a computer network which performs centralized searches based on index information transmitted by peer systems to a central site using an agent program running on each peer, and then directs the peer systems to each other for the purpose of retrieving files. If none of the peer systems known to contain the file is online (and the file is therefore not available), the request is placed in a queue of file requests maintained by the central site. When a system containing the requested file connects to the service, the requested file is retrieved from that system and then distributed to the other systems which had requested the file. Files retrieved for systems not currently online are held in a queue until the user connects, or are emailed to the user, usually as an email attachment. Alternatively, when a computer system containing the file connects to the central site, that system sends the file either to the central site or directly to the requesting user via email attachment.
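The queued-retrieval behavior described above can be sketched as below. Everything here is an invented illustration of the mechanism, not the patent's implementation: peers register their index information with the central site, requests for files with no online holder wait in a central queue, and a reconnecting holder triggers delivery of the queued requests.

```python
# Illustrative sketch (all names hypothetical) of centralized search with
# peer-to-peer retrieval and a central queue for offline holders.
from collections import defaultdict


class CentralSite:
    def __init__(self):
        self.index = defaultdict(set)     # file name -> peers known to hold it
        self.online = set()               # peers currently connected
        self.pending = defaultdict(list)  # file name -> queued requesters

    def register(self, peer, files):
        """Agent program on each peer reports its index information."""
        self.online.add(peer)
        for f in files:
            self.index[f].add(peer)

    def disconnect(self, peer):
        self.online.discard(peer)

    def request(self, requester, fname):
        """Direct the requester to an online holder, or queue the request."""
        holders = self.index[fname] & self.online
        if holders:
            return next(iter(holders))
        self.pending[fname].append(requester)
        return None

    def connect(self, peer):
        """A holder reconnects: satisfy queued requests for its files."""
        self.online.add(peer)
        delivered = []
        for fname, holders in self.index.items():
            if peer in holders and self.pending[fname]:
                delivered += [(r, fname) for r in self.pending.pop(fname)]
        return delivered  # e.g. hand off to online delivery or email attachment
```

The returned `(requester, file)` pairs stand in for the abstract's delivery step, where the file is distributed to waiting systems or emailed as an attachment.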
Method For Using Agents To Create A Computer Index Corresponding To The Contents Of Networked Computers
Gary W. Tripp - Bainbridge Island WA, US Michael D. Meadway - Snohomish WA, US Claude E. Duguay - Seattle WA, US
Assignee:
Arcessa, Inc. - Seattle WA
International Classification:
G06F017/30 G06F015/16
US Classification:
709202, 707103 Z, 707102
Abstract:
A search engine utilizes a bottom-up approach to index the content of a network with agent programs running on each host computer instead of relying on a top-down approach as used by conventional search engines. A central catalog of object references is compiled on the central site or sites from the meta data reported from each web site. One or more brochure files may also be created and stored on each web site to provide conceptual or non-keyword data about the site, such as target demographics and categorization information. This conceptual information is then utilized in constructing the central catalog so that more accurate search results may be generated for search queries applied to the catalog.
System For Discrete Parallel Processing Of Queries And Updates
Gary W. Tripp - Bainbridge Island WA, US Michael D. Meadway - Snohomish WA, US
Assignee:
Al Acquisitions, Inc. - Kirkland WA
International Classification:
G06F 13/00
US Classification:
709225, 709217, 709226, 707 3, 707 10
Abstract:
A data-driven discrete parallel processing computing system for searches with a key-ordered list of data objects distributed over a plurality of servers. The invention is a data-driven architecture for distributed segmented databases consisting of lists of objects. The database is divided into segments based on content and distributed over a multiplicity of servers. Updates and queries are data driven and determine the segment and server to which they must be directed, thereby avoiding broadcasting. This is effective for systems such as search engines. Each object in the list of data objects must have a key on which the objects can be sorted relative to each other. Each segment is self-contained and does not rely on a schema. Multiple simultaneous queries, and simultaneous updates and queries, on different segments on different servers result in parallel processing on the database taken as a whole.
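The data-driven routing idea can be sketched as follows, with the segment boundaries and class names invented for illustration: the key of each update or query alone determines the one segment (and hence server) it goes to, so no operation needs to be broadcast, and operations on different segments can proceed in parallel.

```python
# Hedged sketch: a key-ordered list split into key-range segments, one per
# server, with data-driven routing from key to segment (no broadcast).
import bisect


class SegmentedList:
    def __init__(self, boundaries):
        # boundaries: sorted lower-bound keys, one per segment/server.
        self.boundaries = boundaries
        # Each segment is a self-contained key->value store with no schema.
        self.segments = [dict() for _ in boundaries]

    def _route(self, key):
        # The key alone determines the target segment (data-driven routing).
        return bisect.bisect_right(self.boundaries, key) - 1

    def update(self, key, value):
        self.segments[self._route(key)][key] = value

    def query(self, key):
        return self.segments[self._route(key)].get(key)
```

Because `update` and `query` each touch exactly one segment, operations whose keys route to different segments contend for different servers and can run simultaneously, which is the parallelism the abstract describes.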
Index Cards On Network Hosts For Searching, Rating, And Ranking
Gary W. Tripp - Bainbridge Island WA, US Michael D. Meadway - Snohomish WA, US Claude E. Duguay - Seattle WA, US
Assignee:
Al Acquisitions, Inc. - Kirkland WA
International Classification:
G06F 17/00 G06F 7/00
US Classification:
707 10, 707 5, 707100
Abstract:
Instead of using a central site including spidering software to recursively search all linked web pages and generate a search index of the Internet, index card files are created and stored on each web site to provide meta data in the form of conceptual or non-keyword information about the site, such as demographics and categorization information. The meta data is periodically collected and aggregated on a central server. A central catalog of object references is compiled on the central site from the meta data reported from each web site. The information from the index cards is utilized in constructing the central catalog so that more accurate search results may be generated for search queries applied to the catalog, so that the search results may be ranked in meaningful ways, and so that web sites may be more accurately rated.
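One way the index-card meta data could feed into ranking is sketched below. The scoring rule and field names are hypothetical, chosen only to illustrate the abstract's point that card data lets results be ranked beyond raw keyword scores: results whose card categorization matches the query's target category are boosted.

```python
# Minimal hypothetical sketch: rank keyword matches using index-card
# meta data (here, a category boost) gathered from each web site.
def rank(results, query_category):
    """results: list of (url, keyword_score, card), where card is the
    site's index-card meta data, e.g. {'category': ..., 'demographics': ...}."""
    def score(result):
        url, kw_score, card = result
        boost = 2.0 if card.get("category") == query_category else 1.0
        return kw_score * boost
    return sorted(results, key=score, reverse=True)
```

A result with a weaker keyword score but a matching card category can thus outrank a stronger pure-keyword match, which is the kind of meaningful reordering the abstract attributes to the index cards.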
Gary W. Tripp - Bainbridge Island WA, US Michael D. Meadway - Driftwood TX, US
International Classification:
G06F 13/00
US Classification:
709202, 709203, 709219, 707 10
Abstract:
A service on a computer network that performs centralized searches based on index information transmitted by peer systems to a central site using an agent program running on each peer. Peer systems are directed to each other for the purpose of retrieving files. If none of the peer systems known to contain the file is online (and the file is therefore not available), the request is placed in a queue of file requests maintained by the central site. When a system containing the requested file connects to the service, the requested file is retrieved from that system and then distributed to the other systems which had requested the file.