Vinod Jayaraman - San Francisco CA, US; Hitoshi Takanashi - Fremont CA, US
Assignee:
NTT Multimedia Communications Laboratories, Inc. - Palo Alto CA
International Classification:
H04Q 7/20
US Classification:
455/456.1, 455/456.2, 455/456.3, 455/414.2, 455/457
Abstract:
A system includes a mobile unit, a client and a remote server. The mobile unit is adapted to acquire information about a region near the mobile unit, determine a location of the mobile unit and transmit an indication of the information and location. The remote server is adapted to communicate with the mobile unit to receive the indication from the mobile unit and communicate at least some of the information to the client.
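As a loose illustration of the flow this abstract describes, the sketch below shows a mobile unit packaging region information with its location and a server forwarding part of it to a client. The field names, JSON encoding, and filtering policy are all assumptions, not details from the patent.

```python
import json

def mobile_unit_report(region_info: dict, lat: float, lon: float) -> str:
    # The "indication" transmitted by the mobile unit: region information
    # bundled with the unit's determined location.
    return json.dumps({"info": region_info, "location": {"lat": lat, "lon": lon}})

def server_relay(indication: str, fields=("summary",)) -> dict:
    # The remote server receives the indication and communicates
    # "at least some of the information" to the client.
    msg = json.loads(indication)
    return {k: v for k, v in msg["info"].items() if k in fields}

report = mobile_unit_report({"summary": "traffic heavy", "raw": [1, 2, 3]}, 37.77, -122.42)
client_view = server_relay(report)
```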
Mechanisms are provided for efficiently improving a dictionary used for data deduplication. Dictionaries hold hash key and location pairs for deduplicated data. Strong hash keys prevent collisions, but weak hash keys are more efficient in computation and storage. Mechanisms are provided to use both a weak hash key and a strong hash key: weak hash keys and corresponding location pairs are stored in an improved dictionary, while strong hash keys are maintained with the deduplicated data itself. The need for uniqueness from a strong hash function is balanced against the deduplication dictionary space savings from a weak hash function.
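The two-key scheme above can be sketched as follows. The concrete hash choices (CRC32 as the weak hash, SHA-256 as the strong one) and the in-memory structures are illustrative assumptions, not the patented design.

```python
import hashlib
import zlib

class DedupStore:
    def __init__(self):
        self.dictionary = {}   # weak hash -> location (the compact dictionary)
        self.datastore = []    # (strong hash, chunk) kept with the data itself

    def write(self, chunk: bytes) -> int:
        weak = zlib.crc32(chunk)
        strong = hashlib.sha256(chunk).digest()
        loc = self.dictionary.get(weak)
        if loc is not None:
            # The strong hash stored next to the data resolves weak-hash
            # collisions without bloating the dictionary.
            stored_strong, _ = self.datastore[loc]
            if stored_strong == strong:
                return loc               # true duplicate: store nothing
        loc = len(self.datastore)
        self.datastore.append((strong, chunk))
        self.dictionary[weak] = loc
        return loc

store = DedupStore()
first = store.write(b"hello world")
second = store.write(b"hello world")   # deduplicated against the first write
```

Only the 4-byte weak key lives in the dictionary; the 32-byte strong key rides along with the stored chunk, which is where the space savings come from.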
Vinod Jayaraman - San Francisco CA, US; Goutham Rao - Los Altos CA, US; Ratna Manoj Bolla - Hyderabad, IN
Assignee:
Dell Products L.P. - Round Rock TX
International Classification:
G06F 17/00
US Classification:
707/692, 707/802, 707/821, 718/1, 711/161
Abstract:
Techniques and mechanisms are provided to instantly clone active files, including active optimized files. When a new instance of an active file is created, a new stub is generated in the user namespace and a block map file is cloned. The block map file includes the same offsets and location pointers that existed in the original block map file. No user file data needs to be copied. If the cloned file is later modified, the behavior can be the same as what happens when a deduplicated file is modified.
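A minimal sketch of the clone operation, assuming a block map is a list of (file offset, data location) pairs; the names and dict layout are illustrative only.

```python
# Deduplicated data blocks shared by every file that references them.
shared_data = {0: b"chunk-A", 1: b"chunk-B"}

# A block map records (file offset, data location) pairs into shared storage.
original = {
    "stub": "/ns/report.doc",
    "block_map": [(0, 0), (7, 1)],
}

def clone_file(src, new_stub):
    # A new stub in the user namespace plus a copy of the block map: the
    # clone points at the same offsets and locations as the original,
    # so no user file data is copied.
    return {"stub": new_stub, "block_map": list(src["block_map"])}

clone = clone_file(original, "/ns/report-copy.doc")
```

Because only the map is duplicated, the clone is constant-time in the amount of user data, and a later modification of the clone diverges the maps just as modifying a deduplicated file would.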
Mechanisms are provided for efficiently detecting segments for deduplication. Data is analyzed to determine file types and file components. File types such as images may have optimal data segment boundaries set at the file boundaries. Other file types such as container files are delayered to extract objects to set optimal data segment boundaries based on file type or based on the boundaries of the individual objects. Storage of unnecessary information is minimized in a deduplication dictionary while allowing for effective deduplication.
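The type-aware boundary selection might be sketched like this; the file-type labels, the fallback segment size, and the return shape are assumptions standing in for the real policy.

```python
def segments_for_file(file_type: str, size: int, members=None):
    """Return (offset, length) segment ranges chosen by file type.

    members: for container files, (offset, length) ranges of the
    objects extracted by delayering.
    """
    if file_type == "image":
        return [(0, size)]                 # boundary set at the file boundary
    if file_type == "container" and members:
        return list(members)               # one segment per delayered object
    step = 4096                            # fixed-size fallback (an assumption)
    return [(off, min(step, size - off)) for off in range(0, size, step)]
```

Aligning segments with object boundaries means a repeated image inside two different archives still hashes to the same segment, while avoiding dictionary entries for boundaries that can never match.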
Methods And Apparatus For Efficient Compression And Deduplication
Goutham Rao - San Jose CA, US; Murali Bashyam - Fremont CA, US; Vinod Jayaraman - San Francisco CA, US
Assignee:
Dell Products L.P. - Round Rock TX
International Classification:
G06F 17/30
US Classification:
707/693, 707/692, 707/999.101
Abstract:
Mechanisms are provided for performing efficient compression and deduplication of data segments. Compression algorithms are learning algorithms that perform better when data segments are large. Deduplication algorithms, however, perform better when data segments are small, as more duplicate small segments are likely to exist. As an optimizer is processing and storing data segments, the optimizer applies the same compression context to compress multiple individual deduplicated data segments as though they are one segment. By compressing deduplicated data segments together within the same context, data reduction can be improved for both deduplication and compression. Mechanisms are applied to compensate for possible performance degradation.
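The benefit of a shared compression context can be demonstrated with standard zlib streams; this illustrates the general idea, not the optimizer's actual algorithm.

```python
import zlib

# Two deduplicated segments with overlapping content; the repetition makes
# the effect of a shared compression dictionary easy to see.
segments = [b"the quick brown fox " * 20, b"the quick brown fox " * 20]

# Separate contexts: each segment is compressed independently, so each
# stream must relearn the same patterns from scratch.
separate = sum(len(zlib.compress(s)) for s in segments)

# Shared context: one compressor spans all segments, as if they were one,
# so later segments can back-reference patterns from earlier ones.
comp = zlib.compressobj()
shared = sum(len(comp.compress(s)) for s in segments) + len(comp.flush())
```

The trade-off the abstract hints at is that a shared stream must be decompressed from its start to reach a late segment, which is the kind of performance degradation the compensating mechanisms would address.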
Mike Wilson - Pleasanton CA, US; Parthiban Munusamy - Fremont CA, US; Carter George - Portland OR, US; Murli Bashyam - Fremont CA, US; Vinod Jayaraman - San Francisco CA, US; Goutham Rao - San Jose CA, US
Assignee:
Dell Products L.P. - Round Rock TX
International Classification:
G06F 17/00
US Classification:
707/692, 707/610, 707/640, 715/234, 715/242
Abstract:
A system provides file aware block level deduplication in a system having multiple clients connected to a storage subsystem over a network such as an Internet Protocol (IP) network. The system includes client components and storage subsystem components. Client components include a walker that traverses the namespace looking for files that meet the criteria for optimization, a file system daemon that rehydrates the files, and a filter driver that watches all operations going to the file system. Storage subsystem components include an optimizer resident on the nodes of the storage subsystem. The optimizer can use idle processor cycles to perform optimization. Sub-file compression can be performed at the storage subsystem.
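The walker component's role can be sketched as a namespace traversal with a selection policy; the size threshold below is a stand-in assumption for the real optimization criteria.

```python
import os

def walk_for_candidates(root: str, min_size: int = 4096):
    """Traverse the namespace under root, collecting files that meet the
    (assumed) criteria for optimization."""
    candidates = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getsize(path) >= min_size:
                    candidates.append(path)
            except OSError:
                continue              # file vanished mid-walk; skip it
    return candidates
```

In the described system this list would be handed to the storage-side optimizer, which does the actual work during idle processor cycles while the filter driver keeps watching live file-system operations.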
Vinod Jayaraman - San Francisco CA, US; Goutham Rao - Los Altos CA, US
Assignee:
Dell Products L.P. - Round Rock TX
International Classification:
G06F 17/00
US Classification:
707/692, 707/687, 707/690, 707/698
Abstract:
Mechanisms are provided for accelerated data deduplication. A data stream is received at an input interface and maintained in memory. Chunk boundaries are detected and chunk fingerprints are calculated using a deduplication accelerator while a processor maintains a state machine. A deduplication dictionary is accessed using a chunk fingerprint to determine if the associated data chunk has previously been written to persistent memory. If the data chunk has previously been written, reference counts may be updated but the data chunk need not be stored again. Otherwise, datastore suitcases, filemaps, and the deduplication dictionary may be updated to reflect storage of the data chunk. Direct memory access (DMA) addresses are provided to directly transfer a chunk to an output interface as needed.
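The boundary-detection and dictionary-lookup steps might look like the following sketch; the rolling boundary function and SHA-1 fingerprint are assumptions standing in for the accelerator's actual logic.

```python
import hashlib

def chunk_stream(data: bytes, mask: int = 0x3F):
    """Split data at content-defined boundaries (rolling value is a toy)."""
    chunks, start, rolling = [], 0, 0
    for i, byte in enumerate(data):
        rolling = (rolling * 31 + byte) & 0xFFFFFFFF
        if rolling & mask == mask:         # boundary condition hit
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

dictionary = {}   # fingerprint -> location in the persistent store
suitcase = []     # "datastore suitcase" records: (refcount, chunk)

def ingest(chunk: bytes) -> int:
    fp = hashlib.sha1(chunk).digest()      # chunk fingerprint (an assumption)
    loc = dictionary.get(fp)
    if loc is not None:
        count, data = suitcase[loc]
        suitcase[loc] = (count + 1, data)  # duplicate: bump refcount, no store
        return loc
    loc = len(suitcase)
    suitcase.append((1, chunk))
    dictionary[fp] = loc
    return loc
```

In the hardware-assisted version the boundary scan and fingerprinting run on the accelerator, the processor only drives the state machine around lookups, and DMA moves the chunk bytes without copying through the CPU.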
Portworx
Co-Founder and Chief Architect
Dell Jul 2010 - Jun 2014
Senior Principal Engineer and Architect
Ocarina Networks Mar 2008 - Aug 2010
Principal Engineer and Architect
F5 Networks Mar 2005 - Feb 2008
Principal Software Engineer
NTT MCL Jun 2003 - Mar 2005
Software Engineer
Education:
Sorbonne Université 1997 - 1998
Master of Science, Computer Science
Sorbonne Université 1993 - 1997
Bachelor of Science, Computer Science
Modern School, Vasant Vihar, New Delhi
Skills:
Linux, Distributed Systems, TCP/IP, Cloud Computing, Virtualization, Storage, Software Development, Networking, Unix, High Availability, System Architecture, Deduplication, Shell Scripting, Software Engineering, C, Network Security, Internet Protocol Suite, Embedded Systems, Operating Systems, Agile Methodologies, WAN Optimisation, Open Source, Device Drivers, Storage Area Networks, Architectures, Architecture, SAN