Jie Chen - Chappaqua NY, US; Timothy John Breault - Huntersville NC, US; Fernando Cela Diaz - New York NY, US; William Anthony Nobili - Charlotte NC, US; Sandi Setiawan - Charlotte NC, US; Harsh Singhal - Charlotte NC, US; Agus Sudjianto - Charlotte NC, US; Andrea Renee Turner - Rock Hill SC, US; Bradford Timothy Winkelman - Wilmington DE, US
Assignee:
Bank of America Corporation - Charlotte NC
International Classification:
G06Q 40/00
US Classification:
705 38
Abstract:
Embodiments of the present invention relate to methods and apparatuses for determining leading indicators and/or for modeling one or more time series. For example, in some embodiments, a method is provided that includes: (a) receiving first data indicating the value of a total income amount for a plurality of consumers over a period of time; (b) receiving second data indicating the value of a total debt amount for a plurality of consumers over a period of time; (c) selecting a consumer leverage time series that compares the total income amount to the total debt amount over a period of time; (d) modeling the consumer leverage time series based at least partially on the first and second data; (e) determining, using a processor, the value of the cycle component for a particular time; and (f) outputting an indication of the value of the cycle component for the particular time.
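Steps (a) through (f) of the abstract can be sketched in code. This is a minimal illustration under assumed simplifications, not the patent's actual model: the consumer leverage series is taken as total debt divided by total income, the "model" is a centered moving-average trend, and the cycle component is read off as the deviation from that trend. All function and variable names are hypothetical.

```python
# Hypothetical sketch of steps (a)-(f); the moving-average trend model
# is an illustrative stand-in, not the model claimed in the patent.

def cycle_component(income, debt, t, window=5):
    """Return the cycle value at time index t for the leverage series."""
    # (a)/(b) first and second data: per-period totals of income and debt
    # (c) consumer leverage: total debt relative to total income
    leverage = [d / i for d, i in zip(debt, income)]
    # (d) model the series: here, trend = centered moving average
    half = window // 2
    lo, hi = max(0, t - half), min(len(leverage), t + half + 1)
    trend = sum(leverage[lo:hi]) / (hi - lo)
    # (e) cycle component = observed leverage minus modeled trend
    return leverage[t] - trend

income = [100.0, 102.0, 104.0, 103.0, 105.0, 107.0, 108.0]
debt   = [ 80.0,  83.0,  88.0,  90.0,  89.0,  92.0,  96.0]
print(round(cycle_component(income, debt, 3), 4))  # (f) output -> 0.0256
```

A positive cycle value at an index indicates leverage running above its local trend, which is the kind of reading a leading-indicator method would flag.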
Action Recognition With High-Order Interaction Through Spatial-Temporal Object Tracking
- Princeton NJ, US; Asim KADAV - Jersey City NJ, US; Jie CHEN - Bellevue WA, US
Assignee:
NEC LABORATORIES AMERICA, INC - Princeton NJ
International Classification:
G06K 9/00
Abstract:
Aspects of the present disclosure describe systems, methods, and structures that provide action recognition with high-order interaction through spatio-temporal object tracking. Image and object features are organized into tracks, which advantageously facilitates many possible learnable embeddings and intra/inter-track interaction(s). Operationally, our systems, methods, and structures according to the present disclosure employ an efficient high-order interaction model to learn embeddings and intra/inter object track interactions across space and time for AR. An object detector is applied to each frame to locate visual objects. Those objects are linked through time to form object tracks. The object tracks are then organized and combined with the embeddings as the input to our model. The model is trained to generate representative embeddings and discriminative video features through high-order interaction, which is formulated as an efficient matrix operation without iterative processing delay.
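The pipeline the abstract describes (detect objects per frame, link them through time into tracks, compute track interactions as a single matrix operation) can be illustrated with a toy sketch. Everything here is an assumption for illustration: greedy nearest-center matching stands in for the detector/tracker, and a Gram matrix of track features stands in for the high-order interaction model; none of it is the disclosure's actual implementation.

```python
# Illustrative sketch, not the paper's implementation: link per-frame
# object detections into tracks, then form a track feature matrix whose
# pairwise products mimic a "high-order interaction formulated as an
# efficient matrix operation" (no iterative processing).
import math

def link_tracks(frames):
    """frames: list of per-frame lists of (x, y) object centers.
    Returns tracks: one list of centers per tracked object."""
    tracks = [[c] for c in frames[0]]          # start one track per object
    for detections in frames[1:]:
        for det in detections:
            # greedy nearest-track assignment by center distance
            best = min(tracks, key=lambda tr: math.dist(tr[-1], det))
            best.append(det)
    return tracks

def interaction_matrix(feats):
    """feats: list of per-track feature vectors. Returns the Gram matrix
    (all pairwise track interactions in one matrix product)."""
    return [[sum(a * b for a, b in zip(u, v)) for v in feats] for u in feats]

frames = [[(0, 0), (5, 5)], [(0, 1), (5, 6)], [(1, 1), (6, 6)]]
tracks = link_tracks(frames)
# toy per-track features: summed x and y coordinates along each track
feats = [[sum(x for x, _ in tr), sum(y for _, y in tr)] for tr in tracks]
print(interaction_matrix(feats))  # [[5, 50], [50, 545]]
```

In a real system the per-track features would be learned embeddings and the interaction would feed a classifier; the point of the matrix formulation is that all intra/inter-track interactions come out of one product rather than a loop over pairs.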
2009 to 2000 - Associate, Business Law Department
Allen & Overy LLP, New York, NY, 2006 to 2009 - Corporate Associate
World Wildlife Fund, Washington, DC, 2002 to 2003 - Senior Financial Analyst
Education:
University of Michigan Law School, Ann Arbor, MI, 2005 - J.D.
Concord College, Athens, WV, 1999 - B.S. in Finance (Summa Cum Laude)
Center for Computational Biology and Bioinformatics, Columbia University/HHMI, New York, NY, 2010 to Jul 2012 - Research Associate
Institute for Physical Science and Technology, University of Maryland, College Park, MD, 2004 to 2010 - Graduate Research Assistant
Physics Department, Nanjing University, China, 2002 to 2004 - Research Assistant
Physics Department, Nanjing University, China, 2002 to 2003 - Teaching Assistant
Education:
University of Maryland, College Park, MD, 2010 - Ph.D. in Chemical Physics
Nanjing University, 2002 - B.S. in Physics
Skills:
Solid background in physics and math. Ten years of training in chemical physics and biological physics. Proven record in theoretical and computational modeling of biological systems. Excellent skills in Brownian dynamics simulations, Monte Carlo simulations, homology modeling, and bioinformatics. Wide range of programming knowledge: C/C++, Fortran, Perl, and R.
Sep 2011 to May 2012 - Senior Business Analyst
Ernst & Young LLP, New York, NY, Sep 2004 to May 2006 - Internal Auditor, Business Risk Service
Education:
Carnegie Mellon University, Pittsburgh, PA, 2011 - ABD (All But Dissertation), Accounting PhD program
Carnegie Mellon University, Pittsburgh, PA, 2008 - MS in Industrial Administration
Michigan State University, Lansing, MI, 2004 - Master in Accounting and Economics
Fudan University, 2000 - BA in Finance
Skills:
Excel, SAS, STATA, Bloomberg, TEJ, CFA Level I
Isbn (Books And Publications)
Ideology in U. S. Foreign Policy: Case Studies in U. S.-China Policy
Zte (USA) Inc - Mfg Metal Cutting Type Machine Tools; Metalworking Machinery; Whol Communication Equipment; Ret Telephone Equipment and Systems; Whol Electronic Parts/Equipment; Ret Misc Merchandise
33 Wood Ave S, Iselin, NJ 08830 9726718885
Jie Chen Principal
JOY FOOTWEAR INTERNATIONAL INC Whol Footwear
PO Box 520185, Flushing, NY 11352; 218 Front St, Brooklyn, NY 11201; 13403 35 Ave, Flushing, NY 11354; 7188589218
Jie Chen Principal
GLOBAL TAX & BUSINESS CONSULTING, LLC Business Consulting Services
170 Forsyth St #5C, New York, NY 10002
Jie Chen
RED HOUSE ASIAN FUSION, INC
192-03 Un Tpke, Fresh Meadows, NY 11366
135-03A Roosevelt Ave, Flushing, NY 11354
Jie Chen
HENG XIANG CORPORATION
96-03 50 Ave, Corona, NY 11368
Jie Chen Manager
The Wine Cellarage Whol Groceries
890 Garrison Ave, Bronx, NY 10474 7189915700, 7188385071
University Of Virginia Transplant Center 1300 Jefferson Park Ave FL 4, Charlottesville, VA 22903 4349248604 (phone), 4349240017 (fax)
UVA Medical Center Inpatient Transplant Surgery & Urology 1215 Lee St 5 W, Charlottesville, VA 22908 4349242338 (phone), 4349242355 (fax)
Languages:
English
Description:
Ms. Chen works in Charlottesville, VA and 1 other location and specializes in Transplant Surgery. Ms. Chen is affiliated with University Of Virginia Medical Center.
atusik, a professor of electrical engineering and computer science at MIT who leads the Computational Design and Fabrication Group within the Computer Science and Artificial Intelligence Laboratory (CSAIL); Meng Jiang, associate professor at the University of Notre Dame; and senior author Jie Chen,