{"id":77,"date":"2021-02-08T23:36:18","date_gmt":"2021-02-08T23:36:18","guid":{"rendered":"http:\/\/adaptivecomputation.com\/?page_id=77"},"modified":"2026-01-16T05:32:36","modified_gmt":"2026-01-16T05:32:36","slug":"technology","status":"publish","type":"page","link":"https:\/\/adaptivecomputation.com\/index.php\/technology\/","title":{"rendered":"Technology"},"content":{"rendered":"\r\n<h2 class=\"wp-block-heading\"><strong>ADC Background<\/strong><\/h2>\r\n\r\n\r\n\r\n<p>Work on EViP technology was initiated by ADC\u2019s founder, Dr. Tuan A Duong, while he was working at NASA\u2019s Jet Propulsion Laboratory, beginning in 1984, in one of three frontier neural network teams in the United States (alongside AT&amp;T Bell Labs and Bellcore).<\/p>\r\n\r\n\r\n\r\n<p>Some of the relevant projects Dr. Duong researched and developed during his time at JPL include:<\/p>\r\n\r\n\r\n\r\n\r\n\r\n<p><em>* Real-Time Mars Landing Site Identification, based on real-time color segmentation and adaptation and supported by a self-evolving neural network architecture, Cascade Error Projection, to survey and identify a\u00a0<strong>safe<\/strong>\u00a0and\u00a0<strong>productive<\/strong> landing site in real time;<\/em><\/p>\r\n\r\n\r\n\r\n<p><em>* A Self-Evolving Neural Network Architecture supervised learning algorithm to identify amino acid building blocks for the Life Detection Mission.<\/em><\/p>\r\n\r\n\r\n\r\n<p><em>* Space Invariant Independent Component Analysis (SPICA) for recovering the original odorant sources from unknown mixtures for the ENose (a multi-element chemical sensor) in an open, unknown environment (Caltech patent).<\/em><\/p>\r\n\r\n\r\n\r\n<p><em>* Introductory Extended Visual Pathway Data Flow, a technology which has now been fully developed at ADC (Caltech patent).<\/em><\/p>\r\n\r\n\r\n\r\n<p><em>* Cognitive Computing Architecture that enables a general-purpose neural processor chip to be equipped with a compiler, making low power, compactness, and
real-time adaptive operation available in a single package.\u00a0 This laid a cornerstone for intelligent perception and recognition in hardware implementation (Caltech patent).<\/em><\/p>\r\n\r\n\r\n\r\n<p><em>* Others.<\/em><\/p>\r\n\r\n\r\n\r\n<p>He holds 12 patents with NASA or Caltech as assignee and 5 patents with ADC as assignee; nine of these cover neural-network-related technology.<\/p>\r\n\r\n\r\n\r\n<p>These projects provided the foundation for the technologies ADC has now brought to fruition.\u00a0 NASA\/JPL-Caltech provided generous support and an excellent environment for this preliminary research.\u00a0 Involvement through licensing and other arrangements continues to be a key to ADC\u2019s success.<\/p>\r\n\r\n\r\n\r\n<h2 class=\"wp-block-heading\"><strong>Our Technology<\/strong><\/h2>\r\n\r\n\r\n\r\n<p>At Adaptive Computation LLC, Dr. Tuan A Duong invented the Extended Visual Pathway (EViP), an unsupervised learning approach that integrates a saccadic eye movement emulator with a bio-inspired visual processing pathway to enable the detection and recognition of generic full-view, partial-view, low-resolution, sketched, or degraded objects in open and ambiguous environments.\u00a0 This basic technology is protected by US and international patents.<\/p>\r\n<p>He also invented a new learning architecture that enables a machine to learn new objects autonomously and additively, in sequence, as the objects arrive and appear at different times. 
In this way, machines can be equipped with cognitive and perceptive capabilities.\u00a0<\/p>\r\n\r\n\r\n\r\n\r\n\r\n<h4 class=\"wp-block-heading\">Extended Visual Pathway (EViP)<\/h4>\r\n\r\n\r\n\r\n<p>EViP consists of a saccadic eye movement emulator, visual pathway filters, and a visual cortex model.<\/p>\r\n\r\n\r\n\r\n<figure><img decoding=\"async\" src=\"http:\/\/adaptivecomputation.com\/wp-content\/uploads\/2021\/02\/EVip3.png\" alt=\"\" \/><\/figure>\r\n<h3><strong>SOFTWARE<\/strong><\/h3>\r\n<h3><strong>Unsupervised Learning (EViP.1)<\/strong><\/h3>\r\n<p>Bio-inspired Extended Visual Pathway (EViP) software integrates a saccadic eye movement emulator with an advanced model of the human visual pathway to enable <strong>real-time, on-line<\/strong> detection and recognition of <strong>single<\/strong> or <strong>multiple<\/strong> objects, given inputs that are:<\/p>\r\n<ul>\r\n<li>partial-view or full-view<\/li>\r\n<li>low resolution or \u201cnoisy\u201d<\/li>\r\n<li>incomplete or \u201ccollage\u201d style<\/li>\r\n<li>sketches of actual objects<\/li>\r\n<\/ul>\r\n<p>and searches for similar objects in uncontrolled environments.<\/p>\r\n<p>Running with 32-bit floating-point computation against 10,000 distractors, EViP outperforms the human visual system, achieving 66% versus 24% correct recognition at rank 1.<\/p>\r\n<p>Its effective bio-inspired features provide a real-time adaptive capability that responds to dynamic change, serving as a short-term memory operation.<\/p>\r\n<p><strong>Face Detection and Recognition in a Single Faces Database using OE Sensing Data<\/strong><\/p>\r\n<p><iframe loading=\"lazy\" title=\"Day\/Night Time Face Recognition (Daytime)\" width=\"750\" height=\"422\" src=\"https:\/\/www.youtube.com\/embed\/-pJHr9NZg-k?start=90&#038;feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; 
picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\r\n<p><strong>Face Detection and Recognition in a Single Faces Database using IR Sensing Data<\/strong><\/p>\r\n<p><iframe loading=\"lazy\" title=\"Day\/Night Time Face Recognition for Smart Home Security (Nighttime)\" width=\"750\" height=\"422\" src=\"https:\/\/www.youtube.com\/embed\/csER7h9Sw08?start=11&#038;feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\r\n<p><strong>Palm Detection, Recognition and Identification<\/strong><\/p>\r\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-410\" src=\"http:\/\/adaptivecomputation.com\/wp-content\/uploads\/2025\/10\/TDpalm-298x300.jpg\" alt=\"\" width=\"298\" height=\"300\" srcset=\"https:\/\/adaptivecomputation.com\/wp-content\/uploads\/2025\/10\/TDpalm-298x300.jpg 298w, https:\/\/adaptivecomputation.com\/wp-content\/uploads\/2025\/10\/TDpalm-1017x1024.jpg 1017w, https:\/\/adaptivecomputation.com\/wp-content\/uploads\/2025\/10\/TDpalm-150x150.jpg 150w, https:\/\/adaptivecomputation.com\/wp-content\/uploads\/2025\/10\/TDpalm-768x773.jpg 768w, https:\/\/adaptivecomputation.com\/wp-content\/uploads\/2025\/10\/TDpalm.jpg 1261w\" sizes=\"auto, (max-width: 298px) 100vw, 298px\" \/><\/p>\r\n<p>Evaluation of palm recognition and identification over 100 runs<\/p>\r\n<table style=\"height: 740px;\" width=\"749\">\r\n<tbody>\r\n<tr>\r\n<td width=\"378\">\r\n<p><b>Database<\/b><\/p>\r\n<p>9009 Palms<\/p>\r\n<\/td>\r\n<td width=\"365\">\r\n<p><b>Subjects<\/b><\/p>\r\n<p>1120 Persons<\/p>\r\n<\/td>\r\n<td width=\"777\">\r\n<p><b>Comments<\/b><\/p>\r\n<\/td>\r\n<\/tr>\r\n<tr>\r\n<td width=\"378\">\r\n<p><b>Confidence<\/b><\/p>\r\n<\/td>\r\n<td 
width=\"365\">\r\n<p>&gt;95%<\/p>\r\n<\/td>\r\n<td width=\"777\">\r\n<p>It can be set higher if needed.<\/p>\r\n<\/td>\r\n<\/tr>\r\n<tr>\r\n<td width=\"378\">\r\n<p><b>TP@100<\/b><\/p>\r\n<\/td>\r\n<td width=\"365\">\r\n<p>60.3 (Updated)<\/p>\r\n<\/td>\r\n<td width=\"777\">\r\n<p>Based on a <strong>single palm per person<\/strong>; performance improves when more palm photos per person are added.<\/p>\r\n<\/td>\r\n<\/tr>\r\n<tr>\r\n<td width=\"378\">\r\n<p><b>FP@100<\/b><\/p>\r\n<\/td>\r\n<td width=\"365\">\r\n<p>0<\/p>\r\n<\/td>\r\n<td width=\"777\">\r\n<p>Zero false positives, so no risk in use<\/p>\r\n<\/td>\r\n<\/tr>\r\n<tr>\r\n<td width=\"378\">\r\n<p><b>Rank 1<\/b><\/p>\r\n<\/td>\r\n<td width=\"365\">\r\n<p>100%<\/p>\r\n<\/td>\r\n<td width=\"777\">\r\n<p>Exact match<\/p>\r\n<\/td>\r\n<\/tr>\r\n<tr>\r\n<td width=\"378\">\r\n<p><b>Rank 2<\/b><\/p>\r\n<\/td>\r\n<td width=\"365\">\r\n<p>100%<\/p>\r\n<\/td>\r\n<td width=\"777\">\r\n<p>Similar match<\/p>\r\n<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<p>&nbsp;<\/p>\r\n<p><strong>Intelligent Search from Assembled Components e.g., Mate Search<\/strong><\/p>\r\n<p><iframe loading=\"lazy\" title=\"Intelligent Search from Assembled Components e.g., Mate Search\" width=\"750\" height=\"563\" src=\"https:\/\/www.youtube.com\/embed\/AYdCcEdEpDY?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\r\n<h3><strong>Self and Dynamic Supervised Learning (EViP.2)<\/strong><\/h3>\r\n<p>Problems with current supervised learning: 1) all training data must be ready before training; 2) a human must remain in the loop during the training phase; 3) it cannot incorporate new objects or updated data into previous knowledge without restarting training from scratch; 4) it is only machine learning; it memorizes with some interpolation capability, but 
no intelligence.<\/p>\r\n<p>The ADC approach enables:<\/p>\r\n<ul>\r\n<li>A dynamic architecture built for the task at hand, hence optimal<\/li>\r\n<li>Self-learning and sequential learning as data arrive<\/li>\r\n<li>New objects and\/or updated data accommodated on top of previously learned knowledge<\/li>\r\n<li>A foundation for machine intelligence<\/li>\r\n<\/ul>\r\n<p>These features enable on-line learning that captures and grows previous knowledge, or initiates and constructs new cognitive knowledge; hence, autonomous intelligence can be extracted.\u00a0 This can be viewed as a long-term memory operation.<\/p>\r\n<p><a href=\"http:\/\/adaptivecomputation.com\/wp-content\/uploads\/2021\/06\/Dynamic-Supervised-Learning-Algorithm-and-Architecture-DSLAA.pdf\">Dynamic Supervised Learning Algorithm and Architecture (DSLAA)<\/a><\/p>\r\n<p><strong>Dynamic Self-Supervised Learning (DSSL)<\/strong><\/p>\r\n<p>Using the short-term-memory-based EViP.1, data on moving vehicles are obtained and used as training data for DSSL.\u00a0 The test results are shown below:<\/p>\r\n<p>&nbsp;<\/p>\r\n<p><iframe loading=\"lazy\" title=\"Evaluation Video for Autonomous Intelligence\" width=\"750\" height=\"422\" src=\"https:\/\/www.youtube.com\/embed\/VMCOuIA_1Cw?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\r\n<h4>Bio-Inspired Sparse Imager (BISI)<\/h4>\r\n<p>Benefits: reduces color images to sparse grayscale with a 61.5x pixel-intensity reduction while maintaining YOLO-x and ResNet-x performance on the BDD100K data set.<\/p>\r\n<h5>Full-color video 7-object detection, classification and tracking from BDD100K UCB.<\/h5>\r\n<p><a href=\"https:\/\/youtu.be\/ngml5WEAFsE\">https:\/\/youtu.be\/ngml5WEAFsE<\/a><\/p>\r\n<h5>Sparse video 7-object detection, 
classification and tracking from the converted BDD100K UCB video.<\/h5>\r\n<p><a href=\"https:\/\/youtu.be\/3uKjAhO6EcA\">https:\/\/youtu.be\/3uKjAhO6EcA<\/a><\/p>\r\n<h5>Sparse-video, short-term-memory-based drone tracking<\/h5>\r\n<p><iframe loading=\"lazy\" title=\"Color to Sparse Video Tracking using ADC Short-Term Memory-Like Approach\" width=\"750\" height=\"422\" src=\"https:\/\/www.youtube.com\/embed\/2e9fVPaA3F0?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\r\n<h4>Complete Autonomous and Adaptive Learning System (CAALS)<\/h4>\r\n<p>The closed loop between short-term and long-term memory operations sets the autonomous intelligence system in motion.<\/p>\r\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone  wp-image-305\" src=\"http:\/\/adaptivecomputation.com\/wp-content\/uploads\/2021\/07\/CAAL-300x168.jpg\" alt=\"\" width=\"429\" height=\"240\" srcset=\"https:\/\/adaptivecomputation.com\/wp-content\/uploads\/2021\/07\/CAAL-300x168.jpg 300w, https:\/\/adaptivecomputation.com\/wp-content\/uploads\/2021\/07\/CAAL-1024x573.jpg 1024w, https:\/\/adaptivecomputation.com\/wp-content\/uploads\/2021\/07\/CAAL-150x84.jpg 150w, https:\/\/adaptivecomputation.com\/wp-content\/uploads\/2021\/07\/CAAL-768x430.jpg 768w, https:\/\/adaptivecomputation.com\/wp-content\/uploads\/2021\/07\/CAAL-1536x859.jpg 1536w, https:\/\/adaptivecomputation.com\/wp-content\/uploads\/2021\/07\/CAAL-1568x877.jpg 1568w, https:\/\/adaptivecomputation.com\/wp-content\/uploads\/2021\/07\/CAAL.jpg 1931w\" sizes=\"auto, (max-width: 429px) 100vw, 429px\" \/><\/p>\r\n\r\n\r\n\r\n<h4 class=\"wp-block-heading\"><a href=\"http:\/\/www.youtube.com\/watch?v=fQO5xm9Drsk\">http:\/\/www.youtube.com\/watch?v=fQO5xm9Drsk<\/a><\/h4>\r\n<h3><strong>HARDWARE<\/strong><\/h3>\r\n<h4>Massive Parallel In-Memory Learning and Processing 
Architecture (MPIMLPA)<\/h4>\r\n<p>Benefits:<\/p>\r\n<ul>\r\n<li>Learning at least 5 orders of magnitude (O(5)) faster for DNNs compared with software<\/li>\r\n<li>Processing improvement of at least O(5)<\/li>\r\n<li>Power consumption reducible to manageable budgets (e.g., less than a watt, depending on submicron feature size)<\/li>\r\n<\/ul>\r\n<h4>Reconfigurable Intelligent Search Engine (RISE)<\/h4>\r\n\r\n\r\n\r\n<p>Reconfigurable Intelligent Search Engine (RISE) is an implementation of EViP via a Real-Time Extraction Engine (ReTEE) and its architecture.<\/p>\r\n<p>Benefits: can process 1000 frames\/sec (each frame 1Kx1K), with ROI included.<\/p>\r\n\r\n\r\n\r\n<h6><img decoding=\"async\" src=\"https:\/\/adaptivecomputation.com\/wp-content\/uploads\/2021\/02\/NN4.png\" alt=\"\" \/>This is an illustration only.<\/h6>\r\n<h3><strong>Low SWaP-C Systems and Technologies<\/strong><\/h3>\r\n<ul>\r\n<li>\r\n<h4><strong>Sparse Imaging&#8211;<\/strong><em>Optimal Input Space<\/em><\/h4>\r\n<\/li>\r\n<\/ul>\r\n<p>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0* <em>Software Conversion &#8211;&gt; Ready<\/em><br \/><br \/>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0* <em>CMOS imager &#8211;&gt; Selectable, but the Direct-To-Phase II AF effort was not funded (working to find a home for this technology)<\/em><\/p>\r\n<ul>\r\n<li>\r\n<h4 style=\"text-align: left;\"><strong>Hybrid In-Memory Processing with 8-bit Precision Weight&#8211;<\/strong><em>Effective Processing Architecture<\/em><\/h4>\r\n<\/li>\r\n<\/ul>\r\n<p>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0* In-Memory Processing Architectures developed at JPL\/NASA since 1992 (<em>Tuan A. Duong, et al., &#8220;Analog 3-D Neuro-processor for Fast Frame Focal Plane Image Processing,&#8221; The Industrial Electronics Handbook, Chap. 73, Ed.-In-Chief J. 
David Irwin, CRC PRESS, 1997.<\/em>)<\/p>\r\n<p>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0* Hybrid architecture in which a current-mode sparse image serves as fully parallel input to the in-memory-processing weight space, enabling high-speed, low-power processing<\/p>\r\n<p>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0* 8-bit precision is investigated to remove the processing variations faced in the analog approach<\/p>\r\n<ul>\r\n<li>\r\n<h4><strong>Dynamic Architecture Software Based on Bio-Inspired Approaches&#8211;<\/strong><em>Mission-Specific Approach (not Blanket DNNs)<\/em><\/h4>\r\n<\/li>\r\n<\/ul>\r\n<p>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 * Unsupervised learning from a single sample or a few samples.\u00a0 It can act as short-term memory to detect, recognize, track, and adapt to dynamically moving objects, and it self-generates training data for long-term memory knowledge<\/p>\r\n<p>\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0* Dynamic Self-Supervised Learning as long-term memory that takes training data from short-term memory to build self-acquired knowledge<\/p>\r\n<p>The feedback loop between them accommodates knowledge of object changes in dynamic environments, supporting the dynamic perception and cognition that the autonomous intelligence architecture requires.<\/p>\r\n<p>Integrating these three cornerstones yields low SWaP-C, autonomous intelligent systems.<\/p>\r\n<p><iframe loading=\"lazy\" title=\"ADC 7-min SWaP-C Presentation\" width=\"750\" height=\"422\" src=\"https:\/\/www.youtube.com\/embed\/eujNcHt5LKQ?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\r\n","protected":false},"excerpt":{"rendered":"<p>ADC Background Work on EViP technology was initiated by ADC\u2019s 
founder, Dr. Tuan A Duong, while he was working at NASA\u2019s Jet Propulsion Laboratory, in one of three frontier neural network teams in the United States (including AT&amp;T Bell Labs and Bellcore) in 1984.\u00a0\u00a0 Some of the relevant projects Dr. Duong researched and developed during&hellip; <a class=\"more-link\" href=\"https:\/\/adaptivecomputation.com\/index.php\/technology\/\">Continue reading <span class=\"screen-reader-text\">Technology<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":99,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-77","page","type-page","status-publish","has-post-thumbnail","hentry","entry"],"_links":{"self":[{"href":"https:\/\/adaptivecomputation.com\/index.php\/wp-json\/wp\/v2\/pages\/77","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/adaptivecomputation.com\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/adaptivecomputation.com\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/adaptivecomputation.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/adaptivecomputation.com\/index.php\/wp-json\/wp\/v2\/comments?post=77"}],"version-history":[{"count":48,"href":"https:\/\/adaptivecomputation.com\/index.php\/wp-json\/wp\/v2\/pages\/77\/revisions"}],"predecessor-version":[{"id":420,"href":"https:\/\/adaptivecomputation.com\/index.php\/wp-json\/wp\/v2\/pages\/77\/revisions\/420"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/adaptivecomputation.com\/index.php\/wp-json\/wp\/v2\/media\/99"}],"wp:attachment":[{"href":"https:\/\/adaptivecomputation.com\/index.php\/wp-json\/wp\/v2\/media?parent=77"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}