About Sharc
Why Sharc?
- The Software/Hardware Co-design Lab (Sharc) believes in the power of “1 + 1 > 2”: with twice the effort, we aim for 20 to 200 times the improvement. Isn’t that worth it?
What to do in Sharc?
- We explore interdisciplinary research opportunities: ML-assisted EDA, accelerators for ML, EDA-assisted accelerators, and more.
- But we don’t limit ourselves! We always welcome innovative directions and ideas!
Who comes to Sharc?
- Sharc always welcomes excellent students interested in the joint area of hardware design (FPGA, ASIC, etc.) and machine learning (DNNs, GNNs, etc.). We also have a broad interest in computer architecture, graph computation, and electronic design automation (HLS, etc.).
Research Interests
- Software-hardware Co-design: hardware-efficient machine learning, ML/system co-design
- High-performance Reconfigurable Computing: FPGA, embedded system, edge computing
- Graph Neural Network (GNN) and Graph Computing: GNN for EDA, GNN acceleration
- Electronic Design Automation (EDA): high-level synthesis (HLS), domain-specific HLS
Announcements
- We are NOT offering short-term or remote internships (in principle). Sorry about that!
Prospective Students ★★ Please read this post first ★★
We are always looking for talented and hardworking students to join Sharc Lab. Please contact Dr. Callie Hao if you’re interested. Relevant skills include FPGA design, Verilog/HLS, GNNs, ML, EDA, and compilers.
Recent News
- [2024. 10] [Talk] Callie gave a talk, “Agile Hardware Development: Architectures and Tools”, at the Research Colloquium Lecture Series @ UW ECE. Many thanks to Prof. Ang Li for hosting. [slides]
- [2024. 09] [Paper & Honor] Our paper “HLSFactory: A Framework Empowering High-Level Synthesis Datasets for Machine Learning and Beyond” (Stefan Abi-Karam, Rishov Sarkar, Allison Seigler, Sean Lowe, Zhigang Wei, Hanqiu Chen, Nanditha Rao, Lizy John, Aman Arora, Cong Hao; in collaboration with teams from ASU and UT Austin) is accepted by MLCAD’24 and won the Best Paper Award! [paper] [document] [code]
- [2024. 04] [Paper] Our paper “LightningSimV2: Faster and Scalable Simulation for High-Level Synthesis via Graph Compilation and Optimization” (Rishov Sarkar, Rachel Paul, Cong Hao) is accepted by FCCM’24.
- [2024.03] [Honor] Callie is awarded the NSF CAREER Award. All credit goes to Callie’s brilliant students, collaborators, and colleagues! [link]
- [2024. 03] [Paper] Our paper “ICGMM: CXL-enabled Memory Expansion with Intelligent Caching Using Gaussian Mixture Model” (in collaboration with Samsung and Duke) is accepted by DAC’24.
- [2023. 12] [Talk] Callie gave a keynote talk “Ultra-Low-Latency Graph Neural Networks: Applications and Implementations” at the GTA3 workshop. [slides]
- [2023.09] [Honor] Callie is selected for the Intel® Rising Star Faculty Award (RSA). Thank you to all of Callie’s brilliant students and fantastic collaborators and colleagues! [link] [link]
- [2023.09] [Honor] Our work GNNBuilder, led by Stefan Abi-Karam, is awarded the FPL Community Award at FPL’23! It recognizes major open-source contributions that will affect the FPGA community for years. [paper] [code] [link]
- [2023.07] [Paper] Three papers accepted by ICCAD’23.
- “INR-Arch: A Dataflow Architecture and Compiler for Arbitrary-Order Gradient Computations in Implicit Neural Representation Processing”, Stefan Abi-Karam, Rishov Sarkar, Dejia Xu, Zhiwen Fan, Zhangyang Wang, Cong Hao
- “Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts”, Rishov Sarkar, Hanxue Liang, Zhiwen Fan, Zhangyang Wang, Cong Hao [paper] [code]
- “Rapid-INR: Storage Efficient CPU-free DNN Training Using Implicit Neural Representation”, Hanqiu Chen, Hang Yang, Stephen BR Fitzmeyer, Cong Hao [paper]
- [2023. 07] [Talk] Callie gave an invited talk “Smart Reconfigurable Computing for GNNs and Transformers + Smart HLS tool LightningSim” at the ROAD4NN workshop @ DAC’23. [slides]
- [2023. 05] [Paper] Our paper “GNNBuilder: An Automated Framework for Generic Graph Neural Network Accelerator Generation, Simulation, and Optimization” (Stefan Abi-Karam, Cong Hao) is accepted by FPL’23. [paper] [code]
- [2023.05] [Honor] Our paper LightningSim, led by Rishov Sarkar, is awarded Best Paper Runner-up at FCCM’23! [paper] [code] [link]
- [2023.03] [Paper] Two papers accepted by FCCM’23.
- “LightningSim: Fast and Accurate Trace-Based Simulation for High-Level Synthesis”, Rishov Sarkar, Cong Hao
- “DGNN-Booster: A Generic FPGA Accelerator Framework For Dynamic Graph Neural Network Inference”, Hanqiu Chen, Cong Hao (short paper)
- [2023.02] [Talk] Callie gave an invited talk “Smart Reconfigurable Computing for GNN and Transformer” at NCSU ECE Colloquia. [slides]
- [2023. 01] [Paper] Our paper “PreAxC: Error Distribution Prediction for Approximate Computing Quality Control using Graph Neural Networks” (Lakshmi Sathidevi, Abhinav Sharma, Nan Wu, Xun Jiao, Cong Hao) has been accepted by ISQED’23.
2022
- [2022. 11] [Paper] Our paper “M5: Multi-modal Multi-Task Model Mapping on Multi-FPGA with Accelerator Configuration Search” (Akshay Karkal Kamath, Stefan Abi-Karam, Ashwin Bhat and Cong Hao) is accepted by DATE’23.
- [2022. 10] [Paper] Our paper “FlowGNN: A Dataflow Architecture for Real-Time Workload-Agnostic Graph Neural Network Inference” (Rishov Sarkar, Stefan Abi-Karam, Yuqi He, Lakshmi Sathidevi, Cong Hao) is accepted by HPCA’23. [code] [paper]
- [2022. 10] [Paper] Our paper “Bottleneck Analysis of Dynamic Graph Neural Network Inference on CPU and GPU” (Hanqiu Chen, Yihan Jiang, Yahya AlHinai, Eunjee Na, Cong Hao) is accepted by IISWC’22. [code] [paper]
- [2022. 09] [Paper] Two papers accepted by NeurIPS’22.
- “Unsupervised Learning for Combinatorial Optimization with Principled Objective Design”, Haoyu Peter Wang, Nan Wu, Hang Yang, Cong Hao, Pan Li
- “M3ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design”, Hanxue Liang, Zhiwen Fan, Rishov Sarkar, Ziyu Jiang, Tianlong Chen, Kai Zou, Yu Cheng, Cong Hao, Zhangyang Wang
- [2022. 09] [Paper] One paper, “Data-Model-Circuit Tri-design for Ultra-light Video Intelligence on Edge Devices” (Yimeng Zhang, Akshay Karkal Kamath, Qiucheng Wu, Zhiwen Fan, Wuyang Chen, Zhangyang Wang, Shiyu Chang, Sijia Liu, Cong Hao), is accepted by ASP-DAC’23.
- [2022. 08] [Honor] Our Ph.D. student Rishov Sarkar is awarded the Qualcomm Innovation Fellowship together with Zhiwen Fan (supervised by Prof. Atlas Wang) from UT Austin. There were only 19 awardees from over 100 teams, and Rishov is the only awardee from Georgia Tech! Big congrats and thanks to our collaborators, Zhiwen and Prof. Wang! [medium]
- [2022. 08] [Honor] Our Ph.D. student Rishov Sarkar is awarded the CRNCH Ph.D. Fellowship from Georgia Tech.
- [2022. 07] [Honor + Talk]
- Our Ph.D. student Rishov Sarkar won third place in the “University Demo Best Demonstration” at DAC’22! Big congrats! Rishov delivered a really cool demo for a multi-task vision transformer on FPGA.
- Callie gave a bunch of talks and tutorials (stop bragging!)
- More details can be found here: [medium]
- [2022. 06] [Paper] Our invited paper “Robotic Computing on FPGAs: Current Progress, Research Challenges, and Opportunities” at AICAS’22 is online now. [pdf] Many thanks to our lead author Zishen Wan and collaborators!
- [2022. 06] [Paper] Our paper “RT-DNAS: Real-time Constrained Differentiable Neural Architecture Search for 3D Cardiac Cine MRI Segmentation” has been accepted by MICCAI’22. Many thanks to our collaborators, Qing Lv and Prof. Yiyu Shi.
- [2022. 05] [Paper] Our preprint “FlowGNN: A Dataflow Architecture for Universal Graph Neural Network Inference via Multi-Queue Streaming” is online now. [pdf]
- [2022. 05] [Paper] Our journal paper, “IronMan-Pro: Multi-objective Design Space Exploration in HLS via Reinforcement Learning and Graph Neural Network based Modeling”, has been accepted by IEEE TCAD. Congratulations to my collaborators, Nan Wu and Prof. Yuan Xie.
- [2022. 05] [Paper] Two papers are accepted by ASAP’22.
- “Mask-Net: A Hardware-efficient Object Detection Network with Masked Region Proposals”, Hanqiu Chen, Cong Hao
- “LOSTIN: Logic Optimization via Spatio-Temporal Information with Hybrid Graph Models”, Nan Wu, Jiwon Lee, Yuan Xie, Cong Hao
- [2022. 04] [Talk] Callie gave an invited talk at the Salishan Conference on High Speed Computing, “Deep Neural Network and Accelerator Co-design: Present and Future”. [slides] [YouTube]
- [2022. 04] [Talk] Two talks delivered by Sharc students at DOSSA-4 (Fourth International Workshop on Domain-Specific System Architecture)
- [2022. 04] [Service] Callie serves as the TPC chair of ICCAD’22, the TPC co-chair of DATE’22, and the chair of SRC@ICCAD’22. She also serves on the TPC of FCCM’22 and is its publicity chair.
- [2022. 02] [Paper] Two papers are accepted by DAC’22. Thanks to our collaborators, Nan Wu, Hang Yang, Xinyi Zhang, Prof. Pan Li, Prof. Yuan Xie, Prof. Peipei Zhou, Prof. Alex Jones, and Prof. Jingtong Hu.
- “High-Level Synthesis Performance Prediction using GNNs: Benchmarking, Modeling, and Advancing”, Nan Wu et al. [pdf]
- “H2H: Heterogeneous Model to Heterogeneous System Mapping with Computation and Communication Awareness”, Xinyi Zhang et al.
- [2022. 01] [Honor] Callie was appointed to the Sutterfield Family Early Career Professorship. Thanks to the Sutterfield family and the department! [news]
2021
- [2021. 10] [Paper] Our paper “ScaleHLS: A New Scalable High-Level Synthesis Framework on Multi-Level Intermediate Representation” is accepted by HPCA’22. Thanks to my collaborators, Hanchen Ye, Dr. Stephen Neuendorffer, Prof. Jinjun Xiong, and Prof. Deming Chen. [code] [pdf]
- [2021. 10] [Service] Callie serves on the TPC of DATE’22 and DAC’22 and co-chairs Student Research Competition (SRC) @ ICCAD’22.
- [2021. 10] [Talk] Callie gave an invited talk at Stevens Institute of Technology ECE Seminar: “How Powerful are Graph Neural Networks and Reinforcement Learning in EDA: a Case Study in High-Level Synthesis”. [slides]
- [2021. 09] [Paper] Our work GenNAS, “Generic Neural Architecture Search via Regression”, is accepted by NeurIPS’21 as a Spotlight (<3% acceptance rate). This is the first regression-based, task-agnostic NAS with near-zero training and search cost. Thanks to my awesome collaborators, Yuhong Li, Prof. Pan Li, Prof. Jinjun Xiong, and Prof. Deming Chen. [code] [pdf]
- [2021. 09] [Talk] Callie gave an invited talk at Rutgers EFficient AI (REFAI) Seminar: “How Powerful are Graph Neural Networks and Reinforcement Learning in EDA: a Case Study in High-Level Synthesis”. [YouTube] [slides]
- [2021. 06] [Funding] Our project “Ultra-Light Video Intelligence by Data-Circuit-Model Tri-Design: In-Pixel Filtering, In-Memory Focusing, and In-Loop Optimization” has been funded by the Defense Advanced Research Projects Agency (DARPA). Thanks to my awesome collaborators, Prof. Shimeng Yu, Prof. Shiyu Chang, Prof. Atlas Wang, and Prof. Sijia Liu.
- [2021. 06] [Paper] Our paper “IronMan” wins the best paper award at GLSVLSI’21! Congratulations and thanks to my collaborators, Nan Wu and Prof. Yuan Xie.
- [2021. 06] [Paper] Our paper “WinoCNN: Kernel Sharing Winograd Systolic Array for Efficient Convolutional Neural Network Acceleration on FPGAs” is accepted by ASAP’21. Many thanks to my collaborators, Dr. Yao Chen, Xinheng, and Dr. Deming Chen.
- [2021. 05] [Paper] Our two invited papers are online now.
- “Software/Hardware Co-design for Multi-modal Multi-task Learning in Autonomous System” @ IEEE AICAS’21 [pdf],
- “3U-EdgeAI: Ultra-Low Memory Training, Ultra-Low Bitwidth Quantization, and Ultra-Low Latency Acceleration” @ IEEE GLSVLSI’21 [pdf].
- [2021. 05] [Service] Callie serves on the TPC of ICCD’21, ASAP’21, ICCAD’21, and FCCM’22.
- [2021. 04] [Paper & Award] Our paper “IronMan: GNN-assisted Design Space Exploration in High-Level Synthesis via Reinforcement Learning” is accepted by GLSVLSI’21. It reveals the great potential of GNN and RL in solving EDA problems and won the best paper award. [pdf]
- [2021. 03] [Paper] Our paper “ScaleHLS: Achieving Scalable High-Level Synthesis through MLIR” is accepted by LATTE’21, an ASPLOS workshop on applying programming language and compiler techniques to hardware accelerator generation. ScaleHLS is the first framework to utilize the multi-level intermediate representation (MLIR) for HLS compilation, design space exploration (DSE), and benchmark generation. It scales well to large and complex designs with a hierarchical IR structure and allows direct DSE at the source-code level.
- [2021. 03] [Paper] Our survey paper “Enabling Design Methodologies and Future Trends for Edge AI: Specialization and Co-design” is accepted by IEEE Design & Test magazine. It provides an overview of design methods targeting edge AI, along with discussions and insights on possible cross-layer opportunities. [pdf]
- [2021. 02] [Paper] Our paper “MeLoPPR: Software/Hardware Co-design for Memory-efficient Low-latency Personalized PageRank” is accepted by DAC’21. Many thanks to my collaborators: Dr. Yao Chen and Prof. Pan Li.
2020
- [2020. 11] [Paper] Our paper “Workload-aware approximate computing configuration” is accepted by DATE’21. Congratulations to my collaborators: Dr. Xun Jiao and his team!
- [2020. 10] [Service] Callie serves on the TPC of DAC’21.
- [2020. 10] [Service] Callie serves on the TPC of ICCD’20, DATE’21, and SRC@ICCAD’20.
- [2020. 07] [Award] Our team iSmart won 3rd place in DAC-SDC’20, a real-time FPGA-based object detection competition. [results]
- [2020. 05] [Paper] Our paper “VecQ: minimal loss DNN model compression with vectorized weight quantization” is accepted by IEEE Transactions on Computers (TC). [pdf]
- [2020. 02] [Paper] Our paper “EDD: Efficient Differentiable DNN architecture and implementation co-search for embedded AI solution” is accepted by DAC’20. [pdf] [slides] [presentation]
- [2020. 01] [Paper] Our paper “SkyNet: a hardware-efficient method for object detection and tracking on embedded systems” is accepted by the 2020 Conference on Machine Learning and Systems (SysML).
2019
- [2019. 06] [Award] Our DNN design strategy (bi-directional co-design approach) won the Best Poster Award at ICML’19 Joint Workshop on On-Device Machine Learning & Compact Deep Neural Network Representations (ODML-CDNNR).
- [2019. 06] [Award] Our team won a double championship in DAC-SDC’19! iSmart3 design won 1st place in the FPGA track, and SkyNet design won 1st place in the GPU track. [Github]
- [2019. 02] [Paper] Our paper “FPGA/DNN co-design: an efficient design methodology for IoT intelligence on the edge” is accepted by DAC’19. [paper]
2018
- [2018. 06] [Award] Our iSmart2 team won 3rd place in DAC-SDC’18 in the FPGA track!
2017
- [2017. 12] I joined the ECE department at the University of Illinois Urbana-Champaign (UIUC) as a postdoc and started to work with Prof. Deming Chen.
- [2017. 07] I started my position at Waseda University as an invited researcher with Prof. Takeshi Yoshimura.
- [2017. 07] I defended my dissertation and got my Ph.D. from Waseda University, Japan. Many thanks to my supervisor, Prof. Takeshi Yoshimura.