- The Software/Hardware Co-design Lab (Sharc) believes in the power of “1 + 1 > 2”: twice the effort, 20 to 200 times the improvement. Isn’t it worth it?
What do we do at Sharc?
- We explore interdisciplinary research opportunities: ML-assisted EDA, accelerators for ML, EDA-assisted accelerators, and more.
- But we don’t limit ourselves! We always welcome innovative directions and ideas!
Who joins Sharc?
- Sharc always welcomes excellent students interested in the intersection of hardware design (FPGA, ASIC, etc.) and machine learning (DNNs, GNNs, etc.). We also have broad interests in computer architecture, graph computation, and electronic design automation (HLS, etc.).
- Software-hardware Co-design: hardware-efficient machine learning, neural architecture search (NAS), ML/system co-design, on-device AI
- High-performance Reconfigurable Computing: FPGA, embedded system, IoT, edge computing
- Graph Neural Network (GNN) and Graph Computing: GNN for EDA, GNN acceleration
- Electronic design automation (EDA): high-level synthesis (HLS), domain-specific HLS
Announcements and Openings
- [Want to interact with Sharc or get involved?] We hold a weekly reading group on Fridays, 3:00 – 4:30 pm. Each week, 1 to 2 students present research papers for us to learn together. Feel free to join (you are most welcome even if you’re not from Sharc Lab) and/or share this with your friends. Details can be found here. (The 2022 Spring schedule is to be announced!)
- [Prospective Students — please read this post first] We are always looking for talented and hardworking Ph.D. students to join Sharc Lab. We also welcome self-motivated master’s students and interns. Please contact Dr. Callie Hao if you’re interested. Expected skills include FPGA, Verilog/HLS, GNN, ML, EDA, or compilers.
- [2022. 01] [Honor] Callie was appointed to the Sutterfield Family Early Career Professorship. Thanks to the Sutterfield family and the department!
- [2021. 10] [Paper] Our paper “ScaleHLS: A New Scalable High-Level Synthesis Framework on Multi-Level Intermediate Representation” is accepted by HPCA’22. Thanks to my collaborators, Hanchen Ye, Dr. Stephen Neuendorffer, Prof. Jinjun Xiong, and Prof. Deming Chen. [code] [pdf]
- [2021. 10] [Service] Callie serves on the TPC of DATE’22 and DAC’22 and co-chairs Student Research Competition (SRC) @ ICCAD’22.
- [2021. 10] [Talk] Callie gave an invited talk at Stevens Institute of Technology ECE Seminar: “How Powerful are Graph Neural Networks and Reinforcement Learning in EDA: a Case Study in High-Level Synthesis”. [slides]
- [2021. 09] [Paper] Our work GenNAS, “Generic Neural Architecture Search via Regression,” is accepted by NeurIPS’21 as a Spotlight (<3% acceptance rate). This is the first regression-based, task-agnostic NAS with near-zero training and search cost. Thanks to my awesome collaborators, Yuhong Li, Prof. Pan Li, Prof. Jinjun Xiong, and Prof. Deming Chen. [code] [pdf]
- [2021. 09] [Talk] Callie gave an invited talk at Rutgers EFficient AI (REFAI) Seminar: “How Powerful are Graph Neural Networks and Reinforcement Learning in EDA: a Case Study in High-Level Synthesis”. [YouTube] [slides]
- [2021. 06] [Funding] Our project “Ultra-Light Video Intelligence by Data-Circuit-Model Tri-Design: In-Pixel Filtering, In-Memory Focusing, and In-Loop Optimization” has been funded by the Defense Advanced Research Projects Agency (DARPA). Thanks to my awesome collaborators, Prof. Shimeng Yu, Prof. Shiyu Chang, Prof. Atlas Wang, and Prof. Sijia Liu.
- [2021. 06] [Paper] Our paper “IronMan” wins the best paper award at GLSVLSI’21! Congratulations and thanks to my collaborators, Nan Wu and Prof. Yuan Xie.
- [2021. 06] [Paper] Our paper “WinoCNN: Kernel Sharing Winograd Systolic Array for Efficient Convolutional Neural Network Acceleration on FPGAs” is accepted by ASAP’21. Many thanks to my collaborators, Dr. Yao Chen, Xinheng, and Prof. Deming Chen.
- [2021. 05] [Paper] Our two invited papers are online now. “Software/Hardware Co-design for Multi-modal Multi-task Learning in Autonomous System” @ IEEE AICAS’21 [pdf], and “3U-EdgeAI: Ultra-Low Memory Training, Ultra-Low Bitwidth Quantization, and Ultra-Low Latency Acceleration” @ IEEE GLSVLSI’21 [pdf].
- [2021. 05] [Service] Callie serves on the TPC of ICCD’21, ASAP’21, ICCAD’21, and FCCM’22.
- [2021. 04] [Paper] Our paper “IronMan: GNN-assisted Design Space Exploration in High-Level Synthesis via Reinforcement Learning” is accepted by GLSVLSI’21. It reveals the great potential of GNN and RL in solving EDA problems. [pdf]
- [2021. 03] [Paper] Our paper “ScaleHLS: Achieving Scalable High-Level Synthesis through MLIR” is accepted by LATTE’21, an ASPLOS workshop on applying programming-language and compiler techniques to hardware accelerator generation. ScaleHLS is the first framework to utilize the multi-level intermediate representation (MLIR) for HLS compilation, design space exploration (DSE), and benchmark generation. Its hierarchical IR structure scales well to large and complex designs and allows direct DSE at the source-code level.
- [2021. 03] [Paper] Our survey paper “Enabling Design Methodologies and Future Trends for Edge AI: Specialization and Co-design” is accepted by IEEE Design & Test magazine. It provides a good overview of the design methods targeting edge AI and fruitful discussions and insights for possible cross-layer opportunities. [pdf]
- [2021. 02] [Paper] Our paper “MeLoPPR: Software/Hardware Co-design for Memory-efficient Low-latency Personalized PageRank” is accepted by DAC’21. Many thanks to my collaborators: Dr. Yao Chen and Dr. Pan Li.
- [2020. 11] [Paper] Our paper “Workload-aware approximate computing configuration” is accepted by DATE’21. Congratulations to my collaborators: Dr. Xun Jiao and his team!
- [2020. 10] [Service] Callie serves on the TPC of DAC’21.
- [2020. 10] [Service] Callie serves on the TPC of ICCD’20, DATE’21, and SRC@ICCAD’20.
- [2020. 07] [Award] Our team iSmart won 3rd place in DAC-SDC’20, a real-time FPGA-based object detection competition. [results]
- [2020. 05] [Paper] Our paper “VecQ: minimal loss DNN model compression with vectorized weight quantization” is accepted by IEEE Transactions on Computers (TC). [pdf]
- [2020. 02] [Paper] Our paper “EDD: Efficient Differentiable DNN architecture and implementation co-search for embedded AI solution” is accepted by DAC’20. [pdf] [slides] [presentation]
- [2020. 01] [Paper] Our paper “SkyNet: a hardware-efficient method for object detection and tracking on embedded systems” is accepted by the 2020 Conference on Machine Learning and Systems (SysML).
- [2019. 06] [Award] Our DNN design strategy (bi-directional co-design approach) won the Best Poster Award at ICML’19 Joint Workshop on On-Device Machine Learning & Compact Deep Neural Network Representations (ODML-CDNNR).
- [2019. 06] [Award] Our team won a double championship in DAC-SDC’19! iSmart3 design won 1st place in the FPGA track, and SkyNet design won 1st place in the GPU track. [Github]
- [2019. 02] [Paper] Our paper “FPGA/DNN co-design: an efficient design methodology for IoT intelligence on the edge” is accepted by DAC’19.
- [2018. 06] [Award] Our iSmart2 team won 3rd place in DAC-SDC’18 in the FPGA track!
- [2017. 12] I joined the ECE department at the University of Illinois Urbana-Champaign (UIUC) as a postdoc and started working with Prof. Deming Chen.
- [2017. 07] I started my position at Waseda University as an invited researcher with Prof. Takeshi Yoshimura.
- [2017. 07] I defended my dissertation and got my Ph.D. from Waseda University, Japan. Many thanks to my supervisor, Prof. Takeshi Yoshimura.