
Reconfigurable Technology and Systems (TRETS)

Call for Papers

The editors of the ACM Transactions on Reconfigurable Technology and Systems, a peer-reviewed, archival journal covering reconfigurable technology, systems, and applications on reconfigurable computers, invite the submission of original, full-length papers prepared according to the guidelines under the Information for Authors on this website.

TRETS is a new journal focused on research in, on, and with reconfigurable systems and their underlying technology. Other journals are often limited in scope to particular aspects of reconfigurable technology or reconfigurable systems; TRETS covers reconfigurability in its own right.

Topics appropriate for TRETS include all levels of reconfigurable system abstraction and all aspects of reconfigurable technology, including the platforms, programming environments, and application successes that support these systems for computing or other applications:

  • The board and systems architectures of a reconfigurable platform.
  • Programming environments for reconfigurable systems, especially those designed to increase programmer productivity.
  • Languages and compilers for reconfigurable systems.
  • Logic synthesis and related tools, as they relate to reconfigurable systems.
  • Applications on which success can be demonstrated.
  • The underlying technology from which reconfigurable systems are developed. (Currently this technology is that of FPGAs, but research on the nature and use of follow-on technologies is appropriate for TRETS.)

In considering whether a paper is suitable for TRETS, the foremost question is whether reconfigurability has been essential to its success. Topics such as architecture, programming languages, compilers and environments, logic synthesis, and high-performance applications are all suitable if the context is appropriate. For example, an architecture for an embedded application that merely happens to use FPGAs is not necessarily suitable for TRETS, but an architecture using FPGAs in which the reconfigurability of the FPGAs is an inherent part of the specifications (perhaps due to a need for reuse across multiple applications) would be appropriate.

---------------------------------------------------------

 

Special Sections

 

Special Section on Deep Learning on FPGAs  (NOW CLOSED)

The rapid advance of Deep Learning (DL), especially via Deep Neural Networks (DNNs), has produced systems shown to compete with and even exceed human capabilities in tasks such as image recognition, playing complex games, and large-scale information retrieval such as web search. However, due to the high computational and power demands of deep neural networks, hardware accelerators are essential to ensure that computation speed meets application requirements.

FPGAs have demonstrated great strength in accelerating deep learning inference with high energy efficiency. In this special section of TRETS, we call for the most advanced research results on the architecture of machine learning accelerators for both training and inference, as well as practical solutions to DL tasks and their implementation on FPGAs. Topics of interest include (but are not limited to) the following:

  • a) Software/Compilers/Tools for targeting DL on FPGAs
  • b) New design methodologies for improving the programmability of DL on FPGAs
  • c) Microarchitectures and Implementations of DL applications on FPGAs
  • d) FPGA implementations specifically targeting RNNs (such as LSTMs and GRUs) and MLPs
  • e) Analysis of combining multiple DL techniques on the same task
  • f) Cloud deployments for DL on FPGAs
  • g) DL FPGA implementations targeting constrained environments such as edge computing and IoT
  • h) Modeling, optimizations, and retraining DNNs at reduced precision
  • i) Secure DL applications on FPGAs
  • j) Neuromorphic computing for DL on FPGAs
  • k) Comparison studies of FPGAs with other DL acceleration architectures (GPUs, TPUs, ASICs, etc.)

Submission Deadline: November 15, 2017

Target Publication Time: Summer, 2018

Guest Editors:
Deming Chen, University of Illinois at Urbana-Champaign
Andrew Putnam, Microsoft Research
