ACM Transactions on Reconfigurable Technology and Systems (TRETS)


Call for Papers

The editors of the ACM Transactions on Reconfigurable Technology and Systems, a peer-reviewed and archival journal that covers reconfigurable technology, systems and applications on reconfigurable computers, invite the submission of original, full length papers according to the guidelines under the Information for Authors on this website.

TRETS is a journal focused on research in, on, and with reconfigurable systems and their underlying technology. Other journals often limit their scope to particular aspects of reconfigurable technology or reconfigurable systems; TRETS covers reconfigurability in its own right.

Appropriate topics for TRETS include all levels of reconfigurable system abstraction and all aspects of reconfigurable technology, including the platforms, programming environments, and application successes that support these systems for computing or other applications:

  • The board and systems architectures of a reconfigurable platform.
  • Programming environments for reconfigurable systems, especially those that lead to increased programmer productivity.
  • Languages and compilers for reconfigurable systems.
  • Logic synthesis and related tools, as they relate to reconfigurable systems.
  • Applications on which success can be demonstrated.
  • The underlying technology from which reconfigurable systems are developed. (Currently this technology is that of FPGAs, but research on the nature and use of follow-on technologies is appropriate for TRETS.)

In considering whether a paper is suitable for TRETS, the foremost question should be whether reconfigurability has been essential to success. Topics such as architecture, programming languages, compilers, and environments, logic synthesis, and high performance applications are all suitable if the context is appropriate. For example, an architecture for an embedded application that happens to use FPGAs is not necessarily suitable for TRETS, but an architecture using FPGAs for which the reconfigurability of the FPGAs is an inherent part of the specifications (perhaps due to a need for re-use on multiple applications) would be appropriate for TRETS.



Special Issues


Special Issue on Security in FPGA-Accelerated Cloud and Datacenters

Field-Programmable Gate Arrays (FPGAs) are becoming integral components of general-purpose heterogeneous cloud computing systems and datacenters due to their ability to serve as energy-efficient, domain-customizable accelerators. Amazon, Baidu, and Maxeler now expose FPGAs to application developers in their cloud infrastructures. The integration of FPGAs in Microsoft Catapult to accelerate various tasks, including Bing, has led to a 2x performance speed-up over a processor-only implementation with only a 30% increase in energy. Intel recently announced in-package FPGA integration with Xeon multi-core processors.

While FPGA-accelerated cloud and datacenter platforms provide on-demand computational resources with the low energy, high flexibility, and performance benefits of FPGAs, integrating FPGAs into existing cloud software ecosystems introduces new security threats: untrusted hardware components operate far below the security level of hypervisors and operating systems.
Several researchers have recently demonstrated sniffing and denial-of-service attacks in multi-tenant clouds operating under an infrastructure-as-a-service paradigm.

The purpose of this special issue is to raise awareness of growing security threats in cloud computing systems and to present state-of-the-art work addressing security issues in FPGA-accelerated clouds and datacenters. We invite researchers to submit novel, unpublished work on securing datacenter and cloud systems that incorporate FPGAs.
Submissions should follow the guidelines for ACM TRETS regular papers, within the scope of this special issue call.


Submission Deadline: September 30, 2018 (EXTENDED: OCTOBER 31)
Review first round: November 30, 2018 (EXTENDED: DECEMBER 30)
Second round of review and final notification: January 30, 2019 (EXTENDED: FEB 28)
Publication: 2019

Associate Editors:
Christophe Bobda, University of Arkansas
Russell Tessier, University of Massachusetts Amherst
Ken Eguro, Microsoft Research
Ryan Kastner, University of California San Diego


Special Section on Deep Learning on FPGAs  (NOW CLOSED)

The rapid advance of Deep Learning (DL), especially via Deep Neural Networks (DNNs), has been shown to match and even exceed human performance in tasks such as image recognition, playing complex games, and large-scale information retrieval such as web search. However, due to the high computational and power demands of deep neural networks, hardware accelerators are essential to ensure that computation speed meets application requirements.

FPGAs have demonstrated great strength in accelerating deep learning inference with high energy efficiency. In this special section of TRETS, we call for the most advanced research results in the architecture of machine learning accelerators for both training and inference, as well as practical solutions related to DL tasks and their implementation on FPGAs. Topics of interest include (but are not limited to) the following:

  • a) Software/Compilers/Tools for targeting DL on FPGAs
  • b) New design methodologies for improving the programmability of DL on FPGAs
  • c) Microarchitectures and Implementations of DL applications on FPGAs
  • d) FPGA implementations specifically targeting RNNs (such as LSTMs and GRUs) and MLPs
  • e) Analysis of combining multiple DL techniques on the same task
  • f) Cloud deployments for DL on FPGAs
  • g) DL FPGA implementations targeting constrained environments such as edge computing and IoT
  • h) Modeling, optimizations, and retraining DNNs at reduced precision
  • i) Secure DL applications on FPGAs
  • j) Neuromorphic computing for DL on FPGAs
  • k) Comparison studies of FPGAs with other DL acceleration architectures (GPUs, TPUs, ASICs, etc.)

Submission Deadline: November 15, 2017

Target Publication Time: Summer, 2018

Guest Editors:
Deming Chen, University of Illinois at Urbana-Champaign
Andrew Putnam, Microsoft Research
