The editors of the ACM Transactions on Reconfigurable Technology and Systems (TRETS), a peer-reviewed archival journal covering reconfigurable technology, systems, and applications on reconfigurable computers, invite the submission of original, full-length papers prepared according to the guidelines under Information for Authors on this website.
TRETS is a journal focused on research in, on, and with reconfigurable systems and their underlying technology. Other journals, by their scope and rationale, often cover only particular aspects of reconfigurable technology or reconfigurable systems; TRETS covers reconfigurability in its own right.
Appropriate topics for TRETS include all levels of reconfigurable system abstraction and all aspects of reconfigurable technology, including platforms, programming environments, and application successes that support these systems for computing or other applications.
In considering whether a paper is suitable for TRETS, the foremost question should be whether reconfigurability has been essential to its success. Topics such as architecture, programming languages, compilers and environments, logic synthesis, and high-performance applications are all suitable in the right context. For example, an architecture for an embedded application that merely happens to use FPGAs is not necessarily suitable for TRETS, but an architecture using FPGAs in which the reconfigurability of the FPGAs is an inherent part of the specifications (perhaps due to a need for reuse across multiple applications) would be appropriate.
Special Section on Deep Learning on FPGAs (NOW CLOSED)
Deep Learning (DL), especially via Deep Neural Networks (DNNs), has advanced rapidly and has been shown to match and even exceed human capabilities in tasks such as image recognition, playing complex games, and large-scale information retrieval such as web search. However, due to the high computational and power demands of deep neural networks, hardware accelerators are essential to ensure that computation speed meets application requirements.
FPGAs have demonstrated great strength in accelerating deep learning inference with high energy efficiency. In this special section of TRETS, we call for the most advanced research results on the architecture of machine learning accelerators for both training and inference, as well as practical solutions to DL tasks and their implementation on FPGAs. Topics of interest include (but are not limited to) the following:
Submission Deadline: November 15, 2017
Target Publication Time: Summer 2018
Guest Editors:
Deming Chen, University of Illinois at Urbana-Champaign
Andrew Putnam, Microsoft Research