Participation in EDCC 2025
February 26, 2025
During the next EDCC (20th European Dependable Computing Conference), which will be held in Lisbon (Portugal) in April 2025, the GSTF research group will present three extended fast abstracts showcasing our current research lines.
These works are:
Title: Towards a Novel 8-bit Floating-point Format to Increase Robustness in Convolutional Neural Networks
Authors: Luis-J. Saiz Adalid, Juan-Carlos Ruiz-García, Joaquín Gracia-Morán, David de Andrés, J.-Carlos Baraza-Calvo, Daniel Gil-Tomás, Pedro Gil-Vicente
Abstract: Convolutional Neural Networks (CNNs) are widely adopted in Artificial Intelligence applications, particularly in computer vision and other deep learning tasks. Their performance relies on millions of parameters, including weights and biases, which are optimized during training, stored, and utilized during inference. Traditionally, these parameters are represented using the 32-bit IEEE-754 single-precision floating-point format. However, research has shown that excess precision in this format is not always required to maintain accuracy, motivating the adoption of reduced-precision 16-bit formats. A natural progression of this trend is representing real numbers using 8-bit formats. However, existing proposals often suffer from precision loss, negatively impacting CNN accuracy. In this extended abstract, we propose a novel 8-bit floating-point format designed to enhance reliability in CNNs, thanks to its reduced memory footprint, sufficient precision, and well-fitted range. We evaluate its advantages and limitations through comparative analysis. Initial findings suggest that our format improves computational efficiency while preserving accuracy comparable to 32-bit networks, increasing reliability. However, further experimental validation is necessary.
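As a rough illustration of the trade-off an 8-bit floating-point format must strike between range and precision, the following sketch decodes a hypothetical 1-4-3 (sign-exponent-mantissa) layout with bias 7, similar to the well-known E4M3 encoding. This is only an assumed layout for illustration; it is not the format proposed in the paper.

```c
#include <stdint.h>
#include <stdio.h>
#include <math.h>

/* Hypothetical 8-bit float: 1 sign, 4 exponent and 3 mantissa bits
 * (an E4M3-like layout, bias = 7). Only an illustration of how few
 * bits must be shared between range and precision; NOT the format
 * proposed in the paper. Special values (NaN) are ignored here. */
static double fp8_decode(uint8_t v)
{
    int sign = (v >> 7) & 0x1;
    int exp  = (v >> 3) & 0xF;   /* 4-bit biased exponent */
    int man  =  v       & 0x7;   /* 3-bit mantissa */
    double value;

    if (exp == 0)                /* subnormal: no implicit leading 1 */
        value = (man / 8.0) * pow(2.0, 1 - 7);
    else                         /* normal: implicit leading 1 */
        value = (1.0 + man / 8.0) * pow(2.0, exp - 7);

    return sign ? -value : value;
}

int main(void)
{
    /* 0x3C = 0|0111|100 -> (1 + 4/8) * 2^(7-7) = 1.5 */
    printf("0x3C decodes to %g\n", fp8_decode(0x3C));
    return 0;
}
```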
Title: Towards SW-based Robustness Assessment of HW Accelerators for Quantized CNNs
Authors: Juan Carlos Ruiz, David de Andrés, Juan-Carlos Baraza-Calvo, Luis-José Saiz-Adalid, Joaquín Gracia-Morán, Daniel Gil-Tomás, Pedro Gil-Vicente
Abstract: Quantized Convolutional Neural Networks (QCNNs) are widely adopted in resource-limited environments due to their reduced memory footprint, lower power consumption, and faster execution. Hardware (HW) accelerators further enhance these benefits by optimizing QCNN execution for available resources, a crucial aspect in embedded systems. Typically, the development cycle of neural networks begins with a software (SW) model, which is later refined and implemented in HW. Leveraging this initial SW model for early robustness assessments is an attractive approach, as it allows fault tolerance evaluation before committing to HW implementation. However, for such assessments to be meaningful, they must accurately reflect the fault behavior that would occur in HW. This paper examines the limitations of naive fault injection approaches in SW-based QCNN models and demonstrates how they can lead to misleading conclusions about HW robustness. The key issue arises from the internal multi-component representation of quantized parameters, which differs from direct floating-point storage. Our analysis highlights that simplistic bit-flip injections in SW-based QCNNs do not necessarily translate to equivalent faults in their HW implementations, leading to inaccuracies in robustness evaluation. While we do not propose a specific fault injection methodology, we identify critical challenges that must be addressed to ensure that early-stage SW evaluations on QCNNs have the potential to provide valid, cost-effective insights into the resilience of their HW implementations.
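A minimal sketch of the mismatch, assuming a standard affine int8 quantization scheme with a per-tensor scale and zero-point (an assumption for illustration, not the paper's model): flipping a given bit of the int8 word that a HW accelerator actually stores perturbs the weight very differently than flipping the same bit position in the float value visible at the SW level.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Affine int8 quantization: real = scale * (q - zero_point).
 * Scale, zero-point and weight values are hypothetical. */
static const float  SCALE      = 0.05f;
static const int8_t ZERO_POINT = 0;

static float dequantize(int8_t q)
{
    return SCALE * (float)(q - ZERO_POINT);
}

int main(void)
{
    int8_t q = 40;                        /* stored weight: 0.05 * 40 = 2.0 */
    float  w = dequantize(q);

    /* "HW-like" fault: flip bit 6 of the stored int8 word. */
    int8_t q_faulty = (int8_t)(q ^ (1 << 6));
    float  w_hw     = dequantize(q_faulty);

    /* "Naive SW" fault: flip bit 6 of the float's IEEE-754 encoding. */
    uint32_t bits;
    memcpy(&bits, &w, sizeof bits);
    bits ^= (1u << 6);
    float w_sw;
    memcpy(&w_sw, &bits, sizeof w_sw);

    printf("original weight        : %f\n", w);     /* 2.000000          */
    printf("flip in int8 storage   : %f\n", w_hw);  /* large value change */
    printf("flip in float encoding : %f\n", w_sw);  /* tiny mantissa change */
    return 0;
}
```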
Title: Initial insights into synthesis overheads caused by C-based Error Correction Codes implementations
Authors: Joaquín Gracia-Morán, David de Andrés, Luis-J. Saiz-Adalid, Juan Carlos Ruiz, J.-Carlos Baraza-Calvo, Daniel Gil-Tomás, Pedro J. Gil-Vicente
Abstract: Error Correction Codes (ECCs) are increasingly used in safety-critical systems, such as hardware accelerators for cryptographic computations and neural network inference. These systems require high reliability, making ECCs essential for mitigating soft errors and improving fault tolerance. Thus, the demand for efficient ECC implementations is rising, necessitating faster design and deployment processes. Traditional hardware design approaches, such as Register-Transfer Level (RTL) development, can be time-consuming and very complex. High-Level Synthesis (HLS) enables the automatic transformation of C-based ECC models into hardware descriptions, reducing development effort while allowing design-space exploration. This methodology facilitates rapid prototyping and optimization, enabling the evaluation of different architectural choices without manually modifying the RTL code. However, coding styles, algorithmic transformations, and optimization strategies in C-based code can directly affect the synthesized hardware’s performance metrics, including area utilization, power consumption, and latency.
This work provides initial insights into how different C-based ECC design choices influence the final hardware implementation. To do this, we have analyzed synthesis results under various ECC configurations.
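As an example of the kind of C model an HLS tool consumes, the sketch below shows a generic Hamming(7,4) single-error-correcting encoder written in a plain, synthesizable style. It is a generic textbook code chosen for illustration, not one of the ECC configurations analyzed in the work; choices such as unrolled bit operations versus loops in this kind of code are what can shift the synthesized area, power, and latency.

```c
#include <stdint.h>
#include <stdio.h>

/* Generic Hamming(7,4) encoder in plain C. Codeword layout (bit 0..6):
 * p0 p1 d0 p2 d1 d2 d3, i.e. parity bits at positions 1, 2 and 4.
 * Illustrative only; not one of the configurations evaluated in the paper. */
static uint8_t hamming74_encode(uint8_t data)
{
    uint8_t d0 = (data >> 0) & 1;
    uint8_t d1 = (data >> 1) & 1;
    uint8_t d2 = (data >> 2) & 1;
    uint8_t d3 = (data >> 3) & 1;

    uint8_t p0 = d0 ^ d1 ^ d3;   /* covers codeword positions 3, 5, 7 */
    uint8_t p1 = d0 ^ d2 ^ d3;   /* covers codeword positions 3, 6, 7 */
    uint8_t p2 = d1 ^ d2 ^ d3;   /* covers codeword positions 5, 6, 7 */

    return (uint8_t)(p0 | (p1 << 1) | (d0 << 2) | (p2 << 3) |
                     (d1 << 4) | (d2 << 5) | (d3 << 6));
}

int main(void)
{
    printf("encode(0x5) = 0x%02X\n", hamming74_encode(0x5));
    return 0;
}
```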