DeepPerform: An Efficient Approach for Performance Testing of Resource-Constrained Neural Networks

Abstract

Today, an increasing number of Adaptive Deep Neural Networks (AdNNs) are being used to make decisions on resource-constrained embedded devices. We observe that, similar to traditional software, redundant computations exist in AdNNs, resulting in considerable performance degradation. The performance degradation in AdNNs depends on the input workloads and is referred to as input-dependent performance bottlenecks (IDPBs). To ensure an AdNN satisfies the performance requirements of real-time applications, it is essential to conduct performance testing to detect IDPBs in the AdNN. Existing neural network testing methods are primarily concerned with correctness testing and do not involve performance testing. To fill this gap, we propose DeepPerform, a scalable approach to generate test samples that detect IDPBs in AdNNs. We first show how the problem of generating performance test samples that detect IDPBs can be formulated as an optimization problem. We then demonstrate how DeepPerform efficiently solves the optimization problem by learning and estimating the distribution of AdNNs' computational consumption. We evaluate DeepPerform on three widely used datasets against five popular AdNN models. The results show that DeepPerform generates test samples that cause more severe performance degradation (FLOPs increase up to 552%). Furthermore, DeepPerform is substantially more efficient than the baseline methods in terms of generating test inputs (runtime overhead of only 6–10 milliseconds per input).
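To give a rough flavor of the idea (this is an illustrative toy, not DeepPerform's actual algorithm), the sketch below models an AdNN with early exits, where the number of executed blocks — a proxy for FLOPs — depends on the input. A simple random local search then perturbs a seed input to maximize the executed blocks, i.e., to expose an IDPB. All function names and parameters here are hypothetical.

```python
import random

# Toy stand-in for an AdNN with early exits: processing stops once a
# running "confidence" score exceeds a threshold, so the number of
# executed blocks (a FLOPs proxy) is input-dependent.
def executed_blocks(x, n_blocks=10, threshold=5.0):
    confidence = 0.0
    for b in range(n_blocks):
        confidence += abs(x[b % len(x)])  # each block adds confidence
        if confidence >= threshold:
            return b + 1  # early exit: fewer blocks, fewer FLOPs
    return n_blocks

# Hypothetical test generator: random local search that perturbs a seed
# input within a small budget eps to maximize computational consumption.
def generate_idpb_input(seed, steps=200, eps=0.05):
    best, best_cost = list(seed), executed_blocks(seed)
    for _ in range(steps):
        cand = [v + random.uniform(-eps, eps) for v in best]
        cost = executed_blocks(cand)
        if cost >= best_cost:  # keep perturbations that do not reduce cost
            best, best_cost = cand, cost
    return best, best_cost

random.seed(0)
seed = [1.0, 2.0, 0.5]
adv, cost = generate_idpb_input(seed)
print(cost >= executed_blocks(seed))  # → True
```

DeepPerform replaces this blind search with a learned model that estimates the distribution of the AdNN's computational consumption, which is what makes its per-input generation overhead so low.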

Publication
Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering.
