The correctness of debug information included in optimized binaries has recently attracted attention from the research community. It is a practically important problem, as most software running in production is produced by an optimizing compiler. Current solutions rely on invariants, human-defined rules that embed the desired behavior and whose violation may indicate the presence of a bug. Although this approach has proved effective in discovering several bugs, it cannot identify bugs that do not trigger any invariant. In this paper, we investigate the feasibility of using Deep Neural Networks (DNNs) to discover incorrect debug information. We trained a set of different models borrowed from the NLP community in an unsupervised way on a large dataset of debug traces and tested their performance on two novel datasets that we propose. Our results are positive and show that DNNs are capable of discovering bugs in both synthetic and real datasets. More interestingly, we performed a live analysis of our models by using them as bug detectors in a fuzzing system. We show that they were able to report 12 unknown bugs in the latest version of the widely used LLVM toolchain, 2 of which have been confirmed.
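As a rough illustration of the anomaly-detection idea described in the abstract (not the authors' actual DNN models or trace format), the sketch below scores a serialized debug trace with a simple bigram language model trained only on traces assumed to be correct; traces whose score exceeds a threshold are flagged for review. The trace tokens, helper names, and threshold are invented for illustration only.

```python
# Hypothetical sketch: flag anomalous debug traces via a language-model score.
# Trace format, token names, and threshold are illustrative assumptions,
# not the paper's actual pipeline or models.
from collections import Counter
import math


def train_bigram_model(traces):
    """Unsupervised training: count token bigrams over traces assumed correct."""
    unigrams, bigrams = Counter(), Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            unigrams[a] += 1
            bigrams[(a, b)] += 1
    return unigrams, bigrams


def trace_score(trace, unigrams, bigrams, vocab_size):
    """Average negative log-likelihood (add-one smoothed); higher = more anomalous."""
    nll = 0.0
    for a, b in zip(trace, trace[1:]):
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size)
        nll -= math.log(p)
    return nll / max(len(trace) - 1, 1)


# Toy traces: each is a sequence of (breakpoint, variable-availability) events.
good_traces = [
    ["bp:main", "x=avail", "y=avail", "bp:loop", "x=avail", "y=avail"],
    ["bp:main", "x=avail", "y=avail", "bp:exit", "x=avail"],
]
suspect = ["bp:main", "x=optimized_out", "y=avail", "bp:loop", "x=avail"]

uni, bi = train_bigram_model(good_traces)
vocab = len(uni) or 1
threshold = 2.5  # assumed cut-off; in practice it would be calibrated on held-out traces
score = trace_score(suspect, uni, bi, vocab)
print(f"anomaly score = {score:.2f} -> {'flag for review' if score > threshold else 'ok'}")
```

In the paper the scoring model is a DNN trained on a large corpus of debug traces; the bigram model above merely stands in for it to show how an unsupervised trace-scoring detector could plug into a fuzzing loop.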
Publication details
2022, IEEE ACCESS, Pages 54136-54148 (volume: 10)
Debugging Debug Information with Neural Networks (01a Journal article)
Artuso F., Di Luna G. A., Querzoni L.
Research group: Cybersecurity