I am a member of ACEE (Advanced Honor Class for Engineering Education) in Chu Kochen Honors College.
My research interests lie in the areas of Software Engineering, Software Security, and Machine Learning, especially in leveraging machine learning for program analysis, code generation, and software reliability.
To date, my work has uncovered more than 120 previously unknown bugs in open-source projects, including Apache Druid and Netty, as well as 63 bugs in the Linux kernel.
While the automated detection of cryptographic API misuses has progressed significantly, its precision diminishes for intricate targets due to the reliance on manually defined patterns. Large Language Models (LLMs), renowned for their contextual understanding, offer a promising avenue to address existing shortcomings. However, applying LLMs in this security-critical domain presents challenges, particularly due to the unreliability stemming from LLMs’ stochastic nature and the well-known issue of hallucination. To explore the prevalence of LLMs’ unreliable analysis and potential solutions, this paper introduces a systematic evaluation framework to assess LLMs in detecting cryptographic misuses, utilizing a comprehensive dataset encompassing both manually-crafted samples and real-world projects. Our in-depth analysis of 11,940 LLM-generated reports highlights that the inherent instabilities in LLMs can lead to over half of the reports being false positives. Nevertheless, we demonstrate how a constrained problem scope, coupled with LLMs’ self-correction capability, significantly enhances the reliability of the detection. The optimized approach achieves a remarkable detection rate of nearly 90%, surpassing traditional methods and uncovering previously unknown misuses in established benchmarks. Moreover, we identify the failure patterns that persistently hinder LLMs’ reliability, including both cryptographic knowledge deficiency and code semantics misinterpretation. Guided by these insights, we develop an LLM-based workflow to examine open-source repositories, leading to the discovery of 63 real-world cryptographic misuses. Of these, 46 have been acknowledged by the development community, with 23 currently being addressed and 6 resolved. Reflecting on developers’ feedback, we offer recommendations for future research and the development of LLM-based security tools.
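For readers unfamiliar with the problem domain, a minimal illustrative sketch (not drawn from the paper itself) of a classic cryptographic API misuse in Java: calling `Cipher.getInstance` with only the algorithm name, which on most JDKs silently defaults to ECB mode and PKCS5 padding, leaking plaintext patterns. Pattern-based detectors flag exactly this kind of transformation string.

```java
import javax.crypto.Cipher;

public class CryptoMisuseDemo {
    public static void main(String[] args) throws Exception {
        // Misuse: "AES" alone is completed to "AES/ECB/PKCS5Padding"
        // by most providers; ECB leaks plaintext block patterns.
        Cipher weak = Cipher.getInstance("AES");

        // Safer: request an authenticated mode explicitly.
        Cipher strong = Cipher.getInstance("AES/GCM/NoPadding");

        // getAlgorithm() echoes the transformation string that was
        // requested, which is what string-matching detectors inspect.
        System.out.println(weak.getAlgorithm());
        System.out.println(strong.getAlgorithm());
    }
}
```

The hedge in the abstract applies here too: a rule that flags the bare `"AES"` string is precise on this toy case, but misses misuses where the transformation is built dynamically or flows through helper methods, which is where context-aware LLM analysis is proposed to help.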