LLM related research:

Large language model (LLM) can effectively analyze software vulnerability when the target code fit within its token space. In real world application, the manifestation of a vulnerability can span across large code size (distance) and across different files. CSAFA labs is investigating the utilization of LLM for vulnerability analysis on large code, where the total code size is larger than that of the LLM token-space. We collaborate with Purdue University in this effort.

The practice of using LLM to assist coding may need a trusted infrastructure. CSAFA labs is studying the potential vulnerability and provide cautionary examples of potential exploits of cyber environemnt with regard to LLM coding assist.