Tool that scans desktop software for vulnerabilities finds nearly 100 in Word and Acrobat

Security researchers have developed a tool to scan popular desktop software for security vulnerabilities and have already found over 100 vulnerabilities in Microsoft Word, Adobe Acrobat and Foxit Reader.

The tool, known as Cooper, approaches vulnerability scanning by examining how desktop software integrates scripting languages such as JavaScript and Python to perform automated tasks like file manipulation.

The research, co-authored by Peng Xu, Yanhao Wang, Hong Hu, and Purui Su of the School of Cyber Security at the University of Chinese Academy of Sciences, introduced the tool and highlighted vulnerabilities caused by the interaction of high- and low-level languages.

In a research paper detailing the Cooper tool, the researchers said that a “binding layer” is needed to essentially translate script actions, written in high-level languages such as JavaScript and Python, into code in the low-level languages (C/C++) used to implement those actions in the software itself.

This binding layer can produce inconsistent representations of script objects and can sometimes also overlook crucial security checks, which is how “serious security vulnerabilities” end up in the software.
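To picture what such a binding layer does, the toy Python sketch below stands in for the C/C++ glue that exposes a native function to a scripting engine. The function and buffer names are illustrative assumptions, not taken from the paper, and the missing length check is the kind of oversight being described.

```python
# Illustrative sketch only: a toy "binding layer" in Python standing in for the
# C/C++ glue that desktop apps use to expose native functions to scripts.
# The names (native_set_field, binding_set_field) are hypothetical.

import ctypes

# Pretend this is the low-level (C/C++) side: it copies caller-supplied data
# into a fixed-size native buffer and trusts the length it is given.
NATIVE_BUF_SIZE = 16
native_buf = ctypes.create_string_buffer(NATIVE_BUF_SIZE)

def native_set_field(data: bytes, length: int) -> None:
    # The native side trusts `length`, as C code often does.
    ctypes.memmove(native_buf, data, length)

# The binding layer translates a high-level script value into native arguments.
def binding_set_field(script_value: str) -> None:
    raw = script_value.encode("utf-8")
    # A correct binding would validate the size here, e.g.:
    #   if len(raw) > NATIVE_BUF_SIZE: raise ValueError("field too long")
    # Omitting that check mirrors the class of bug the paper describes:
    # the script-visible API passes unchecked input straight to native code.
    native_set_field(raw, len(raw))

binding_set_field("short")        # fine
# binding_set_field("A" * 4096)   # would overflow the 16-byte native buffer
```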

After running Cooper on Adobe Acrobat, Microsoft Word and Foxit Reader, the researchers found a total of 134 new bugs – 60 in Adobe Acrobat, 56 in Foxit Reader and 18 in Microsoft Word.

Most of the bugs Cooper found (103) have been confirmed, and 59 of them have already been fixed, earning the researchers $22,000 in bug bounties.

A total of 33 CVEs (officially tracked vulnerability identifiers) were also issued, including CVE-2021-21028 and CVE-2021-21035 – a pair of bugs in Adobe Acrobat, each rated 8.8 on the CVSSv3 severity scale.

The researchers used fuzzing to test the programs for vulnerabilities – a technique commonly used in such research that involves randomly generating a large number of inputs and feeding them into the program to surface behavioural anomalies.
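In its simplest form, the approach looks something like the Python sketch below, where the target function is a hypothetical stand-in for the program under test rather than any of the applications named above.

```python
# A minimal sketch of the general fuzzing idea (not Cooper itself): generate
# many random inputs, feed them to a target, and record any crashes.

import random
import string

def target(data: str) -> None:
    # Hypothetical stand-in for the program under test, e.g. a document parser.
    if data.startswith("%PDF") and "\x00" in data:
        raise RuntimeError("parser hit an unexpected state")

def fuzz(iterations: int = 10_000, max_len: int = 64) -> list[str]:
    crashes = []
    alphabet = string.printable + "\x00"
    for _ in range(iterations):
        length = random.randint(1, max_len)
        sample = "".join(random.choice(alphabet) for _ in range(length))
        try:
            target(sample)
        except Exception:
            crashes.append(sample)  # an anomaly worth triaging
    return crashes

if __name__ == "__main__":
    print(f"{len(fuzz())} anomalous inputs found")
```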

The technique has limitations, so the researchers developed three new techniques to address them: object clustering, statistical relationship inference, and relationship-guided mutation.

The limitation of fuzzing lies in the way it explores mutations. Fuzzing is one-dimensional, in that it only changes instructions in high-level code, but binding code receives input from two dimensions – high-level code in scripts and low-level code in the underlying system.

This restriction means that not every bug in the binding code can be discovered by mutating a single dimension.
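The Python sketch below loosely illustrates the two-dimensional idea – pairing a mutated document (the low-level dimension) with a mutated script (the high-level dimension) – and should not be read as Cooper’s actual algorithm; all object and function names are placeholders.

```python
# A loose sketch of "two-dimensional" test generation, not Cooper's algorithm:
# mutate both the script and the document it operates on, rather than the
# script alone. All names here are hypothetical placeholders.

import random

SCRIPT_TEMPLATES = [
    'this.getField("{name}").value = "{payload}";',
    'this.getField("{name}").setAction("Format", "{payload}");',
]

def mutate_script(rng: random.Random) -> str:
    # High-level dimension: vary which API is called and with what arguments.
    template = rng.choice(SCRIPT_TEMPLATES)
    return template.format(name=f"f{rng.randint(0, 3)}",
                           payload="A" * rng.randint(1, 256))

def mutate_document(rng: random.Random) -> dict:
    # Low-level dimension: vary the native objects the script will touch,
    # e.g. how many form fields the document defines and of what type.
    return {f"f{i}": rng.choice(["text", "button", "signature"])
            for i in range(rng.randint(0, 3))}

def one_case(rng: random.Random) -> tuple[dict, str]:
    # Pair a mutated document with a mutated script so the binding layer
    # is exercised from both sides at once.
    return mutate_document(rng), mutate_script(rng)

if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(3):
        doc, script = one_case(rng)
        print(doc, "->", script)
```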

The researchers demonstrated this by also running the existing Domato JavaScript fuzzer in their experiments; it found significantly fewer bugs than Cooper.

The researchers plan to release Cooper’s code as open source on their GitHub page so the community can help build on it and further improve the security of binding layers.
