Snowden Used A Web Crawler to Gather Leaked NSA Documents

The New York Times has a new article up saying that the facility in Hawaii where Snowden did most of his snooping lacked basic protections against low-tech web-crawler bots introduced from within the NSA's own network, and that he made slight code revisions to a basic crawler bot, which then targeted the thousands of documents he subsequently leaked.

From the article, two key bits:

Using “web crawler” software designed to search, index and back up a website, Mr. Snowden “scraped data out of our systems” while he went about his day job, according to a senior intelligence official.  “We do not believe this was an individual sitting at a machine and downloading this much material in sequence,” the official said.  The process, he added, was “quite automated.”

and, using his own passwords along with those he had duped other legitimate users into giving him:

Mr. Snowden appears to have set the parameters for the searches, including which subjects to look for and how deeply to follow links to documents and other data on the NSA’s internal networks.  Intelligence officials told a House hearing last week that he accessed roughly 1.7 million files.
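The article doesn't name the tool, but the behavior it describes, keyword targets plus a limit on how deeply to follow links, is ordinary crawler design. Below is a minimal, hypothetical sketch in Python of what a depth-limited, keyword-filtered crawler over an internal web might look like; the start URL, keywords, and depth values are illustrative assumptions, not details from the article or Snowden's actual code.

```python
# Hypothetical sketch: a depth-limited, keyword-filtered crawler.
# The start URL, keywords, and depth are illustrative placeholders.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags on a fetched page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, keywords, max_depth=3):
    """Breadth-first crawl from start_url, recording pages whose text
    contains any of the keywords, stopping max_depth links out."""
    seen = {start_url}
    queue = deque([(start_url, 0)])
    matches = []

    while queue:
        url, depth = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue  # skip pages that fail to load

        # "Which subjects to look for": a simple keyword filter.
        if any(kw.lower() in html.lower() for kw in keywords):
            matches.append(url)

        # "How deeply to follow links": stop expanding past max_depth.
        if depth >= max_depth:
            continue

        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            if absolute not in seen:
                seen.add(absolute)
                queue.append((absolute, depth + 1))

    return matches


if __name__ == "__main__":
    # Illustrative parameters only.
    hits = crawl("http://intranet.example/wiki/Start",
                 ["budget", "program"], max_depth=2)
    print(f"{len(hits)} matching pages found")
```

The breadth-first queue and the depth counter are the two knobs the article alludes to: the keyword list decides what gets saved, and the depth cap decides how far the bot wanders from its starting page.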