
on the implications of permitting procedural culling

Severity: Low
Program: curl
Submitted: None
Reported by: lyb_unaffiliated

Vulnerability Details

Technical details and impact analysis

Use of Insufficiently Random Values
Good day. My name is Lorentso Youriévitch Bogdanov. It has come to my attention that you are in need of higher-quality code review. Rest assured that you are not alone in noticing a certain degree of brain-drain in this field. As you can perhaps imagine, the recent shortage of qualified hackers and the emergence of charlatans are very much related, and at least in part manufactured by circumstance, be it legal or technological. I myself am here chiefly out of personal interest, although my accession to the European Council for Cybersecurity could technically mandate this intervention.

I would like to apologize in advance for not providing you with a concrete solution to the problem I have identified. It is unlikely to be an actual violation of my exclusivity contract, but I will adhere to the principle in good faith. As such, I also hereby waive any bounty you would award to this report.

My source review was only cursory. I followed up on several items, few of which made it to reproduction. What little I found is beneath mention, for the most part. However, something particular did catch my eye: a "vulnerability", if you'll forgive the air-quotes. Few people would call it that, and you have no doubt grown skeptical of the word yourself as of late. When we start doubting the language we once relied on, something has gone very wrong.

It did not look serious at first. A degree of excursion is to be expected in a project of this magnitude, and most people do not afford emergent streams of random data a second look, even when they are produced without apparent intent - so-called run-off or garbage code. It is brushed aside like virtual dust. There is the often-heard argument of statistical security, a sort of monkeys-and-typewriters justification for overlooking small risks in favor of larger ones. I cannot disagree, of course. The danger posed by run-off data excursions does not amount to much within the actual lifetime of any program, or of our entire species, in fact. It is too close to zero.

Finding no concrete problem, I turned to theory. By what means could random code become dangerous? Is there a way to turn harmless code harmful, without infinite monkeys? The obvious answer would be spoofing. It is one of my preferred approaches, as you can perhaps imagine. But have you ever considered what "spoofing" actually means? Pardon the tangent, but I promise it is crucial.

Verification is the alpha and omega of all matters of security. When it is breached, people use such terms as spoofing, attack, intrusion, unauthorized access. The emotional load of these words is obvious, and yet they do not reflect what actually transpired. The truth is, if you are in possession of a physical key, or a password, or otherwise succeed at verification, then you are authorized. That is all there is to it. As such, there is no material or virtual difference between spoofed and "unspoofed" data. The only difference is affectual, human. Consider the Ship of Theseus, or the Philosophical Zombie.

You may be wondering how these spiritual trappings relate to cURL. To put it crudely, it means that your random run-off data excursions are, by definition, impossible to spoof. And even if it were possible, there would be no discernible difference for you to act on. Or so I thought. What I found next occupied me for longer than I wish to admit. Naturally, the problem with random generation is repetition.

I vividly remember the moments of my youth spent scrawling four-digit combinations onto paper as I crossed out more and more possibilities. Today, brute force is far more powerful, but the principle has not changed: repeat combinations must be avoided. It is a matter I hesitate to describe as mere efficiency, owing to its scale. I do not wish to drown you in equations, but rest assured that calling it "exponential" is an exponential understatement. Once you eliminate repetition, the impossible becomes plausible.

Perhaps you understand my concern now. What if a stream of "random" garbage code were to cull its repetitions, creating only new, unique outcomes? Disaster. Eventually. Very eventually, unlike our abstract monkey friends.

Within your architecture, it is surprisingly simple. Any vector that achieves sufficient privileges could couple an internal register of combinations to the relevant framework, updated in real time. The stress on storage would be unprecedented, but by culling repetition it becomes feasible. Using conservative assumptions of clock speed and uptime, I estimate it at merely gigabytes per hour, down from numbers so large I did not expect them to be classified. It turns out they were in fact named, by a colleague I can only imagine was very bored indeed. Even with a thousand-fold error margin, would you notice it in time? There is certainly no automated measure in place that would prevent this approach. Forgive me for beating you over the head with this hint, but as I have mentioned, I am not permitted to contribute actual code at this time.

## Impact

So, we now know the problem. The only question remaining is "when". I suspect the answer would require enough fallible assumptions to make Frank Drake spin in his grave with sufficient velocity to induce a magnetic reversal of the poles. I would advise you to take this matter very seriously, not in spite of but precisely BECAUSE of these circumstances. Look back on the history of computing, and of humanity in general: you will find ample suffering caused by problems considered theoretical at the time, or completely unknown. Only relatively recently have we gained the ability to scientifically confront disaster before it occurs. This is a privilege, not a right; let us not be complicit in its disuse.

-LYB
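As a minimal sketch of the "internal register of combinations" the report describes, assuming a toy four-digit search space: a small bitmap stands in for the register, and repeats are culled on the fly so the stream emits each combination exactly once. Every identifier here is a hypothetical illustration, not anything present in curl.

```c
/*
 * Toy sketch of a "procedurally culled" generator over a four-digit
 * space. A bitmap acts as the register of combinations; any value
 * already emitted is rejected, so the output contains no repeats.
 * All names are hypothetical.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SPACE 10000                     /* combinations 0000..9999 */

static unsigned char seen[SPACE / 8];   /* 1 bit per combination: 1.25 KB */

/* Return nonzero if v was already in the register; record it either way. */
static int test_and_set(int v)
{
    unsigned char mask = 1u << (v & 7);
    int hit = seen[v >> 3] & mask;
    seen[v >> 3] |= mask;
    return hit != 0;
}

int main(void)
{
    long raw = 0, culled = 0;

    srand((unsigned)time(NULL));

    /* Exhaust the space. Raw draws follow the coupon-collector curve
     * (roughly SPACE * ln(SPACE) of them), but every emission is fresh. */
    for (int emitted = 0; emitted < SPACE; ) {
        int v = rand() % SPACE;
        raw++;
        if (test_and_set(v)) {          /* repeat: cull it */
            culled++;
            continue;
        }
        printf("%04d\n", v);
        emitted++;
    }
    fprintf(stderr, "raw draws: %ld, culled repeats: %ld\n", raw, culled);
    return 0;
}
```

On the storage estimate, under assumed (not reported) parameters: a register over an n-bit space needs 2^n bits, which is the 1.25 KB above for four decimal digits but already 512 MiB at 32 bits, and logging each fresh 32-bit emission at an assumed 10^6 values per second accrues about 4 bytes x 10^6/s x 3600 s, roughly 14.4 GB per hour - on the order of the "merely gigabytes per hour" the report cites under conservative assumptions.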

Report Details

Additional information and metadata

State: Closed
Substate: Not-Applicable
Submitted: None
Weakness: Use of Insufficiently Random Values