Classification Headache: Remote vs Local

[This was originally published on the OSVDB blog.]

From: Derek Martin (code[at]
Date: Thu Jul 14 2005 – 21:39:30 CDT

The issue has come up on bugtraq before, but I think it is worth raising again. The question is how to classify attacks against users’ client programs which come from the Internet, e.g. an e-mail carrying a malicious trojan horse payload. This matters because we judge how serious a particular vulnerability is based on how it is classified.


From: Bryan McAninch (bryan[at]
Date: Fri Jul 15 2005 – 10:58:47 CDT

I merely skimmed your post, so I apologize if the link I’m providing is not what you’re looking for. From what I read, it sounds as if you might be looking for an attack taxonomy, or something of that nature. An entire chapter of the Computer Security Handbook is devoted to this topic, written by John D. Howard and Thomas A. Longstaff. This document can also be found at CERT’s website –

Full thread:

While this debate is very important to VDBs, the person who started the thread chose an extremely bad example. The real debate arises over vulnerabilities that don’t require user interaction (i.e., no clicking on an attachment), such as image processing overflows. It is easy to argue this either way: the overflow exists in a local application, but the malicious content arrives from a network resource.

Either way, every existing classification system (including OSVDB’s) falls back onto remote vs local, when it is becoming painfully obvious that the scheme needs to evolve. Steven Christey (CVE) has made comments regarding this (before and during this thread), suggesting that we take note of attacks that are “user-complicit” vs “automatic”. This is certainly a large step in the right direction, but is it enough? Will this classification scheme last us a few more years?
