The abuse of (automated) abuse reports

Abuse reports abound. So do false positives in antivirus (AV) products. Worst of all, while false positives in AV products spread within the industry, reports (and corrective action) about them don’t. Try to get rid of a false positive that affects your own software and you’ll know what I mean. There is no mechanism to spread the message that a certain executable is not malicious other than sending a message to every single one of the AV vendors.

In the past two years I have gotten several abuse reports sent to Hetzner (my hoster) against my website. The reason? One of my programs – yes, one of those that come with source code and all – was detected by one or multiple AV engines and thus automatically classified as malicious by wannabe “security experts” … *cough* *cough* … uhm, by an automated system taking the results of the AV scanners as a given.

None – and I mean not a single one of those “security experts” – seemed to have any notable know-how of their own, such as being able to analyze the files in question. Instead they blindly relied on the results of some multi-scanner such as VirusTotal or Jotti – I’ll come to why this is bad in a moment. Anyway. Of course there was a trigger for posting this: I got another one of these abuse reports sent to me on Wednesday.

For starters, those abuse reports come with a deadline from my hoster, Hetzner, which is kind of an inconvenience, given that at the end of the deadline stands the potential disconnection of my server from the net. In some jurisdictions this would be considered coercion or worse. Now, I realize that Hetzner is responsible for their network, but what really bugs me about this procedure is that it is nothing less than a reversal of the burden of proof. From this point on I am supposed to prove that the software is not malicious. Against the judgment of x-many AV scanners. Oh, and let’s not forget that my domain has an abuse alias as well, in full compliance with the respective RFC.

Now don’t get me wrong. I suppose it is a good thing for people to care about a “clean” internet and such. The problem with those self-proclaimed internet-cops is that they have no standards against which to measure their evidence – obviously. Would abuse reports such as these stand trial in front of a proper court of law? Definitely not. In order for real cops to go to the prosecutor, they first need a case. Preferably water-proof. That is the main difference. Not to mention that here the cops and the prosecutor and the judge are the same person/institution – Hetzner obligingly assuming my guilt by default and putting the burden of proof on me. Heck, they don’t even have the option for me to say this was a false alarm. Instead it is assumed that they[1] are right and I am at fault.

Our self-proclaimed internet-cops and “security experts”[2] wouldn’t stand a chance bringing this before a proper court of law. But the knowledge gap works to their advantage. Anyone who lacks the required know-how will falter and take down the detected program, try to recompile it[3] or do whatever it takes to make the problem go away. The default assumption seems to be: clearly the server got hacked, so it’s only fair that the admins spend countless hours fixing it.

A real-world example

But let us not take the most recent abuse of automated abuse reports as our example, but rather the most unpleasant one I’ve had: Clean MX.

In April last year they sent an abuse report to Hetzner about a collection of programs. The program in question, RunAsSYS, won’t even function on anything more recent than XP, including Server 2003. It attempts to use the so-called Debploit to get SYSTEM privileges. Clearly a gray area and potentially[4] a security risk. Not a trojan or virus or anything along those lines, however. So I wasn’t particularly surprised by the detection, but I was by the reaction from Clean MX.

Since this seemed serious enough – after all, my reputation was at stake[5] – I decided to prove that the program was no different from the accompanying source code. So I loaded it into IDA and did my job. Sure enough, the program hadn’t been tampered with. It was sufficiently small that I could prove the assembler code matched the accompanying source code.

While Hetzner quickly dropped the “charges”, Clean MX was reluctant to follow suit. So I decided to send them a letter in which I made it clear and known that I was going to sue them in case they kept claiming it was malware.

First response:

  • why do I think that this is a false positive? – Again, a reversal of the burden of proof, and that despite my having sent an analysis complete enough to convince a malware researcher new to the job.
  • concerning the complaint that the given fax and landline numbers were not reachable, he excused himself by saying that his cell phone was always reachable – which evidently it wasn’t, as the automated female voice assured me several times.
  • the wording of my complaint apparently wasn’t helpful. But neither was their self-proclamation as internet-cops nor the shallow “evidence” they had in store.
  • apparently the email I sent regarding the case to their email address was never received. How surprising, it also never bounced 🙄 …

Next came the triumphant remark that the company I work for is also detecting it – oh, and of course that I should fix that first. Followed by:

your legal announcements in your pdf are not really stunning…

Obviously someone hadn’t heard of the difference between felony and misdemeanor. Not so much my problem, though. I then tried to make my point clear:

Who verified it then? If you read something in the yellow press, do you also take it for granted and spread the word? It’s called slander. Just because you are not the ultimate source of some gossip doesn’t mean you can’t be held liable. Again, it says “verified”. By whom? When? Using what methods? Where can I find the analysis – or to put it differently: the hard facts?

You spread false accusations about my programs and ultimately me, I am (still) giving you the chance to correct that.


Most false positives are detected as such after only a few days (at most) and don’t even make it into wide detection.

For something that is not malware I find the result quite respectable. Kaspersky managed to pull something similar last year with 20 decoy samples (which were not malicious, but went into detection by the majority of AVs over time nevertheless).

But please, what do the VirusTotal results tell you? You must be trying to say something with them, right? That the code is malicious? Is it? Have *you* or your employees verified that?

Have you actually looked into the binary as per static analysis methods? Have you looked into the accompanying source code? Have you tried to execute it inside a safe environment/sandbox of any kind?


What does an outdated link prove to you? What does a link to VirusTotal prove anyway, outdated or not?

Aside from that, this issue is between me as an individual and your company[6].


Because this is not in the least malicious. But why do I have to prove my innocence – which, by the way, I did with my mail yesterday? Again, I’m the author. Accusing me of distributing malware is slanderous. Even more so because I am a malware fighter myself.

What I had to check was whether the binary had been manipulated compared to the source code. This is not the case. The binary is genuine. It’s on you to provide details why you even classify it as malware, not on me to prove otherwise (although I did).

Please consult the source code for further questions as to why this is not “malware”, a “virus” or one of the colorful names given by other scanners like “Backdoor” and “Trojan”. My favorite is “IRC Trojan”, as the binary does not even include *any* networking functions (nor “secretly” calls them through hashed imports or the like), so I’m amazed by how far off those detection names are from even describing the functionality. If it were an IRC client, sure … a false positive as “IRC Bot” or so would make some remote sense. But this way?

Loooong emails, as you can see.

Long story short: the contact person at Clean MX came to his senses concerning the false positive and contacted his tech contact at another firm[7]. This contact could confirm within less than an hour that the file was indeed genuine and harmless – no malware.

The amazing thing is that there wasn’t a shred of remorse on the part of the Clean MX contact person. The assumption that AVs are infallible, and that one need not have the expertise to prove them right or wrong, couldn’t be shattered. Amazing. Wild west on the internet, with my hoster being a willing lackey of those self-proclaimed internet-cops.

One can almost hear those internet-cops shout: “Stuff that ei incumbit probatio qui dicit, non qui negat up your …, Romans.”

How is this a problem?

False positives spread because detections spread within the industry. The sheer number of malware variants appearing every day is more than even a sizable company can tackle on its own, so samples of detected malware (including false positives) get shared between AV vendors. Since those AV engines use different techniques and different algorithms, they clearly aren’t all detecting a sample by the exact same means. So the set of files detected by one AV engine may overlap with the detections of another, but may not – and in most cases will not – be identical to the set of files detected with that other “signature”[8].
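To put that overlap argument in more concrete terms, here is a tiny Python sketch. The file hashes and engine names are made up for illustration; the point is only how sample sharing carries a false positive along with it:

```python
# Three hypothetical engines with overlapping, but not identical,
# sets of detected file hashes (different signatures/heuristics).
engine_a = {"hash1", "hash2", "hash3", "fp_hash"}  # includes a false positive
engine_b = {"hash2", "hash3", "fp_hash"}           # already picked up the FP
engine_c = {"hash3", "hash4"}                      # different heuristic, no FP yet

# The detection sets overlap ...
shared = engine_a & engine_b & engine_c            # -> {"hash3"}
# ... but none of them are identical.
assert engine_a != engine_b and engine_b != engine_c

# Once engine A shares its detected samples and engine C trusts the
# peer's verdict wholesale, the false positive travels too:
engine_c |= engine_a
print("fp_hash" in engine_c)  # → True
```

There is, of course, no corresponding set operation that removes `fp_hash` from everyone once one vendor clears it – which is exactly the problem described above.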

This is a problem. As we can see, detections propagate within the AV industry. Not so for the false positive reports. The false positives themselves spread along with the detections, until a vendor discovers that a detection is a false positive. But that information does not propagate. There is no automatic mechanism in place for that.

Multi-scanners to the rescue?

Multi-scanners are a powerful tool for malware fighters and malware writers alike. The malware writers use them to check whether their creations are being detected already and refine their work accordingly. Malware fighters such as those “security experts” mentioned above also use them to classify a program as good or bad. Unfortunately, they often don’t even consider that the individual AV scanners have graduated detection levels. Something can be a potential risk or it can be outright malicious. Programs such as netcat can be used for malicious purposes, but that’s clearly not their main purpose. It’s like a kitchen knife: it can be used for murder or for chopping your veggies. Ban all kitchen knives!!! 😆
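A small sketch of what I mean by graduated detection levels. The category keywords below are my own ad-hoc picks for illustration, not any vendor’s actual taxonomy, and the verdict strings are made up to resemble a multi-scanner result for a dual-use tool:

```python
# Keywords that hint at a gray-area verdict rather than outright malware
# (assumed for this sketch; real vendor naming schemes vary widely).
RISKWARE_HINTS = ("riskware", "hacktool", "not-a-virus", "pup", "unwanted")

def triage(verdicts):
    """Split multi-scanner verdict strings into gray-area vs. malware counts."""
    gray = sum(1 for v in verdicts
               if any(hint in v.lower() for hint in RISKWARE_HINTS))
    return {"gray": gray, "malware": len(verdicts) - gray}

# Made-up verdicts for something like a privilege-escalation utility:
result = triage(["HackTool.Win32.Agent", "not-a-virus:RiskTool", "Trojan.Generic"])
print(result)  # → {'gray': 2, 'malware': 1}
```

A “3 of 60 engines” hit that consists mostly of gray-area labels is very different evidence from three engines calling something a trojan – which is precisely the distinction the automated abuse reporters skip.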

With the knowledge that false positives spread automatically, it is merely a matter of time before a file makes it into detection with more and more vendors. Fair enough.

However, this means one has to be very, very cautious about assuming a file is malicious just because it is detected by multiple AV scanners. This may be a good default assumption for the inexperienced end-user, but it’s not a good one for security experts, self-proclaimed or not …


I am aware of some more or less public test projects, run by the multi-scanner websites among others, that try to notify the makers of programs whenever their programs (or downloads on their websites) start to be detected (hopefully erroneously), and that try to create a notification mechanism for the vendors. What would be needed, though, is for the AV vendors to sit down at a table at one of the many industry conferences and join efforts in establishing a false positive reporting mechanism that works industry-wide.
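Just to illustrate the idea – no such standard exists, and every field name below is my own invention – a shared false positive record would not need to be complicated. Something as simple as the following, distributed over the same channels the samples themselves travel on, would already propagate the all-clear:

```python
import json

# Purely hypothetical sketch of an industry-wide false positive report.
# All field names and values are invented for illustration.
fp_report = {
    "sha256": "0" * 64,                     # placeholder hash of the cleared file
    "verdict": "clean",
    "reported_by": "VendorX",               # hypothetical vendor identifier
    "original_detection": "Trojan.Generic.Example",
    "analysis": "Binary matches the published source code; "
                "no malicious behavior found.",
    "timestamp": "2012-04-01T00:00:00Z",
}

print(json.dumps(fp_report, indent=2))
```

The hard part is not the format, of course, but getting the vendors to agree on consuming such reports automatically – the same way they already consume shared samples.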

Too much to ask? I think not. Given the widespread misconception among wannabe security experts that AV engines are infallible enough to send out automated abuse reports based on their detections, it is on us, the AV industry, to step forward and offer a remedy. Ultimately this will create loopholes, sure. Standards between AV vendors for what to classify as malware versus grayware or a security risk differ, sure. Still, the consensus cannot be to let software vendors jump through hoops when it is on us to correct our own errors and take their files out of detection – that is what false positives are, after all: erroneous detections.

// Oliver

PS: please discuss below …

  1. the senders of the abuse report
  2. without the know-how to reverse-engineer and analyze the claimed malicious code themselves
  3. a method that will only work with old-style signature-based AVs, whereas heuristics-based scanners won’t easily get fooled by this
  4. albeit inert on any newer OS version, as mentioned
  5. I work for an AV company
  6. He was trying hard to somehow connect the fact that I as an individual obviously possess the expertise to do static analysis of executables with my role at FRISK, but my website is my private thing and the company has nothing to do with it. I even acquired most of the skills in question before I joined the company. In short: it was a straw man …
  7. Quite frankly, I was surprised that there was any expert knowledge involved after all.
  8. Signatures and/or fingerprints were the classic means of detection, but most AVs these days have more effective means to detect malware.
This entry was posted in EN, IT Security.
