Sunday, May 28, 2006

My prediction for the next two years: Apple, Symantec, McAfee, Oracle, etc. will get pounded into the ground by the sheer volume of bugs found and disclosed by security researchers looking for easier targets than the current MS codebase. And the abovementioned companies won't have monopoly revenue to throw around to fix the issues.

This is a big opportunity for MS to move into all their markets :-) and sell their products as superior on the security side.

While I am in an "evil" mood: The German train system is about to be IPO'ed, and there's a lot of debate going on here about details of the contract. What is most interesting but not being debated:
All real estate owned by the Deutsche Bahn AG (the privatized version of the German train system that is going to be floated) is in the books with its value upon acquisition -- meaning its value in 1935. The real estate in possession of the DB is, by today's value, worth several times more than the total money they expect to get out of the IPO.

If I were an investment banker, I'd gang up with a bunch of private equity folks, buy the majority in the DB AG once it is IPO'd, and then sell off the real estate. Other countries (USA, Britain) survive without a decent train system, too, and I wouldn't care as I'd have a Rolls and a driver.

Alright, enough of the devil's advocate mode. It was fun seeing my brother last weekend, and we always come up with good ideas ;)

Tuesday, May 23, 2006

MSASN1 is hard to read these days -- the code makes heavy use of carry-flag-dependent arithmetic (adc, rcl, etc.) to check for integer overflows.
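
To illustrate the idea -- this is my guess at the source-level idiom, not the actual MSASN1 source -- the usual C shape of such a check looks like this; the unsigned comparison is exactly what the compiler turns into carry-flag sequences:

#include <stddef.h>

/* Overflow-checked addition: an unsigned sum wrapped if and
   only if the result is smaller than one of the operands,
   i.e. if and only if the addition set the carry flag. */
int add_checked(size_t a, size_t b, size_t *out)
{
    if (a + b < a)        /* carry set -> overflow */
        return 0;         /* reject */
    *out = a + b;
    return 1;
}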

Saturday, May 20, 2006

The Vodafone virus dropped by today and brought us some mobile viruses to play with -- thanks! :-)

So cross-platform diffing can be fun -- Rolf ran a diff of Commwarrior.B against Commwarrior.C today, and while B is compiled for standard ARM, C is compiled in 'Thumb mode', which is pretty much the same as being compiled for a different CPU (Thumb is a different instruction encoding, so all the instructions look different).

The amusing result is that even though the compilation is for a different platform, we still get roughly 61% of the functions matched. And the functions, which are clearly the same on the 'structural' (e.g. flowgraph) level, have completely different instructions, and manual inspection confirms that these differing instructions end up doing the same thing.
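
To give a rough idea of why this works -- the following is a simplified sketch of the concept, not the actual BinDiff algorithm, and the numbers are made up -- one can match functions across instruction sets by comparing flowgraph invariants that survive recompilation:

#include <stdio.h>

/* Per-function structural signature: flowgraph properties that
   are largely preserved when the same source is compiled for
   ARM vs. Thumb, even though every single instruction differs. */
struct signature {
    int basic_blocks;
    int edges;
    int calls;
};

static int structurally_equal(struct signature a, struct signature b)
{
    return a.basic_blocks == b.basic_blocks &&
           a.edges == b.edges &&
           a.calls == b.calls;
}

int main(void)
{
    struct signature arm_fn   = { 12, 17, 3 };  /* made-up ARM function   */
    struct signature thumb_fn = { 12, 17, 3 };  /* made-up Thumb function */
    printf("match: %d\n", structurally_equal(arm_fn, thumb_fn));
    return 0;
}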

For those of you that want to verify things manually, click here.
Quote from Lock, Stock and Two Smoking Barrels: "I don't care who you use as long as they are not complete muppets".

Having MSOffice 0day is not terribly hard, but one should not burn it by making it drop standard, off-the-shelf, poorly written bot software. The stealth advantage that one has by sending .DOC files into an organisation should not be given up by creating empty SYS files or dropping DLLs.
Also, adding registry keys to regain control on reboot is kinda suboptimal.
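
For reference, this is the kind of thing I mean -- the bog-standard Run-key persistence, sketched with the documented Win32 registry API (the value name and payload path here are made up):

#include <windows.h>
#include <string.h>

/* The classic, very visible way of surviving a reboot: an entry
   under HKLM\...\CurrentVersion\Run. This is the first place
   every AV product and forensics checklist looks. */
static int persist(const char *payload_path)
{
    HKEY key;
    LONG rc;
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
            "Software\\Microsoft\\Windows\\CurrentVersion\\Run",
            0, KEY_SET_VALUE, &key) != ERROR_SUCCESS)
        return 0;
    rc = RegSetValueExA(key, "updater", 0, REG_SZ,
            (const BYTE *)payload_path,
            (DWORD)(strlen(payload_path) + 1));
    RegCloseKey(key);
    return rc == ERROR_SUCCESS;
}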

I am kinda curious how they got caught, but my guess is that the bad QA on the Internet Explorer injection raised enough crashes to make people investigate.

On a side note, this highlights a few common problems people face when doing client-side attacks:
  • One-shot-ness -- any exploit you write is a one-shot and should work reliably
  • Process recovery -- any exploit you write needs to be able to recover and have the exploited application resume as if nothing happened. This is a tad hard if you've written 200 megs of garbage to the heap.
  • Lack of complete pre-attack intel on the target environment -- I don't know what went wrong when they injected into iexplore, but they must've been confident that their code was good enough. This suggests they tested it on a testbed which didn't reflect the actual target.
  • Lack of attack focus -- I am quite convinced that they could've had a simpler, stealthier, and more stable bot component if they had thought more thoroughly about what their goal in this attack was
Enough ranting for today.

Friday, May 19, 2006

For those that are into malware classification, here's some code that one can include in a piece of malware to skew the Levenshtein distance described in the recently published MS paper.

#include <stdio.h>
#include <stdlib.h>

/* Open and close a file a random number of times, flooding the
   behavioural event log with bogus entries. */
int j, i = rand() % 50000;   /* random iteration count */
FILE *f;
for (j = 0; j < i; j++) {
    f = fopen("c:\\test.txt", "rt");
    if (f)
        fclose(f);
}

Tuesday, May 16, 2006

Behavioural classification of malware

Today is a good day: I got my math homework done before it has to be handed in, and that leaves me some free time to blog :-)

Dana Epp has a post referring to an entry by Tony Lee referencing an EICAR paper on automated malware classification using behavioural analysis. I am not totally unbiased on this as we at SABRE have been working a lot on structural classification of malware recently, so take my following criticism with a grain of salt.

I personally think the approach in the paper is suboptimal for the following reasons:
  1. By using behavioural data, we can only classify an executable based on things it does in the observed timeframe. Any time-delayed trigger (one that, for example, fires two months from now) is hard to see, and the application might just sleep until then. How do we classify something that just came into our network? We can't classify it until it starts becoming "active".
  2. It is trivial even for somebody who knows only rudimentary programming to modify a program so that the modified program has only a few (~4?) lines of code more than the original, yet its Levenshtein distance as measured in the paper is arbitrarily large. As it stands, adding file writes in a loop should be enough, and the Levenshtein distance can be increased arbitrarily with more loop iterations (a small sketch of the metric follows below).
  3. The paper cites on-access deobfuscation as a principal problem that static analysis cannot easily deal with -- but from (2) it follows that on-access deobfuscation can be coupled with Levenshtein-distance-maximizing code in a trivial manner, breaking the approach the paper proposed as superior. The claim that runtime analysis effectively bypasses the need to deal with obfuscation is simply not true once the application targets the event collection by 'junking' it with bogus events.
Taken together, this means that the approach presented in the paper can be trivially foiled with very minor high-level-language modifications to the source of the program, whereas the static structural comparisons we use have to be foiled via a strong obfuscation component -- which, if done moderately cleverly, would also foil the approach from the paper.
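
For reference, here is the metric being gamed -- a textbook dynamic-programming Levenshtein distance (a minimal sketch over plain characters; the paper computes it over richer behavioural events, but the principle is identical):

#include <stdio.h>
#include <string.h>

#define MAXLEN 256

/* Textbook edit distance: the minimal number of insertions,
   deletions and substitutions turning sequence a into b.
   Every bogus event emitted in a loop costs one more edit,
   so the distance grows linearly with the iteration count. */
static int levenshtein(const char *a, const char *b)
{
    static int d[MAXLEN + 1][MAXLEN + 1];
    int la = (int)strlen(a), lb = (int)strlen(b), i, j;

    for (i = 0; i <= la; i++) d[i][0] = i;
    for (j = 0; j <= lb; j++) d[0][j] = j;
    for (i = 1; i <= la; i++)
        for (j = 1; j <= lb; j++) {
            int cost = (a[i-1] == b[j-1]) ? 0 : 1;
            int best = d[i-1][j] + 1;                        /* deletion     */
            if (d[i][j-1] + 1 < best) best = d[i][j-1] + 1;  /* insertion    */
            if (d[i-1][j-1] + cost < best)                   /* substitution */
                best = d[i-1][j-1] + cost;
            d[i][j] = best;
        }
    return d[la][lb];
}

int main(void)
{
    /* 'o'/'c' = bogus open/close events appended to an
       otherwise identical behaviour log */
    printf("%d\n", levenshtein("xyz", "xyzococococ"));  /* prints 8 */
    return 0;
}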

I'm not saying the paper isn't good or doesn't touch valid points, but behaviour is so trivially randomized, even from the high-level-language side, that the approach in the paper is next to pointless once malware authors start targeting it.

On to something kinda different:

A more general question we have to ask ourselves: do we really want to measure the accuracy of new, automated malware classification algorithms by comparing them against the manual classifications done by AV vendors so far, who had neither efficient information sharing nor any clear methodology for naming malware? Any machine learning based on the AV-industry-provided datasets needs to be very resilient to partially incorrect input data, as a good number of bots seem to be named more or less arbitrarily.

Anyhow, time to go to sleep and read Courtois' eprint paper.

Friday, May 12, 2006

Microsoft's built-in firewall has some really annoying quirks. I am running a laptop connected to an untrusted network, with an instance of VMWare connected on a different interface. If I disable the firewall on the VMWare interface, it automatically gets disabled on the global interface. Very cute. Can we get this fixed?

Monday, May 08, 2006

Important German saying:
"Wer keine Probleme hat macht sich welche"

"Those that do not have any problems will create some for themselves"

Saturday, April 29, 2006

More on automated malware classification and naming

So after having posted some graphs without further explanation yesterday, I think it is a good idea to actually explain what these graphs were all about.

We at SABRE have worked on automated classification of malware over the last few weeks. Essentially, thanks to Rolf's relentless optimization efforts and a few algorithmic tricks, the BinDiff2 engine is blazingly fast. As an example, we can now diff a router image with 30,000+ functions in under 8 minutes on my laptop, and this includes reading the data from the IDA database. That means we can afford to run a couple hundred thousand diffs on a collection of malware samples and then work with the results -- malware rarely exceeds 1000 functions in size, and anything of that size is diffed in a few seconds.

So we were provided with a few thousand samples of bots that Thorsten Holz had collected. The samples themselves were only marked with their MD5sum.

We ran the first few hundred through a very cheap unpacking tool and then disassembled the results. We then diffed each sample against every other sample. Afterwards, we ran a phylogenetic clustering algorithm on top of the results and ended up with this graph:

The connected components have been colored, and a hi-res variant of this image can be looked at here.
The labels on the edges are measures of graph-theoretic similarity -- a value of 1.000 means that the executables are identical; lower values give percentage-wise similarity. For this graph, we decided to keep everything with a similarity of 50% or greater in one family and to cut off everything else.
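
For the curious, the family-cutting step is conceptually simple. Here is a sketch of the idea (plain union-find over a made-up similarity matrix; not our actual implementation):

#include <stdio.h>

#define NSAMPLES 5
#define CUTOFF   0.5

static int parent[NSAMPLES];

/* Union-find: samples end up in the same family iff they are
   connected by edges with similarity >= CUTOFF. */
static int find(int x)
{
    while (parent[x] != x)
        x = parent[x] = parent[parent[x]];
    return x;
}

static void unite(int a, int b)
{
    parent[find(a)] = find(b);
}

int main(void)
{
    /* made-up pairwise similarities for five samples */
    double sim[NSAMPLES][NSAMPLES] = {
        { 1.00, 0.92, 0.61, 0.10, 0.05 },
        { 0.92, 1.00, 0.58, 0.08, 0.04 },
        { 0.61, 0.58, 1.00, 0.12, 0.06 },
        { 0.10, 0.08, 0.12, 1.00, 0.75 },
        { 0.05, 0.04, 0.06, 0.75, 1.00 },
    };
    int i, j;

    for (i = 0; i < NSAMPLES; i++) parent[i] = i;
    for (i = 0; i < NSAMPLES; i++)
        for (j = i + 1; j < NSAMPLES; j++)
            if (sim[i][j] >= CUTOFF)
                unite(i, j);
    for (i = 0; i < NSAMPLES; i++)
        printf("sample %d -> family %d\n", i, find(i));
    return 0;
}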

So what can we read from this graph? First of all, it is quite obvious that although we have ~200 samples, we only have two large families, three small families, two pairs of siblings, and a few isolated samples. Secondly, even the most "distant relatives" in the cyan-colored cluster are 75% similar; the most "distant relatives" in the green cluster on the right are still 58% similar. If we cut the green cluster on the right into two subclusters, the most distant relatives are 90% similar.

Now, in order to double-check our results with what AV folks already know, Thorsten Holz provided us with the results of a run of ClamAV on the samples, and we re-generated the graphs with the names of those samples that were recognized by ClamAV.

The result looks like this:

(check this graphic for a hi-res version, which you will need in order to make sense of the following ;)

What can we learn from this graph? First of all, we see that the various members of the GoBot family are so similar to the GhostBot branch that we should probably consider GoBot and GhostBot to be one family. The same holds for the IrcBot and GoBot-3 samples, and for the Gobot.R and Downloader.Delf-35 samples -- why do we have such heavily differing names when the software in question is so similar?

We seem to consider Sasser.B and Sasser.D to be of the same family at a similarity of 69%, yet Gobot.R and Downloader.Delf-35, which are a lot more similar, get their own families?

What we can also see is that we have two samples in the green cluster (GoBot, IRCBot, GhostBot, Downloader.Delf) that have not been recognized by ClamAV and that seem to have their own branch in the family tree:

Apparently, these two specimens are new variants of the above family tree.

Miscellaneous other things we can learn from this (admittedly small) graph:

  • We have 6 variants of Trojan.Crypt.B here that go undetected by ClamAV
  • We have an undetected variant of Worm.Korgo.Z
  • PadoBot and Korgo.H are very closely related, whereas Korgo.Z does not appear to be very similar. Generally, PadoBot and the Korgo.H/Korgo.Y/Korgo.AJ family seem to be just one family.

So much fun.
Anyhow, if you guys find this stuff interesting, drop me a mail about it at halvar.flake@sabre-security.com ...

Friday, April 28, 2006

Also, Rolf has created a Shockwave demo in which he shows how to use BinDiff to find GPL violations.

Check it here...
Automated classification of malware is easier than it seems once you have the right infrastructure -- in our case consisting of the BinDiff2 engine, a few generic unpacking tools and a graph layout program.

I have uploaded some graphics:

This is just a collection of arbitrary malware whose members were identified using AV programs. We then BinDiff'ed all samples and used a phylogenetic clustering algorithm to draw this diagram. The results are quite neat, although we did not filter library functions, so some very simple viruses show high similarity simply because 95% of their code is statically linked library code.

This is a collection of a few hundred bots. They were collected on Thorsten Holz's honeynet, and we auto-unpacked them and then did the BinDiffing/tree generation. This time, we filtered libraries as well as we could. The 184 samples here all have different MD5sums, but the large majority belongs to essentially two families. All in all, we have ~5 "families", two pairs of "siblings" and 9 isolated species here. Fun.

Wednesday, April 26, 2006

I am in a state of brainfry. Which I guess is good. Exhaustion can be fun. I don't think I have worked as hard as I am working at the moment in quite a while, and while the results move slowly (due to fighting on many fronts), they do move, and in a good direction. Now I will sleep. Ah, for those into this sort of stuff, here is what claims to be a significant improvement of the FMS attack on RC4. A good read, although nothing world-moving, and I haven't fully verified the results yet.

Wednesday, April 19, 2006

Quote from the Matasano Blog:
"We are so funded now"
^^ Classic :)
Language in Thought and Action, or "what's correct vs. what's right"

Everybody should, IMO, at some point in his life have read Hayakawa's "Language in Thought and Action". It is a book about semantics, about words and their meanings. Semantics pop up in very amusing situations, for example in code analysis/review, and in a few other places.

A small thing to take away from that book is that the entry in a dictionary is not the definitive meaning of a word; rather, the meaning of a word is (somewhat circularly) defined as whatever the collective mind thinks the word means.

There's currently a thread on Bugtraq about the way GCC treats certain 'invalid language' constructs. While I find very few things more amusing than compiler changes breaking formerly working code, I also find the reaction of everybody except Fefe on that list very amusing: instead of relying on what everybody thinks the meaning of that line of code is, they refer to the written standard. In essence, they look up the meaning of a word in a dictionary.

Yes, standards are important. But there is a difference between the standard on paper and the 'standard', meaning the way everybody perceives (and writes in) the language. And because C's standard has in some ways gone unenforced for 20+ years, there are lots of existing 'standard' idioms (such as Fefe's int wrap check, sketched below) that are not valid by the written standard.
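
To make this concrete, here is the shape of the idiom in question (my reconstruction of the wrap check being discussed): add first, test for wrap afterwards. Since signed overflow is undefined behaviour per the written standard, a modern GCC may legally conclude the test is always false and delete it:

#include <limits.h>
#include <stdio.h>

/* The 'standard'-in-practice overflow check that 20+ years of
   compilers happily accepted: perform the addition, then test
   whether the result wrapped around. */
static int fits_after_adding_100(int a)
{
    if (a + 100 < a)   /* intended wrap detection -- UB, so   */
        return 0;      /* the compiler may remove this branch */
    return 1;
}

int main(void)
{
    /* with optimization, this may cheerfully print 1 */
    printf("%d\n", fits_after_adding_100(INT_MAX));
    return 0;
}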

What we're trying to do here is the equivalent of not correcting a child's grammar for 20 years, letting it play with other kids with broken grammar, and always understanding what the child wanted to say when it used broken grammar. And then, once it turns 20, we cross out the constructs with bad grammar from everything it says and interpret the rest.

If the construct is so invalid, have the compiler warn at least :)
This sounds like a sure recipe for disaster to me :)

I had a discussion with Rolf yesterday about GCC's ghastliness, and about the fact that we're spending a good bit of our time fighting compiler woes instead of coding. Then again, we're doing something rather odd: we're trying to write C++ code with GCC, which, frankly, doesn't work. I am very inclined to switch to Intel's CPP.

Oh, while I am at ranting: on which CPU is GCC's weird optimization of passing arguments by subtracting from ESP and then issuing several 'mov [esp+const], reg' faster than push/call? Sure, on a cycle-by-cycle basis each instruction is faster, but has anyone ever considered the impact of a roughly 3x increase in code size on CPU caching?
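
For concreteness, here is a small worked example of the two calling sequences; the byte counts are for small immediates, taken from the IA-32 opcode tables, and real functions are often worse:

#include <stdio.h>

static int callee(int a, int b) { return a + b; }

static int caller(void)
{
    /*
     * What the call below compiles to, schematically:
     *
     * push/call style:           GCC's mov-[esp+const] style:
     *   push 2       ; 2 bytes     sub  esp, 8            ; 3 bytes
     *   push 1       ; 2 bytes     mov  dword [esp+4], 2  ; 8 bytes
     *   call callee  ; 5 bytes     mov  dword [esp], 1    ; 7 bytes
     *                              call callee            ; 5 bytes
     * total: 9 bytes             total: 23 bytes
     */
    return callee(1, 2);
}

int main(void)
{
    printf("%d\n", caller());
    return 0;
}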

I'll shut up now before this deteriorates into more ranting.

Monday, April 17, 2006

http://teh-win.blogspot.com/ has (as usual) an amusing read up, which at one point harps on something that I can't support enough: 0days != hacking. Almost all "real" hacking is done via transitive trust (and thus the same goes for pentests). 0days allow you to more quickly gain _some_ trust to exploit transitively, but the "real" work is done on transitive trust. And transitive trust and "real" hacking get too little credit at security conferences, mainly because any "real" research here is by direct implication illegal ("... I wrote this worm that exploits transitive trust ... and I have some empirical data on its spreading capabilities *cough* ...").

Now I just need to find a dictionary that explains to me what "branler la nouille en mode noyau" means ;)
Publication Economics and Cryptography Research

Something I cannot cease wondering about is why, historically, there has been so little published research on the cryptanalysis of block ciphers. There seem to be millions of articles on turning some math guy's favourite mathematical problem into an asymmetric crypto algorithm, and a similar flood of "fair coin flipping if all participants are drunk cats and the coin is a ball of yarn" papers. All in all, there have been perhaps fewer than 20 REALLY important papers in the analysis of symmetric crypto in ... uhm ... the last 10 years (I count hashes as symmetric crypto here).

What's the reason for this?

First of all, symmetric crypto tends not to have a "nice" mathematical structure. This changed somewhat with AES, but almost everything on the symmetric side is rather ugly to look at. Sure, everything can be written as large multivariate polynomials over GF(2), but that's just a prettier way of writing a large boolean formula. So it's hard for anybody in a math department to justify working on something that is "like a ring, but not quite, or like a group, but not quite".
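
(To see why the polynomial view buys nothing: over GF(2) the basic gates are already polynomials -- AND(x,y) = x*y, XOR(x,y) = x + y, NOT(x) = x + 1, OR(x,y) = x + y + x*y -- so any cipher circuit translates gate by gate into one huge multivariate polynomial, i.e. the very same boolean formula in a prettier dress.)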

Secondly, building a protocol or proposing a new asymmetric cipher is something that a sane researcher (who has not earned tenure yet) can do in a "short" window of time. Setting out to break a significant crypto algorithm could very easily lead to "10+ years in the wilderness and a botched academic career due to a lack of publications". The result: if you haven't earned tenure yet and want to work in crypto, you work on the constructive side.

I find this a bit frustrating. I'd like to work on ciphers, specifically on BREAKING ciphers. I could seriously never get myself excited about defense. I wouldn't mind spending a few years of my life on one cipher. But academically and career-wise, this is clear suicide.

Perhaps we should value "destructive" research more. From my personal viewpoint, a break of a significant cipher is worth more than 20 papers on fair coin flipping in the absence of gravity. But when it comes to giving out tenure, it almost seems that the 20 papers outweigh the one.

Ahwell. Enough ranting, back to work.

Friday, March 31, 2006

http://metasploit.blogspot.com/2006/03/few-msrt-graph-illustrations.html

This is cool.

Wednesday, March 15, 2006

My latest travel madness is over, and after 4 weeks on the road I am home again. The last week of that travelling I spent mostly sick -- that means I went to BlueHat for all the work and then had to skip all the fun. Furthermore, I learnt a bunch of interesting things concerning sinusitis and pressure changes during intercontinental flights.

Those that know me know I am a graph nerd (and by extension an OBDD nerd), so check this paper for some more OBDD fun.
While you are doing your brain some good, you might as well read this paper -- it is fascinating and very useful at the same time.

Now, I am just enjoying the second day of my first "weekend" in a month -- I decided not to work for two days, and spent the first 34 hours of these two days in bed. Now I am cleaning my flat and will be reading some fun shit later on (compact Riemann surfaces perhaps?). Or I will call Ero and see if he is up for a round of Go.

I saw that Ilfak reads my BH presentations -- I am flattered and thus will need to try harder with them the next time.

One of the highlights of BlueHat was meeting a bunch of MS folks that I respect A LOT: Jon, Brandon, Alex, Pete, Jan -- thanks for being awesome.

I read many good things about OCaml recently, and a bunch of very interesting programs (ASTREE and SLAM/SDV) seem to be written in it. Can someone offer some comments on it?

Sunday, February 26, 2006

My travel schedule remains insane, and I notice that thinking about 'hard' problems is very difficult while travelling. Sigh. Anyhow, those that had beer with me in the last year and a half or so know that I have a severe interest in OBDD-like structures and their applications to cryptanalysis.
So I just wanted to put a reference here to
http://eprint.iacr.org/2006/072.pdf

Saturday, February 18, 2006

We at SABRE will be giving a 3-day advanced reverse engineering course this fall in Frankfurt. Check this link if you are interested in that sorta stuff.