Monday, April 28, 2008

There's a lot of hoopla in German media about the German SIGINT folks having to admit that they trojanized Afghanistan's Ministry of Commerce and Industry.

The entire situation is hilarious, as Mrs. Merkel criticized the Chinese for having sponsored hacking sprees into German government institutions last year - I guess she is not overly happy about all this stuff hitting the press now.

The first article is actually quite interesting. It is terribly hard to get any information about InfoSec stuff in Europe (we'd need a Mr. Bamford around here, I fear), so the article is really one of the very few data points to be found.
In 2006, Division 2 consisted of 13 specialist departments and a management team (Department 20A), employing about 1,000 people. The departments are known by their German acronyms, like MOFA (mobile and operational telecommunications intelligence gathering), FAKT (cable telecommunications intelligence gathering) and OPUS (operational support and wiretapping technology).
So there are people working on this sort of stuff in Germany after all. I wonder why one never meets any of them at security conferences - they either have excellent covers or no budget to travel to any conferences.

Another amusing tidbit:
Perhaps it will never be fully clear why the BND chose this particular ministry and whether other government agencies in Kabul were also affected -- most of the files relating to the case have apparently been destroyed.
I find the regularity with which important files regarding espionage or KSK misbehavior are destroyed or lost a little bit ... peculiar.

There's a bit in the article about emails that have a .de domain ending being automatically discarded by their surveillance tools. Hilarious.

The issue came to light because during the surveillance a German reporter had her email read, too (she was communicating with an Afghan official whose emails were being read). This is a violation of the freedom of the press here in Germany, and normally, the BND should've dealt with this by reporting their breach to the parliamentary subcommittee for intelligence oversight, which they somehow didn't. A whistleblower inside the BND then sent a letter to a bunch of politicians, making the situation public.

It's always hard to make any judgements in cases like these, as the public information is prone to being unreliable, but it is encouraging that a whistleblower had the guts to send a letter out. I am a big fan of the notion that everyone is personally responsible for his democracy.

The topic of intelligence and democracies is always difficult: If one accepts the necessity of intelligence services (which, by their nature, operate in dodgy terrain, and which, due to their requirements for secrecy, are difficult to control democratically), then one has to make sure that parliamentary oversight works well. This implies that the intelligence agencies properly inform the parliamentary committee, and it also implies that the parliamentary committee keeps the information provided confidential.

There seem to be only two ways to construct parliamentary oversight in a democracy: Pre-operation or post-operation. Pre-operation would have the committee approve of any potentially problematic operation ahead of it being performed. If things go spectacularly wrong, the fault is to be blamed on the committee. The problem with this is secrecy: Such a committee is big, and for operational security it seems dangerous to disseminate any information this widely.

This appears to be the reason why most democracies seem to opt for a "post-operation" model: The services have in-house legal experts, and these legal experts rule on the 'legality' of a given operation. Then the operation takes place, and the committee is notified after the fact if something goes spectacularly wrong.

The trouble with this model appears to be that the intelligence service doesn't have much incentive to report any problems: They can always hope the problem goes away by itself. It is the higher-ups in the hierarchy that have to report to the committee, and they are the ones whose heads will roll if things go wrong.

It appears to be an organisational problem: Information is supposed to flow upwards in the organisational hierarchy, but at the same time, the messenger might be shot. This is almost certain to lead to a situation where important information is withheld.

I guess it's any manager's nightmare that his "subordinates" (horrible word -- this should mean "the guys doing the work and understanding the issues") in the organisation start feeding him misinformation. Organisations start rotting quickly if the bottom-up flow of information is disrupted. The way things are set up here in Germany seems to encourage such disruptions. And if mid-level management is a failure but blocks this information from upper management, the guys in the trenches have not only the right, but the duty, to send a letter to upper management.

I have no clue if there is any country that has these things organized in a better way -- it seems these problems haunt most democracies.

Anyhow, if anyone happens to stumble across the particular software used in this case, I think it would make for a terribly interesting weekend of reverse engineering -- I am terribly nosy about what sort of stuff the tool was capable of :)

Cheers,
Halvar

Friday, April 25, 2008

Patch obfuscation etc.

So it seems the APEG paper is getting a lot of attention these days, and some of the conclusions that are (IMO falsely) drawn from it are:
  • patch time to exploit is approaching zero
  • patches should be obfuscated
Before I go into details, a short summary of the paper:
  1. BinDiff-style algorithms are used to find changes between the patched and unpatched version
  2. The vulnerable locations are identified.
  3. Constraint formulas are generated from the code via three different methods:
    1. Static: A graph of all basic blocks on code paths between the vulnerability and the data input into the application is generated, and a constraint formula is generated from this graph.
    2. Dynamic: An execution trace is taken; if the vulnerability lies on a program path that one can already execute, constraints are generated from this path.
    3. Dynamic/Static: Instead of going from data input to target vulnerability (as in the static approach), one can use an existing path that comes "close" to the vulnerability as starting point from which to proceed with the static approach.
  4. The (very powerful) solver STP is used for solving these constraint systems, generating inputs that exercise a particular code path that triggers the vulnerability.
  5. A number of vulnerabilities are discussed which were successfully triggered using the methods described in the paper
  6. The conclusion is drawn that within minutes of receiving a patch, attackers can use automatically generated exploits to compromise systems.
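The mechanics of steps 3 and 4 can be sketched in a few lines of Python. The following is a deliberately toy model, not APEG itself: the path constraints are hand-written for a hypothetical length-check patch, and a brute-force enumeration over a tiny input space stands in for STP.

```python
# Toy sketch of APEG-style input generation (hypothetical example).
# Suppose the patch added the check "n <= 64" before a copy into a
# 64-byte buffer, and the parser only reaches the copy for inputs
# starting with a magic byte. The path constraints are then:
#   inp[0] == 0x7F   (magic byte checked on the path to the copy)
#   inp[1] > 64      (negation of the check the patch introduced)

def satisfies_path_constraints(inp):
    """True if 'inp' drives execution to the (unpatched) vulnerable copy."""
    return len(inp) >= 2 and inp[0] == 0x7F and inp[1] > 64

def solve():
    """Brute-force stand-in for STP: search the 2-byte input space."""
    for magic in range(256):
        for n in range(256):
            candidate = bytes([magic, n])
            if satisfies_path_constraints(candidate):
                return candidate
    return None

trigger = solve()
assert trigger == bytes([0x7F, 65])  # first satisfying assignment found
```

The real work in the paper is, of course, in deriving those constraints automatically from the binary, and in making a solver scale to the formulas that fall out of real code.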
In essence, the paper implements automated input crafting. The desire to do this has been described before -- Sherri Sparks' talk on "Sidewinder" (using genetic algorithms to generate inputs to exercise a particular path) comes to mind, and many discussions about generating a SAT problem from a particular program path to be fed into a SAT solver (or any other solver for that matter).

What the APEG paper describes is impressive -- using STP is definitely a step forward, as it appears that STP is a much superior solver to pretty much everything else that's publicly available.

It is equally important to keep the limitations of this approach in mind - people are reacting in a panicked manner without necessarily understanding what this can and cannot do.
  1. Possible NP-hardness of the problem. Solving for a particular path is essentially an instance of SAT, and we know that this can be NP-hard. It doesn't have to be, but the paper mentions many formulas that STP could not solve in reasonable time. While this doesn't imply that these formulas are in fact hard to solve, it shows how much this depends on the quality of your solver and the complexity of the formulas that are generated.
  2. The method described in the paper does not generate exploits. It triggers vulnerabilities. Anyone who has worked on even a moderately complex issue in the past knows that there is often a long and painful path between triggering an overflow and making use of it. The paper implies that the results of APEG are immediately available to compromise systems. This is, plainly, not correct. If APEG is successful, the results can be used to cause a crash of a process, and I refuse to call this a "compromise". Shooting a foreign politician is not equal to having your intelligence agency compromise him.
  3. Semantic issues. All vulnerabilities for which this method worked were extremely simple. The actually interesting IGMP overflow that Alex Wheeler had discovered, for example, would not be easily dealt with by these methods -- because program state has to be modified for that exploit in a non-trivial way. In essence, a patch can tell you that "this value YY must not exceed XX", but if YY is not direct user data but indirectly calculated through other program events, it is not (yet) possible to automatically set YY.
So in short one could say that APEG will succeed in triggering a vulnerability if the following conditions are met:
  1. The program path between the vulnerability and code that one already knows how to execute is comparatively simple
  2. The generated equation systems are not too complex for the solver
  3. The bug is "linear" in the sense that no complicated manipulation of program state is required to trigger the vulnerability
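The third condition is the one that bites in practice. A hedged illustration of the difference (both checks below are made up for illustration):

```python
# Hypothetical illustration of "linear" vs. state-dependent bugs.

def linear_check(inp):
    # "Linear": the dangerous value is a direct input byte. A solver
    # only has to negate "inp[0] <= 200" -- trivially satisfiable.
    return inp[0] > 200

def stateful_check(inp, event_log):
    # State-dependent: the dangerous value is accumulated over earlier
    # protocol events and never appears in the input itself. The solver
    # would have to reason about an entire event history, not one packet.
    joins = sum(1 for e in event_log if e == "join")
    return joins > 200 and inp[0] == 0x7F

# One crafted byte triggers the linear case ...
assert linear_check(bytes([201]))
# ... but no single input triggers the stateful one without the right
# history of prior events:
assert not stateful_check(bytes([0x7F]), event_log=[])
assert stateful_check(bytes([0x7F]), event_log=["join"] * 201)
```

A patch diff exposes the check itself in both cases; only in the first case does knowing the check translate directly into an input.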
This is still very impressive stuff, but it reads a lot less dramatic than "one can generate an exploit automatically from an arbitrary patch". All in all, great work, and I do not cease to be amazed by the results that STP has brought to code analysis in general. It confirms that better solvers ==> better code analysis.

What the paper gets wrong IMO are the conclusions about what should be done in the patching process. It argues that because "exploits can be generated automatically, the patching process needs fixing". This is a flawed argument, as ... uhm ... useful exploits can't (yet) be generated automatically. Triggering a vulnerability is not the same as exploiting it, especially under modern operating systems (due to ASLR/DEP/Pax/GrSec).

The paper proposes a number of ways of fixing the problems with the current patching process:

1. Patch obfuscation. The proposal that zombie-like comes back every few years: Let's obfuscate security patches, and all will be good. The problems with this are manifold, and quite scary:
    1. Obfuscated executables make debugging for MS ... uhm ... horrible, unless they can undo it themselves
    2. Obfuscated patches remove an essential liberty for the user: The liberty to have a look at a patch and make sure that the patch isn't in fact a malicious backdoor.
    3. We don't have good obfuscation methods that do not carry a horrible performance impact.
    4. Obfuscation methods have the property that they need to be modified whenever attackers break them automatically. The trouble is: Nobody would know if the attackers have broken them. It is thus safe to assume that after a while, the obfuscation would be broken, but nobody would be aware of it.
    5. Summary: Obfuscation would probably a) impact the user by making his code slower and b) impact the user by disallowing him from verifying that a patch is not malicious and c) create support nightmares for MS because they will have to debug obfuscated code. At the same time, it will not provide long-term security.
2. Patch encryption: Distributing encrypted patches, and then finally distributing the encryption key so all systems update at once. This proposal seems to assume that bandwidth is the limiting factor in patch installation, which, as far as I can tell, it is not. This proposal does less damage than obfuscation though -- instead of creating certain disaster with questionable benefit, this proposal just "does nothing" with questionable benefit.

3. Faster patch distribution. A laudable goal, nothing wrong with this.

Anyhow, long post, short summary: The APEG paper is really good, but it uses confusing terminology (exploit ~= vulnerability trigger) which leads to its impact on patch distribution being significantly overstated. It's good work, but the sky isn't falling, and we are far away from generating reliable exploits automatically from arbitrary patches. APEG does generate usable vulnerability triggers for vulnerabilities of a certain form. And STP-style solvers are important.
I have not been blogging or following the news much in recent months, as I am frantically trying to get all my university work sorted. While I have been unsuccessful at getting everything sorted on the schedule I had set myself, I am making progress, and expect to be more visibly active again in fall.

Today, I found out that my blog entry on the BlueHat blog drew more feedback than I had thought. I am consistently surprised that people read the things that I write.

Reading my blog post again, I find it so terse I feel I have to apologize for it and explain how it ended up this way. It was the last day of BlueHat, and I was very tired. Those that know me well know that my sense of humor is difficult at the best of times. I have a great talent for sounding bitter and sarcastic when in fact I am trying to be funny and friendly (this has led to many unfortunate situations in my life :-). So I sat down and tried to write a funny blog post. I was quite happy with it when it was done.

In an attack of unexpected sanity, I decided that someone else should read over the post, so I asked Nitin, a very smart (and outrageously polite) MS engineer. He read it, and told me (in his usual very polite manner) ... that the post sucked. I have to be eternally thankful to him, because truly, it did. Thanks Nitin !

So I deleted it, and decided to write down just the core points of the first post. I removed all ill-conceived attempts at humor, which made the post almost readable. It also limited the room for potential misunderstandings.

I would like to clarify a few things that seem to have been misunderstood though:

I did not say "hackers have to" move to greener pastures. I said "hackers will move to greener pastures for a while". This is a very important distinction. In order to clarify this, I will have to draw a bit of a larger arc:

Attackers are, at their heart, opportunists. Attacks go by the old basketball saying about jumpshot technique: "Whoever scores is right". There is no "wrong" way of compromising a system. Success counts, and very little else.

When attackers pick targets, they consider the following dimensions:
  • Strategic position of the target. I will not go into this (albeit important) point too deeply. Let's just assume that, since we're discussing Vista (a desktop OS), the attacker has made up his mind and wishes to compromise a client machine.
  • Impact by market share: The more people you can hack, the better. A widely-installed piece of software beats a non-widely installed piece of software in most cases. There are many ways of estimating this (personal estimates, Gartner reports, internet-wide scans etc.).
  • Wiggle Room: How many ways are there for the attacker to interact with the software ? How much functionality does the software have that operates on potentially attacker-supplied data ? If there are many ways to interact with the application, the odds of being able to turn a bug into a usable attack are greatly increased, and the odds of being able to reach vulnerable code locations are greatly increased. Perhaps the more widely used term is "attack surface", but that term fails to convey the importance of "wiggle room" for exploit reliability. Any interaction with the program is useful.
  • Estimated quality of code: Finding useful bugs is actually quite time consuming. With some experience, a few glances at the code will give an experienced attacker some sort of "gut feeling" about the overall quality of the code.
From these four points, it is clear why IE and MSRPC got hammered so badly in the past: They pretty much had optimal scores on Impact -- they were everywhere. They provided plenty of "Wiggle Room": IE with client-side scripting (yay!), MSRPC through the sheer number of different RPC calls available. The code quality was favourable to the attacker up until WinXP SP2, too.

MS has put more money into SDL than most other software vendors. This holds true both in absolute and in relative terms. MS is in a very strong position economically, so they can afford things other vendors (who, contrastingly, are exposed to market forces) cannot.

The code quality has improved markedly, decreasing the score on the 4th dimension. Likewise, there has been some reduction in attack surface, decreasing the score on the 3rd dimension. This is enough to convince attackers that their time is better spent on 'weaker' targets. The old chestnut about "you don't have to outrun the bear, you just have to outrun your co-hikers" holds true in security more than anywhere else.

In the end, it is much more attractive to attack Flash (maximum score on all dimensions) or any other browser plugins that are widely used.

I stand by my quote that "Vista is arguably the most secure closed-source OS available on the market".

This doesn't mean it's flawless. It just means it's more secure than previous versions of Windows, and more secure than OS X.

There was a second part to my blog post, where I mentioned that attackers are waiting for MS to become complacent again. I have read that many people inside Microsoft cannot imagine becoming complacent on security again. While I think this is true on the engineering level, it is imaginable that security might be scaled down by management.

The sluggish adoption of Vista by end-users is a clear sign that security does not necessarily sell. People buy features, and they cannot judge the relative security of the system. It is thus imaginable that people concerned with the bottom line decide to emphasize features over security again -- in the end, MS is a business, and the business benefits of investing in making code more secure have yet to materialize.

We'll see how this all plays out :-)

Anyhow, the next BlueHat is coming up. I won't attend this time, but I am certain that it will be an interesting event.

Wednesday, April 02, 2008

My valued coworker, SP, has just released his "pet project", Hexer. Hexer is a platform-independent, Java-based, extensible hex editor and can be downloaded from http://www.zynamics.com/files/Hexer-1_0_0.rar

It's also a good idea to visit his blog, where he'll write more about its features and capabilities.

Tuesday, April 01, 2008

Oh, before I forget: Ero & I will be presenting our work on structural malware classification at RSA next week. If anyone wishes to schedule a meeting/demo of any of our things (VxClass/BinDiff/BinNavi), please do not hesitate to contact info@zynamics.com.


Some small eye candy: The screenshot shows BinNavi with our intermediate representation (REIL) made visible. While REIL is still very beta-ish, it should be a standard (and accessible) part of BinNavi at some point later this year.

Having a good IR which properly models side effects is a really useful thing to have: The guys over at the BitBlaze project in Berkeley have shown some really useful things that can be done using a good IR and a good constraint solver :-). I am positively impressed by several papers they have put out.

I also can't wait to have more of this sort of stuff in BinNavi :-).
Conspiracy theory of the day:

Like everyone, I am following the US primaries, and occasionally discussing with my brother the implications of the developments for the wider world. My brother is usually good for quite some counter-intuitive insights into things, and described to me a "conspiracy theory" that I find amusing/interesting enough to post here.

Please be aware that the following is non-partisan: I do not really have an idea on whether I'd prefer Mrs Clinton, Mr Obama or Mr McCain in the White House, and this is not a post that is intended to weigh in on either side.

I was a bit puzzled about why Mrs Clinton is still in the primary race even though her mathematical odds of winning the Democratic nomination seem slim. The conspiracy theory explaining this is the following:

The true goal for Mrs Clinton is now 2012, not 2008. If Mr Obama wins the nomination _and_ the presidency, Mrs Clinton will very likely not become president in her lifetime. On the other hand: If she manages to damage Mr Obama badly enough that Mr McCain enters the White House, she has good cards to win the Democratic nomination in 2012, and Mr McCain is unlikely to stay for a second term (given his age).

It's an interesting hypothesis. Anyhow, I should really get to sleep.

Tuesday, March 11, 2008

A short real-life story on why cryptography breaks:

One of the machines that I am using is a vhost hosted at a German hosting provider called "1und1". Clearly, I am accessing this machine using ssh. So a few weeks ago, to my surprise, my ssh warned me about the host key having changed.

Honored by the thought that someone might take the effort to mount a man-in-the-middle attack for this particular box, my rational brain told me that I should call the tech support of the hosting provider first and ask if any event might've led to a change in keys.

After a rather lengthy interaction with the tech support (who first tried to brush me off by telling me to "just accept the new key"), I finally got them to tell me that they upgraded the OS and that the key had changed. After about 20 minutes of discussion, I finally got them to read the new key to me over the phone, and all was good.

Then, today, the warning cropped up again. I called tech support, a bit annoyed by these frequent changes. My experience was less than stellar - the advice I received was:
  1. "Just accept the new key"
  2. "The key is likely going to change all the time due to frequent relocations of the vhost so you should always accept it"
  3. "No, there is no way that they can notify me over the phone or in a signed email when the key changes"
  4. "It is highly unlikely that any change that would notify you would be implemented"
  5. "If I am concerned about security, I should really buy an SSL certificate from them" (wtf ??)
  6. "No, it is not possible to read me the key fingerprint over the phone"
The situation got better by the minute. After I told them that last time the helpful support had at least read me the fingerprint over the phone, the support person asked how I could be sure that my telephone call hadn't been man-in-the-middled...

I started becoming slightly agitated at this point. I will speak with them again tomorrow; perhaps I'll be lucky enough to get to 3rd-level support instead of 2nd level. Hrm. As if "customer service" were a computer game, with increasingly difficult levels.

So. Summary: 1und1 seems to think crypto is useless and we should all use telnet. Excellent :-/
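For what it's worth, the fingerprint that should have been read over the phone is easy to compute yourself. A small sketch (the key material below is fabricated for illustration, not a real 1und1 key) that reproduces the MD5-style fingerprint `ssh-keygen -l` printed at the time:

```python
import base64
import hashlib

def ssh_md5_fingerprint(pubkey_line):
    """Colon-separated MD5 fingerprint of an OpenSSH public key line."""
    # A key line looks like: "ssh-rsa AAAAB3Nza... comment"; the
    # fingerprint is the MD5 digest of the base64-decoded key blob.
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.md5(blob).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Fabricated key material, for illustration only:
fake_blob = b"\x00\x00\x00\x07ssh-rsa" + b"\x00" * 32
key_line = "ssh-rsa " + base64.b64encode(fake_blob).decode() + " user@host"

fp = ssh_md5_fingerprint(key_line)
assert len(fp.split(":")) == 16  # MD5 -> 16 colon-separated byte pairs
```

Comparing that string against a fingerprint read out over an independent channel (like the phone) is exactly the check the support line refused to help with.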

Friday, March 07, 2008


Hey all,

we have released BinNavi v1.5 last week. Normally, I'd write a lot of stuff here about the new features and all, but this will have to wait for a few days -- I am very tied up with some other work.

With the v1.5 release, we have added disassembly exporters that export from both OllyDbg and ImmunityDbg to our database format -- this means that Navi can now use disassemblies generated from those two debuggers, too. The screenshot above is BinNavi running on Ubuntu with a disassembly exported from the Windows VM into which we are debugging.

Anyhow, the real reason for this post is something completely different: We don't advertise this much on our website, but our tools are available in a sort of 'academic program':

If you are currently enrolled as a full-time-student at a university and have an interesting problem you'd like to use our tools for, you can get a license of our tools (Diff/Navi) for a very moderate amount of money. All you have to do is:
  • Contact us (info@zynamics.com) with your name/address/university etc.
  • Explain what project you'd like to work on with our tools
  • Sign an agreement that you will write a paper about your work (after it's done) that we can put on our website
Oh, and you of course have to do the work then and write the paper :-)
Anyhow, I have to get back to work. Expect more posts from me later this year -- things are very busy for me at the moment.

Cheers,
Halvar

Tuesday, February 12, 2008

Hey all,

We will be releasing BinNavi v1.5 next week -- and I can happily say that we will have
many cool improvements that I will blog about next week, once it is out.

Pictures often speak louder than words, so I'll post some of them here:

http://www.zynamics.com/files/navi15.1.png
http://www.zynamics.com/files/navi15.2.png
http://www.zynamics.com/files/navi15.3.png
http://www.zynamics.com/files/tree_lookup.jpg

A more detailed list of new features will be posted next week.

VxClass is making progress as well -- but more on this next week.

If there's anyone interested in our products (BinDiff, BinNavi, VxClass)
in the DC area, I should be free to meet & do a presentation on the products
next week.

Cheers,
Halvar

Tuesday, January 08, 2008

Happy new year everyone.

In June 2006, Dave Aitel wrote on Dailydave that "wormable bugs" are getting rarer. I think he is right, but this month's Patch Tuesday brings us a particularly cute bug.

I have created a small shockwave film and uploaded it to
http://www.zynamics.com/files/ms08001.swf

Enjoy ! :-)

In other news: We'll be posting screenshots of BinNavi v1.5 (due out in February) and the current VxClass version in the next two weeks - they are coming along nicely.

Cheers,
Halvar

Sunday, October 07, 2007

Our training class in Frankfurt is over, and I think I can safely say that it was a resounding success. I guess the coolest thing about SABRE is our customers. I hope to see you all someplace again.

PS: I forgot to distribute the python code from the last day; it will be mailed to all participants on Monday.

Monday, September 24, 2007

Blackhat Japan

After the immigration SNAFU in summer, I am scheduled to give my training class at Blackhat Japan this November - so if anyone wants to come, sign up now :-)

Cheers,
Halvar

Tuesday, September 04, 2007

BinDiff v2.0 finally released !

This is "blog-spam":

After a long wait, SABRE Security GmbH is proud to announce
the official release of BinDiff v2.0. The biggest improvements are:
  • Higher comparison speeds
  • Greater accuracy for functions which change only in the structure of the graph, not in the number of nodes/edges
  • Much greater accuracy on the instruction level comparison
  • The arguably prettiest UI of all binary comparison tools around
The many detail improvements are too numerous to mention here.
Check the screenshots:





Contact info@sabre-security.com for an evaluation version !

-- SABRE Security Team

Saturday, August 04, 2007

I am quite famous for botching every marketing effort that we try to undertake at SABRE -- a prime example of my ineptitude is the fact that we released BinNavi v1.2 in ... uh ... January, with a ton of new stuff, and I still hadn't updated the website to show some nice pictures.

Similarly for BinDiff -- v2.0 beta has been used by many customers without a hitch, and is a big improvement on the UI front. So I finally got around to adding some nice pictures today.

Also, for those that are into the entire idea of malware classification, you can see some screenshots of VxClass, our unpacker-and-classifier. Disclosure: before Spender writes a comment ;) about our unpacker's inability to handle TheMida and similar emulating packers, I will do so myself: We do not handle emulating packers at the moment! We do not reconstruct PEs! But if you have a cool unpacker, you can just upload the unpacked file to our classifier :)

So with this blog post it's confirmed: I am not only a failure at marketing, I am also a failure at attempting to pass off marketing as a regular blog post. Have a good weekend everyone !

Thursday, August 02, 2007

I have reached the intellectual level of the sports spectator in an armchair: Comment first, read and understand later. After the last blog comment, I actually went to read the slides of Joanna's presentation. To summarize: I find the slides informative and well-thought-out. The empirical bits appear plausible and well-researched. The stuff following slide 90 was very informative. It is one of the most substantial slide decks I have read in recent times.

Some points to take home though: Whoever writes a rootkit puts himself in a defending position. Defending against all known attacks is possible given perfection on the defender's side. That is bloody hard to achieve. There is no doubt that for any given attack one can think of a counter-attack, but it's a difficult game to play that doesn't allow for errors.

I think the core point that we should clarify is that rootkits should not fall into an adversary's hands to be analyzed. Once they are known, they fall into a defending position. Defending positions are not long-term sustainable, as software has a hard time automatically adapting to new threats.

Once you accept that the key to a good rootkit is to use methods unknown to the victim, one might also be tempted to draw the conclusion that perhaps the virtualisation stuff is too obvious a place to attempt to hide in. But that is certainly open to discussion.

Enough high-level blah blah. I am so looking forwards to my vacation, it's not funny.
So it appears the entire Rutkowska-Matasano thing is not over yet. I probably should not harp on about this in my current mood, but since I am missing out on the fun in Vegas, I'll be an armchair athlete and toss some unqualified comments from the sidelines. Just think of me as the grumpy old man with a big gut and a can of beer yelling at some football players on television that they should quit being lazy and run faster.

First point: The blue chicken defense outlined in the linked article is not a valid defense for a rootkit. The purpose of a rootkit is to hide data on the machine from someone looking for it. If a rootkit de-installs itself to hide from timing attacks, the data it used to hide either has to be removed or is no longer hidden. This defeats the purpose of the rootkit: To hide data and provide access to the compromised machine.

Second point: What would happen if a boxer who claims the ability to defeat anyone in the world were to reject any challengers unless they pay 250 million for him to fight ? Could he claim victory by telling the press that he "tried out all his opponents' punches, and they don't work, because you can duck them like this and parry them like that" ?
I think not.

I am not saying it's impossible to build a rootkit that goes undetected by Matasano's methods. But given access to the code of a rootkit and sufficient time, it will be possible to build a detection for it. Of course you can then change the rootkit again. And then the other side changes the detection. And this goes on for a few decades.

Could we please move on to more fruitful fields of discussion already ?

Tuesday, July 31, 2007

Some people in the comments of my blog have hinted that I should have just "followed the rules" and nothing would have happened. This is incorrect -- I did follow the rules. It is perfectly legal for an independent contractor to be contracted to perform a task in the US, come in, do it, and leave. That is (amongst other things) what the "business" checkbox on the I94W is for.

What landed me in this trouble is that the immigration agent decided that even though I am CEO of a company in Germany and have no employment contract with Blackhat (just a contract as an independent contractor), the status of "independent contractor" does not apply to me - his interpretation was that I was an "employee" of Blackhat without an H1B visa.

This is not a case of me screwing up my paperwork. This is a case of an immigration agent that did not understand my attempts at explaining that I am not a Blackhat employee, and me not knowing the subtleties of being interviewed by DHS/INS agents.

I hope I will be able to clarify the misunderstanding on Thursday morning at the consulate.
=============================
Small addition to clarify: It is perfectly legitimate to come to the US to hold lectures and trainings of the kind that I am holding at Blackhat. To reiterate: The problem originated solely from a misunderstanding where it was presumed I was an "employee" of a US company, which is not correct.

Sunday, July 29, 2007

Short update: I have managed to schedule a hearing for a regular visa. The first available date was the 24th of August *cough*.

This is clearly too late for Blackhat, but once you have a "regular" meeting scheduled you can ask to have an "urgent" meeting scheduled, too. Whether I am eligible will become clear when the embassy opens at 7am on Monday morning.

The current plan is to call them and explain to them why the entire thing might've gone haywire in the first place:

There's a special provision in the German tax code that allows people with certain qualifications to act as special 'freelancers' ("Freiberufler"), essentially giving them a status very similar to that of a one-person company. It is not totally trivial to obtain this status - for example, you cannot simply be a 'Freiberufler' programmer if you write "regular" software.

My agreement with Blackhat and all transactions were taxed in Germany under this status.

Personally, I think the fundamental issue in this tragic comedy is that the US doesn't really have such a special status for freelancers, and that the US customs inspector therefore did not understand that there is a distinction between a "regular Joe" and a one-person company/"Freiberufler". Hence the customs officer assumed that this entire thing must be some devious way to bypass getting an H1B visa for someone who would not normally qualify for one. The frequent repetition of the question "why is your course not given by an American citizen?" points to something like that.

I hope that I can clear up this misunderstanding tomorrow morning, but right now, I am not terribly optimistic.
I've been denied entry to the US essentially for carrying my trainings material. Wow.

It appears I can't attend Blackhat this year. I was denied entry to the US for carrying trainings materials for the Blackhat trainings, and intending to hold these trainings as a private citizen instead of as a company.

After a 9-hour flight and a 4 1/2 hour interview I was put onto the next 9-hour flight back to Germany. Future trips to the US will be significantly more complicated as I can no longer go to the US on the visa waiver program.

A little background: For the last 7 years, I have attended and presented at the 'Blackhat Briefings', a security conference in the US. Prior to the conference itself, Blackhat conducts training sessions, and for the past 6 years I have given two days of trainings at these events. Most of the attendees of the trainings are US-government-related folks, mostly working on US national security in some form. I have trained people from the DoD, DoE, DHS and most other agencies that come to mind.

Each time I came to the US, I told immigration that I was coming to the US to present at a conference and hold a trainings class. I was never stopped before.

This time, I had printed the materials for the trainings class in Germany and put them into my suitcase. Upon arrival in the US, I passed immigration, but was stopped in customs. My suitcase was searched, and I was asked about the trainings materials.
After answering that these were for the trainings I was conducting, an immigration officer was called, and I was put in an interview room.
For the next 4 1/2 hours I was interviewed about who exactly I am, why I was coming to the US, what the nature of my contract with Blackhat is, and why my trainings class is not performed by an American citizen. After 4 hours, it became clear that a decision had been reached to deny me entry to the US, on the grounds that since I am a private person conducting the trainings for Blackhat, I was essentially a Blackhat employee and would require an H1B visa to perform two days of trainings in the US.

Now, I am a full-time employee (and CEO) of a German company (startup with 5 people, self-financed), and the only reason why the agreement is between Blackhat and me instead of Blackhat and my company is that I founded the company long after I had started training for Blackhat and we never got around to changing it.

Had there been an agreement between my company and Blackhat, then my entry to the US would've been "German-company-sends-guy-to-US-to-perform-services", and everything would've been fine. The real problem is that the agreement was still between me as a person and Blackhat.

After the situation became clear (around the 4th hour of being interviewed), I offered that the agreement between Blackhat and my company could be set up more or less instantaneously - as a CEO, I can sign an agreement on behalf of my company, and Blackhat would've signed immediately, too.
This would've spared both parties a lot of hassle and paperwork. But apparently, since I had just tried to enter as a 'normal citizen' instead of as an 'employee of a company', I could not change my application now. They would have to put me on the next flight back to Germany.

Ok, I thought, perhaps I will have to fly back to Germany, set up the agreement, and immediately fly back to the States - that would've still allowed me to hold the trainings and attend the conference, at the cost of crossing the Atlantic three times instead of once. But no such luck: since I have been denied entry under the visa waiver program, I can never use this program again. Instead I need to wait until the American consulate opens, and then apply for a business visa. I have not been able to determine how long this might take -- estimates from customs officials ranged from "4 days" to "more than 6 weeks".

All this seems pretty crazy to me. From the notion that 2 days of trainings constitute work that requires an H1B visa, via the fact that everything could've been avoided if I had been allowed to set up the agreement with Blackhat on the spot, to the fact that setting up the agreement once I am back in Germany and flying in again is not sufficient - all of it reeks of a bureaucracy creating work for itself, at the expense of (US-)taxpayer money.

I will now begin the Quixotic quest to get a business visa to the US. Sigh. This sucks.

Thursday, July 12, 2007

The Core guys have published a paper on a very cute heap visualisation tool.

What shall I say? I like it, and we'll play a lot more chess with memory in the future.