Sunday, March 02, 2025

The German debt brake is stupid!

Welcome to one of my political posts. This blog post should rightfully be titled "the German debt brake is stupid, and if you support it, so are you (at least in the domain of economics)". Given that a nontrivial number of Germans agree with the debt brake, and given that there is a limit on the sensible number of characters in the title, I chose a shorter title - for brevity and to reduce offense. I nonetheless think that support for the debt brake, and supporters of the debt brake, are stupid.

In the following, I will list the reasons why I think the debt brake is stupid, and talk about a few arguments I have heard in favor of the debt brake, and why I don't buy any of them.

Reason 1: The debt brake is uniquely German, and I think the odds that Germany has somehow uncovered a deeper economic truth than anyone else are not high.

If you engage with economists a bit, you'll hear non-German economists make statements such as "there is economics, and there is German economics, and they have little in common" or "the problem with German economics is that it's really a branch of moral philosophy and not an empirical science". Pretty much the entire world stares in bewilderment at the debt brake law, and I have yet to find a non-German economist of any repute that says the German debt brake is a sensible construct.

The Wikipedia page is pretty blatant in showing that pretty much the only group supporting the debt brake is ... 48% of a sample of 187 German economics professors, in a poll conducted by an economic research think tank historically associated with the debt brake.

Now, I am not generally someone who blindly advocates going with the mainstream majority opinion, but if the path you have chosen is described by pretty much the entire world as bizarre, unempirical, and based on moral rather than scientific judgement, one should possibly interrogate one's beliefs carefully.

If the German debt brake is a sensible construct, then pretty much every other country in the world is wrong by not having it, and the German government has enacted something unique that should convey a tangible advantage. It should also lead to other countries looking at these advantages and thinking about enacting their own, similar, legislation.

The closest equivalent to the German debt brake is the Swiss debt brake - but Switzerland has many direct-democratic institutions that allow a democratic majority to change the constitution; in particular, a simple double majority - a majority of voters in a majority of cantons - is sufficient to remove the debt brake again. Switzerland can still act in times of crisis, provided most voters in most cantons want to.

Germany, with the 2/3rds parliamentary majority required for a constitutional change, cannot. As such, the German debt brake is the most stringent and least flexible such rule in the world.

I don't see any evidence that the debt brake is providing any benefits to either Germans or the world. I see no other country itching to implement a similarly harsh law. Do we really believe that Germany has uncovered a deeper economic truth nobody else can see?

Reason 2: The debt brake is anti-market, and prevents a mutually beneficial market activity

While I am politically center-left, I am fiercely pro-market. I think markets are splendid allocation instruments, decentralized decision-making systems, information processors, and by-and-large the primary reason why the West out-competed the USSR when it came to producing goods. Markets allow the many actors in the economy to find ways to obtain mutual advantage by trading with each other, and interfering with markets should be done carefully, usually to correct some form of severe market failure (natural monopolies, tragedy of the commons, market for lemons, etc. -- these are well-documented).

The market for government debt is a market like any other. Investors who believe that the government provides the best risk-adjusted return compared to all other investment opportunities wish to lend the government money. The government pays interest to these investors at a rate based on the risk-free rate plus a risk premium.

Capital markets exist in order to facilitate decentralized resource allocation. If investors think that the best risk-adjusted returns are to be had by loaning the government money to invest in infrastructure or spend on other things, they should be allowed to offer lower and lower risk premia.

The debt brake interferes in this market by artificially constraining the government demand for debt. Even if investors were willing to pay the German government money to please please invest it in the broader economy, the German government wouldn't be allowed to do it.

In some sense, this is a deep intervention in the natural signaling of debt markets, and the flow of goods. It is unclear what market failure is being addressed here. 

Reason 3: The debt brake prevents investments with positive expected value

Assume an opportunity arises where the government can invest sensibly in basic research or other infrastructure with strongly positive expected value for GDP growth and hence government income. Why should an arbitrary debt brake prohibit investments that are going to be a net good for the whole of society?

Reason 4: The debt brake is partially responsible for the poor handling of the migration spike in 2015

Former Chancellor Merkel is often criticised for her "Wir schaffen das" ("We can do it") during the 2015 migration crisis. My main criticism, even back then, was that a sudden influx of young refugees has the potential to provide a demographic dividend, *provided* one manages to integrate the refugees into society, the work force, and the greater economy rapidly. This requires investment, though: German language lessons, housing in economically non-deprived areas, German culture lessons, and much more. Sticking to the debt brake in an exceptional situation such as the 2015 migrant crisis was therefore a terrible idea, because a sudden influx of refugees can have a destabilizing and economically harmful effect if the integration is botched. Successfully integrated people pay taxes and strengthen society; failed integration leads to unemployment, potentially crime, and social disorder.

My view is that Merkel dropped the entire weight of the integration work on German civil society (which performed as best it could, and admirably) because she was entirely committed to a stupid and arbitrary rule. I also ascribe some of the strength of Germany's far right to the disappointment that came from this mishandling of a crisis-that-was-also-an-opportunity.

Reason 5: The debt brake is based on numbers that economists agree are near-impossible to estimate correctly

It is extremely challenging to estimate the "structural deficit" of a given government, and most economists agree that there's no proper objective measurement of it, particularly when not done in retrospect. A law that prohibits governments from acting based on an unknowable quantity appears to be a bad law to me.

Reason 6: The debt brake is fundamentally based on a fear that politicians act too much in their own interest - but does not provide a democratic remedy

The underlying assumption of the debt brake is that politicians will act with their own best interest in mind, running long-term structural deficits that eventually bankrupt a country. In some sense, the notion is that "elected representatives cannot be trusted to handle the purse string, because they will use it to bribe the electorate to re-elect them".

We can discuss the extent to which this is true, but in the end a democracy should defer to the sovereign, which is the voters. If we are afraid of a political caste abusing their position as representatives to pilfer the public's coffers, we should give the public more direct voting rights in budgetary matters, not artificially constrain what may be legitimate and good investments.

There is a deep anti-democratic undercurrent in the debt brake discussion: Either that the politicians cannot be trusted to behave in a fiscally responsible manner, or that the voters cannot be trusted to behave in a fiscally responsible manner, or that the view of politicians, voters and markets about what constitutes fiscal responsibility are somehow incorrect.

Reason 7: A German-style debt brake would be terrible policy for any business - why is it a good idea for a country?

Imagine for a second that a company passed bylaws preventing the issuance of any additional debt, bypassable only by a shareholder meeting where 2/3rds of all shareholders agree that the debt can be issued. This would essentially give minority shareholders a fantastic way of taking the company hostage and demanding concessions, because taking on debt is a standard part of doing business. If we don't think that a majority of elected politicians can be trusted not to abuse the purse strings to extract benefits for themselves, why do we think it's a good idea to give a smaller group of elected politicians the right to block the government's ability to react in a crisis?

Reason 8: A lot of debt-brake advocacy is based in the theory of "starving the beast"

Debt-brake advocates are often simultaneous advocates of lower taxes. The theory is that by lowering taxes (and hence revenues) while creating a hard fiscal wall (the debt brake) one can force the government to cut popular programs to shrink the government - in other situations, cutting popular programs would be difficult as voters would not support it.

This idea was called "starving the beast" among US conservatives in the past. There's plenty of criticism of the approach, and all empirical evidence points to it being a terrible idea. It's undemocratic, too, as one is trying to create a situation of crisis to achieve a goal that would - absent a crisis, and democratically - not be achievable.

Reason 9: Germany has let its infrastructure decay to the point where the association of German industry is begging for infrastructure investments

The BDI is hardly a left-leaning, tax-and-spend-happy group. It is historically very conservative, anti-union, etc. - yet in recent years the decay of German infrastructure, from roads to bridges to the train system, has sufficiently unsettled it that we now have an alliance of German unions and the German employers' association calling for much-needed infrastructure investments and modernisation.

The empirical evidence seems to be: when presented with a debt brake, politicians do not make necessary investments, and instead prefer to hollow out existing infrastructure.

Reason 10: Europe needs rearmament now, which requires long-term commitments to defense spending, but also investment in R&D etc.

The post-1945 rules-based order has been dying, first slowly during the GWOT, then convulsing with the first Trump term; it looked like it might survive when Biden got elected, but with the second Trump term it is clear that it is dead. Europeans have for 20 years ignored that this was coming, in spite of everybody who made regular trips to Washington DC having seen it. The debt brake now risks paralyzing the biggest Eurozone economy by handing control over increased defense spending to radical fringe parties that are financed and supported by hostile adversaries.

Imagine a German parliament where the AfD and BSW jointly hold 1/3rd of the seats, and a war breaks out. Do we really want an adversary to be able to decide how much debt we can issue for national defense?

But the debt brake reassures investors and hence drives down Germany's interest rate payments!

Now, this is probably the only argument I have heard in favor of the debt brake that may merit some deeper discussion or investigation. There is an argument to be made that if investors perceive the risk of a default or the risk of inflation to be lower, they will demand a lower coupon on the debt they provide. And I'm willing to entertain that thought. Something either I or someone who reads this should do is:

1. Calculate the risk premium that Germany had to pay over the risk-free rate in the past.
2. Observe to what extent the introduction of the debt brake, or the introduction of the COVID spending bills etc. impacted the spread between the risk-free rate and the yield on German government debt.

There are some complications with this (some people argue that the yield on Bunds *is* the risk-free rate, or at least the closest approximation thereof), and one would still have to quantify what GDP shortfall was caused by excessive austerity, so the outcome of this would be a pretty broad spectrum of estimates. But I will concede that this is worth thinking about and investigating.
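The two steps above can be sketched as a toy calculation. This is a minimal, purely illustrative Python snippet with made-up numbers; real yield series would have to come from sources such as Bundesbank or ECB statistics, and the choice of risk-free proxy is exactly the complication discussed above.

```python
# Hedged sketch: measure how an event (e.g. the introduction of the
# debt brake) shifted the average spread between Bund yields and some
# risk-free proxy. All numbers below are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

def spread_shift(bund_yields, proxy_yields, event_index):
    """Average (bund - proxy) spread after event_index minus before it."""
    spreads = [b - p for b, p in zip(bund_yields, proxy_yields)]
    return mean(spreads[event_index:]) - mean(spreads[:event_index])

# Synthetic monthly yields in percent; the event happens at index 3.
bund  = [3.2, 3.1, 3.0, 2.8, 2.7, 2.6]
proxy = [3.0, 3.0, 2.9, 2.8, 2.8, 2.8]
print(round(spread_shift(bund, proxy, event_index=3), 3))  # → -0.233
```

A real version of this would also need to control for ECB policy changes and general eurozone spreads before attributing anything to the debt brake.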

At the same time, we are in a very special situation: The world order we all grew up in is largely over. The 1990s belief that we will all just trade, that big countries don't get to invade & pillage small countries, and that Europe can just disarm because the world is peaceful now is dead, and only a fool would cling to it.

I know that people would like to see a more efficient administration, and a leaner budget. These are good goals, and should be pursued - but not by hemming in your own government to be unable to react to crises, be captured by an aggressive minority, and reduce democratic choice.

Apologies for this rant, but given the fact that Europe has squandered the last 20 years, and that I perceive the German approach to debt and austerity to be a huge factor in this, it is hard for me to not show some of my frustration.

Thursday, December 05, 2024

What I want for Christmas for the EU startup ecosystem

Hey all,

I have written about the various drags on the European tech industry in the past, and have recently been involved in discussions on both X and BlueSky about what Europe needs.

In this post, I will not make a wishlist of concrete policy reforms, but rather start "product centric" -- i.e., what "user experience" would I want as a founder? Once it is clear what experience you want as a founder, it becomes easier to reverse-engineer what policy changes will be needed.

What would Europe need to make starting a company smoother, easier, and better?

Let's jointly imagine a bit what the world could look like.

Imagine a website where the following tasks can be performed:

  1. Incorporation of a limited liability company with shares. The website offers a number of standardized company bylaws that cover the basics, and allows the incorporation of a limited liability company on-line (after identity verification etc.).
  2. Management of simple early-stage funding rounds on-line: Standardized SAFE-like instruments, or even a standardized Series A agreement, and the ability to sign these instruments on-line, and verify receipt of funds.
  3. Management of the cap table (at least up to and including the Series A).
  4. Ability to employ anyone in the Eurozone, and run their payroll, social security contributions, and employer-side healthcare payments. Possibly integrated with online payment.
  5. Ability to grant employee shares and manage the share grants integrated with the above, with the share grants taxed in a reasonable way (e.g. only tax them on liquidity event, accept the shares themselves as tax while they are illiquid, or something similar to the US where you can have a lightweight 409a valuation to assign a value to the shares).
  6. Integration with a basic accounting workflow that can be managed either personally or by an external accountant, with the ability to file simplified basic taxes provided overall revenue is below a certain threshold.
  7. Ways of dealing with all the other paperwork involved in running a company on-line.
This is a strange mixture of Carta, Rippling, Docusign, Cloud Atlas, a Notary, and Intuit -- but it would make the process of starting and running a company much less daunting and costly.

Ideally, I could sign up to the site, verify my identity, incorporate a basic company with standardized bylaws, raise seed funding, employ people, run their payroll, and file basic taxes and paperwork.

In the above dream, what am I missing?

My suspicion is that building and running such a website would actually not be difficult (if the political will in Europe existed), and would have a measurable impact on company formation and GDP. If we want economic growth like the US's, Europe needs to become a place where building and growing a business is easier and has less friction than in the US.

So assuming the gaps that I am missing are filled in, the next step is asking: What policy reforms are necessary to reach this ideal?

Wednesday, July 10, 2024

Someone is wrong on the internet (AGI Doom edition)

The last few years have seen a wave of hysteria about LLMs becoming conscious and then suddenly attempting to kill humanity. This hysteria, often expressed in scientific-sounding pseudo-Bayesian language typical of the "LessWrong" forums, has seeped into the media and from there into politics, where it has influenced legislation.

This hysteria arises from the claim that there is an existential risk to humanity posed by the sudden emergence of an AGI that then proceeds to wipe out humanity through a rapid series of steps that cannot be prevented.

Much of it is entirely wrong, and I will try to collect my views on the topic in this article - focusing on the "fast takeoff scenario".

I had encountered strange forms of seemingly irrational views about AI progress before, and I made some critical tweets about the messianic tech-pseudo-religion I dubbed "Kurzweilianism" in 2014, 2016 and 2017 - my objection at the time was that believing in an exponential speed-up of all forms of technological progress looked too much like a traditional messianic religion, e.g. "the end days are coming, if we are good and sacrifice the right things, God will bring us to paradise, if not He will destroy us", dressed in techno-garb. I could never quite understand why people chose to believe Kurzweil, who, in my view, has largely had an abysmal track record predicting the future.

Apparently, the Kurzweilian ideas have mutated over time, and seem to have taken root in a group of folks associated with a forum called "LessWrong", a more high-brow version of 4chan where mostly young men try to impress each other by their command of mathematical vocabulary (not of actual math). One of the founders of this forum, Eliezer Yudkowsky, has become one of the most outspoken proponents of the hypothesis that "the end is nigh".

I have heard a lot of secondary reporting about the claims being advocated, and none of it ever made any sense to me - but I am also a proponent of reading original sources to form an opinion. This blog post is the blog-post version of a (nonexistent) YouTube reaction video of me reading the original sources and commenting on them.

I will begin with the interview published at https://intelligence.org/2023/03/14/yudkowsky-on-agi-risk-on-the-bankless-podcast/

The proposed sequence of events that would lead to humanity being killed by an AGI is approximately the following:

  1. Assume that humanity manages to build an AGI, which is a computational system that for any decision "outperforms" the best decision of humans. The examples used are all zero-sum games with fixed rule sets (chess etc.).
  2. After managing this, humanity sets this AGI to work on improving itself, e.g. writing a better AGI.
  3. This is somehow successful and the AGI obtains an "immense technological advantage".
  4. The AGI also decides that it is in conflict with humanity.
  5. The AGI then coaxes a bunch of humans into carrying out physical actions that enable it to build something that kills all of humanity - in the case of this interview, via a "diamondoid bacteria that replicates using carbon, hydrogen, oxygen, nitrogen, and sunlight".
This is a fun work of fiction, but it is not even science fiction. In the following, a few thoughts:

Incorrectness and incompleteness of human writing


Human writing is full of lies that are difficult to disprove theoretically

As a mathematician with an applied bent, I once got drunk with another mathematician, armed with a stack of coins, a pair of pliers, and some tape. The goal of the session was: "how can we deform an existing coin so as to create a coin with a bias significant enough to measure?" Biased coins are a staple of probability theory exercises, and exist in writing in large quantities (much more so than loaded dice).

It turns out that it is very complicated and very difficult to modify an existing coin to exhibit even a reliable 0.52:0.48 bias. Modifying the shape needs to be done so aggressively that the resulting object no longer resembles a coin, and gluing two discs of uneven weight together so that they achieve nontrivial bias creates an object that has a very hard time balancing on its edge.
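Part of why even "measuring" a bias is hard is statistical: a 0.52:0.48 bias barely rises above coin-flip noise. A quick back-of-the-envelope calculation (my own illustration, not from the original post) shows how many flips you need before a 2-percentage-point bias exceeds two standard errors of a fair coin's sample mean:

```python
import math

# Rough sample-size estimate: we want the bias delta to exceed
# z standard errors of the sample mean, i.e. delta > z * sqrt(p*(1-p)/n),
# using p = 0.5 where the variance of a coin flip is maximal.

def flips_needed(delta, z=2.0):
    p = 0.5
    return math.ceil((z ** 2) * p * (1 - p) / delta ** 2)

print(flips_needed(0.02))  # → 2500
```

So you would need on the order of thousands of flips just to be reasonably sure your modified coin is biased at all - which is part of why drunk experimental numismatics is harder than the textbook exercises suggest.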

An AI model trained on human text will never understand the difficulties of making a biased coin. It needs to be equipped with actual sensing, and it will need to perform real experiments. To an AI trained purely on text, a thought experiment and a real experiment are indistinguishable.

As a result, any world model that is learnt through the analysis of text is going to be a very poor approximation of reality. 

Practical world-knowledge is rarely put in writing

Pretty much all economies and organisations that are any good at producing something tangible have an (explicit or implicit) system of apprenticeship. The majority of important practical tasks cannot be learnt from a written description. There has never been a chef that became a good chef by reading sufficiently many cookbooks, or a woodworker that became a good woodworker by reading a lot about woodworking.

Any skill that affects the real world has a significant amount of real-world trial-and-error involved. And almost all skills that affect the real world involve large quantities of knowledge that has never been written down, but which is nonetheless essential to performing the task.

The inaccuracy and incompleteness of written language to describe the world leads to the next point:

No progress without experiments

No superintelligence can reason itself to progress without doing basic science

One of the most bizarre assumptions in the fast takeoff scenarios is that somehow once a super-intelligence has been achieved, it will be able to create all sorts of novel inventions with fantastic capabilities, simply by reasoning about them abstractly, and without performing any basic science (e.g. real-world experiments that validate hypotheses or check consistency of a theory or simulation with reality).

Perhaps this is unsurprising, as few people involved in the LessWrong forums and X-Risk discussions seem to have any experience in manufacturing or actual materials science or even basic woodworking.

The reality, though, is that while we have made great strides in areas such as computational fluid dynamics (CFD) and crash test simulation in recent decades, obviating the need for many physical experiments in certain areas, reality does not support the thesis that technological innovations are feasible "on paper" without extensive and painstaking experimental science.

Concrete examples:
  1. To this day, CFD simulations of the air resistance that a train is exposed to when hit by wind at an angle need to be experimentally validated - simulations have the tendency to get important details wrong.
  2. It is safe to assume that the state-supported hackers of the PRC's intelligence services have stolen every last document ever put into a computer at all the major chipmakers. Having all this knowledge, and the ability to direct a lot of manpower at analyzing these documents, has not yielded the knowledge necessary to make cutting-edge chips. What is missing is process knowledge, i.e. the details of how to actually make the chips.
  3. Producing ballpoint pen tips is hard. There are few nations that can reliably produce cheap, high-quality ballpoint pen tips. China famously celebrated in 2017 that they reached that level of manufacturing excellence.
Producing anything real requires a painstaking process of theory/hypothesis formation, experiment design, experiment execution, and slow iterative improvement. Many physical and chemical processes cannot be accelerated artificially. There is a reason why it takes 5-8 weeks or longer to make a wafer of chips.

The success of systems such as AlphaGo depends on the fact that all the rules of the game of Go are fixed in time and known, and that evaluating the quality of a position is cheap, so many different future games can be simulated cheaply and efficiently.

None of this is true for reality: 
  1. Simulating reality accurately and cheaply is not a thing. We cannot simulate even simple parts of reality to a high degree of accuracy (think of a water faucet with turbulent flow splashing into a sink). 
  2. The rules for reality are not known in advance. Humanity has created some good approximations of many rules, but both humanity and a superintelligence still need to create new approximations of the rules by careful experimentation and step-wise refinement.
  3. The rules for adversarial and competitive games (such as a conflict with humanity) are not stable in time.
  4. Evaluating any experiment in reality has significant cost, particularly to an AI.
A thought experiment I often use for this is: 

Let us assume that scaling is all you need for greater intelligence. If that is the case, Orcas or Sperm Whales are already much more intelligent than the most intelligent human, so perhaps an Orca or a Sperm Whale is already a superintelligence. Now imagine an Orca or Sperm Whale equipped with all written knowledge of humanity and a keyboard with which to email people. How quickly could this Orca or Sperm Whale devise and execute a plot to kill all of humanity?

People that focus on fast takeoff scenarios seem to think that humanity has achieved the place it has by virtue of intelligence alone. Personally, I think there are at least three things that came together: Bipedalism with opposable thumbs, an environment where you can have fire, and intelligence.

If we lacked any of the three, we would not have built any of our tech. Orcas and Sperm Whales lack thumbs and fire, and you can’t think yourself to world domination.


Superintelligence will also be bound by fundamental information-theoretic limits

The assumption that superintelligences can somehow simulate reality to arbitrary degrees of precision runs counter to what we know about thermodynamics, computational irreducibility, and information theory.

A lot of the narratives seem to assume that a superintelligence will somehow free itself from constraints like "cost of compute", "cost of storing information", "cost of acquiring information" etc. - but if I assume an omniscient being with infinite calculation powers and deterministically computable physics, I can build a hardcore version of Maxwell's Demon that incinerates half of the earth by playing extremely clever billiards with all the atoms in the atmosphere. No diamondoid bacteria (whatever that was supposed to mean) necessary.

The reason we cannot build Maxwell's Demon, or a perpetual motion machine, is that there is a relationship between information theory and thermodynamics, and nobody, including no superintelligence, will be able to break it.

Irrespective of whether you are a believer or an atheist, you cannot accidentally create capital-G God, even if you can build a program that beats all primates on earth at chess. Cue reference to the Landauer principle here.

Conflicts (such as an attempt to kill humanity) have no zero-risk moves

Traditional wargaming makes extensive use of random numbers - units have a kill probability (usually determined empirically), and using random numbers to model random events is part and parcel of real-world wargaming. This means that a move "not working", or something going horrendously wrong, is the norm in any conflict. There are usually no gainful zero-risk moves; every move you make opens an opportunity for the opponent.

I find it somewhat baffling that in all the X-risk scenarios, the superintelligence somehow finds a sequence of zero-risk or near-zero risk moves that somehow yield the desired outcome, without humanity finding even a shred of evidence before it happens.

A more realistic scenario (if we grant the far-fetched and unrealistic idea of an actual synthetic superintelligence that decides to cause humans harm) involves the AI making moves that incur risk to itself, based on highly uncertain data. A conflict would therefore not be brief, and would have multiple interaction points between humanity and the superintelligence.


Next-token prediction cannot handle Kuhnian paradigm shifts

Some folks have argued that next-token prediction will lead to superintelligence. I do not buy it, largely because it is unclear to me how predicting the next token would deal with Kuhnian paradigm shifts. Science proceeds in fits and bursts; usually you stay within a creaky paradigm until there is a "scientific revolution" of sorts. A scientific revolution necessarily changes the way that language is produced - i.e. a corpus of all human writing prior to a scientific revolution is not a good representation of the language used after it - but the LLM will be trained to mimic the distribution of the training corpus. People point to in-context learning and argue that LLMs can incorporate new knowledge, but I am not convinced of that yet: the fact that all current models fail at generating a sequence of words that - when cut into 2-tuples - occur rarely or never in the training corpus shows that ICL is extremely limited in how it can adjust the distribution of LLM outputs.
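To make the 2-tuple test concrete, here is a tiny toy illustration of my own (not from the original post, and far too small to prove anything): check how many adjacent word pairs of a candidate sentence already occur somewhere in a "training corpus". The claim above is that model outputs rarely contain pairs outside this set.

```python
# Toy bigram-coverage check: how many adjacent word pairs of a candidate
# sentence are absent from the (tiny, invented) training corpus?

def bigrams(tokens):
    return set(zip(tokens, tokens[1:]))

corpus = "the cat sat on the mat and the dog sat on the rug".split()
seen = bigrams(corpus)

candidate = "the dog sat on the mat".split()
novel = [bg for bg in bigrams(candidate) if bg not in seen]
print(len(novel))  # → 0: every pair in the candidate is already in the corpus
```

A real test would run this against an actual model's outputs and a deduplicated slice of its training data, which is of course much harder than this sketch.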


Enough for today. Touch some grass, build some stuff

In theory, theory equals practice. In practice it doesn't. Stepping out of the theoretical realm of software (where generations of EE and chip engineers sacrificed their lives to give software engineers an environment where theory is close to practice most of the time) into real-world things that involve dust, sun, radiation, and equipment chatter is a sobering experience that we should all do more often. It's easy to devolve into scholasticism if you're not building anything.



Thursday, July 04, 2024

Some experiments to help me understand Neural Nets better, post 1 of N

While I have been a sceptic of using ML and AI in adversarial (security) scenarios forever, I also quite like the fact that AI/ML has become important, if only to make me feel like my Math MSc (and abortive Math PhD) were not a waste of time.

I am a big proponent of "bottom-up" mathematics: Playing with a large number of examples to inform conjectures to be dealt with later. I tend to run through many experiments to build intuition; partly because I have crippling weaknesses when operating purely formally, partly because most of my mathematics is somewhat "geometric intuition" based -- e.g. I rely a lot on my geometric intuition for understanding problems and statements.

For a couple years I've wanted to build myself a better intuition about what deep neural networks actually "do". There are folks in the community that say "we cannot understand them", and folks that say "we believe in mechanistic interpretability, and we have found the neuron to recognize dogs"; I never found either statement to be particularly convincing.

As a result, earlier this year, I finally found time to take a pen, pencil, and wastebasket and began thinking a bit about what happens when you send data through a neural network consisting of ReLU units. Why only ReLUs? Well, my conjecture is that ReLUs are as good as anything, and they are both reasonably easy to understand and actually used in practical ML applications. They are also among the "simplest examples" to work with, and I am a big fan of trying the simple examples first.

This blog post shares some of my experiments and insights; I called it the "paper plane or origami perspective to deep learning". I subsequently found out that there are a few people that have written about these concepts under the name "the polytope lens", although this seems to be a fringe notion in the wider interpretability community (which I find strange, because - unsurprisingly - I am pretty convinced this is the right way to think about NNs).

Let's get started. In order to build intuition, we're going to work with a NN that is supposed to learn a function from R^2 to R - essentially learning a grayscale image. This has several advantages:

1. We can intuitively understand what the NN is learning.
2. We can simulate training error and generalisation errors by taking very high-resolution images and training on low-resolution samples.
3. We stay within the realm of low-dimensional geometry for now, which is something most of us have an intuitive understanding of. High dimensions will create all sorts of complications soon enough.

Let's begin by understanding a 2-dimensional ReLU neuron - essentially the function f(x, y) = max( ax + by + c, 0) for various values of a, b, and c.

This will look a bit like a sheet of paper with a crease in it:

How does this function change if we vary the parameters a, b, or c? Let's begin by varying a:

Now let's have a look at varying b:
And finally let's have a look at varying c:

So the parameters a, b, and c really just decide where and in which orientation the plane is folded/creased, and how steep the non-flat part is. The crease divides the plane into two half-planes; the resulting function is 0 on one half-plane and affine (linear if c = 0) on the other.
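To make the geometry concrete, here is a minimal sketch in plain Python (the parameter values are mine, chosen arbitrarily) of such a creased-plane function:

```python
def relu_neuron(x, y, a=1.0, b=-2.0, c=0.5):
    """A 2D ReLU neuron: the plane a*x + b*y + c, folded flat at zero."""
    return max(a * x + b * y + c, 0.0)

# On one side of the crease (a*x + b*y + c <= 0) the function is 0:
print(relu_neuron(-1.0, 1.0))   # pre-activation -1 - 2 + 0.5 = -2.5 -> 0.0
# On the other side it is the affine function itself:
print(relu_neuron(1.0, -1.0))   # pre-activation 1 + 2 + 0.5 = 3.5 -> 3.5
# The crease itself is the line a*x + b*y + c = 0.
```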

As a next step, let's imagine a single-layer ReLU network that takes the (x, y) coordinates of the plane, feeds them into 10 different ReLU neurons, and then combines the results by summing them with individual weights.

The resulting network has to learn the parameters a, b, and c for each neuron, plus the weight with which each neuron's output enters the sum. Each "neuron" represents a separate, once-creased copy of the plane; these copies are then combined (linearly, additively, each with its own weight) into the output function. The training process moves the "creases" in the paper around until the result approximates the desired output well.
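As a sketch (numpy; the variable names and the random initialization are mine), the forward pass of such a one-layer model is just a handful of lines:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 10                               # number of first-layer ReLU neurons
A = rng.normal(size=(k, 2))          # per-neuron (a, b)
c = rng.normal(size=k)               # per-neuron offset
w = rng.normal(size=k)               # per-neuron output weight

def net(points):
    """points: (N, 2) array of (x, y) pairs; returns the (N,) output values."""
    pre = points @ A.T + c           # each column: one creased plane a*x + b*y + c
    return np.maximum(pre, 0.0) @ w  # fold flat at zero, then weighted sum

out = net(np.array([[0.0, 0.0], [1.0, -1.0]]))
print(out.shape)  # (2,)
```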

Let's draw that process while trying to learn the picture of a circle. The original is here:





This shows how the network incrementally moves the creases around so that, on each of the convex regions created by the creases, it can choose a different affine function (with the constraint that the functions take on the same values on the creases themselves).

Let's do another movie, this time with a much larger number of first-layer neurons - 500 - and see how well we end up approximating the circle.
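For anyone who wants to reproduce something like this at home, here is a minimal training sketch (numpy; plain gradient descent instead of Adam, a much smaller network than 500 neurons, and all names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-resolution training grid over [-1, 1]^2; the target is a filled circle.
n = 32
gx, gy = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
X = np.stack([gx.ravel(), gy.ravel()], axis=1)              # (N, 2) sample points
target = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(float)  # 1 inside, 0 outside

k = 50                                 # first-layer ReLU neurons
A = rng.normal(size=(k, 2))            # per-neuron (a, b)
c = rng.normal(scale=0.5, size=k)      # per-neuron offset
w = rng.normal(scale=0.01, size=k)     # output weights

lr, steps = 0.01, 2000
losses = []
for _ in range(steps):
    pre = X @ A.T + c                  # (N, k): one creased plane per column
    h = np.maximum(pre, 0.0)           # fold each plane flat at its crease
    out = h @ w                        # weighted sum of the folded planes
    err = out - target
    losses.append(float(np.mean(err ** 2)))

    # Manual backprop through the piecewise-linear model (MSE loss).
    g_out = 2.0 * err / len(err)
    g_w = h.T @ g_out
    g_pre = np.outer(g_out, w) * (pre > 0)
    g_A = g_pre.T @ X
    g_c = g_pre.sum(axis=0)

    w -= lr * g_w
    A -= lr * g_A
    c -= lr * g_c

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Swapping the update step for Adam and raising the neuron count to 500 gets you to the setting of the movie; rendering `out.reshape(n, n)` as an image after each step produces the animation.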


Aside from being mesmerizing to watch, this is also kinda intriguing and raises a bunch of questions:

  1. I don't understand enough about Adam as an optimizer to understand where the very visible "pulse" in the optimization process is coming from. What's going on here?
  2. I am pretty surprised by the fact that so many creases end up being extremely similar -- what would cause them to bundle up into groups in the way they do? The circle is completely rotation invariant, but visually the creases seem to bunch into groups much more than a random distribution would suggest. Why?
  3. It's somewhat surprising how difficult it appears to be to learn a "sharp" edge; the edge between white and black in the above diagram is surprisingly soft. I had expected it to be easy to create a sharp edge via a narrow polytope with very large a/b constants - why is this difficult? Is regularization preventing the emergence of sharp edges (by keeping weights bounded)?

Clearly, there's work to do. For now, some entertainment: Training the same 500-neuron single-layer network to learn to reproduce a picture of me with a face full of zinc sunscreen:



It's interesting (perhaps unsurprising) that the reproduced image feels visually like folded paper.

Anyhow, this was the first installment. I'll write more about this stuff as I play and understand more.
Questions I'll explore in the near future:
  1. What happens as you deepen your network structure?
  2. What happens if you train a network on categorical data and cross-entropy instead of a continuous output with MSE?
  3. What can we learn about generalization, overfitting, and overparametrization from these experiments?
See you soon.

Wednesday, January 31, 2024

The end of my Elastic/optimyze journey ...

Hey all,

== tl;dr ==

Today is my last day at Elastic. I'll take an extended break and focus on rest, family, health, writing, a bit of startup mentoring/investing, and some research - at least for a while.

I'm thankful for my great colleagues and my leadership at Elastic - y'all are stellar, even if I was often grumbly about some technical or architectural issues. I'll also miss the ex-optimyze team a lot; you were the best team anyone doing technically sophisticated work could wish for - great individuals, but in sum greater than the parts. I think the future for the tech we built is bright, particularly in light of the recent OTel events :)

========

Extended Version:

Today is my last day at Elastic, and with that, the last day of my journey with optimyze. I am leaving with a heavy heart, and complicated emotions. The 5 years of optimyze (3 years optimyze, 2 years optimyze-integration-into-Elastic) were intense - moderately intense on the work front, but extremely intense on the life front. Fate somehow managed to cram a lot of the ups and downs of midlife into a very small number of years.

A timeline:

  1. I left Google on the 31st of December 2018, and started optimyze.cloud in February 2019. I was highly motivated by the idea of building a company that aligns my ecological, economic, and technical interests. I visited the RSA conference in SF in spring 2019 to network and get people interested in our "cut-of-savings" consulting approach. I met Corey Quinn for coffee, and to this day much appreciate all the sage advice he had (even if I had to ignore some and learn the hard lesson myself).
  2. In May 2019, I was elated to (finally!) become a father for the first time.
  3. During 2019, my co-founder Sean and I mostly spent our time trying to get our "cut-of-savings" consulting business off the ground, only to be thwarted by the unfortunate combination that (a) companies nimble enough to do it were too small to make it worth it, and (b) companies big enough to make it worth it couldn't figure out how to make the contract work from a legal and accounting perspective.
    We did a few small gigs with friendly startups, and realized in late summer that a zero-instrumentation, multi-runtime, fleet-wide profiler was sorely missing as a product. We also realized that with BOLT making progress, there'd be real value in being a SaaS that sits on profiling data from different verticals. Hence the vision for optimyze.cloud as a product company was born.
  4. By late 2019, we had a prototype for unwinding C/C++ stacks using .eh_frame, and Python code, both from eBPF. We knew we could be really zero-friction in deployment, which made us very happy and excited.
  5. We decided to raise funding, and did so over the winter months - with the funding wire transfer finally hitting our (Silicon Valley Bank) account some time in early 2020. We started building, and hiring what would turn out to be the best team I've ever worked on.
  6. We had a working UI and product by late fall 2020, and the first in-prod deployments around the same time. One particular part of the stack was too slow (a particular query that we knew we'd need to move to a distributed K/V store, but hadn't done yet), and we spent the next few months rebuilding that part of the stack to use Scylla.
  7. We made some very bad calls on the investor relations front, I foolishly stumbled into a premature, fumbled, and retrospectively idiotic fundraise, into the middle of which my second child was born and the first acquisition offers came in.
  8. We launched Prodfiler in August 2021, to great acclaim and success. People loved the product, they loved the frictionless deployment, they loved the fact that all their stack traces were symbolized out of the box etc. - the product experience was great.
  9. In mid-October, we were acquired by Elastic with the closing date November 1st. My mother had a hip surgery from which complications arose, which led to her being transferred into an ICU.
    The day the deal closed, my mother fell into a coma, and she would never wake up again. I spent the next weeks shuttling back and forth between Zurich (where my wife and my two kids were) and Essen, Germany, to spend time bedside in the ICU.
    My mother died in the morning hours of Jan 1st 2022, a few hours after the fireworks.
  10. My elderly father needed a lot of help dealing with the aftermath; at the same time the transition into the Elastic tech stack was technically challenging to pull off.
  11. In Summer 2022, my father stumbled after a small leg surgery, fell, and hit his head; after some complications in the German medical system, it became clear that the injury had induced dementia. We transferred him to a specialist hospital in Berlin and ultimately to a care home close to my brother's family. Since then, I've been shuttling back and forth to see him often.
  12. After two years of hard work at Elastic, we finally managed to launch our product again in fall 2023.

So the entire thing was 5 years, in which I had two children, started a company, hired the best team I've known, launched a product I was (and am) immensely proud of, then lost my mother, most of my father ... and "reluctantly let go" of the company and product.

The sheer forces at play when you cram so much momentum into such a short time-frame will strain everybody; and they will strain everybody's support system. I'm extremely grateful for my entire support system, in particular my brother. I don't know how I would've fared without him, but I hope my kids will have as good a relationship with each other as I do with my brother.

I'm also grateful to the folks at Elastic and the optimyze team, who were extremely supportive and understanding as I was dealing with complications outside of work.

I'm proud of what we managed to build, and I am also proud that we managed to port it to the Elastic stack and re-launch it. Even after more than 2 years focused on porting the back-end, our profiler remains ahead of the competition. I'm optimistic about what Elastic and the team can build on top of our technology, in particular with OTel profiling moving toward reality.

At the same time, I am pretty spent. My productivity is nowhere near where I expect it to be (it never is - I have difficulty accepting that I am a finite human - but the gap is bigger than usual), and this leads to me having difficulty switching off: When I feel like I am not getting the things I want to get done done, my brain wants to compensate by working more - which is rarely the right step.

So, with a heavy heart, I decided that I will take an extended break. It's been intense, and emotional, and I need some time to rest and recover, and accompany my father on his last few steps into the darkness (or light?). 2019 and 2020 were among the happiest years of my life, the last chunk of 2021 and most of 2022 the most difficult parts of my life. 2023 was trending up, and I expect things to continue trending up for the foreseeable future.

I have planned to do a bit of writing (I think having done two companies, one bootstrapped and one with VC money, gives me a few things I'd like to pass on), perhaps a bit of angel investing or VC scouting, perhaps a bit of consulting where things of particular interest arise - but mostly, I intend to stretch, breathe, be there for my kids, and get a clear view of the horizon.

Monday, December 11, 2023

A list of factors that act(ed) as drag on the European Tech/Startup scene

This post is an adaptation of a Twitter thread where I listed the various factors that, in my experience, led to a divergence between the trajectory of the US tech industry around Silicon Valley (SV) and that of the tech industry in Europe. Not all of these factors are current (some of the cultural ones are less pronounced today than they used to be), and some of them could be relatively easily fixed.

I'll add a separate post on policy suggestions at a later point.

I should also note that there are many great things about Europe -- I still live here, I'd build my next company here, and I don't think I'd ever want to migrate to SV. I'll also write about the advantages in the future.

Now, on to the list, which was spawned by a thread with @martin_casado and @bgurley on the website previously known as Twitter.
  1. Cultural factors: When I was growing up in the 90s, there was significant uncertainty in the labor market, and one way to achieve economic security was seeking a government job. In many European countries, running a limited liability construct into insolvency effectively bans you from running another one in the foreseeable future. The mentality of "start a company in your 20s, and if you fail, you can either try again or get a job" wasn't a thing. So we are operating from a risk-averse base, due to a labor market with then-sluggish job creation and strong incumbent effects. (Bert Hubert has written a more extensive article on the cultural factors here).
  2. A terrifyingly fragmented market, along legal, linguistic, and cultural lines. Imagine every US state had its own language, defense budget, legal system, tax system, culture, employment law etc. - in the US, you build a product and you tap into a market of 340m people. The biggest market in Europe is Germany at 80m, not even a quarter of the size. Then France (65m), Italy (59m), Spain (47m), and then things fragment into a long tail. By the time you hit 340m customers, you're operating in 9-10 countries, 7+ languages and legal systems etc.
  3. Equally fragmented capital markets that are individually much smaller. Take the US stock market and cut it into 10+ pieces. This has knock-on effects for IPOs: IPOs, when they happen, tend to be much smaller. Raising large amounts of capital is more difficult, while big wins are smaller. This has terrible knock-on effects all the way down to seed-stage VCs: If the power-law home run you're angling for is 1/10th the size of the home run in the US, early-stage investors need to be way more risk-averse. You can see this even today, where most European VC funds will offer less money at worse terms than their US counterparts. It was much worse in 2006-2007, when the Samwers were almost the only game in town for VC in the EU.
    Smaller IPOs also mean that it is comparatively much more attractive to sell to an existing (US-based) giant.
  4. The absence of a DARPA to shoulder fundamental research risks in technology. Different stages of R&D require different investors. The government is in the strange situation that it can indirectly benefit from investments without having an ownership stake, because it gets to tax GDP. That means that at the extremely high-risk end of R&D - fundamental research - it can afford to finance many, many long shots blindly and (comparatively) simply, as it doesn't need to track ownership. So how do you fund fundamental R&D without it devolving into scholasticism? Interestingly, the most basic test ("can I use this to cause some damage?") is already helpful. Europe's defense sector has never since WW2 grasped its role in advancing technology, and it is terribly fragmented, underfunded, and can't do much research. DARPA has financed the early-stage development of many enabling technologies. Having a guaranteed customer (DoD) for high-risk research has enabled better and higher risk-taking, and has had large downstream effects.
  5. Terrible legislation with regards to employee stock options. People talk about how many big companies in Europe are family-owned as if that were something good. It's also a symptom of legal systems that make (or made) it terribly difficult to give lots of equity to early employees. This is slowly changing through concerted lobbying, but it is still difficult in most jurisdictions, and not unified at all.
  6. The way the EU is constructed, where the EU issues a directive and each country implements its own flavor, is the worst case for legal complexity. Imagine if every state got to re-implement its own flavor of each federal law.
  7. Founder Brain Drain. Why would an ambitious founder not go to where the markets are bigger, capital is easier to raise on better terms, and incentivizing early employees is easier?
  8. Ecosystem effects permit risk-taking by employees in SV. SV has such strong demand for talent that an employee can "take risks" on early stage startups because the next job is easy to get. If you live in a place with just 1-2 big employers, leaving with intent to return is riskier.
  9. Network effects and path dependence. The fragmentation of the market led to smaller players in search and ads that then sold to larger US-based players. Without the deep revenue streams, no European player had the capital or expertise to go into cloud. As a result, there is no European player with enough compute, data, or capital to effectively compete in cloud or AI. China has homegrown players, even Russia does to some extent; Europe's closest equivalents are OVH and Hetzner, which sell on price, not on higher-level services.
  10. GDPR after-effects: EUparl saw that where US regulation is fragmented across states, the EU can act as a de-facto standards body. There is also a weird effect of "if we cannot be relevant through tech, we can still be relevant through shaping the legal landscape", and that is what leads to the terrible idea of "Europe as regulatory superpower", where it is more important for members of EUparl to have done "something" than to have done "something right" - a mentality that seems to prefer bad regulation over no regulation, when good regulation would be needed. GDPR led to higher market concentration in ads, which arguably undermines privacy in a different way, and it has imposed huge compliance and convenience costs on everybody. But in EUparl it's celebrated as a success, because hey, for once Europe was relevant (even if the net effects are negative).
  11. Pervasive shortsightedness among EU national legislators, undermining the single market and passing poor laws with negative side effects for startup and capital formation. The best example is Germany's "exit tax": Imagine being an angel investor in the US, except that moving out of state triggers immediate capital gains on all your illiquid holdings/angel investments at the valuation of the last round. It essentially means you can't angel-invest if you don't know whether you'll have to move in the next 8-10 years, because you don't know if you can afford the tax bill. It's hair-raisingly insane, and likely illegal under EU rules, but who wants to fight the German IRS in European court?
I think these are the most important factors that come to mind. I'll add more if I remember more of them.

Also, given that this post has a strong resonance with extreme "anti government" and "libertarian" types, please be aware that I am very much on a different area of the political spectrum (centre-left, somewhere where the social democrats used to reside historically in Germany). I am strongly in favor of good and competent regulation to ensure markets function, competition works, and customers are protected.

Tuesday, February 23, 2021

Book Review: "This Is How They Tell Me the World Ends"

This blog post is a review of the book "This Is How They Tell Me the World Ends" by Nicole Perlroth. The book tries to shed light on the "zero-day market" and how the US government acts in this market, as well as on various aspects of nation-to-nation "cyberwarfare".

I was excited to see this book come out given that there are relatively few hats in this field I have not worn. I have worked in information security since the late 1990s; I was part of a youth culture that playfully pioneered most of the software exploitation techniques that are now used by all major military powers. I have run a business that sold technology to both defenders and offensive actors. I have written a good number of exploits and a paper clarifying the theoretical foundations for understanding them. I have trained governments and members of civil society on both the construction and the analysis of exploits, and on the analysis of backdoors and implants. I have spent several months of my life reading the disassembled code of Stuxnet, Duqu, and the Russian Uroburos. I spent half a decade at Google supporting Google's defense against government attackers; I spent a few additional years in Project Zero trying to nudge the software industry toward better practices. Nowadays, I spend my time on efficiency instead of security.

I have always been close to, but never part of, the zero-day market. My background and current occupation give me a deep understanding of the subject, while not tying me economically to any particular perspective. I therefore feel qualified like few others to review the book.

"This Is How They Tell Me the World Ends" tackles an important question: What causes the vulnerability of our modern world to "cyberattacks"? Some chapters cover various real-world cyberattacks, some chapters try to shed light on the "market for exploits", and the epilogue of the book discusses ideas for a policy response.

The author managed to get access to a fantastic set of sources. Many things were captured on the record that were previously only discussed on background. Several chapters recount interviews with former practitioners in the exploit market, and these chapters provide a glimpse into the many fascinating and improbable personalities that make up the field. This is definitely a strong asset of the book.

Given the exciting and impactful nature of the "cyberwar" subject, the many improbable characters populating it, and the many difficult and nuanced policy questions in the field, the level of access and raw material the author gathered could have been enough for a fantastic book (or even two).

Unfortunately, "This Is How They Tell Me the World Ends" is not a fantastic book. The potential of the source material is diluted by a large number of inaccuracies or even falsehoods, a surprising amount of ethnocentricity and US-American exceptionalism (that, while being a European, I perceived to border on xenophobia), a hyperbolic narration style, and the impression of facts bent to support a preconceived narrative that has little to do with reality.

For the layperson (presumably the target audience of this book) the many half-truths and falsehoods make the book an untrustworthy guide to an important and difficult topic. For the expert, the book may be an entertaining, if jarring read, provided one has the ability to dig through a fair bit of mud to find some gold. I am confident that the raw material must be great, and where it shines through, the book is good.

Inaccuracies and Falsehoods

The topic is complex, and technical details can be difficult to get right and transmit clearly. A book without any errors cannot and should not be expected, and small technical errors should not concern the reader. That said, the book is full of severe and significant errors - key misunderstandings and false statements that are used as evidence and to support conclusions - and those do raise concerns.

I will highlight a few examples of falsehoods or misleading claims. I could only spot the falsehoods that overlapped with my own areas of expertise; extrapolating from this, I am afraid that there may be many more in the book.

The following examples are from the first third of the book; and they are illustrative of the sort of mistakes throughout: Facts are either twisted or exaggerated to the point of becoming demonstrably false; and these twists and exaggerations seem to always happen in support of a narrative that places an unhealthy focus on zero-days.

First, one of the more egregious falsehoods is the claim that NSA hacked into Google servers to steal data:

... the agency hacked its way into the internal servers at companies like Google and Yahoo to grab data before it was encrypted.

This simply did not happen. As far as anyone in the industry knows, in the case of Google, unencrypted network connections between datacenters were tapped. This may sound inconsequential, but it undermines the central "zero days are how hacking happens" theme of the book.

Second, the entire description of zero-days is full of false claims and hyperbole:

Chinese spies used a single Microsoft zero-day to steal some of Silicon Valley's most closely held source code.

This alludes to the Aurora attacks on Google; but anyone who knows Google's internal culture knows that source code is not "most closely held" by design. Google has always had a culture where every engineer could roam through almost all the code to help fix issues.

...Once hackers have figured out the commands or written the code to exploit it, they can scamper through the world's computer networks undetected until the day the underlying flaw is discovered

This is simply not true. While a zero-day exploit will provide access to a given machine or resource, it is not a magic invisibility cloak. The Chinese attackers were detected, and many other attackers are routinely detected in spite of having zero-day exploits.

...Only a select few multinationals are deemed secure enough to issue the digital certificates that vouch (...) that Windows operating system could trust the driver (...) Companies keep the private keys needed to abuse their certificates in the digital equivalent of Fort Knox.

This section is at best misleading: The driver in question was signed with a stolen JMicron "end-entity" certificate. There are thousands of those, all with the authority to sign device drivers to be trusted, and the due diligence to issue one used to be limited to providing a fax of an ID and a credit card number.

The "select few multinationals" Perlroth writes about here are the certificate authorities that issue such "end-entity" certificates. It is true that a CA is required to keep their keys on a hardware security module (a very high-security setup), and that the number of CAs that can issue driver-signing certificates is limited (and falling).

The text makes it appear as if a certificate from a certificate authority (and hence from a hardware security module) had been stolen. This is simply false. End-entity certificates are issued to hardware vendors routinely, and many hardware vendors play fast and loose with them.

(It is widely rumored - but difficult to corroborate - that there used to be a thriving black market where stolen end-entity certificates were traded a few years ago; the going rate was between $30k and $50k, if I remember correctly.)

Ethnocentricity and US exceptionalism

As a non-US person, the strangest part of the book to me was its rather extreme ethnocentricity: The US is equated with "respecting human rights", everything outside of the US is treated as both exotic and vaguely threatening, and the book obsesses over a "capability gap" through which non-US countries somehow caught up with superior US technology.

This ranges from the benign-but-silly (Canberra becomes the "outback", and Glenn Greenwald lives "in the jungles of Brazil" - evoking FARC-style guerillas, when - as far as I am informed - he lives in a heavily forested suburb of Rio) to seriously impacting and distorting the narrative.

The author seems to find it unimaginable that exploitation techniques and the use of exploits are not a US invention. The text seems to insinuate that exploit technologies and "tradecraft" were invented at NSA and then "proliferated" outward to potentially human-rights-violating "foreign-born" actors via government contractors that ran training classes.

This is false, ridiculous, and insulting on multiple levels.

First off, it is insulting to all non-US security researchers that spent good parts of their lives pioneering exploit techniques.

The reality is that the net flow of software exploitation expertise out of NSA is negative: Half a generation of non-US exploit developers migrated to the US over the years and acquired US passports eventually. The US exploit supply chain has always been heavily dependent on "foreign-born" people. NSA will enthusiastically adopt outside techniques; I have yet to learn about any exploitation technique of the last 25 years that "leaked" out of NSA vs. being invented outside.

The book's prologue, when covering NotPetya, seems to imply that Russia had needed the Shadowbrokers leaks - ("American weapons at its disposal") - to cause severe damage. Anybody with any realistic visibility into both the history of heap exploitation and the vulnerability development community knows this to be absolutely wrong.

Secondly, it seems to willfully ignore recent US history with regards to human rights. Somehow implying that the French police or the Norwegian government have a worse human rights track record than the US government - which unilaterally kills people abroad without fair trial via the drone-strike program, relatively recently stopped torturing people, and keeps prisoners in Guantanamo for 15+ years by having constructed a legal grey zone outside of the Geneva Conventions - is a bit rich.

In the chapter on Argentina, Ivan Arce calls the author out on her worldview (which was one of my favorite moments in the book), but this seems to have not caused any introspection or change of perspective. This chapter also reveals an odd relationship to gender: The narrative focuses on men wreaking havoc, and women seem to exist to rein in the out-of-control hackers. Given that there are (admittedly few, but extremely capable) women and non-binary folks active in the zero-day world, I find this narrative puzzling.

There is also an undercurrent that everything bad is caused by nefarious foreign intervention: The author expresses severe doubts that the 2016 US election would have had the same outcome without "Russian meddling", and in the Epilogue writes "it is now easier for a rogue actor to (...) sabotage (...) the Boeing 737 Max", somehow managing to link a very US-American management failure to vague evil forces.

In its US-centricity and belief in US exceptionalism, its noticeable grief about the 2016 US election, and its vague suspicion that everything bad must have a foreign cause, the book teaches the reader more about the mindset of a certain subset of the US population than about cybersecurity or cyberwarfare.

Hyperbolic language

The book is also made more difficult to read by constant use of hyperbolic language. Exploits are capable of "crashing Spacecraft into earth", "detonated to steal data", and things always need to be "the most secure", "the most secret", and so forth. The book would have benefitted from the editor-equivalent of an equalizer to balance out the wording.

The good parts

There are several things to like about the book: The chapters that are based on interviews with former practitioners are fun and engaging to read. The history of software exploits is full of interesting and unorthodox characters, and these chapters provide a glimpse into their world and mindsets.

The book also improves as it goes on: The frequency of glaring falsehoods seems to decrease - which lets the fact that it is generally engaging come through.

Depending on what one perceives the thesis of the book to be, one can also argue that the book advances an important point. The general subject - "how should US government policy balance offensive and defensive considerations" - is a deep and interesting one, and there is a deep, important, and nuanced discussion to be had about this. If the underlying premise of the book is "this discussion needs to be had", then that is good. The book seems to go much beyond this (reasonable) premise, and seems to mistakenly identify the zero-day market as the root cause of pervasive insecurity.

As a result, the book contributes little of utility to a defensive policy debate. The main drivers of cyber insecurity are hardly discussed until the Epilogue: the economic misincentives that let the tech industry earn hundreds of billions of dollars from creating the vulnerabilities in the first place (for every million earned through the sale of exploits, an order of magnitude or two more is earned through the sale of the software that creates the security flaw), and the organisational misincentives that keep effective regulation from arising (the NSA - rightly - has neither the mission nor the authority to regulate the tech industry into producing better software, so accusing it of not doing so is a bit odd). By placing too much emphasis on governments knowing about vulnerabilities, the book distracts from the economic forces that create a virtually infinite supply of them.

The Epilogue (while containing plenty to disagree with) was one of the stronger parts of the book. Its brevity makes it a bit shallow, but it touches on many points that warrant serious discussion. (Unfortunately, it again insinuates that "ex-NSA hackers tutor Turkish Generals in their tradecraft".) If anything, the Epilogue can serve as a good (albeit incomplete) list of topics to discuss in any cybersecurity policy class.

Concluding thoughts

I wish the book realized more of the potential that the material provided. The debate about the policy trade-offs for both offense and defense needs to be had (although there is less of a trade-off than most people think: other countries manage to have SIGINT agencies that do offense alongside defensive agencies focused on improving the overall security level of society, and fixing individual bugs will not fix systemic misincentives), and a good book about that topic would be very welcome.

Likewise, a book that gives a layperson a good understanding of the zero-day trade and the practitioners in the trade would be both useful and fascinating.

The present book had the potential to become either of the above good books - the first one by cutting large parts of the book and expanding the Epilogue, the second one by rigorous editing and sticking to the truth.

So I regret having to write that the present book is mostly one of unfulfilled potential, and that the lay reader needs to consult experts before taking any "fact" mentioned in the book at face value.