This is a personal blog. My other stuff: book | home page | Twitter | prepping | CNC robotics | electronics

February 02, 2018

Progressing from tech to leadership

I've been a technical person all my life. I started doing vulnerability research in the late 1990s - and even today, when I'm not fiddling with CNC-machined robots or making furniture, I'm probably cobbling together a fuzzer or writing a book about browser protocols and APIs. In other words, I'm a geek at heart.

My career is a different story. Over the past two decades and change, I went from writing CGI scripts and setting up WAN routers for a chain of shopping malls, to doing pentests for institutional customers, to designing a series of network monitoring platforms and handling incident response for a big telco, to building and running the product security org for one of the largest companies in the world. It's been an interesting ride - and now that I'm on the hook for the well-being of about 100 folks across more than a dozen subteams around the world, I've been thinking a bit about the lessons learned along the way.

Of course, I'm a bit hesitant to write such a post: sometimes, your efforts pan out not because of your approach, but despite it - and it's possible to draw precisely the wrong conclusions from such anecdotes. Still, I'm very proud of the culture we've created and the caliber of folks working on our team. It happened through the work of quite a few talented tech leads and managers even before my time, but it did not happen by accident - so I figured that my observations may be useful for some, as long as they are taken with a grain of salt.

But first, let me start on a somewhat somber note: what nobody tells you is that one's level on the leadership ladder tends to be inversely correlated with several measures of happiness. The reason is fairly simple: as you get more senior, a growing number of people will come to you expecting you to solve increasingly fuzzy and challenging problems - and you will no longer be patted on the back for doing so. This should not scare you away from such opportunities, but it definitely calls for a particular mindset: your motivation must come from within. Look beyond the fight-of-the-day; find satisfaction in seeing how far your teams have come over the years.

With that out of the way, here's a collection of notes, loosely organized into three major themes.

The curse of a techie leader

Perhaps the most interesting observation I have is that for a person coming from a technical background, building a healthy team is first and foremost about the subtle art of letting go.

There is a natural urge to stay involved in any project you've started or helped improve; after all, it's your baby: you're familiar with all the nuts and bolts, and nobody else can do this job as well as you. But as your sphere of influence grows, this becomes a choke point: there are only so many things you could be doing at once. Just as importantly, the project-hoarding behavior robs more junior folks of the ability to take on new responsibilities and bring their own ideas to life. In other words, when done properly, delegation is not just about freeing up your plate; it's also about empowerment and about signalling trust.

Of course, when you hand your project over to somebody else, the new owner will initially be slower and more clumsy than you; but if you pick the new leads wisely, give them the right tools and the right incentives, and don't make them deathly afraid of messing up, they will soon excel at their new jobs - and be grateful for the opportunity.

A related affliction of many accomplished techies is the conviction that they know the answers to every question even tangentially related to their domain of expertise; that belief is coupled with a burning desire to have the last word in every debate. When practiced in moderation, this behavior is fine among peers - but for a leader, one of the most important skills to learn is knowing when to keep your mouth shut: people learn a lot better by experimenting and making small mistakes than by being schooled by their boss, and they often try to read into your passing remarks. Don't run an authoritarian camp focused on total risk aversion or perfectly efficient resource management; just set reasonable boundaries and exit conditions for experiments so that they don't spiral out of control - and be amazed by the results every now and then.

Death by planning

When nothing is on fire, it's easy to get preoccupied with maintaining the status quo. If your current headcount or budget request lists all the same projects as last year's, or if you ever find yourself ending an argument by deferring to a policy or a process document, it's probably a sign that you're getting complacent. In security, complacency usually ends in tears - and when it doesn't, it leads to burnout or boredom.

In my experience, your goal should be to develop a cadre of managers or tech leads capable of coming up with clever ideas, prioritizing them among themselves, and seeing them to completion without your day-to-day involvement. In your spare time, make it your mission to challenge them to stay ahead of the curve. Ask your vendor security lead how they'd streamline their work if they had a 40% jump in the number of vendors but no extra headcount; ask your product security folks what the second line of defense or containment would be should your primary defenses fail. Help them get good ideas off the ground; set some mental success and failure criteria so that you can cut your losses if something does not pan out.

Of course, malfunctions happen even in the best-run teams; to spot trouble early on, instead of overzealous project tracking, I found it useful to encourage folks to run a data-driven org. I'd usually ask them to imagine that a brand new VP shows up in our office and, as his first order of business, asks "why do you have so many people here and how do I know they are doing the right things?". Not everything in security can be quantified, but hard data can validate many of your assumptions - and will alert you to unseen issues early on.

When focusing on data, it's important not to treat pie charts and spreadsheets as an end in themselves; if you run a security review process for your company, your CSAT scores are going to reach 100% if you just rubberstamp every launch request within ten minutes of receiving it. Make sure you're asking the right questions; instead of "how satisfied are you with our process", try "is your product better as a consequence of talking to us?"

Whenever things are not progressing as expected, it is a natural instinct to fall back to micromanagement, but it seldom truly cures the ill. More likely, your team disagrees with your vision or its feasibility - and either you're not listening to their feedback, or they don't think you'd care. It's good to assume that most of your employees are as smart as or smarter than you; barking orders at them more loudly or more frequently does not lead anyplace good. Instead, listen to them, and either present new facts or work with them on a plan you can all get behind.

In some circumstances, all that's needed is honesty about the business trade-offs, so that your team feels like your "partner in crime", not a victim of circumstance. For example, we'd tell our folks that by not falling behind on basic, unglamorous work, we earn the trust of our VPs and SVPs - and that this translates into the independence and the resources we need to pursue more ambitious ideas without being told what to do; it's how we game the system, so to speak. Oh: leading by example is a pretty powerful tool at your disposal, too.

The human factor

I've come to appreciate that hiring decent folks who can get along with others is far more important than trying to recruit conference-circuit superstars. In fact, hiring superstars is a decidedly hit-and-miss affair: while certainly not the rule, a notable proportion of them put the maintenance of their celebrity status ahead of their job responsibilities or the well-being of their peers.

For teams, one of the most powerful demotivators is a sense of unfairness and disempowerment. This is where tech-originating leaders can shine, because their teams usually feel that their bosses understand and can evaluate the merits of the work. But it also means you need to be decisive and actually solve problems for them, rather than just letting them vent. You will need to make unpopular decisions every now and then; in such cases, I think it's important to move quickly, rather than prolonging the uncertainty - but it's also important to sincerely listen to concerns, explain your reasoning, and be frank about the risks and trade-offs.

Whenever you see a clash of personalities on your team, you probably need to respond swiftly and decisively; being right should not justify being a bully. If you don't react to repeated scuffles, your best people will probably start looking for other opportunities: it's draining to put up with constant pie fights, no matter if the pies are thrown straight at you or if you just need to duck one every now and then.

More broadly, personality differences seem to be a much better predictor of conflict than any technical aspects underpinning a debate. As a boss, you need to identify such differences early on and come up with creative solutions. Sometimes, all it takes is relaying some badly-delivered but valid feedback in a calmer conversation with the other person, asking questions that help them reach the same conclusions without feeling that their worldview is under attack. Other times, the only path forward is making sure that some folks simply don't run into each other for a while.

Finally, dealing with low performers is a notoriously hard but important part of the game. Especially within large companies, there is always the temptation to just let it slide: sideline a struggling person and wait for them to either get over their issues or leave. But this sends an awful message to the rest of the team; for better or worse, fairness is important to most. Simply firing the low performers is seldom the best solution, though; successful recovery cases are what sets great managers apart from the average ones.

Oh, one more thought: people in leadership roles have their allegiance divided between the company and the people who depend on them. The obligation to the company is more formal, but the impact you have on your team is longer-lasting and more intimate. When the obligations to the employer and to your team collide in some way, make sure you can make the right call; it might be one of the most consequential decisions you'll ever make.

December 13, 2017

The deal with Bitcoin

♪ Used to have a little now I have a lot
I'm still, I'm still Jenny from the block
          chain ♪

For all that has been written about Bitcoin and its ilk, it is curious that the focus is almost solely on what the cryptocurrencies are supposed to be. Technologists wax lyrical about the potential for blockchains to change almost every aspect of our lives. Libertarians and paleoconservatives ache for the return to "sound money" that can't be conjured up at the whim of a bureaucrat. Mainstream economists wag their fingers, proclaiming that a proper currency can't be deflationary, that it must maintain a particular velocity, or that the government must be able to nip crises of confidence in the bud. And so on.

Much of this may be true, but the proponents of cryptocurrencies should recognize that an appeal to consequences is not a guarantee of good results. The critics, on the other hand, would be best served to remember that they are drawing far-reaching conclusions about the effects of modern monetary policies based on a very short and tumultuous period in history.

In this post, my goal is to ditch most of the dogma, talk a bit about the origins of money - and then see how "crypto" fits the bill.

1. The prehistory of currencies

The emergence of money is usually explained in a very straightforward way. You know the story: a farmer raised a pig, a cobbler made a shoe. The cobbler needed to feed his family while the farmer wanted to keep his feet warm - and so they met to exchange the goods on mutually beneficial terms. But as the tale goes, the barter system had a fatal flaw: sometimes, a farmer wanted a cooking pot, a potter wanted a knife, and a blacksmith wanted a pair of pants. To facilitate increasingly complex, multi-step exchanges without requiring dozens of people to meet face to face, we came up with an abstract way to represent value - a shiny coin guaranteed to be accepted by every tradesman.

It is a nice parable, but it probably isn't very true. It seems far more plausible that early societies relied on the concept of debt long before the advent of currencies: an informal tally or a formal ledger would be used to keep track of who owes what to whom. The concept of debt, closely associated with one's trustworthiness and standing in the community, would have enabled a wide range of economic activities: debts could be paid back over time, transferred, renegotiated, or forgotten - all without having to engage in spot barter or to mint a single coin. In fact, such non-monetary, trust-based, reciprocal economies are still common in closely-knit communities: among families, neighbors, coworkers, or friends.

In such a setting, primitive currencies probably emerged simply as a consequence of having a system of prices: a cow being worth a particular number of chickens, a chicken being worth a particular number of beaver pelts, and so forth. Formalizing such relationships by settling on a single, widely-known unit of account - say, one chicken - would make it more convenient to transfer, combine, or split debts; or to settle them in alternative goods.

Contrary to popular belief, for communal ledgers, the unit of account probably did not have to be particularly desirable, durable, or easy to carry; it was simply an accounting tool. And indeed, we sometimes run into fairly unusual units of account even in modern times: for example, cigarettes can be the basis of a bustling prison economy even when most inmates don't smoke and there are not that many packs to go around.

2. The age of commodity money

In the end, the development of coinage might have had relatively little to do with communal trade - and far more with the desire to exchange goods with strangers. When dealing with an unfamiliar or hostile tribe, the concept of a chicken-denominated ledger does not hold up: the other side might be disinclined to honor its obligations - and get away with it, too. To settle such problematic trades, we needed a "spot" medium of exchange that would be easy to carry and authenticate, had a well-defined value, and enjoyed near-universal appeal. Throughout much of recorded history, precious metals - predominantly gold and silver - proved to fit the bill.

In the most basic sense, such commodities could be seen as a tool to reconcile debts across societal boundaries, without necessarily replacing any local units of account. An obligation, denominated in some local currency, would be created on the buyer's side in order to procure the metal for the trade. The proceeds of the completed transaction would in turn allow the seller to settle their own local obligations that arose from having to source the traded goods. In other words, our wondrous chicken-denominated ledgers could coexist peacefully with gold - and when commodity coinage finally took hold, it's likely that in everyday trade, precious metals served more as a useful abstraction than a precise store of value. A "silver chicken" of sorts.

Still, the emergence of commodity money had one interesting side effect: it decoupled the unit of debt - a "claim on the society", in a sense - from any moral judgment about its origin. A piece of silver would buy the same amount of food, whether earned through hard labor or won in a drunken bet. This disconnect remains a central theme in many of the debates about social justice and unfairly earned wealth.

3. The State enters the game

If there is one advantage of chicken ledgers over precious metals, it's that all chickens look and cluck roughly the same - something that can't be said of every nugget of silver or gold. To cope with this problem, we needed to shape raw commodities into pieces of a more predictable shape and weight; a trusted party could then stamp them with a mark to indicate the value and the quality of the coin.

At first, the task of standardizing coinage rested with private parties - but the responsibility was soon assumed by the State. The advantages of this transition seemed clear: a single, widely-accepted and easily-recognizable currency could be now used to settle virtually all private and official debts.

Alas, in what deserves the dubious distinction of being one of the earliest examples of monetary tomfoolery, some States succumbed to the temptation of fiddling with the coinage to accomplish anything from feeding the poor to waging wars. In particular, it would be common to stamp coins with the same face value but a progressively lower content of silver and gold. Perhaps surprisingly, the strategy worked remarkably well; at least in the times of peace, most people cared about the value stamped on the coin, not its precise composition or weight.

And so, over time, representative money was born: sooner or later, most States opted to mint coins from nearly-worthless metals, or print banknotes on paper and cloth. This radically new currency was accompanied by a simple pledge: the State offered to redeem it at any time for its nominal value in gold.

Of course, the promise was largely illusory: the State did not have enough gold to honor all the promises it had made. Still, as long as people had faith in their rulers and the redemption requests stayed low, the fundamental mechanics of this new representative currency remained roughly the same as before - and in some ways, were an improvement in that they lessened the insatiable demand for a rare commodity. Just as importantly, the new money still enabled international trade - using the underlying gold exchange rate as a reference point.

4. Fractional reserve banking and fiat money

For much of recorded history, banking was an exceptionally dull affair, not much different from running a communal chicken ledger of old. But then, something truly marvelous happened in the 17th century: around that time, many European countries witnessed the emergence of fractional-reserve banks.

These private ventures operated according to a simple scheme: they accepted people's coin for safekeeping, promising to pay a premium on every deposit made. To meet these obligations and to make a profit, the banks then used the pooled deposits to make high-interest loans to other folks. The financiers figured out that under normal circumstances and when operating at a sufficient scale, they needed only a very modest reserve - well under 10% of all deposited money - to be able to service the usual volume and size of withdrawals requested by their customers. The rest could be loaned out.

The very curious consequence of fractional-reserve banking was that it pulled new money out of thin air. The funds were simultaneously accounted for in the statements shown to the depositor, evidently available for withdrawal or transfer at any time; and given to third-party borrowers, who could spend them on just about anything. Heck, the borrowers could deposit the proceeds in another bank, creating even more money along the way! Whatever they did, the sum of all funds in the monetary system now appeared much higher than the value of all coins and banknotes issued by the government - let alone the amount of gold sitting in any vault.
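To make the accounting a bit more tangible, here's a minimal sketch of the deposit-loan cycle; the initial deposit of 1,000 coins, the 10% reserve ratio, and the assumption that every loan gets redeposited in full are all illustrative:

```python
# Toy model of deposit expansion under fractional-reserve banking.
# Each bank keeps the required reserve and lends out the rest; every
# loan is assumed to be redeposited at another bank, ad infinitum.

def total_broad_money(initial_deposit: float, reserve_ratio: float,
                      rounds: int = 100) -> float:
    """Sum of all deposits on the books after `rounds` deposit-loan cycles."""
    total = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= 1 - reserve_ratio  # the portion loaned out and redeposited
    return total

# With a 10% reserve, 1,000 coins of government-issued money balloon into
# nearly 10,000 coins of broad money (the limit is deposit / reserve_ratio).
print(round(total_broad_money(1000, 0.10)))  # ~10000
```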

Of course, no new money was being created in any physical sense: all that banks were doing was engaging in a bit of creative accounting - the sort of which would probably land you in jail if you attempted it today in any other comparably vital field of enterprise. If too many depositors were to ask for their money back, or if too many loans were to go bad, the banking system would fold. Fortunes would evaporate in a puff of accounting smoke, and with the disappearance of vast quantities of quasi-fictitious ("broad") money, the wealth of the entire nation would shrink.

In the early 20th century, the world kept witnessing just that; a series of bank runs and economic contractions forced the governments around the globe to act. At that stage, outlawing fractional-reserve banking was no longer politically or economically tenable; a simpler alternative was to let go of gold and move to fiat money - a currency implemented as an abstract social construct, with no predefined connection to the physical realm. A new breed of economists saw the role of the government not in trying to peg the value of money to an inflexible commodity, but in manipulating its supply to smooth out economic hiccups or to stimulate growth.

(Contrary to popular belief, such manipulation is usually not done by printing new banknotes; more sophisticated methods, such as lowering reserve requirements for bank deposits or enticing banks to invest their deposits into government-issued securities, are the preferred route.)

The obvious peril of fiat money is that in the long haul, its value is determined strictly by people's willingness to accept a piece of paper in exchange for their trouble; that willingness, in turn, is conditioned solely on their belief that the same piece of paper would buy them something nice a week, a month, or a year from now. It follows that a simple crisis of confidence could make a currency nearly worthless overnight. A prolonged period of hyperinflation and subsequent austerity in Germany and Austria was one of the precipitating factors that led to World War II. In more recent times, dramatic episodes of hyperinflation plagued the fiat currencies of Israel (1984), Mexico (1988), Poland (1990), Yugoslavia (1994), Bulgaria (1996), Turkey (2002), Zimbabwe (2009), Venezuela (2016), and several other nations around the globe.

For the United States, the switch to fiat money came relatively late, in 1971. To stop the dollar from plunging like a rock, the Nixon administration employed a clever trick: they ordered the freeze of wages and prices for the 90 days that immediately followed the move. People went on about their lives and paid the usual for eggs or milk - and by the time the freeze ended, they were accustomed to the idea that the "new", free-floating dollar is worth about the same as the old, gold-backed one. A robust economy and favorable geopolitics did the rest, and so far, the American adventure with fiat currency has been rather uneventful - perhaps except for the fact that the price of gold itself skyrocketed from $35 per troy ounce in 1971 to $850 in 1980 (or, from $210 to $2,500 in today's dollars).

Well, one thing did change: now better positioned to freely tamper with the supply of money, the regulators, in accord with the bankers, adopted a policy of creating it at a rate that slightly outstripped the organic growth in economic activity. They did this to induce a small, steady degree of inflation, believing that it would discourage people from hoarding cash and force them to reinvest it for the betterment of the society. Some critics like to point out that such a policy functions as a "backdoor" tax on savings, and that it happens to align with the regulators' less noble interests; either way, in the US and most other developed nations, the purchasing power of any money kept under a mattress will drop at a rate of somewhere between 2% and 10% a year.
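That slow erosion compounds quietly; a back-of-the-envelope sketch, using the 2% and 10% bounds above as assumed steady rates:

```python
# Purchasing power left after keeping cash under a mattress for a number
# of years, assuming its value drops at a steady annual rate.

def purchasing_power(years: int, annual_drop: float) -> float:
    """Fraction of the original purchasing power remaining after `years`."""
    return (1 - annual_drop) ** years

# After 20 years, $100 in cash buys about $67 worth of goods at a 2% drop -
# and only about $12 worth at a 10% drop.
print(round(100 * purchasing_power(20, 0.02)))  # 67
print(round(100 * purchasing_power(20, 0.10)))  # 12
```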

5. So what's up with Bitcoin?

Well... countless tomes have been written about the nature and the optimal characteristics of government-issued fiat currencies. Some heterodox economists, notably including Murray Rothbard, have also explored the topic of privately-issued, decentralized, commodity-backed currencies. But Bitcoin is a wholly different animal.

In essence, BTC is a global, decentralized fiat currency: it has no (recoverable) intrinsic value, no central authority to issue it or define its exchange rate, and it has no anchoring to any historical reference point - a combination that until recently seemed nonsensical and escaped any serious scrutiny. It does the unthinkable by employing three clever tricks:

  1. It allows anyone to create new coins, but only by solving brute-force computational challenges that get more difficult as time goes by,

  2. It prevents unauthorized transfer of coins by employing public key cryptography to sign transactions, with only the authorized holder of a coin knowing the correct key,

  3. It prevents double-spending by using a distributed public ledger ("blockchain"), recording the chain of custody for coins in a tamper-proof way.
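The first of these tricks is easy to demo. Below is a toy proof-of-work puzzle in the same spirit - a drastically simplified sketch, not Bitcoin's actual scheme (which repeatedly hashes block headers and compares the result against a numeric target):

```python
import hashlib

def mine(data: bytes, difficulty: int) -> int:
    """Find a nonce such that SHA-256(data + nonce) starts with
    `difficulty` zero hex digits; each extra digit makes the search
    about 16 times harder, which is how the challenge can be tuned
    to get more difficult over time."""
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = mine(b"hello", 4)  # tens of thousands of attempts, on average
print(nonce, hashlib.sha256(b"hello" + str(nonce).encode()).hexdigest())
```

Verifying a solution, by contrast, takes a single hash - which is what lets every participant cheaply check everyone else's claimed work.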

The blockchain is often described as the most important feature of Bitcoin, but in some ways, its importance is overstated. The idea of a currency that does not rely on a centralized transaction clearinghouse is what helped propel the platform into the limelight - mostly because of its novelty and the perception that it is less vulnerable to government meddling (although the government is still free to track down, tax, fine, or arrest any participants). On the flip side, the everyday mechanics of BTC would not be fundamentally different if all the transactions had to go through Bitcoin Bank, LLC.

A more striking feature of the new currency is the incentive structure surrounding the creation of new coins. The underlying design democratized the creation of new coins early on: all you had to do was leave your computer running for a while to acquire a number of tokens. The tokens had no practical value, but obtaining them involved no substantial expense or risk. Just as importantly, because the difficulty of the puzzles would only increase over time, the hope was that if Bitcoin caught on, latecomers would find it easier to purchase BTC on a secondary market than mine their own - paying with a more established currency at a mutually beneficial exchange rate.

The persistent publicity surrounding Bitcoin and other cryptocurrencies did the rest - and today, with the growing scarcity of coins and the rapidly increasing demand, the price of a single token hovers somewhere south of $15,000.

6. So... is it bad money?

Predicting is hard - especially the future. In some sense, a coin that represents a cryptographic proof of wasted CPU cycles is no better or worse than a currency that relies on cotton decorated with pictures of dead presidents. It is true that Bitcoin suffers from many implementation problems - long transaction processing times, high fees, frequent security breaches of major exchanges - but in principle, such problems can be overcome.

That said, currencies live and die by the lasting willingness of others to accept them in exchange for services or goods - and in that sense, the jury is still out. The use of Bitcoin to settle bona fide purchases is negligible, both in absolute terms and in function of the overall volume of transactions. In fact, because of the technical challenges and limited practical utility, some companies that embraced the currency early on are now backing out.

When the value of an asset is derived almost entirely from its appeal as an ever-appreciating investment vehicle, the situation has all the telltale signs of a speculative bubble. But that does not prove that the asset is destined to collapse, or that a collapse would be its end. Still, the built-in deflationary mechanism of Bitcoin - the increasing difficulty of producing new coins - is probably both a blessing and a curse.

It's going to go one way or the other; and when it's all said and done, we're going to celebrate the people who made the right guess. Because the future is actually pretty darn easy to predict - in retrospect.

December 10, 2017

Weekend distractions, part deux: a bench, and stuff

Continuing the tradition of the previous post, here's a perfectly good bench:

The legs are 8/4 hard maple, cut into 2.3" (6 cm) strips and then glued together. The top is 4/4 domestic walnut, with an additional strip glued to the bottom to make it look thicker (because gosh darn, walnut is expensive).

Cut on a bandsaw, joined with a biscuit joiner and glue, then sanded - that's about it. I'm still applying finish (nitrocellulose lacquer from a rattle can), but this was the last moment I could snap a photo before it got dark - and it basically looks like the final product anyway. Pretty simple, but it turned out nice.

Several additional, smaller woodworking projects here.

November 04, 2017

Weekend distractions: a perfectly good dining table

I've been a DIYer all my adult life. Some of my non-software projects still revolve around computers, especially when they deal with CNC machining or electronics. But I've also been dabbling in woodworking for quite a while. I have not put much effort into documenting my projects (say, cutting boards) - but I figured it's time to change that. It may inspire some folks to give a new hobby a try - or help them overcome a problem or two.

So, without further ado, here's the build log for a dining table I put together over the past two weekends or so. I think it turned out pretty nice:

Have fun!

May 04, 2017

RFD: the alien abduction prophecy protocol

"It's tough to make predictions, especially about the future."
- variously attributed to Yogi Berra and Niels Bohr

Right. So let's say you are visited by transdimensional space aliens from outer space. There's some old-fashioned probing, but eventually, they get to the point. They outline a series of apocalyptic prophecies, beginning with the surprise 2032 election of Dwayne Elizondo Mountain Dew Herbert Camacho as the President of the United States, followed by a limited-scale nuclear exchange with the Grand Duchy of Ruritania in 2036, and culminating with the extinction of all life due to a series of cascading Y2K38 failures that start at an Ohio pretzel reprocessing plant. Long story short, if you want to save mankind, you have to warn others of what's to come.

But there's a snag: when you wake up in a roadside ditch in Alabama, you realize that nobody is going to believe your story! If you come forward, your professional and social reputation will be instantly destroyed. If you're lucky, the vindication of your claims will come fifteen years later; if not, it might turn out that you were pranked by some space alien frat boys who just wanted to have some cheap space laughs. The bottom line is, you need to be certain before you make your move. You figure this means staying mum until the Election Day of 2032.

But wait, this plan is also not very good! After all, how could your future self convince others that you knew about President Camacho all along? Well... if you work in information security, you are probably familiar with a neat solution: write down your account of events in a text file, calculate a cryptographic hash of this file, and publish the resulting value somewhere permanent. Fifteen years later, reveal the contents of your file and point people to your old announcement. Explain that you must have been in the possession of this very file back in 2017; otherwise, you would not have known its hash. Voila - a commitment scheme!
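The whole scheme fits in a few lines of Python; the filename is made up, and note that for a short or guessable message, you'd also want to pad the file with a long random salt (revealed along with the text) so that nobody can brute-force the contents from the digest alone:

```python
import hashlib

# Commit now: hash the prophecy file and publish only the digest.
def commit(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Reveal later: republish the file; anyone can recompute the digest
# and check it against the value you posted years earlier.
def verify(path: str, published_digest: str) -> bool:
    return commit(path) == published_digest

# digest = commit("prophecy.txt")  # publish this string far and wide
# ...fifteen years pass...
# assert verify("prophecy.txt", digest)
```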

Although elegant, this approach can be risky: historically, the usable life of cryptographic hash functions has hovered somewhere around 15 years - so even if you pick a very modern algorithm, there is a real risk that future advances in cryptanalysis could severely undermine the strength of your proof. No biggie, though! For extra safety, you could combine several independent hash functions, or increase the computational complexity of the hash by running it in a loop. There are also some less-known designs, such as SPHINCS (a hash-based signature scheme), that are built with different trade-offs in mind and may offer longer-term security guarantees.
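Both kludges - combining independent hash functions and looping - can be sketched as follows; the SHA-256 + SHA-512 pairing and the default iteration count are arbitrary examples, not recommendations:

```python
import hashlib

def hardened_commitment(data: bytes, rounds: int = 1_000_000) -> str:
    """Concatenate two independent digests, then feed the result back in
    for many rounds: a future attacker would have to defeat both hash
    functions, and pay the full cost of the loop for every attempt."""
    digest = data
    for _ in range(rounds):
        digest = (hashlib.sha256(digest).digest() +
                  hashlib.sha512(digest).digest())
    return digest.hex()
```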

Of course, the computation of the hash is not enough; it needs to become an immutable part of the public record and remain easy to look up for years to come. There is no guarantee that any particular online publishing outlet is going to stay afloat that long and continue to operate in its current form. The survivability of more specialized and experimental platforms, such as blockchain-based notaries, seems even less clear. Thankfully, you can resort to another kludge: if you publish the hash through a large number of independent online venues, there is a good chance that at least one of them will be around in 2032.

(Offline notarization - whether of the pen-and-paper or the PKI-based variety - offers an interesting alternative. That said, in the absence of an immutable, public ledger, accusations of forgery or collusion would be very easy to make - especially if the fate of the entire planet is at stake.)

Even with this out of the way, there is yet another profound problem with the plan: a current-day scam artist could conceivably generate hundreds or thousands of political predictions, publish the hashes, and then simply discard or delete the ones that do not come true by 2032 - thus creating an illusion of prescience. To convince skeptics that you are not doing just that, you could incorporate a cryptographic proof of work into your approach, attaching a particular CPU time "price tag" to every hash. The future you could then claim that it would have been prohibitively expensive for the former you to attempt the "prediction spam" attack. But this argument seems iffy: a $1,000 proof may already be too costly for a lower middle class abductee, while a determined tech billionaire could easily spend $100,000 to pull off an elaborate prank on the entire world. Not to mention, massive CPU resources can be commandeered with little or no effort by the operators of large botnets and many other actors of this sort.
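
The "price tag" can be approximated with a hashcash-style search for a partial hash preimage; the difficulty parameter below is arbitrary, with the expected cost doubling for every extra bit:

```python
import hashlib
from itertools import count

def attach_pow(digest_hex, difficulty=16):
    # Find a nonce such that sha256(digest:nonce) begins with
    # difficulty/4 zero hex digits - roughly 2**difficulty attempts.
    prefix = "0" * (difficulty // 4)
    for nonce in count():
        probe = f"{digest_hex}:{nonce}".encode()
        if hashlib.sha256(probe).hexdigest().startswith(prefix):
            return nonce

def check_pow(digest_hex, nonce, difficulty=16):
    # Verification, in contrast, takes a single hash computation.
    probe = f"{digest_hex}:{nonce}".encode()
    return hashlib.sha256(probe).hexdigest().startswith("0" * (difficulty // 4))
```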

In the end, my best idea is to rely on an inherently low-bandwidth publication medium, rather than a high-cost one. For example, although a determined hoaxer could place thousands of hash-bearing classifieds in some of the largest-circulation newspapers, such sleight-of-hand would be trivial for future sleuths to spot (at least compared to combing through the entire Internet for an abandoned hash). Or, as per an anonymous suggestion relayed by Thomas Ptacek: just tattoo the signature on your body, then post some pics; there are only so many places for a tattoo to go.

Still, what was supposed to be a nice, scientific proof devolved into a bunch of hand-wavy arguments and poorly-quantified probabilities. For the sake of future abductees: is there a better way?

April 22, 2017

AFL experiments, or please eat your brötli

When messing around with AFL, you sometimes stumble upon something unexpected or amusing. Say, having the fuzzer spontaneously synthesize JPEG files, come up with non-trivial XML syntax, or discover SQL semantics.

It is also fun to challenge yourself to employ fuzzers in non-conventional ways. Two canonical examples are having your fuzzing target call abort() whenever two libraries that are supposed to implement the same algorithm produce different outputs for identical input data; or whenever a single library produces different outputs when asked to encode or decode the same data several times in a row.
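
To illustrate the stability variant, here is a toy harness that uses Python's zlib as a stand-in for the library under test; a real AFL target would be compiled C, but the principle is identical:

```python
import os
import zlib

def stability_harness(data):
    # A deterministic codec must produce identical output when fed
    # the same input twice; a mismatch aborts the process, which a
    # fuzzer such as AFL reports as a crash.
    if zlib.compress(data) != zlib.compress(data):
        os.abort()
    # Likewise, a compress-decompress round trip must be lossless.
    if zlib.decompress(zlib.compress(data)) != data:
        os.abort()
```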

Such tricks may sound fanciful, but they actually find interesting bugs. In one case, AFL-based equivalence fuzzing revealed a bunch of fairly rudimentary flaws in common bignum libraries, with some theoretical implications for crypto apps. Another time, output stability checks revealed long-lived issues in IJG jpeg and other widely-used image processing libraries, leaking data across web origins.

In one of my recent experiments, I decided to fuzz brotli, an innovative compression library used in Chrome. But since it's been already fuzzed for many CPU-years, I wanted to do it with a twist: stress-test the compression routines, rather than the usually targeted decompression side. The latter is a far more fruitful target for security research, because decompression normally involves parsing potentially malformed, attacker-supplied inputs, whereas compression code is meant to accept arbitrary data and not think about it too hard. That said, the low likelihood of flaws also means that the compression bits are a relatively unexplored surface that may be worth poking with a stick every now and then.

In this case, the library held up admirably - save for a handful of computationally intensive plaintext inputs (that are now easy to spot due to the recent improvements to AFL). But the output corpus synthesized by AFL, after being seeded with a single file containing just "0", featured quite a few peculiar finds:

  • Strings that looked like viable bits of HTML or XML: <META HTTP-AAA IDEAAAA, DATA="IIA DATA="IIA DATA="IIADATA="IIA, </TD>.

  • Non-trivial numerical constants: 1000,1000,0000000e+000000, 0,000 0,000 0,0000 0x600, 0000,$000: 0000,$000:00000000000000.

  • Nonsensical but undeniably English sentences: them with them m with them with themselves, in the fix the in the pin th in the tin, amassize the the in the in the inhe@massive in, he the themes where there the where there, size at size at the tie.

  • Bogus but semi-legible URLs: / /00(0(

  • Snippets of Lisp code: )))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))).

The results are quite unexpected, given that they are just a product of randomly mutating a single-byte input file and observing the code coverage in a simple compression tool. The explanation is that brotli, in addition to more familiar binary coding methods, uses a static dictionary constructed by analyzing common types of web content. Somehow, by observing the behavior of the program, AFL was able to incrementally reconstruct quite a few of these hardcoded keywords - and then put them together in various semi-interesting ways. Not bad.

February 01, 2017

...or, how I learned not to be a jerk in 20 short years

People who are accomplished in one field of expertise tend to believe that they can bring unique insights to just about any other debate. I am as guilty as anyone: at one time or another, I aired my thoughts on anything from CNC manufacturing, to electronics, to emergency preparedness, to politics. Today, I'm about to commit the same sin - but instead of pretending to speak from a position of authority, I wanted to share a more personal tale.

The author, circa 1995. The era of hand-crank computers and punch cards.

Back in my school days, I was that one really tall and skinny kid in the class. I wasn't trying to stay this way; I preferred computer games to sports, and my grandma's Polish cooking was heavy on potatoes, butter, chicken, dumplings, cream, and cheese. But that did not matter: I could eat what I wanted, as often as I wanted, and I still stayed in shape. This made me look down on chubby kids; if my reckless ways had little or no effect on my body, it followed that they had to be exceptionally lazy and must have lacked even the most basic form of self-control.

As I entered adulthood, my habits remained the same. I felt healthy and stayed reasonably active, walking to and from work every other day and hiking with friends whenever I could. But my looks started to change:

The author at a really exciting BlackHat party in 2002.

I figured it's just a part of growing up. But somewhere around my twentieth birthday, I stepped on a bathroom scale and typed the result into an online calculator. I was surprised to find out that my BMI was about 24 - pretty darn close to overweight.

"Pssh, you know how inaccurate these things are!", I exclaimed while searching online to debunk that whole BMI thing. I mean, sure, I had some belly fat - maybe a pizza or two too far - but nothing that wouldn't go away in time. Besides, I was doing fine, so what would be the point of submitting to society's idea of the "right" weight?

It certainly helped that I was having a blast at work. I made a name for myself in the industry, published a fair amount of cool research, authored a book, settled down, bought a house, had a kid. It wasn't until the age of 26 that I strayed into a doctor's office for a routine checkup. When the nurse asked me about my weight, I blurted out "oh, 175 pounds, give or take". She gave me a funny look and asked me to step on the scale.

Turns out it was quite a bit more than 175 pounds. With a BMI of 27.1, I was now firmly into the "overweight" territory. Yeah yeah, the BMI metric was a complete hoax - but why did my passport photos look less flattering than before?

A random mugshot from 2007. Some people are just born big-boned, I think.

Well, damn. I knew what had to happen: from now on, I was going to start eating healthier foods. I traded Cheetos for nuts, KFC for sushi rolls, greasy burgers for tortilla wraps, milk smoothies for Jamba Juice, fries for bruschettas, regular sodas for diet. I'd even throw in a side of lettuce every now and then. It was bound to make a difference. I just wasn't gonna be one of the losers who check their weight every day and agonize over every calorie on their plate. (Weren't calories a scam, anyway? I think I read that on that cool BMI conspiracy site.)

By the time I turned 32, my body mass index hit 29. At that point, it wasn't just a matter of looking chubby. I could do the math: at that rate, I'd be in a real pickle in a decade or two - complete with a ~50% chance of developing diabetes or cardiovascular disease. This wouldn't just make me miserable, but also mess up the lives of my spouse and kids.

Presenting at Google TGIF in 2013. It must've been the unflattering light.

I wanted to get this over with right away, so I decided to push myself hard. I started biking to work, quite a strenuous ride. It felt good, but did not help: I would simply eat more to compensate and ended up gaining a few extra pounds. I tried starving myself. That worked, sure - only to be followed by an even faster rebound. Ultimately, I had to face the reality: I had a problem and I needed a long-term solution. There was no one weird trick to outsmart the calorie-counting crowd, no overnight cure.

I started looking for real answers. My world came crumbling down; I realized that a "healthy" burrito from Chipotle packed four times as many calories as a greasy burger from McDonald's. That a loaded fruit smoothie from Jamba Juice was roughly equal to two hot dogs with a side of mashed potatoes to boot. That a glass of apple juice fared worse than a can of Sprite, and that bruschetta wasn't far from deep-fried butter on a stick. It didn't matter if it was sugar or fat, bacon or kale. Familiar favorites were not better or worse than the rest. Losing weight boiled down to portion control - and sticking to it for the rest of my life.

It was a slow and humbling journey that spanned almost a year. I ended up losing around 70 lbs along the way. What shocked me is that it wasn't a painful experience; what held me back for years was just my own smugness, plus the folksy wisdom gleaned from the covers of glossy magazines.

Author with a tractor, 2017.

I'm not sure there is a moral to this story. I guess one lesson is: don't be a judgmental jerk. Sometimes, the simple things - the ones you think you have all figured out - prove to be a lot more complicated than they seem.

August 26, 2016

So you want to work in security (but are too lazy to read Parisa's excellent essay)

If you have not seen it yet, Parisa Tabriz penned a lengthy and insightful post about her experiences on what it takes to succeed in the field of information security.

My own experiences align pretty closely with Parisa's take, so if you are making your first steps down this path, I strongly urge you to give her post a good read. But if I had to sum up my lessons from close to two decades in the industry, I would probably boil them down to four simple rules:

  1. Infosec is all about the mismatch between our intuition and the actual behavior of the systems we build. That makes it harmful to study the field as an abstract, isolated domain. To truly master it, dive into how computers work, then make a habit of asking yourself "okay, but what if assumption X does not hold true?" every step along the way.

  2. Security is a protoscience. Think of chemistry in the early 19th century: a glorious and messy thing, chock-full of colorful personalities, unsolved mysteries, and snake oil salesmen. You need passion and humility to survive. Those who think they have all the answers are a danger to themselves and to people who put their faith in them.

  3. People will trust you with their livelihoods, but will have no way to truly measure the quality of your work. Don't let them down: be painfully honest with yourself and work every single day to address your weaknesses. If you are not embarrassed by the views you held two years ago, you are getting complacent - and complacency kills.

  4. It will feel that way, but you are not smarter than software engineers. Walk in their shoes for a while: write your own code, show it to the world, and be humiliated by all the horrible mistakes you will inevitably make. It will make you better at your job - and will turn you into a better person, too.

August 04, 2016

CSS mix-blend-mode is bad for your browsing history

Up until mid-2010, any rogue website could get a good sense of your browsing habits by specifying a distinctive :visited CSS pseudo-class for any links on the page, rendering thousands of interesting URLs off-screen, and then calling the getComputedStyle API to figure out which pages appear in your browser's history.

After some deliberation, browser vendors have closed this loophole by disallowing almost all attributes in :visited selectors, save for the fairly indispensable ability to alter foreground and background colors for such links. The APIs have been also redesigned to prevent the disclosure of this color information via getComputedStyle.

This workaround did not fully eliminate the ability to probe your browsing history, but limited it to scenarios where the user can be tricked into unwittingly feeding the style information back to the website one URL at a time. Several fairly convincing attacks have been demonstrated against patched browsers - my own 2013 entry can be found here - but they generally depended on the ability to solicit one click or one keypress per every URL tested. In other words, the whole thing did not scale particularly well.

Or at least, it wasn't supposed to. In 2014, I described a neat trick that exploited normally imperceptible color quantization errors within the browser, amplified by stacking elements hundreds of times, to implement an n-to-2^n decoder circuit using just the background-color and opacity properties on overlaid <a href=...> elements to easily probe the browsing history of multiple URLs with a single click. To explain the basic principle, imagine wanting to test two links, and dividing the screen into four regions, like so:

  • Region #1 is lit only when both links are not visited (¬ link_a ∧ ¬ link_b),
  • Region #2 is lit only when link A is not visited but link B is visited (¬ link_a ∧ link_b),
  • Region #3 is lit only when link A is visited but link B is not (link_a ∧ ¬ link_b),
  • Region #4 is lit only when both links are visited (link_a ∧ link_b).
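
In effect, the four regions form a 2-to-4 decoder: the pair of visited bits selects exactly one region to light up. The selection logic, expressed as a hypothetical Python helper rather than actual CSS, is simply:

```python
def lit_region(link_a, link_b):
    # Each region computes the AND of the two (possibly negated)
    # visited bits, so exactly one region is lit for any state.
    regions = {
        1: (not link_a) and (not link_b),
        2: (not link_a) and link_b,
        3: link_a and (not link_b),
        4: link_a and link_b,
    }
    [lit] = [n for n, on in regions.items() if on]
    return lit
```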

While the page couldn't directly query the visibility of the regions, we just had to convince the user to click the lit region once to get the browsing history for both links, for example under the guise of dismissing a pop-up ad. (Of course, the attack could be scaled to far more than just 2 URLs.)

This problem was eventually addressed by browser vendors by simply improving the accuracy of color quantization when overlaying HTML elements; while this did not eliminate the risk, it made the attack far more computationally intensive, requiring the evil page to stack millions of elements to get practical results. Game over? Well, not entirely. In the footnote of my 2014 article, I mentioned this:

"There is an upcoming CSS feature called mix-blend-mode, which permits non-linear mixing with operators such as multiply, lighten, darken, and a couple more. These operators make Boolean algebra much simpler and if they ship in their current shape, they will remove the need for all the fun with quantization errors, successive overlays, and such. That said, mix-blend-mode is not available in any browser today."

As you might have guessed, patience is a virtue! As of mid-2016, mix-blend-mode - a feature to allow advanced compositing of bitmaps, very similar to the layer blending modes available in photo-editing tools such as Photoshop and GIMP - is shipping in Chrome and Firefox. And as it happens, in addition to their intended purpose, these non-linear blending operators permit us to implement arbitrary Boolean algebra. For example, to implement AND, all we need to do is use multiply:

  • black (0) x black (0) = black (0)
  • black (0) x white (1) = black (0)
  • white (1) x black (0) = black (0)
  • white (1) x white (1) = white (1)
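
Treating black as 0 and white as 1 - that is, normalizing the usual 0-255 channel values - multiply is precisely a bitwise AND; a one-liner makes this explicit:

```python
def blend_multiply(a, b):
    # mix-blend-mode: multiply on a single 0-255 color channel.
    return a * b // 255
```

With the channels pinned to pure black (0) or pure white (255), the output is white only when both inputs are white.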

For a practical demo, click here. A single click in that whack-a-mole game will reveal the state of 9 visited links to the JavaScript executing on the page. If this was an actual game and if it continued for a bit longer, probing the state of hundreds or thousands of URLs would not be particularly hard to pull off.

May 11, 2016

Clearing up some misconceptions around the "ImageTragick" bug

The recent, highly publicized "ImageTragick" vulnerability had countless web developers scrambling to fix a remote code execution vector in ImageMagick - a popular bitmap manipulation tool commonly used to resize, transcode, or annotate user-supplied images on the Web. Whatever your take on "branded" vulnerabilities may be, the flaw certainly is notable for its ease of exploitation: it is an embarrassingly simple shell command injection bug reminiscent of the security weaknesses prevalent in the 1990s, and nearly extinct in core tools today. The issue also bears some parallels to the more far-reaching but equally striking Shellshock bug.

That said, I believe that the publicity that surrounded the flaw was squandered by failing to make one very important point: even with this particular RCE vector fixed, anyone using ImageMagick to process attacker-controlled images is likely putting themselves at a serious risk.

The problem is fairly simple: for all its virtues, ImageMagick does not appear to be designed with malicious inputs in mind - and has a long and colorful history of lesser-known but equally serious security flaws. For a single data point, look no further than the work done several months ago by Jodie Cunningham. Jodie fuzzed IM with a vanilla setup of afl-fuzz - and quickly identified about two dozen possibly exploitable security holes, along with countless denial of service flaws. A small sample of Jodie's findings can be found here.

Jodie's efforts probably just scratched the surface; after "ImageTragick", a more recent effort by Hanno Boeck uncovered even more bugs; from what I understand, Hanno's work also went only as far as using off-the-shelf fuzzing tools. You can bet that, short of a major push to redesign the entire IM codebase, the trickle won't stop any time soon.

And so, the advice sorely missing from the "ImageTragick" webpage is this:

  • If all you need to do is simple transcoding or thumbnailing of potentially untrusted images, don't use ImageMagick. Make direct use of libpng, libjpeg-turbo, and giflib; for a robust way to use these libraries, have a look at the source code of Chromium or Firefox. The resulting implementation will be considerably faster, too.

  • If you have to use ImageMagick on untrusted inputs, consider sandboxing the code with seccomp-bpf or an equivalent mechanism that robustly restricts access to all user space artifacts and to the kernel attack surface. Rudimentary sandboxing technologies, such as chroot() or UID separation, are probably not enough.

  • If all other options fail, be zealous about limiting the set of image formats you actually pass down to IM. The bare minimum is to thoroughly examine the headers of the received files. It is also helpful to explicitly specify the input format when calling the utility, so as to preempt auto-detection code. For command-line invocations, this can be done like so:

    convert [...other params...] -- jpg:input-file.jpg jpg:output-file.jpg

    The JPEG, PNG, and GIF handling code in ImageMagick is considerably more robust than the code that supports PCX, TGA, SVG, PSD, and the likes.

February 09, 2016

Automatically inferring file syntax with afl-analyze

The nice thing about the control flow instrumentation used by American Fuzzy Lop is that it allows you to do much more than just, well, fuzzing stuff. For example, the suite has long shipped with a standalone tool called afl-tmin, capable of automatically shrinking test cases while still making sure that they exercise the same functionality in the targeted binary (or that they trigger the same crash). Another tool, afl-cmin, employs a related trick to eliminate redundant files in any large testing corpus.

The latest release of AFL features another nifty new addition along these lines: afl-analyze. The tool takes an input file, sequentially flips bytes in this data stream, and then observes the behavior of the targeted binary after every flip. From this information, it can infer several things:

  • No-op blocks that do not elicit any changes to control flow (say, comments, pixel data, etc).
  • Checksums, magic values, and other short, atomically compared tokens, where any bit flip causes the same change to program execution.
  • Longer blobs exhibiting this property - almost certainly corresponding to checksummed or encrypted data.
  • "Pure" data sections, where analyzer-injected changes consistently elicit differing changes to control flow.
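
The principle can be demonstrated with a crude, instrumentation-free approximation: flip every byte, run the target, and bucket the offsets by how the observed behavior changes. The parser below is a made-up stand-in for a real binary:

```python
def toy_parser(data):
    # Hypothetical target: a two-byte magic value, then payload.
    if data[:2] != b"MZ":
        return "bad-magic"
    return "sum:%d" % (sum(data[2:]) % 7)

def analyze(data):
    baseline = toy_parser(data)
    verdict = []
    for i in range(len(data)):
        flipped = bytearray(data)
        flipped[i] ^= 0xFF          # flip all bits of one byte
        outcome = toy_parser(bytes(flipped))
        if outcome == baseline:
            verdict.append("no-op")   # padding, comments, etc.
        elif outcome == "bad-magic":
            verdict.append("magic")   # atomically compared token
        else:
            verdict.append("data")    # ordinary data section
    return verdict
```

Real AFL, of course, observes edge coverage rather than program output, which makes the classification far more precise.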

This quickly gives us some remarkable insights into the syntax of the file and the behavior of the underlying parser. It may sound too good to be true, but it actually seems to work in practice. For a quick demo, let's see what afl-analyze has to say about running cut -d ' ' -f1 on a text file:

We see that cut really only cares about spaces and newlines. Interestingly, it also appears that the tool always tokenizes the entire line, even if it's just asked to return the first token. Neat, right?

Of course, the value of afl-analyze is greater for incomprehensible binary formats than for simple text utilities; perhaps even more so when dealing with black-box parsers (which can be analyzed thanks to the runtime QEMU instrumentation supported in AFL). To try out the tool's ability to deal with binaries, let's check out libpng:

This looks pretty damn good: we have two four-byte signatures, followed by chunk length, a four-byte chunk name, some image metadata, and then a comment section. Neat, right? All in a matter of seconds: no configuration needed and no knobs to turn.

Of course, the tool shipped just moments ago and is still very much experimental; expect some kinks. Field testing and feedback welcome!

January 14, 2016

Show and tell: doomsday planning for less crazy folk

Yup. I've been quiet lately, but that's in part because I've been working on this piece:

It's a fairly systematic and level-headed approach to threat modeling and risk management, except not for computer systems - and instead, for real life. There's not much I can add on top of what's already said on the linked page; have a look, you will probably find it to be an interesting read.

October 02, 2015

Subjective explainer: gun debate in the US

In the wake of the tragic events in Roseburg, I decided to briefly return to the topic of looking at the US culture from the perspective of a person born in Europe. In particular, I wanted to circle back to the topic of firearms.

Contrary to popular belief, the United States has witnessed a dramatic decline in violence over the past 20 years. In fact, when it comes to most types of violent crime - say, robbery, assault, or rape - the country now compares favorably to the UK and many other OECD nations. But as I explored in my earlier posts, one particular statistic - homicide - is still registering about three times as high as in many other places within the EU.

The homicide epidemic in the United States has a complex nature and overwhelmingly affects ethnic minorities and other disadvantaged social groups; perhaps because of this, the phenomenon sees very little honest, public scrutiny. It is propelled into the limelight only in the wake of spree shootings and other sickening, seemingly random acts of terror; such incidents, although statistically insignificant, take a profound mental toll on American society. At the same time, the effects of high-profile violence seem strangely short-lived: they trigger a series of impassioned political speeches, invariably focusing on the connection between violence and guns - but the nation soon goes back to business as usual, knowing full well that another massacre will happen soon, perhaps the very same year.

On the face of it, this pattern defies all reason - angering my friends in Europe and upsetting many brilliant and well-educated progressives in the US. They utter frustrated remarks about the all-powerful gun lobby and the spineless politicians, blaming the partisan gridlock for the failure to pass even the most reasonable and toothless gun control laws. I used to be in the same camp; today, I think the reality is more complex than that.

To get to the bottom of this mystery, it helps to look at the spirit of radical individualism and classical liberalism that remains the national ethos of the United States - and in fact, is enjoying a degree of resurgence unseen for many decades prior. In Europe, it has long been settled that many individual liberties - be it the freedom of speech or the natural right to self-defense - can be constrained to advance even some fairly far-fetched communal goals. On the old continent, such sacrifices sometimes paid off, and sometimes led to atrocities; but the basic premise of European collectivism is not up for serious debate. In America, the same notion certainly cannot be taken for granted today.

When it comes to firearm ownership in particular, the country is facing a fundamental choice between two possible realities:

  • A largely disarmed society that depends on the state to protect it from almost all harm, and where citizens are generally not permitted to own guns without presenting a compelling cause. In this model, adopted by many European countries, firearms tend to be less available to common criminals - simply by the virtue of limited supply and comparatively high prices in black market trade. At the same time, it can be argued that any nation subscribing to this doctrine becomes more vulnerable to foreign invasion or domestic terror, should its government ever fail to provide adequate protection to all citizens. Disarmament can also limit civilian recourse against illegitimate, totalitarian governments - a seemingly outlandish concern, but also a very fresh memory for many European countries subjugated not long ago under the auspices of the Soviet Bloc.

  • A well-armed society where firearms are available to almost all competent adults, and where the natural right to self-defense is subject to few constraints. This is the model currently employed in the United States, where it arises from the straightforward, originalist interpretation of the Second Amendment - as recognized by roughly 75% of all Americans and affirmed by the Supreme Court. When following such a doctrine, a country will likely witness greater resiliency in the face of calamities or totalitarian regimes. At the same time, its citizens might have to accept some inherent, non-trivial increase in violent crime due to the prospect of firearms more easily falling into the wrong hands.

It seems doubtful that a viable middle-ground approach can exist in the United States. With more than 300 million civilian firearms in circulation, most of them in unknown hands, the premise of reducing crime through gun control would inevitably and critically depend on some form of confiscation; without such drastic steps, the supply of firearms to the criminal underground or to unfit individuals would not be disrupted in any meaningful way. Because of this, intellectual integrity requires us to look at many of the legislative proposals not only through the prism of their immediate utility, but also to give consideration to the societal model they are likely to advance.

And herein lies the problem: many of the current "common-sense" gun control proposals have very little merit when considered in isolation. There is scant evidence that reinstating the ban on military-looking semi-automatic rifles ("assault weapons"), or rolling out the prohibition on private sales at gun shows, would deliver measurable results. There is also no compelling reason to believe that ammo taxes, firearm owner liability insurance, mandatory gun store cameras, firearm-free school zones, bans on open carry, or federal gun registration can have any impact on violent crime. And so, the debate often plays out like this:

At the same time, by virtue of making weapons more difficult, expensive, and burdensome to own, many of the legislative proposals floated by progressives would probably gradually erode the US gun culture; intentionally or not, their long-term outcome would be a society less passionate about firearms and more willing to follow in the footsteps of Australia or the UK. Only once we cross that line and confiscate hundreds of millions of guns is it fathomable - yet still far from certain - that we would see a sharp drop in homicides.

This method of inquiry helps explain the visceral response from gun rights advocates: given the legislation's dubious benefits and its predicted long-term consequences, many pro-gun folks are genuinely worried that making concessions would eventually mean giving up one of their cherished civil liberties - and on some level, they are right.

Some feel that this argument is a fallacy, a tall tale invented by a sinister corporate "gun lobby" to derail the political debate for personal gain. But the evidence of such a conspiracy is hard to find; in fact, it seems that the progressives themselves often fan the flames. In the wake of Roseburg, both Barack Obama and Hillary Clinton came out praising the confiscation-based gun control regimes employed in Australia and the UK - and said that they would like the US to follow suit. Depending on where you stand on the issue, it was either an accidental display of political naivete, or the final reveal of their sinister plan. For the latter camp, the ultimate proof of a progressive agenda came a bit later: in response to the terrorist attack in San Bernardino, several eminent Democratic-leaning newspapers published scathing editorials demanding civilian disarmament while downplaying the attackers' connection to Islamic State.

Another factor that poisons the debate is that despite being highly educated and eloquent, the progressive proponents of gun control measures are often hopelessly unfamiliar with the very devices they are trying to outlaw:

I'm reminded of the widespread contempt faced by Senator Ted Stevens following his attempt to compare the Internet to a "series of tubes" as he was arguing against net neutrality. His analogy wasn't very wrong - it just struck a nerve as simplistic and out-of-date. My progressive friends did not react the same way when Representative Carolyn McCarthy - one of the key proponents of the ban on assault weapons - showed no understanding of the supposedly lethal firearm features she was trying to eradicate. Such bloopers are not rare, either; not long ago, Mr. Bloomberg, one of the leading progressive voices on gun control in America, argued against semi-automatic rifles without understanding how they differ from the already-illegal machine guns:

Yet another example comes from Representative Diana DeGette, the lead sponsor of a "common-sense" bill that sought to prohibit the manufacture of magazines with capacity over 15 rounds. She defended the merits of her legislation while clearly not understanding how a magazine differs from ammunition - or that the former can be reused:

"I will tell you these are ammunition, they’re bullets, so the people who have those know they’re going to shoot them, so if you ban them in the future, the number of these high capacity magazines is going to decrease dramatically over time because the bullets will have been shot and there won’t be any more available."

Treating gun ownership with almost comical condescension has come into vogue among a good number of progressive liberals. On a campaign stop in San Francisco, Mr. Obama sketched a caricature of bitter, rural voters who "cling to guns or religion or antipathy to people who aren't like them". Not much later, one Pulitzer Prize-winning columnist for The Washington Post spoke of the Second Amendment as "the refuge of bumpkins and yeehaws who like to think they are protecting their homes against imagined swarthy marauders desperate to steal their flea-bitten sofas from their rotting front porches". Many of the newspaper's readers probably had a good laugh - and then wondered why it has gotten so difficult to seek sensible compromise.

There are countless dubious and polarizing claims made by the supporters of gun rights, too; examples include a recent NRA-backed tirade by Dana Loesch denouncing the "godless left", or the constant onslaught of conspiracy theories spewed by Alex Jones and Glenn Beck. But when introducing new legislation, the burden of making educated and thoughtful arguments should rest on its proponents, not other citizens. When folks such as Bloomberg prescribe sweeping changes to American society while demonstrating striking ignorance about the topics they want to regulate, they come across as elitist and flippant - and deservedly so.

Given how controversial the topic is, I think it's wise to start an open, national conversation about the European model of gun control and the risks and benefits of living in an unarmed society. But it's also likely that such a debate wouldn't last very long. Progressive politicians like to say that the dialogue is impossible because of the undue influence of the National Rifle Association - but as I discussed in my earlier blog posts, the organization's financial resources and power are often overstated: it does not even make it onto the list of top 100 lobbyists in Washington, and its support comes mostly from member dues, not from shadowy business interests or wealthy oligarchs. In reality, disarmament just happens to be a very unpopular policy in America today: the support for gun ownership is very strong and has been growing over the past 20 years - even though hunting is on the decline.

Perhaps it would serve the progressive movement better to embrace the gun culture - and then think of ways to curb its unwanted costs. Addressing inner-city violence, especially among the disadvantaged youth, would quickly bring the US homicide rate much closer to the rest of the highly developed world. But admitting the staggering scale of this social problem can be an uncomfortable and politically charged position to hold. For Democrats, it would be tantamount to singling out minorities. For Republicans, it would be just another expansion of the nanny state.

PS. If you are interested in a more systematic evaluation of the scale, the impact, and the politics of gun ownership in the United States, you may enjoy an earlier entry on this blog. Or, if you prefer to read my entire series comparing life in Europe and in the US, try this link.

August 31, 2015

Understanding the process of finding serious vulns

Our industry tends to glamorize vulnerability research, with a growing number of bug reports accompanied by flashy conference presentations, media kits, and exclusive interviews. But for all that grandeur, the public understands relatively little about the effort that goes into identifying and troubleshooting the hundreds of serious vulnerabilities that crop up every year in the software we all depend on. It certainly does not help that many of the commercial security testing products are promoted with truly bombastic claims - and that some of the most vocal security researchers enjoy the image of savant hackers, seldom talking about the processes and toolkits they depend on to get stuff done.

I figured it may make sense to change this. Several weeks ago, I started trawling through the list of public CVE assignments, and then manually compiling a list of genuine, high-impact flaws in commonly used software. I tried to follow three basic principles:

  • For pragmatic reasons, I focused on problems where the nature of the vulnerability and the identity of the researcher are easy to ascertain; this is why I ended up rejecting entries such as CVE-2015-2132 or CVE-2015-3799.

  • I focused on widespread software - e.g., browsers, operating systems, network services - skipping many categories of niche enterprise products, WordPress add-ons, and so on. Good examples of rejected entries in this category include CVE-2015-5406 and CVE-2015-5681.

  • I skipped issues that appeared to be low impact, or where the credibility of the report seemed unclear. One example of a rejected submission is CVE-2015-4173.

To ensure that the data isn't skewed toward more vulnerable software, I tried to focus on research efforts, rather than on individual bugs; where a single reporter was credited for multiple closely related vulnerabilities in the same product within a narrow timeframe, I would use only one sample from the entire series of bugs.
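The deduplication rule above can be sketched in a few lines of Python. Note that the record format, the example entries, and the 30-day window are all hypothetical - the actual list was compiled by hand from public CVE data:

```python
from datetime import date

# Hypothetical record format - these entries are made up for illustration.
# Each tuple: (cve_id, product, reporter, disclosure_date)
entries = [
    ("CVE-2015-0001", "browser-x", "alice", date(2015, 8, 1)),
    ("CVE-2015-0002", "browser-x", "alice", date(2015, 8, 3)),  # same research effort
    ("CVE-2015-0003", "browser-x", "bob",   date(2015, 8, 2)),
]

def dedupe_research_efforts(entries, window_days=30):
    """Keep one sample per (reporter, product) pair within a narrow
    timeframe, so that a single research effort credited with many
    closely related CVEs does not skew the data toward more
    vulnerable software."""
    kept, last_seen = [], {}
    for cve, product, reporter, day in sorted(entries, key=lambda e: e[3]):
        key = (reporter, product)
        prev = last_seen.get(key)
        if prev is None or (day - prev).days > window_days:
            kept.append(cve)
            last_seen[key] = day
    return kept

print(dedupe_research_efforts(entries))
# -> ['CVE-2015-0001', 'CVE-2015-0003']
```

Only one of Alice's two closely spaced reports against the same product survives, while Bob's independent finding is kept.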

For the qualifying CVE entries, I started sending out anonymous surveys to the researchers who reported the underlying issues. The surveys open with a discussion of the basic method employed to find the bug:

  How did you find this issue?

  ( ) Manual bug hunting
  ( ) Automated vulnerability discovery
  ( ) Lucky accident while doing unrelated work

If "manual bug hunting" is selected, several additional options appear:

  ( ) I was reviewing the source code to check for flaws.
  ( ) I studied the binary using a disassembler, decompiler, or a tracing tool.
  ( ) I was doing black-box experimentation to see how the program behaves.
  ( ) I simply noticed that this bug is being exploited in the wild.
  ( ) I did something else: ____________________

Selecting "automated discovery" results in a different set of choices:

  ( ) I used a fuzzer.
  ( ) I ran a simple vulnerability scanner (e.g., Nessus).
  ( ) I used a source code analyzer (static analysis).
  ( ) I relied on symbolic or concolic execution.
  ( ) I did something else: ____________________

Researchers who relied on automated tools are also asked about the origins of the tool and the computing resources used:

  Name of tool used (optional): ____________________

  Where does this tool come from?

  ( ) I created it just for this project.
  ( ) It's an existing but non-public utility.
  ( ) It's a publicly available framework.

  At what scale did you perform the experiments?

  ( ) I used 16 CPU cores or less.
  ( ) I employed more than 16 cores.

Regardless of the underlying method, the survey also asks every participant about the use of memory diagnostic tools:

  Did you use any additional, automatic error-catching tools - like ASAN
  or Valgrind - to investigate this issue?

  ( ) Yes. ( ) Nope!

...and about the lengths to which the reporter went to demonstrate the bug:

  How far did you go to demonstrate the impact of the issue?

  ( ) I just pointed out the problematic code or functionality.
  ( ) I submitted a basic proof-of-concept (say, a crashing test case).
  ( ) I created a fully-fledged, working exploit.

It also touches on the communications with the vendor:

  Did you coordinate the disclosure with the vendor of the affected
  software?

  ( ) Yes. ( ) No.

  How long have you waited before having the issue disclosed to the
  public?
  ( ) I disclosed right away. ( ) Less than a week. ( ) 1-4 weeks.
  ( ) 1-3 months. ( ) 4-6 months. ( ) More than 6 months.

  In the end, did the vendor address the issue as quickly as you would
  have hoped?

  ( ) Yes. ( ) Nope.

...and the channel used to disclose the bug - an area where we have seen some stark changes over the past five years:

  How did you disclose it? Select all options that apply:

  [ ] I made a blog post about the bug.
  [ ] I posted to a security mailing list (e.g., BUGTRAQ).
  [ ] I shared the finding on a web-based discussion forum.
  [ ] I announced it at a security conference.
  [ ] I shared it on Twitter or other social media.
  [ ] I made a press kit or reached out to a journalist.
  [ ] Vendor released an advisory.

The survey ends with a question about the motivation and the overall amount of effort that went into this work:

  What motivated you to look for this bug?

  ( ) It's just a hobby project.
  ( ) I received a scientific grant.
  ( ) I wanted to participate in a bounty program.
  ( ) I was doing contract work.
  ( ) It's a part of my full-time job.

  How much effort did you end up putting into this project?

  ( ) Just a couple of hours.
  ( ) Several days.
  ( ) Several weeks or more.

So far, the response rate for the survey is approximately 80%; because I only started in August, I currently don't have enough answers to draw particularly detailed conclusions from the data set - this should change over the next couple of months. Still, I'm already seeing several well-defined if preliminary trends:

  • The use of fuzzers is ubiquitous (incidentally, of named projects, afl-fuzz leads the pack so far); the use of other automated tools, such as static analysis frameworks or concolic execution, appears to be almost unheard of - despite the considerable attention that such methods receive in academic settings.

  • Memory diagnostic tools, such as ASAN and Valgrind, are extremely popular - and are an untold success story of vulnerability research.

  • Most public vulnerability research appears to be done by people who work on it full-time, employed by vendors; hobby work and bug bounties follow closely.

  • Only a small minority of serious vulnerabilities appear to be disclosed anywhere outside a vendor advisory, making it extremely dangerous to rely on press coverage (or any other casual source) for evaluating personal risk.
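For what it's worth, the tallying behind such trends is trivial; here is a quick sketch with entirely made-up answers - none of these figures are actual survey results:

```python
from collections import Counter

# Hypothetical sample answers, for illustration only.
responses = [
    {"method": "fuzzing", "asan_or_valgrind": True},
    {"method": "fuzzing", "asan_or_valgrind": True},
    {"method": "manual code review", "asan_or_valgrind": False},
    {"method": "fuzzing", "asan_or_valgrind": True},
]

def method_breakdown(responses):
    """Percentage of responses citing each discovery method."""
    counts = Counter(r["method"] for r in responses)
    return {m: round(100 * n / len(responses)) for m, n in counts.items()}

print(method_breakdown(responses))
# -> {'fuzzing': 75, 'manual code review': 25}
```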

Of course, some security work happens out of public view; for example, some enterprises have well-established and meaningful security assurance programs that likely prevent hundreds of security bugs from ever shipping in the reviewed code. Since it is difficult to collect comprehensive and unbiased data about such programs, there is always some speculation involved when discussing the similarities and differences between this work and public security research.

Well, that's it! Watch this space for updates - and let me know if there's anything you'd change or add to the questionnaire.

July 15, 2015

Poland and the United States: all that begins must end

With my previous entry, I wrapped up an impromptu series of articles that chronicled my childhood experiences in Poland and compared the culture I grew up with to the American society that I'm living in today. For the readers who want to be able to navigate the series without scrolling endlessly, I wanted to put together a quick table of contents. Here it goes.

The entry that started it all:

  • "On journeys" - a personal story recounting my travels from Poland to the US.

And now, back to the regularly scheduled programming...

Poland vs the United States: American exceptionalism

This is the fourteenth article talking about Poland, Europe, and the United States. To explore the entire collection, start here.

This is destined to be the final entry in the series that opened with a chronicle of my journey from Poland to the United States, only to veer into some of the most interesting social differences between America and the old continent. There are many other topics I could still write about - anything from the school system, to religion, to the driving culture - but with my parental leave coming to an end, I decided to draw a line. I'm sure that this decision will come as a relief for those who read the blog for technical insights, rather than political commentary :-)

The final topic I wanted to talk about is something that truly irks some of my European friends: the belief, held deeply by many Americans, that their country is the proverbial "city upon a hill" - a shining beacon of liberty and righteousness, blessed by the maker with the moral right to shape the world - be it by flexing its economic and diplomatic muscles, or with its sheer military might.

It is an interesting phenomenon, and one that certainly isn't exclusive to the United States. In fact, expansive exceptionalism used to be a very strong theme in the European doctrine long before it emerged in other parts of the Western world. For one, it underpinned many of the British, French, Spanish, and Dutch colonial conquests over the past 500 years. The romanticized notion of Sonderweg played a menacing role in German political discourse, too - eventually culminating in the rise of the Nazi ideology and the onset of World War II. It wasn't until the defeat of the Third Reich that Europe, faced with unspeakable destruction and unprecedented loss of life, made a concerted effort to root out many of its nationalist sentiments and embrace a more harmonious, collective path as a single European community.

America, in a way, experienced the opposite: although it has always celebrated its own rejection of feudalism and monarchism - and in that sense, it had a robust claim to being a pretty unique corner of the world - the country largely shied away from global politics, participating only very reluctantly in World War I, then hoping to wait out World War II up until being attacked by Japan. Its conviction about its special role on the world stage solidified only after it paid a tremendous price to help defeat the Germans, to stop the march of the Red Army through the continent, and to build a prosperous and peaceful Europe; given the remarkable significance of this feat, the post-war sentiments in America may not be hard to understand. In that way, the roots of American exceptionalism differed from their European predecessors, being fueled by a fairly pure sense of righteousness - and not by anger, by a sense of injury, or by territorial demands.

Of course, the new superpower has also learned that its military might has its limits, facing humiliating defeats in some of the proxy wars with the Soviets and seeing an endless spiral of violence in the Middle East. The voices predicting its imminent demise, invariably present from the earliest days of the republic, have grown stronger and more confident over the past 50 years. But the country remains a military and economic powerhouse; and in some ways, its trigger-happy politicians provide a counterbalance to the other superpowers' greater propensity to turn a blind eye to humanitarian crises and to genocide. It's quite possible that without the United States arming its allies and tempering the appetites of Russia, North Korea, or China, the world would have been a less happy place. It's just as likely that the Middle East would have been a happier one.

Some Europeans show indignation that Americans, with their seemingly know-it-all attitudes toward the rest of the world, still struggle to pinpoint Austria or Belgium on the map. It is certainly true that the media in the US pays little attention to the old continent. But deep down inside, European outlets don't necessarily fare a lot better, often focusing their international coverage on the silly and the formulaic: when in Europe, you are far more likely to hear about a daring rescue of a cat stuck in a tree in Wyoming, or about the Creation Museum in Kentucky, than you are to learn anything substantive about Obamacare. (And speaking of Wyoming and Kentucky, pinpointing these places on the map probably wouldn't be the European viewer's strongest suit.) In the end, Europeans who think they understand the intricacies of US politics are probably about as wrong as the average American making sweeping generalizations about Europe.

And on that intentionally self-deprecating note, it's time to wrap the series up.

Poland vs the United States: work and entitlements

This is the thirteenth article in a short series about Poland, Europe, and the United States. To explore the entire series, start here.

In one of my earlier posts, I alluded to the pervasive faith in the American Dream: the national ethos of opportunity, self-sufficiency, and free enterprise that influences the political discourse in the United States. The egalitarian promise of the American Dream is simple: no matter who you are, hard work and ingenuity will surely allow you to achieve your dreams. From that, it follows that on your journey, you are not entitled to much; the government will be there to protect your freedom, but it will not give you a head start.

Unlike many of my peers, I suspect that there is truth to the cliche; the United States is a remarkably industrious nation and home to many of the world's most innovative and fastest-growing businesses. It certainly strides ahead of European economies, still dominated by pre-war industrial conglomerates and former state monopolists, and weighed down by aging populations, highly regulated markets, and inflexible, out-of-control costs. America's mostly-self-made magnates, the likes of Elon Musk, Bill Gates, and Warren Buffett, are also far more likable and seemingly more human than Europe's stereotypical caste of aristocratic families and shadowy oligarchs.

On the flip side, the striking upward mobility of rags-to-riches icons such as Steve Jobs or Oprah Winfrey tends to be the exception, not the rule. Many scholars point out that parents' incomes are highly predictive of the incomes of their children - and that in the US, this effect is more pronounced than in some of the European states. Such studies can be misleading, because in less unequal EU societies, moving to a higher income quantile may confer no substantial change in the quality of life - but ultimately, there is no denying that people who are born into poor families will usually remain poor for the rest of their lives. And with the contemporary trends in outsourcing and industrial automation, the opportunities for unskilled blue-collar labor - once a key stepping stone in the story of the American Dream - are shrinking fast.

In contrast with the United States, many in Europe reject Milton Friedman's views on consensual capitalism and hold that it is a basic human right to be able to live a good life or to have an honest and respectable job. This starts with the labor law: in much of the United States, firing an employee can happen in the blink of an eye, for almost any reason - or without giving a reason at all. In Europe, the employer will need a just cause and will go through a lengthy severance period; depending on the circumstances, the company may be also barred from hiring another person to do the same job. Employment benefits follow the same pattern; in the US, paid leave is largely up to employers to decide, with skilled workers being lured with packages that would make Europeans jealous, while many unskilled laborers, especially in the retail and restaurant business, get the short end of that stick.

In Europe, enabling the disadvantaged to contribute to the society and to live fulfilling lives is also a matter of government policy, often implemented through sweeping wealth redistribution - or through public-sector employment orchestrated at a scale that rivals that of quasi-communist China and other authoritarian countries (for example, in France and Greece, about one in three jobs is provided by the state). Such efforts tend to be more successful in small and wealthy Scandinavian countries, where the society can be engineered with more finesse. In many other parts of the continent, systemic, long-term poverty is still rampant, with the government being able to do little more than provide people with a lifetime of subsidized basic sustenance and squalid living conditions. Ultimately, when it comes to combating multi-generational poverty, financial aid administered by sprawling national bureaucracies is not always a cure-all.

Perhaps interestingly, the benefits that are most frequently described as inadequate in the US are not as strikingly different from what one would be entitled to in the EU. For example, the minimum wage is quite comparable; it is around $2.60 per hour in Poland, about $3.70 in Greece, some $9.30 in Germany, and in the ballpark of $10.00 in the UK. In the US, the national average hovers somewhere around $8.00, with some of the states with higher costs of living on track to raise it to $10.00 within a year or two; in fact, some progressive municipalities are aiming for $15.

Unemployment and retirement benefits, although certainly not lavish, also follow the same pattern. When it comes to unemployment in particular, in the States, workers are entitled to about half of their previous salary for up to six months - although that period has been routinely extended in times of economic calamity. In Europe, the figures are roughly comparable, with payments in the ballpark of 50-70% of your previous salary, typically extending for somewhere between 6 and 12 months. The main difference is that the upper limit for monthly benefits tends to be significantly lower in the US than in Europe, often putting far greater strain on single-income families in places with high cost of living. In France, the ceiling seems to be around $8,000 a month; in the US, you will probably see no more than $2,000.

Another overlooked dimension of this debate is the unique tradition of charitable giving in the United States - a phenomenon that allows private charities to provide extensive assistance to people in need. Such giving happens on a staggering scale, with citizens donating more than $350 billion a year - more than twenty times the amount donated in the UK. The bulk of that money goes to organizations that provide food, shelter, and counseling to the poor. It is an interesting model, with its own share of benefits and trade-offs: private charities operate on a more local scale and have a far stronger incentive to spend money wisely and provide meaningful aid. On the flip side, their reach is not as universal - and the benefits are not guaranteed.

Many of the conservatives who preach the virtues of the American Dream vastly underestimate the pervasive and lasting consequences of being born into poverty or falling onto hard times; they also underestimate the role that unearned privilege and luck played in their own lives. The progressives often do no better, seeing European social democracies as a flawless role model, even in the midst of the enduring sovereign debt crisis in the eurozone; breathlessly reciting knock-off Marxist slogans; and portraying the rich as Mr. Burns-esque villains of unfathomable wealth, motivated by just two goals: to exploit the working class and to avoid paying taxes at any cost. In the end, helping the disadvantaged is a moral imperative - but many ideas sound better on a banner than when implemented as a government policy.

For the next and final article in the series, click here.