John Deere, Connected Products, and the Problem with Licensing

I wonder whether Deere & Company, colloquially known as John Deere, ever gets tired of standing in as the poster child for all that’s wrong with copyright, licensing, and the assorted other ills that befall a world in which hardware and software increasingly merge.

John Deere, if you remember, was at the heart of the Electronic Frontier Foundation’s (EFF) battle for an exemption to Section 1201 of the Digital Millennium Copyright Act (DMCA) for land-based vehicles, a battle that was ultimately successful. But John Deere has gotten a lot of flak for its position.

In a particularly spectacular display of corporate delusion, John Deere—the world’s largest agricultural machinery maker—told the Copyright Office that farmers don’t own their tractors. Because computer code snakes through the DNA of modern tractors, farmers receive “an implied license for the life of the vehicle to operate the vehicle.”

Nebraska’s ongoing legislative push, dubbed Right to Repair, would, among other things, prohibit manufacturers from granting any kind of exclusive access to diagnostic tools and software. The relevant sections make clear that if you sell in Nebraska, you had better be prepared to give your customers and independent service businesses the same kind of access to your repair documentation and diagnostic tools as your authorized dealers and repair shops, imposing quasi-FRAND licensing requirements on any supporting software.

The proposal is heavily supported by the EFF, repair.org, iFixit (which Kyle Wiens serves as CEO), and others. Similar legislative proposals are pending in several other US states and are opposed by manufacturers like John Deere or, as notably mentioned in the press, Apple.

What does Ownership mean?

The position of proponents is fairly straightforward to understand. You purchase a product from a manufacturer, so you might reasonably assume that you gain control over all aspects of the product. This is how things have worked for as long as we can remember: if you buy something, you can do whatever you damn well please with it. Sometimes there are other legal restrictions around that: you shouldn’t shoot people with your gun, you shouldn’t modify your car so it becomes unsafe to operate. But in principle, there’s nobody stopping you from rewiring the audio system in your car, or retrofitting it with parts that aren’t officially sanctioned by the supplier. It is your car, after all. (Also, you’re free to sell it on, which turns out to be rather important.)

With the advent of software, a lot of this changed. Because software is disembodied, you generally do not own what you purchase. You’re paying a license fee and are granted a license to use the software subject to licensing terms. Those might be labelled Terms of Service, End User License Agreement, or similar, and cover what you can and cannot do with the software the license gives you access to. This is distinctly different from ownership, where limitations on how you use the product you bought generally can only be imposed by statute and law, not by the seller.

License agreements, on the other hand, give broad powers to the seller of the license. And generally speaking, those sellers have an incentive to restrict what you’re able to do with that license. We’re all familiar with the battles the music industry fought to retain some semblance of control over who got access to the music they distributed. It was just files, after all, and so they were easily copied. But you probably did agree, in a license agreement, not to copy those music files.

The extent to which the music industry went to enforce licensing agreements over their now disembodied goods shows the desperation with which they defended their business model. They had very strong economic incentives to pursue a very restricted licensing regime, and enforce that licensing regime drastically. The Digital Millennium Copyright Act is evidence of that.

We’ve slowly come around to accepting that software is different from physical goods, and that we only ever purchase a license to access it. But where the John Deere case and the whole right-to-repair issue cuts to the bone is that, increasingly, software can be found in everything we buy. A tractor is not just a tractor anymore, it’s a computer on wheels. And that’s where expectations come to loggerheads. If you buy a tractor, do you really expect to sign a licensing agreement? So far, you could do with that tractor whatever you damn well pleased, as long as you adhered to the law. Suddenly, the manufacturer imposes restrictions on what you can do with the tractor, because those restrictions are part of the license. We’ve talked about the Internet of Things you don’t really own before.

But why would a company like John Deere go to such lengths to impose its licensing agreement? The answer, of course, lies in economics.

Price Discrimination

Even though it might sound awful at first, price discrimination is a foundational principle of economics. You charge different customers different prices according to their willingness to pay. Because you can’t easily tell what a buyer’s upper price limit for any given transaction is, you use different features to distinguish different variants of your product. The price difference between variants doesn’t really have to have anything to do with the feature comparison. Features are usually only a crude proxy used to bucket customers into different pricing tiers.

We’re getting used to that in software. Enterprise versions of software can easily run 10x more expensive than consumer versions, without there necessarily being a 10x increase in functionality. It’s just that enterprises usually are clearer about their willingness to pay for things, compared to individual customers.

Without price discrimination, a lot of the cheap services we take for granted wouldn’t really be possible, as the companies offering them wouldn’t be in a position to make a profit. Take, as an illustration, how airline seat pricing works out for airlines.
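
To make the logic concrete, here is a minimal sketch with entirely invented numbers: at any single ticket price this hypothetical flight loses money, but segmenting the same seats into fare buckets that track willingness to pay makes it profitable.

```python
# Toy illustration of price discrimination on a single flight.
# All numbers are invented for the example; this is not real airline data.

flight_cost = 30_000  # total cost of operating the flight

# Passenger segments and what each is willing to pay per seat.
segments = [
    {"name": "business",         "seats": 20,  "willingness_to_pay": 800},
    {"name": "flexible economy", "seats": 60,  "willingness_to_pay": 300},
    {"name": "deep discount",    "seats": 100, "willingness_to_pay": 120},
]

def single_price_revenue(price):
    """One price for everyone: any segment priced above its limit walks away."""
    sold = sum(s["seats"] for s in segments if s["willingness_to_pay"] >= price)
    return price * sold

best_single = max(single_price_revenue(s["willingness_to_pay"]) for s in segments)

# Price each segment at its willingness to pay (via fare classes, fences, etc.).
discriminated = sum(s["seats"] * s["willingness_to_pay"] for s in segments)

print(f"best single-price revenue: {best_single}  (cost {flight_cost})")    # 24000
print(f"discriminated revenue:     {discriminated}  (cost {flight_cost})")  # 46000
```

Under these made-up numbers, the flight only pencils out once different buckets can be charged different fares, which is the logic behind fare classes, change fees, and Saturday-night-stay rules.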

And John Deere works similarly. They have a range of different products that they price differently. Some of those price differences are obvious: you want a bigger tractor, you gotta pay more. But some of those differences are more subtle. You want a more powerful engine? That’s gonna cost some more. But here’s the rub: you don’t actually get a bigger engine in your tractor. It’s the same engine as the lower-powered model, but with restrictions on engine performance removed in software.

This helps John Deere save money in R&D and production. If they only have to equip their tractors with one make of engine, that’s a lot cheaper than developing and producing a lot of different engines. And with the help of software, they can still price performance differently. If you’re more cost-conscious but need a large tractor, you might choose one with a little less horsepower. Functionally, that tractor is the same as the pricier model, but the engine’s performance is throttled by software, so as to make price differentiation possible.

In conventional products, variability is costly because it requires variation in physical parts. But the software in smart, connected products makes variability far cheaper. For example, John Deere used to manufacture multiple versions of engines, each providing a different level of horsepower. It now can alter the horsepower of a standard physical engine using software alone.
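
Conceptually, the mechanism is simple: the same physical engine ships with a software cap that depends on the tier the customer paid for. The sketch below is purely hypothetical pseudologic, not John Deere’s actual firmware; the tier names and numbers are invented.

```python
# Hypothetical sketch of software-limited engine output.
# The hardware is identical across tiers; only the configured cap differs.

MAX_HARDWARE_POWER_KW = 300  # what the physical engine could actually deliver

# Invented pricing tiers: the purchased license decides the cap.
POWER_CAPS_KW = {
    "standard": 220,
    "premium": 260,
    "max": 300,
}

def allowed_power(requested_kw: float, purchased_tier: str) -> float:
    """Clamp the operator's power request to the purchased tier's software cap."""
    cap = min(POWER_CAPS_KW[purchased_tier], MAX_HARDWARE_POWER_KW)
    return min(requested_kw, cap)

# The operator floors the throttle; what they get depends on the license tier.
print(allowed_power(300, "standard"))  # 220 -- throttled in software
print(allowed_power(300, "max"))       # 300 -- full hardware capability
```

The entire price ladder hangs on that lookup table staying under the manufacturer’s control, which is exactly the catch discussed next.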

The catch is that you have to maintain control over your software to be able to do that. Once licensing terms become unenforceable, there’s really nothing you can do to maintain your pricing structure. If the software that throttles engine performance can be easily modified, of course everybody would buy the cheaper version and just change the software to give them full access to the mechanical capabilities of the product. Short of going back to differentiating via distinct hardware, your pricing has suddenly become untenable.

Trade-offs

The push to control the software that controls equipment has unintended consequences, as the recent story about US farmers using cracked software to circumvent the digital locks put in place by John Deere shows:

To avoid the draconian locks that John Deere puts on the tractors they buy, farmers throughout America's heartland have started hacking their equipment with firmware that's cracked in Eastern Europe and traded on invite-only, paid online forums.

Tractor hacking is growing increasingly popular because John Deere and other manufacturers have made it impossible to perform "unauthorized" repair on farm equipment, which farmers see as an attack on their sovereignty and quite possibly an existential threat to their livelihood if their tractor breaks at an inopportune time.

It’s a fine line to walk between protecting your business model and potentially alienating your customers. Especially in the farming industry, goodwill is hard to come by, and you could argue that John Deere is overstepping the mark by severely curtailing what customers can and cannot do with the equipment they purchased.

As always, there are trade-offs to be made. John Deere would do well to allow some self-repair and modification of their systems, if that allows them to keep control over the part of their systems that is most valuable to them: maintaining the ability to discriminate on price.

Pushing customers to use unauthorised software, potentially opening them up to malicious actors, cannot be in their interest. But that’s the effect of such a hard-line stance.

The case of John Deere serves to illustrate a broader point that we will increasingly have to reckon with: as Software is Eating the World, as Marc Andreessen famously proclaimed, a lot of our everyday products will come encumbered with licensing agreements, curtailing our options in how we use them. And as more and more Things move from being products that are sold to services that are subscribed to, how we deal with those licenses becomes an ever more important discussion.

What’s at the heart of these discussions is a volatile transition period, in which we move from an economic model primarily based on the sale of goods to one in which licenses play an ever-increasing role. That farmers would be among the first to have to reckon with this reads like a joke on Everett Rogers.


Innovating on Inputs or Outputs?

In the world of the Internet of Things, especially in the industrial realm, you find a lot of confusion around the terminology and definitions of the different approaches to connecting things. You have the Industrial Internet, Industry 4.0, the Web of Things, the Internet of Things, and Machine-to-Machine communication.

While the proponents of each of these movements would have no problem explaining the relative merits of and distinctions between them, at their core all of these approaches ponder the effects of increasing connectivity between “Things”, that is, machinery, computers, you name it. So while the approaches might differ quite a bit in terminology, and they certainly do in their constituent organisational parts, to the uninitiated the differences can look like differences of degree, not of kind. And yet there is one structural difference I’d like to highlight. It’s the difference between Inputs-thinking and Outputs-thinking, and it loops back to Carlota Perez’s work on the Installation and Deployment phases of technological revolutions. 1

A quick primer

Perez’s theory on the diffusion of technological change identifies two distinct phases of adoption. The first is the Installation Phase, the second the Deployment Phase, and between them there is usually some kind of crash. To put it in Fred Wilson’s words:

What she found was that there are two phases of every technological revolution, the installation phase when the technology comes into the market and the infrastructure is built (rails for the railroads, assembly lines for the cars, server and network infrastructure for the internet) and the deployment phase when the technology is broadly adopted by society (the development of the western part of the US in the railroad era, the creation of suburbs, shopping malls, and fast food in the auto era, and the adoption of iPhones, Facebook, and ridesharing in the internet/mobile era).

Now, what’s interesting about those two distinct phases is the difference in motivations and effects. In the Installation Phase, technological adoption is primarily driven by a desire for efficiency. You’re using new technology to do old jobs, but better, faster, and cheaper. It’s only once you reach the Deployment Phase — where this model assumes the underlying technology is widely adopted — that you start to experiment with new behaviours, new products, new processes. The Installation Phase is concerned with a change of Inputs, whereas the Deployment Phase is concerned with a change of Outputs. It’s what Ben Evans observes in Mobile 2.0:

The smartphone's image sensor, in particular, is becoming a universal input, and a universal sensor. Talking about 'cameras' taking 'photos' misses the point here: the sensor can capture something that looks like the prints you got with a 35mm camera, but what else? Using a smartphone camera just to take and send photos is like printing out emails - you're using a new tool to fit into old forms.

Thinking about industry

If you look at the predominant projects that industrial players pursue in the Internet of Things, you’ll notice – with the lens of Perez – that the majority of them are happening in the Installation Phase. You’re looking at better supply chain management. You’re looking at better utilisation of factory floors. You’ll see a lot of predictive maintenance. And of course you’ll hear all about it on the conference circuit. It’s understandable that established industrial players in particular would focus on that. Increased efficiency is what they yearn for, and here’s a tool that provides it. But it is still inputs-driven thinking. Your mode of operation doesn’t really change by being better at servicing your machinery. It might indeed save you quite a lot of money, and that alone is no small feat, but it still falls far short of the potential that comes with connectivity.

That potential, however, can only be discovered once the players either start thinking about new use cases that build off of connectivity as a given, or they wait for new players to come along and redefine what their industry is about. The inputs-driven phase is quite friendly to incumbents, it turns out. The business models and processes are relatively well understood. It’s the Deployment Phase that’s challenging and fraught with risk.

The Internet’s Deployment Phase

If you’re looking at the effects the Internet had on various industries, you’ll find that most of the fundamental changes, those we associate with “Disruptive Innovation”, only happened in the Deployment Phase. It took sufficient broadband capacity for things like p2p music sharing, social networking, video streaming, or just plain old searching for information to have a material impact. While the Internet was a novelty, and connectivity was scarce, we were applying our old thinking to the new distribution mechanism. Web 2.0 could only happen once we weren’t all excited about the internet as such anymore, but could take it in many ways for granted, and see what we could actually use it for.

And so the real change happened not on the inputs, during the Installation Phase, but after. The music industry was one of the first victims of the internet, and it wasn’t because of its inputs. What changed was the outputs. Distribution changed completely, and that changed the market dynamics. The same goes for news media. Any given newsroom today would be pretty much recognisable from a vantage point of 20 years ago. But the whole business of making news has changed completely. That is because the business of news has hardly anything to do with ink and printing presses and newsstands anymore. The inputs are still roughly the same. The outputs are incomparable. And so it’ll go with almost anything the internet touches.

This is the underappreciated effect of technological roll-outs. Once you’re at a stage where you think you have your inputs under control and have done all you can on efficiency, somebody comes along and redefines the outputs. And that’s exactly where world-changing advancements come from.

So what’s your play? Are you increasing your efficiency? Or are you redefining your industry?

  1. For more on Carlota Perez, I recommend reading her standard work, Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages


Fitbit, wearables, and the insurance data conundrum

Imagine you’re an exec at Fitbit. You brought the company public with a decent return for your investors. But now you have to live with the scrutiny of public markets, and it’s not pretty. The overall category of wearables currently has a bit of a problem, as it’s not quite clear whether it can support ecosystems in its own right, or whether it’s dependent on a larger ecosystem. As we’ve covered in the latest episode of Thingonomics (in German, out any day now…), even Android Wear isn’t selling well enough to convince Android OEMs to keep making devices for it. Oh, and then there’s the attrition rate in wearables, which is awful, as this analysis by Rock Health from last year 1 has shown. So, not a pretty business to be in.

So, what do you do? Withings, one of the main competitors in the wearables space, got acquired by someone with more resources than you and a track record of working with regulators to bring products to market. Their trajectory is clearly health tech, and you really don’t want to deal with the FDA and the time and cost associated with bringing health products to market. You can’t outright compete with Apple, the elephant in the room, because you can’t fully tap into iOS, and Android Wear is a dud, as covered above. So you decide to do a corporate wellness program.

How this works is pretty straightforward. You partner with a large corporate that subsidises or buys your product for its workforce, in exchange for a whole litany of potential upside. But your clients are corporate, so what ultimately sticks is the vague promise of reduced health insurance premiums thanks to the better overall health of a workforce that isn’t sitting on its arse all day. That’s the rough outline. And it seems to be working.

Technology analyst group Gartner forecasts that by 2018, some 2m people will be required by their employer to wear fitness trackers. […] “But at the end of the day we wanted to encourage people to participate.” More than 2,500 employees signed up in the first week and he says the company is on course to easily beat its target of enrolling 20 per cent of SAP’s north American organisation in the scheme. “Being a data-driven tech company, from a demographics standpoint we think it’s going to connect with our employees.” […] SAP is one of a growing number of companies that hope they may eventually be able to improve their staff’s health enough to lower healthcare costs 2

And of course, incentivising your customers’ workforces can work wonders for your product sell-through. And if there is concrete financial upside associated with the use of your product, that helps solve some of that pesky attrition problem, too.

BP gives employees a lower deductible on their health plan if they walk 1m steps in a year, validating the results using trackers.

Now, there are a couple of problems with that approach, and they’re not exclusive to wearables, or to healthcare. You see them crop up in pay-per-mile car insurance (or early-driver car insurance), for instance, too. The first is that if you just flip the rhetoric around, the discount you get for being tracked is essentially a tax you have to pay for the privilege of not being tracked (or at least of not having your tracking data shared with an insurance company). Looked at in this way, it becomes a lot less appealing and a lot more dystopian, and potentially discriminatory in ways that would invite litigation if they weren’t technologically mediated.

But the second is more interesting, and more threatening to the self-interest of insurance providers themselves. Taken to its logical conclusion, you’d insure every customer according to their own risk profile. Low probability of having to fall back on insurance = low premiums. High risk = high premiums. Now, this is nothing new - insurers have worked this way for most of their existence. But the level of granularity and the speed of data acquisition present what is perceived as new opportunities to the actuaries at insurers. And by virtue of this, you incentivise better, less risky behaviour, and drive down costs in claims. Here’s how Izabella Kaminska at the FT puts it:

The proposition here is simple. Soon enough, telematics companies will gather data from all our connected devices, fitbits and cars, scrutinise it intricately, then determine whether we are “good” or “bad” agents. Good behaviours will be rewarded with cheaper insurance policies, bad ones will be penalised. The relative cost of being a bad agent, meanwhile, will incentivise good behaviours, eliminating evil from our world forever. Amen. […] There’s only one problem. Personalising insurance contracts to this degree undermines the whole concept of insurance. Insurance doesn’t really work unless risk is pooled in such a way that good agents pay over the odds to the benefit of the bad ones. 3

In short: if there isn’t some mismatch between premiums and claims for the individual, the insurance model doesn’t work. And while the prima facie argument of insurers is that they want to improve the behaviour of their riskier customers and thus improve the overall economics, it looks awfully close to trying to optimise the premiums-to-claims ratio for each individual customer. This might draw regulatory ire, and the quoted FT piece hints at that. But at its core, going down this path makes insurers superfluous: with perfect information symmetry, the risk-socialising aspect falls by the wayside, and insurance works much more like a savings fund. Why would you need the insurance overhead for that?
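
A toy calculation with invented numbers makes the point concrete: once premiums are fully personalised to expected claims, low-risk customers stop subsidising high-risk ones, and each customer is in effect just pre-paying their own expected losses plus overhead.

```python
# Toy comparison of pooled vs. fully personalised premiums. Numbers invented.

customers = [
    # (label, probability of a claim this year, average claim size)
    ("low risk",  0.01, 20_000),
    ("mid risk",  0.05, 20_000),
    ("high risk", 0.20, 20_000),
]

loading = 1.10  # 10% overhead and margin on top of expected claims

# Pooled: everyone pays the same premium, based on the average expected claim.
avg_expected_claim = sum(p * size for _, p, size in customers) / len(customers)
pooled_premium = avg_expected_claim * loading

# Personalised: each premium tracks that customer's own expected claim.
for label, p, size in customers:
    personal_premium = p * size * loading
    print(f"{label:>9}: pooled {pooled_premium:6.0f}   personalised {personal_premium:6.0f}")
```

In the personalised column, every customer simply pays their own expected loss plus the loading; the cross-subsidy that makes insurance insurance has disappeared, and all that’s left is the overhead.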

That argument isn’t to say that connected products can’t make sense for insurers or for improving workforce health, but you need to think about your objectives and your strategy. Going down the path of pure financial optimization ultimately isn’t going to work, and is only going to add to the woes insurers already face from the increasing volume of autonomous and electric vehicles (which will be interesting in its own right) and from constantly improving longevity and precision medicine.

Now, for Fitbit, which we started with, none of this is pretty, as it’s hard to see where their growth should come from. They don’t compete well at the top end of the market, and in the middle they’re an undifferentiated product with huge brand expenses. They might be the market leader in terms of fitness-tracker volume, but they fail to enable new ecosystems and are more of a gateway product. People who stay with the category churn away to premium products (Garmin or the Apple Watch come to mind), and people who don’t churn into non-consumption. It’s hard to see how this could lead to a sustainable future for Fitbit.


Disruption from the top and adjacent competition

Having read Jerry Neumann’s excellent piece on the cargo cult that “disruption” has become, I was wondering about something Jerry seemingly missed, and that a lot of critics and defenders of capital-D Disruption seem to neglect or disavow: disruption very seldom happens from the bottom of a market.

To illustrate what I mean, let’s take Jerry’s example of Intel’s market entry as a starting point:

Intel, after all, did not enter the microprocessor market by intentionally introducing a cheaper general purpose computer, they entered it by introducing a much more expensive slide rule…the 4004 was developed to power electronic calculators. The market for electronic calculators was small, allowing Intel the room to build expertise in CPUs, but Intel’s entry can’t be described by Christensen’s attack from below process unless you take into account facts not then in evidence: that CPUs would be used to build general purpose computers.

This is a strategy that we see implemented time and time again, and it’s staggering that it doesn’t receive more attention. I’ve taken to calling this strategy »take rich people’s money to build your business.« And it often works. Jerry again:

Finding a foothold market for a new technology gave Intel the time and space to explore other potential markets for the technology, and even though the strategy itself was not disruption, Intel was successful.

To attack and change a well-developed market, your business needs time and experience before it can start competing in that market. Oftentimes both can be found in the relatively remote confines of adjacent markets that incumbents don’t even realize touch upon their core businesses until it’s too late.

Tesla comes to mind in this regard. Though often claimed not to fall within the Disruption paradigm, the effect the firm has on the sales numbers of other higher-end car manufacturers is staggering. True, this is not canonical disruption, which serves novel user needs at a lower cost; Tesla serves novel user needs at a significant premium. Nonetheless, that technology trickles down, and it is already changing the face of the auto industry to the point where classic big auto manufacturers are trying either to partner with Tesla or to cargo-cult what they perceive to be Tesla’s secret sauce. 1 All the while, Mr. Musk has made Tesla’s strategy plenty clear, giving a playbook for this “disruption from the top”:

Build sports car
Use that money to build an affordable car
Use that money to build an even more affordable car
While doing above, also provide zero emission electric power generation options
Don’t tell anyone.

The adjacent component of Tesla’s strategy is the secondary use of the vehicle tech, namely the batteries. With the Powerwall (complementing SolarCity’s products), a nominal car manufacturer suddenly walked into the market of energy firms and changed the rules. This is adjacent competition in action.

Another example is the case of the Nest smart thermostat. Industry analysts often focus purely on device sales, missing a significant revenue stream that Nest has uncovered by adapting its product to adjacent markets. What did they do? They started offering demand response services to local utility companies, contracting out what the industry calls negawatts, reductions in electricity consumption during times of peak demand, and they fill those contracts through their Rush Hour Rewards program, into which customers served by participating utilities can opt in. In essence, Nest reduces the energy consumption of a customer’s AC during peak load, in turn helping stabilize the grid and reducing the need for utilities to fire up back-up plants.
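
Mechanically, a demand-response event is not complicated. The sketch below is a deliberately simplified, hypothetical illustration of the idea; the event format, names, and temperature offsets are invented and are not Nest’s actual API or algorithm.

```python
# Hypothetical sketch of a demand-response ("rush hour") event handler.
# Event format, names, and numbers are invented for illustration only.

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class RushHourEvent:
    start: datetime
    end: datetime
    setpoint_offset_c: float  # how far to relax cooling during the event

def effective_setpoint(user_setpoint_c: float,
                       now: datetime,
                       event: Optional[RushHourEvent],
                       opted_in: bool) -> float:
    """Return the cooling setpoint the thermostat should actually target."""
    if opted_in and event is not None and event.start <= now < event.end:
        # Real systems also pre-cool beforehand and honour manual overrides.
        return user_setpoint_c + event.setpoint_offset_c
    return user_setpoint_c

event = RushHourEvent(datetime(2017, 7, 12, 16), datetime(2017, 7, 12, 19), 2.0)
print(effective_setpoint(22.0, datetime(2017, 7, 12, 17), event, opted_in=True))  # 24.0
```

Summed over tens of thousands of opted-in households, a couple of degrees of relaxed cooling during a three-hour peak is exactly the kind of negawatt capacity a utility is willing to pay for.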

So your fancy gizmo of a thermostat suddenly becomes a crucial instrument of energy policy, something power companies have been trying to implement for ages, largely unsuccessfully.

And while Jerry describes Disruption as a warning to the managers of incumbents, disruption and adjacent competition can serve as both a warning to incumbents and a potential strategy for new entrants.

  1. VW just announced its intent to build out its own battery production facility, in essence viewing Tesla’s “Gigafactory” as the cornerstone of that company’s relative success, neglecting the infrastructure buildout, for one.


The Prisoner’s Dilemma of the IoT Standards Wars

Time and again, you hear calls for a unified Internet of Things standard. The argument is essentially this: the value of the Internet of Things does not lie in the individual products themselves, but in the connections they can make, the network they can tap into. Without a universal standard, these products will live in “walled gardens” that restrict the size of the network and, following Metcalfe’s law, its total value, as the number of possible connections between nodes is handicapped.
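
A back-of-the-envelope comparison shows what is at stake. The device count below is arbitrary; the point is the quadratic scaling of possible connections.

```python
# Metcalfe-style comparison: one open network vs. several walled gardens.
# Network value is proxied by the number of possible pairwise links, n*(n-1)/2.

def pairwise_links(n: int) -> int:
    return n * (n - 1) // 2

devices = 1_000_000

one_network = pairwise_links(devices)
four_walled_gardens = 4 * pairwise_links(devices // 4)

print(f"one network:         {one_network:,} possible links")
print(f"four walled gardens: {four_walled_gardens:,} possible links")
# Splitting the same devices into four silos keeps only about a quarter of the links.
```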

While this argument is sensible, it doesn’t address the pathways by which we might end up with such a universal standard. After all, there is now a multitude of competing standards organizations laying claim to being exactly that. The W3C has compiled a neat overview that currently counts 32 different consortia.

This puts developers of applications and devices for the Internet of Things in an impossible position that ultimately ensures suboptimal outcomes for the ecosystem. It is driven in large part by incumbents trying to secure a favorable outcome in what appears to be an incredibly large future market with perceived winner-takes-all dynamics – the power laws of the internet apply – and thus strategically hedging their bets. Theo Priestley had a look into this over at Forbes and found:

While there are differences between their focus; for example industrial IOT use cases and smart home interoperability, what’s interesting to note is just how many camps these 6 vendors are playing footsie with. Cisco and Samsung are part of seven initiatives. IBM, Honeywell and Intel are aligned with five.

Why it makes sense for large platform providers to hedge like this to stave off potential disruptors is handily explained by Ars Technica:

But this is typical of how Google operates. The company’s actions have shown it doesn’t really believe in focusing on a single solution to a problem, regardless of how much easier that would make things for users. It has to deal with external competitors in all sorts of areas, and Google seems to see no reason why competition can’t also come from within—Google products competing with other Google products.

It’s almost like every product category is just a big A/B experiment for Google. As Google’s search engine constantly gathers data from the Web to learn and improve, Google the company works much the same way. It provides multiple solutions to a single problem and expects the best one to win out over the other.

They’re spreading their investments in the hope of having a presence in whatever platform or standard wins out. But while Ars Technica focuses on the user, that is not the target audience the standards groups are speaking to.

Any potential IoT standard rises and falls with developer adoption. It is third-party manufacturers that need to be convinced to buy into an ecosystem and design products with that ecosystem’s standard in mind. It is application developers that need convincing to write apps for the platform. After all, that’s the point of “interoperable” standards.

But while it makes sense for incumbents to place their bets across many different standards, developers and manufacturers don’t have that luxury. Often it is simple cost and time constraints that require them to focus on a single supported platform. How many startups do you know that launch iPhone-first, with a vague promise to support Android later, and no mention of any of the also-ran mobile platforms?

But which platform to develop for? The optimal outcome for the ecosystem would be for manufacturers and developers to agree on a standard, forgoing short-term profits to grow the long-term market size. Growing the pie, rather than your piece of it. But with current market dynamics, the strategic choices all but guarantee suboptimal outcomes, as the dominant strategy for every market participant is to roll their own. That way, they at least have a modicum of control over their underlying infrastructure, rather than being at the whims of a whole-market A/B experiment in which they might easily back the wrong side.
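
Framed as the prisoner’s dilemma of the title, the dynamic looks roughly like the sketch below. The payoffs are invented; only their ordering matters.

```python
# Stylised payoff matrix for two platform vendors. Payoffs are invented; only
# their ordering matters: rolling your own standard is individually rational
# whatever the other side does, yet mutual defection leaves both worse off
# than cooperating on a shared, interoperable standard.

ADOPT_SHARED, ROLL_OWN = "adopt shared standard", "roll your own"

# (row player's payoff, column player's payoff)
payoffs = {
    (ADOPT_SHARED, ADOPT_SHARED): (3, 3),  # big interoperable market, shared
    (ROLL_OWN,     ADOPT_SHARED): (5, 1),  # you control the winning platform
    (ADOPT_SHARED, ROLL_OWN):     (1, 5),  # you built on someone else's turf
    (ROLL_OWN,     ROLL_OWN):     (2, 2),  # fragmented market, everyone loses
}

def best_response(opponent_move: str) -> str:
    return max([ADOPT_SHARED, ROLL_OWN],
               key=lambda my_move: payoffs[(my_move, opponent_move)][0])

for move in (ADOPT_SHARED, ROLL_OWN):
    print(f"if the other vendor plays '{move}', best response: '{best_response(move)}'")
# Both lines print 'roll your own' -- the dominant strategy -- even though the
# cooperative (3, 3) outcome beats the (2, 2) that mutual defection yields.
```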

What would be required for interoperability to happen is non-zero-sum thinking among the big platform vendors. And yet we have entered a spiral of zero-sum competition in what seems to be a crucial part of a still-emerging market.

What this will lead to is a necessary segmentation of the market. It is unrealistic, and a problematic aspect of the whole Internet of Things discussion, to cluster together such a variety of things as industrial and supply chain applications, automotive, health, and home. Each of these industries will likely develop its own set of competing standards, where winners will be easier to find, but where a lot of deadweight loss will occur. (Think HD-DVD firms and customers.) The really valuable position in such a scenario is at the fringes of the respective verticals – being a gateway and translation point between them. We’re starting to see key players jockeying for position there.