The Future of TV (& Games and Computing)

By Gary Lauder, Managing Director of Lauder Partners

Television is changing rapidly in many ways. Most of us are familiar with the major trends, but it’s interesting to explore how these quantitative trends will qualitatively change TV delivery and viewing experiences. Many things can be predicted by linear extrapolation, but the really interesting changes result from passing discrete tipping points.  By ten years from now, a lot will have changed.

In the year 2000, for its 25th anniversary issue, CED Magazine asked several people to predict what the cable industry would look like 25 years hence. My article can be seen here, so I won’t restate its arguments. We are now more than halfway from 2000 to 2025, and on track for the article’s central prediction to come true: that cable will be free. I would revise the prediction to say that not every channel will be free (e.g. HBO), and that the definition of “cable” may include its former content delivered OTT by a new entrant, but as long as advertising is accepted, most popular channels will be available for free from at least some providers.

Television image resolution will continue to climb as larger displays’ prices decline. Video walls will emerge from commercial-only locations (e.g. billboards, Comcast’s and IAC’s lobbies) and will begin to show up in homes. HD will give way to 4K and then to higher resolutions. 4K will not be a fad the way that 3DTV was. Initially, 4K displays will mostly be fed by HD content that is upconverted by the set; unless the 4K screen is really big, most viewers can’t distinguish 4K source content from upconverted HD. Bandwidth demand will increase as screens get bigger and 4K content becomes readily available, and so will the migration from broadcast to unicast.
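
To put rough numbers on the bandwidth implication, here is a back-of-the-envelope sketch; the HD bitrate and the HEVC efficiency factor are illustrative assumptions, not measurements:

```python
# Rough arithmetic for the bandwidth impact of 4K vs. HD.
# The HD bitrate and HEVC efficiency factor are illustrative assumptions.
hd_pixels = 1920 * 1080               # 1080p frame
uhd_pixels = 3840 * 2160              # "4K" UHD frame
pixel_ratio = uhd_pixels / hd_pixels  # = 4.0

hd_bitrate_mbps = 8                   # assumed H.264 HD stream
hevc_factor = 0.5                     # assume HEVC needs ~half the bits per pixel
uhd_bitrate_mbps = hd_bitrate_mbps * pixel_ratio * hevc_factor

print(f"4K has {pixel_ratio:.0f}x the pixels of HD")
print(f"Estimated 4K stream: ~{uhd_bitrate_mbps:.0f} Mbps vs. ~{hd_bitrate_mbps} Mbps for HD")
```

Even with a more efficient codec, a 4K stream plausibly needs roughly double the bits of an HD stream, and real content delivered to very large screens tends to be encoded higher still.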

Last month, Ericsson announced that the fraction of people viewing on-demand streamed video at least once per week (75%) was about to surpass the fraction of those who watch scheduled broadcast TV (77%). While this viewing-frequency trend does not mean that parity in viewing hours has been achieved, that too will eventually come to pass. An Ericsson representative was quoted here as saying that “By the end of 2020 we will see that consumers consume as much on-demand content as they do linear content.”

By 2025, most video consumption will be unicast. Much of that transition will be due to the shift to Over-The-Top (OTT) services (or to MVPD services that emulate OTT offerings). This will be driven by UI and business-model simplicity (e.g. Netflix) and is enabled by the ongoing decline in the cost of adding bandwidth capacity. This trend is described by both Gilder’s Law (that bandwidth capacity increases 3X faster than Moore’s Law, which is probably an exaggeration) and Nielsen’s Law (high-end users’ connection speed grows by 50% per year). As portable devices proliferate, wireless will have to keep up…and it will. Cooper’s Law (the number of wireless conversations in a given unit of area increases by about 36% per year) results primarily from improvements in frequency reuse, also known as Space Division Multiple Access (SDMA) or cellularization (which is what hybrid fiber coax (HFC) node multiplicity enables on cable plants). The wireless bandwidth expansion trend will continue for both of the main delivery media: Wi-Fi and cellular. The trend has continued with MIMO, a wireless technology that increases capacity by up to 4X, and there is tremendous promise in a new product, “pCell,” by Artemis Networks, which has demonstrated a greater-than-25X increase in LTE capacity. Artemis was founded by Steve Perlman, the founder of WebTV and OnLive; I was an investor in both, and I am now the main owner of the latter.
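
To see how quickly those rates compound over the coming decade, here is a small sketch; the starting values are normalized to 1, and only the annual growth rates come from the laws cited above:

```python
# Compounding the growth rates cited above over roughly a decade.
# Starting values are normalized to 1; only the annual rates
# (Nielsen's ~50%/yr, Cooper's ~36%/yr) come from the text.
def compound(annual_rate, years):
    """Growth multiple after compounding for the given number of years."""
    return (1 + annual_rate) ** years

years = 10  # roughly 2015 to 2025
print(f"Nielsen's Law (connection speed, 50%/yr): ~{compound(0.50, years):.0f}x in {years} years")
print(f"Cooper's Law (wireless capacity/area, 36%/yr): ~{compound(0.36, years):.0f}x in {years} years")
```

A 50% annual rate compounds to roughly 58x over ten years, and 36% to roughly 22x, which is why linear intuition tends to underestimate where these curves end up.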

The capacities of wireless and terrestrial networks (e.g. cable, DSL, fiber) differ, but both improve at almost-parallel exponential rates, a pattern described by Edholm’s Law. This steady capacity ratio has historically limited the competition between them, but being almost parallel is not the same as being parallel. A decade ago, when that law was described, it projected that wireless speeds would catch up with terrestrial networks by 2030. Since those wireless technologies are always the local tail of a terrestrial network, the trends are inherently complementary. Which network the consumer ultimately uses will depend on the competitive choices available. It is likely that wireless providers will be effective competitors for services that are today the domain of cable, telephone and satellite providers.

By 2025 we may see new TYPES of entrants providing high-speed data (and therefore all types of service (video, voice, data & more)). By “new TYPES” I mean data provided using technologies that have not yet emerged.  While I can’t disclose specifics, expect that they will use underutilized parts of the electromagnetic spectrum combined with innovative uses of the third dimension. One example of such an innovation is Google’s Project Loon. There are more such out-of-the-box innovations to come.  Those approaches are inherently more promising for third parties who wish to compete (such as Google) than conventional approaches — such as Google Fiber — since the technologies of digging up streets and lawns don’t improve exponentially.

Today most network traffic is heavily asymmetrical toward downstream. That will continue, but some applications, such as video surveillance and off-site backup, may dramatically increase upstream use. The former is offered by Dropcam, and there are many off-site backup services.

Similar to Edholm’s Law, there has been a persistently large cost differential between the cost per delivered bit over HFC via cable modem (i.e. IP via DOCSIS) and via QAM to set-top boxes (STB’s), which have become oxymoronic since they can no longer balance on top of a thin, flat TV. IP delivery currently costs about 5X more than QAM delivery, but its cost curve’s slope is twice as steep, so it too will cross below QAM…around 2023, by which time the cost of this part of the system will have become insignificant. There are also newer IP architectures, such as CCAP, that have lower costs, but they do not have enough of a history to infer an accurate trend line. See graph:

[Graph: Cost of Bandwidth]
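
As a sanity check on that crossover date, here is a small sketch. The 5X starting gap and the twice-as-steep slope come from the text; the ~18% annual QAM cost decline and the 2014 starting point are assumptions chosen only for illustration:

```python
# Sketch of the QAM vs. IP cost crossover described above.
# The 5X starting gap and the twice-as-steep slope come from the text;
# the ~18%/yr QAM decline and the 2014 starting point are assumptions.
import math

START_YEAR = 2014
qam_cost = 1.0                  # normalized cost per delivered bit via QAM
ip_cost = 5.0                   # IP currently ~5X more expensive
qam_slope = math.log(1 - 0.18)  # per-year change in log(cost) for QAM
ip_slope = 2 * qam_slope        # "twice as steep" on a log-cost plot

years = 0
while ip_cost * math.exp(ip_slope * years) > qam_cost * math.exp(qam_slope * years):
    years += 1
print(f"IP cost drops below QAM cost around {START_YEAR + years}")
```

With those assumed rates the curves cross around 2023, consistent with the graph; a faster or slower QAM decline shifts the crossover by only a year or two.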

Unaware of any prior claim, I hereby claim the observation that QAM will remain materially cheaper until it becomes irrelevant as “Lauder’s Law,” which should not be confused with Gary’s Law of Extension Cords (first appearing in print here), which stipulates that extension cords should always be at least as thick as the power cords that plug into them, in order to prevent overloading the extension cord.

The cost ratio used to mean the order-of-magnitude difference between $1/GB and $0.10/GB, but now the ratio is more like 1¢ vs. 0.1¢, although the lower costs are not reflected in pricing. When it comes to movie deliveries, the difference is immaterial, but at the other extreme, it would be quite material for delivering scenery to an 8K video wall that is always on. While the network cost of delivery via IP DOCSIS is getting quite low, there are other implications of the QAM vs. IP DOCSIS choice for operators. The most significant one is that the existing STB infrastructure requires QAM, and IP DOCSIS can only serve IP devices. The cost of replacing STB’s dwarfs the network infrastructure cost differentials. An open question is whether over-zealous government network neutrality rules that might emerge in the future would preclude a third party from entering an arm’s-length agreement with an MSO to deliver new services to its STB’s.
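
To see why the always-on video wall is the material case, here is a rough sketch. The 80 Mbps bitrate and the per-GB prices are assumptions for illustration; only the roughly 1¢ vs. 0.1¢ ratio comes from the discussion above:

```python
# Back-of-the-envelope delivery cost for an always-on 8K video wall.
# The 80 Mbps bitrate and per-GB prices are assumptions for illustration;
# only the ~1 cent vs. ~0.1 cent ratio comes from the discussion above.
bitrate_mbps = 80                        # assumed always-on 8K stream
hours_per_month = 24 * 30

gb_per_month = bitrate_mbps / 8 * 3600 * hours_per_month / 1000  # MB/s -> GB/month
for label, cents_per_gb in [("IP (~1 cent/GB)", 1.0), ("QAM (~0.1 cent/GB)", 0.1)]:
    dollars = gb_per_month * cents_per_gb / 100
    print(f"{label}: ~{gb_per_month:,.0f} GB/month -> ~${dollars:,.0f}/month")
```

Under those assumptions the wall consumes tens of terabytes per month, so a 10X difference in per-GB cost is the difference between tens and hundreds of dollars per month per screen, whereas for a single movie it is pennies either way.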

Now that cable bandwidth has gotten very fast and cheap via HFC, new services are emerging that would not previously have been possible. The most profound advance is the virtualization of the CPU/GPU (Central Processing Unit/Graphics Processing Unit) and even of the whole STB (CPE). When the network is fast enough, a tipping point is passed that enables a service provider to offer a service that previously would have required new CPE but now only requires high bandwidth. This becomes possible by moving the locus of the interactive session’s execution from the home to the network (cloud/headend), rendering and encoding the video output there with low latency (low delay), and streaming it to a new or existing device, which needs only its video-decode function.
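
Here is a minimal sketch of that loop; every class and method name is a hypothetical stand-in (not an actual ActiveVideo or OnLive API), included only to make the division of labor concrete:

```python
# Minimal sketch of the "virtual CPU/GPU" session loop described above.
# Every class and method name is a hypothetical stand-in, not a real
# ActiveVideo or OnLive API: the app executes in the cloud; the client
# device only sends input upstream and decodes the video that comes back.
import time

class StubApp:
    """Stands in for the interactive application running at the headend."""
    def handle_input(self, events):
        pass                           # apply remote clicks / key presses
    def render_frame(self):
        return b"frame"                # would be a server-side GPU render

class StubEncoder:
    def encode(self, frame):
        return frame                   # would be low-latency H.264/HEVC

class StubClient:
    """Stands in for the STB or thin device: input up, video down."""
    def __init__(self, frames):
        self.frames_left = frames
    @property
    def connected(self):
        return self.frames_left > 0
    def poll_input(self):
        return []
    def send(self, packet):
        self.frames_left -= 1          # the device merely decodes video

def run_session(app, encoder, client, fps=30):
    """One user's interactive session, executed entirely on the server."""
    frame_interval = 1.0 / fps
    while client.connected:
        start = time.time()
        app.handle_input(client.poll_input())
        client.send(encoder.encode(app.render_frame()))
        time.sleep(max(0.0, frame_interval - (time.time() - start)))

run_session(StubApp(), StubEncoder(), StubClient(frames=3))
```

The essential point is that everything interesting happens server-side; the home device is reduced to an input relay plus a video decoder it already has.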

“Transparent” means it’s there, but you can’t see it. “Virtual” means you can see it, but it’s not really there. By virtualizing the CPU/GPU, the user perceives the interaction as local, even though it is really with a distant cloud server. This innovation could be considered a type of Network Function Virtualization (NFV), even though it existed long before that term did. See diagram.

[Diagram: 3 Models of Interactive Computing]

Both the Client-Server model and the Virtual CPU model are referred to as having a “cloud architecture,” so they are often confused. They are very different.

The Virtual CPU model has many important implications:

1)   It enables great user experiences on existing installed devices.  Examples:

  1. ActiveVideo* delivers the best UI’s to any old (or new) STB.
  2. OnLive* delivers the latest AAA games to old PCs, Macs & other thin devices such as smart TV’s.  If it works for fast-twitch gaming, it can work for ANY application including general purpose computing.

2)   It unshackles the UI from the limitations of local devices.

  1. This enables heterogeneous platforms to be unified.
  2. It lowers the lowest common denominator (e.g. Chromecasts could be even thinner and therefore cheap enough to give away. Roku’s could deliver UI’s better than the fanciest STB’s now do. Chromebooks or similar platforms could cost half of their present price).
  3. As the number and variety of end user devices proliferate, all can have a single unified experience.

3)   Since PC screens are delivered as video, and video is increasingly important to PC usage, this further blurs the boundary between the TV & PC.

4)   Moore’s Law and its cousins drive down the costs of new customer premises equipment (CPE), but they also drive down the cost of cloud infrastructure.  Once CPE is installed, its hardware capabilities are frozen in time.  The cloud infrastructure that feeds it can and does continue to improve over time.

5)   This architecture can often deliver faster speeds than a high-end computer could have achieved due to the gigabit connectivity in the data center.[1]

6)   It enables the last hold-out of digital media to be streamed: interactive applications.  Media has historically migrated from physical delivery to downloads to streaming.  This occurred first in music (file sizes ≈5MB), then in video (≈1GB), and now with games (≈5-50GB), electronic program guides (EPG) and other applications; a rough illustration of the download times involved appears below.  Once this transition has occurred, there will not be much reason to invest in local functionality for the class of devices that can rely on always having good bandwidth (e.g. devices that output to a TV or to a monitor).  All future platform evolution can then occur in the cloud.  This is disruptive to those who want to see new subscriber boxes get deployed — which is a group that includes some counterintuitive members.
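
The following back-of-the-envelope sketch suggests why games were the last hold-out. The file sizes come from the list above; the 25 Mbps connection speed is an assumption for illustration:

```python
# Download times for the media sizes listed above, at an assumed 25 Mbps
# connection (file sizes come from the text; the speed is an assumption).
speed_mbps = 25
for media, size_gb in [("song", 0.005), ("movie", 1), ("AAA game", 50)]:
    seconds = size_gb * 8000 / speed_mbps   # GB -> megabits -> seconds
    print(f"{media:8s} ({size_gb} GB): ~{seconds:,.0f} s to download before use")
```

A song downloads in seconds and a movie in minutes, but a large game can take hours, which is exactly the wait that streaming the interactive session eliminates.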

We will only see a few major technology transitions in our lifetimes.  The transitions from mainframe to mini, and then to microcomputers, were all driven by Moore’s Law. While that law has not been repealed, the laws of bandwidth will come to dominate.  The architectural shift is inevitable, but the speed of adoption is difficult to divine.  The technology has been available for a while, but many MVPD’s have been slow to adopt it due to some combination of: the Einstellung effect (when habituation to a conventional solution blinds one to recognizing better solutions), functional fixedness (the inability to recognize novel potential uses for familiar items), the tendency to see things as false dichotomies (e.g. virtual CPU or new STB, but not both — despite their complementarity) and/or the NIH (Not Invented Here) syndrome.

Prophecies that a new technology won’t work are self-fulfilling if they preclude properly trying that new technology, but that only postpones the technology’s success rather than preventing it.  About 200 years ago, the philosopher Arthur Schopenhauer wrote something in German that has since morphed into this brilliant observation: “All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident.”  Most of the major providers are like generals fighting the last war, relying on architectures optimized for the past rather than the immediate future. There is a huge opportunity for those who fully embrace the cloud.  The movement toward the cloud is inexorable.  It’s not if, but when.  Since OTT is competitive and fast-moving, expect experimentation and great things to result.

* Gary Lauder is a major shareholder of ActiveVideo Networks and OnLive

[1] This is explained at this part of a presentation I gave at London’s Cable Congress in 2013: http://youtu.be/WU6uKlwKQys?t=8m42s
