

Showing posts from June, 2020

From lock-in to "Trust us"

What struck me in this opinion piece is the depiction of how multisided (e.g., two-sided) platforms evolve, in an animated GIF by Ryan Kuo. Platform owners feel the need to say "Trust us" at some point, long after contractual relationships are established. They gain power and lock in participants (e.g., sellers, buyers, app developers, users) by accumulating network effects and creating switching costs*. More power leads to governance decisions that are increasingly one-sided (e.g., decisions on application approval, product listings, content sharing, or commissions/fees). Conflicts of interest arise quickly. Trust deteriorates. A lack of trust can make data-centric companies vulnerable to disruption in the long term, even if network effects offer protection in the short term. One sure way not to gain trust is having to say "Trust us." *Cross-side network effects: The more sellers on a platform, the more value for buyers. More buyers j…

Mistaken like a human

Traditionally, computers process data quite differently from how human brains do. Computers are designed for precision, while human brains rely on intuition. With artificial intelligence (#AI), or more specifically, deep learning and neural networks, one idea is to mimic the way human brains work. Does this mean that the hardware, or the body, also needs to change? Are CPUs and GPUs not up to the task anymore? Graphcore claims so, and argues that CPUs and even GPUs are out, and IPUs are in. Graphcore's #IPU stands for intelligence processing unit and is prone to imprecision by design. It is a high-performance computing unit that processes data very imprecisely. Consider a task like going to a restaurant. A human brain wouldn't calculate the GPS coordinates but would use associations; e.g., recall the restaurant's name, its neighborhood, and neighboring shops. The difference resembles that between Boolean logic and fuzzy logic. What is under the hood, o…
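The Boolean-versus-fuzzy contrast can be made concrete with a toy sketch (my own illustration, not Graphcore's design; the distance thresholds are arbitrary): Boolean logic forces a yes/no answer, while fuzzy logic admits degrees of truth.

```python
def boolean_near(distance_m: float) -> bool:
    """Boolean logic: a restaurant is either 'near' or it is not."""
    return distance_m <= 500


def fuzzy_near(distance_m: float) -> float:
    """Fuzzy logic: 'near' is a degree of membership in [0, 1].

    Fully near below 200 m, not near at all beyond 1000 m,
    and a linear ramp in between (an arbitrary choice for illustration).
    """
    if distance_m <= 200:
        return 1.0
    if distance_m >= 1000:
        return 0.0
    return (1000 - distance_m) / 800


# At 600 m the Boolean answer flips to "no", while the fuzzy
# answer is "somewhat near":
print(boolean_near(600))  # False
print(fuzzy_near(600))    # 0.5
```

The fuzzy version degrades gracefully under imprecise inputs, which is the flavor of computation the post associates with human-like association rather than exact calculation.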

Analyzing data to do nothing

With the increasing availability of data and off-the-shelf analysis tools, interventions are thriving. Yet interventions rarely create value. Rarity is expected simply because the probability of noise is often disproportionately higher. Moreover, larger amounts of data exacerbate the problem of finding value in interventions where none exists. E.g., a frequentist test using a 0.01 p-value threshold would justify an intervention if the probability of the observed effect occurring by chance is less than 1%. This probability gets smaller with more data, not because the intervention gains value*. The 1% threshold should be a moving target, but it is often treated as a fixed one. It should also be adjusted for other reasons, such as running multiple tests. More importantly, it should be adjusted for unintended consequences. While quantifying the consequences is difficult, we can incentivize analytics teams to find out what not to do. Action is visible but inaction is not. Successful data-centric companies sho…
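The multiple-testing point can be simulated. Below is a minimal sketch (my own, with assumed parameters: unit-variance noise, a two-sided z-test, 1,000 tests of 100 observations each) in which every "intervention" is pure noise. At a fixed 0.01 threshold, roughly 1% of tests still look significant; tightening the threshold per test (a Bonferroni correction) removes almost all of these false positives.

```python
import math
import random

random.seed(0)


def p_value(sample):
    """Two-sided z-test p-value for mean = 0, assuming unit variance."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))


n_tests, n_obs, alpha = 1000, 100, 0.01

# Every "intervention" here is pure noise: there is no real effect.
pvals = [p_value([random.gauss(0, 1) for _ in range(n_obs)])
         for _ in range(n_tests)]

# Fixed threshold: expect about alpha * n_tests = 10 false positives.
naive = sum(p < alpha for p in pvals)

# Bonferroni: divide the threshold by the number of tests run.
bonferroni = sum(p < alpha / n_tests for p in pvals)

print(naive, bonferroni)
```

Every significant result in the naive count is an intervention that "should" be made yet creates no value, which is exactly why the 1% threshold needs to move with the number of tests being run.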

Has Apple become the -old- Microsoft?

Why old? Well, it would be unfair to compare #Apple with today's #Microsoft, the owner of #GitHub, a sponsor of the Open Source Initiative, and a proponent of innovation through collaboration and co-creation (!). The exclamation will have to stay for a while. The fight between #Apple and #Hey (a contender to #Gmail) is not a surprise but a reminder that Apple is increasingly in the business of value capture, not value creation. The gist of the story: Apple forces Hey to sell subscriptions on its iOS platform, but Hey refuses because the cost of doing so is a 30% commission for every subscriber. You can find the details in Kara Swisher's article. Apple seems to be stuck with incremental one-sided ideas, another iPhone with a larger screen or "dark mode" on its iOS platform, and seems to have forgotten the value of co-creation, which propelled the company in the first place. Apple should be encouraging, not suppressing, experiments like Hey. For that, it is tim…