
Posts

To log or how to log

I avoid posting technical notes here. This is an exception because I have an agenda. Log transformation is widely used in modeling data for several reasons: making data "behave," calculating elasticities, and so on. When an outcome variable naturally contains zeros, however, log transformation is tricky. Many data modelers (including seasoned researchers) instinctively add a positive constant to every value of the outcome variable. One popular idea is to add 1, so that raw zeros map to log-transformed zeros. Another is to add a very small constant, especially when the scale of the outcome variable is small. Well, the bad news is that these are arbitrary choices, and the resulting estimates may be biased. To me, if an analysis is correlational (as most are), a small bias may not be a big concern. If it is causal and, for example, an estimated elasticity will be used to take action (with the intention of changing an outcome), that's trouble waiting to happen. This is a problem
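A minimal sketch of what I mean (not from the original post; the simulated data, the 0.5 "true" elasticity, and all variable names are illustrative assumptions): it shows how the slope recovered from regressing log(y + c) on log(x) moves with the arbitrary constant c when the outcome y contains raw zeros.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.uniform(1.0, 10.0, size=n)

# Count outcome whose mean is 0.3 * x**0.5, so the true elasticity of the
# mean with respect to x is 0.5; many raw zeros remain in y.
y = rng.poisson(lam=0.3 * x ** 0.5)
print(f"share of zeros in y: {np.mean(y == 0):.0%}")

X = np.column_stack([np.ones(n), np.log(x)])
for c in (1.0, 0.1, 0.001):
    # OLS slope of log(y + c) on log(x): the usual "log-log elasticity".
    beta, *_ = np.linalg.lstsq(X, np.log(y + c), rcond=None)
    print(f"c = {c:<6} -> estimated elasticity = {beta[1]:.3f}")
```

The same data yield noticeably different "elasticities" for c = 1, 0.1, and 0.001, which is exactly the arbitrariness described above.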

In defense of Amazon (Trends)

#WSJ continues to report on #Amazon's shady practices. An earlier article said Amazon used sales data on third-party sellers to offer copycat, private-label products (like AmazonBasics). It was a coherent story, but it made hasty generalizations. Another piece showed how Amazon manipulates product search ads to favor its own products. Both articles (linked within) underlined a data access problem: Amazon has access to data on its rivals and exploits it for competitive advantage. This latest article is not as coherent and is a bit all over the place, but Amazon's response is not helping either. Amazon says "Offering products inspired by the trends to which customers are responding is a common practice across the retail industry." Amazon needs to nurture trust in its ecosystem but seems to be doing the opposite. I don't actually see any rampant issues except for access to product search data. Amazon is the dominant leader of the product search market (above Google a

Visualizing the death of James Wolfe

History paintings are like data visualizations. Here, NYT's Jason Farago presents Benjamin West's 1770 painting "The Death of General Wolfe." If your dashboard looks like West's painting, you are in trouble. Then you need a Jason Farago to make it accessible to the management team. Dashboards summarize data, as West did in this history painting in 1770 (accurately or not; see Jason's walkthrough on that). The higher the density of information, the lower the chances of communicating successfully. Businesses increasingly need data translators or communicators, not so much "data artists." West is the data artist. Jason is the data translator. West skillfully abuses ggplot and matplotlib for the sake of art. Jason further masters Plotly, Shiny, and Dash. #dataart #datascience #visualization #dataviz #r #rstats #python #datacentricity archive.gtozer.net

Even guesswork starts with "I don't know"

To guess is to admit not knowing in the first place. The problem with Dilbert's coworkers and with most managerial teams is their resistance to admitting they don't know. Even horoscopes and guesswork should start with the acknowledgment of a knowledge gap. Without such an acknowledgment, the time and effort needed to formulate and solve a problem cannot be justified. To guess is then to pretend to know. Guesswork supersedes learning from data because there is nothing to learn when everything is already known. Successful data centric companies need a culture that encourages not knowing as much as knowing. #data #analytics #datamining #dataanalysis #datacentricity archive.gtozer.net

Data worker vs. intelligent agent of AI

Absent imagination, data workers perform at best on par with intelligent agents, finding associations but failing at causality. Identifying causal links requires thinking in counterfactuals, which, in turn, requires imagining what could have been. What is absent must be imagined, while what is present remains obvious, even to an algorithm. Data centric companies should invest at least as much in the thinking skills and imaginative ability of their data workers as in their coding skills to create value. #data #analytics #ai #imagination #causality #causalinference #datacentricity archive.gtozer.net

Swimming in data but blindly

Data show that masks can slow down the spread. Getting our economy back on its feet depends on slowing down the spread. Yet, wearing masks is not mandated, not at the federal level, not decisively. We are swimming in data but blindly. In addition to the likely direct effect on the spread, behavioral change following such a mandate could help regain consumer confidence, increase spending, and boost the economy (or not, but it is an experiment worth pursuing given there is little to lose, if anything). Data centricity requires a shift in mindset, whether in policy making or in business strategy making. Without this shift, decision makers may swim in a pool of charts and tables but cannot see. #data #analytics #covid19 #decisionmaking #datacentricity archive.gtozer.net

From lock-in to "Trust us"

What struck me in this opinion piece is the depiction of how multisided (e.g., two-sided) platforms evolve, in an animated GIF by Ryan Kuo. Platform owners feel the need to say "Trust us" at some point, long after contractual relationships are established. Platform owners gain power and lock in participants (e.g., sellers, buyers, app developers, users) by accumulating network effects and creating switching costs*. More power leads to governance decisions that are increasingly one-sided (e.g., decisions on application approvals, product listings, content sharing, or commissions/fees). Conflicts of interest arise quickly. Trust deteriorates. Lack of trust can make data centric companies vulnerable to disruption in the long term, even if network effects offer protection in the short term. One sure way not to gain trust is having to say "Trust us." *Cross-side network effects: The more sellers on a platform, the more value for buyers. More buyers j