I am working on R&D/engineering and product strategy with a fantastic startup company that is developing solutions to help mothers achieve their breastfeeding goals. The following represents my personal opinions, not those of the company or any of its individual employees (or anyone else).

Besides providing me with a chance to do something meaningful and to develop intellectually stimulating technology, this work has given me an opportunity to think about intentions and risk factors in product design and in the relationship with the customer/user. This is because the products are extremely personal and touch on timely privacy, ownership, and security issues (i.e., those surrounding big data).

Working toward positive outcomes

To be clear, this is NOT because these risk factors have materialized or because I think bad things WILL or are even likely to occur. Rather, I like to envision future scenarios of all kinds – the good, the bad, and the ugly – both because it’s a constructive exercise in imagination and because it helps me determine what I can do today to make the positive outcome I want more likely tomorrow.

Clearly, I believe in what the company is doing, or else I wouldn’t be there. So, without going into either product (breastfeeding support) or technology (including data analytics and learning) details, let’s imagine that the product is a gigantic hit with its target user base (new mothers), to the point that all new mothers use it.

What do the extremes tell us?

What would be the consequences of such success – to users, to the company, to society, or to others? We’d like to think the consequences would be exactly as intended: more women supported in achieving their own breastfeeding goals, with improved health outcomes for mother and baby. But are there qualitatively different outcomes, including unintended consequences – positive or negative – that would arise from runaway adoption of the product by the market? In other words, what changes when “everyone” is using it, versus simply enough people to make some users happy and the company financially successful? To surface unintended consequences, I find it helpful to think through radically different scenarios, such as near-total adoption (close to 100% usage). What do those extremes tell us?

If (virtually) ALL new mothers used the device, what would that do for or to them? This would become the new norm. Would it shift breastfeeding behavior – for the better? For the worse? We like to think for the better, of course. (In this essay, I am using “we” in a societal, collective sense, not to mean any particular company or group.) Could this also create a different kind of pressure on new mothers, one that would exact a different price from them? Probably… for better or for worse, peer pressure has always existed.

What’s different here? What about the fact that we might then be in possession of breastfeeding data on ALL of these women? What kinds of powerful recommendations or insights would that enable? Conversely, if someone were so inclined, in what ways could the data, or the new insights that come from this level of access, be misused?

In order to do the right thing…

I believe it is critical for us to consider the ways in which we could end up doing the wrong thing, either because of later access to power (via data and insights) or simply because of unintended consequences.

I am reminded of one company’s early guidance, “don’t be evil”. I have never worked with or for the company, and I wasn’t there at its inception – but I’m guessing that motto was one of the guiding principles of company behavior. At the same time, today we find ourselves in a situation where governments are debating the power that has built up in this company’s “hands” (do companies have hands?) and whether, or to what extent, this is a problem for all of us individuals who continue to feed our data into its machinery. To what extent did early individuals there consider the ways in which – if they were wildly successful, as in fact they have become – such a motto would be tested in reality? To what extent did they think through how the company might take on its own momentum, gathering up ever more power to become self-sustaining? This is not a criticism of the company; it is merely one highly visible example of a company that a) is extremely successful; b) is an avid gatherer of data (from individual behaviors); c) is non-transparent about what it does with that data; and d) appears to have had good intentions. It is not the only example one could use, simply a visible and large one, and it is reasonable to ask who is benefiting and at what cost.

If a company dealing in data wants to succeed, to draw its own line (wherever its founders define it), and to ensure that it stays on the right side of that line, how does it avoid future problems? Whatever the answer, it seems that early guidelines must be in place, along with a decision-making structure and environment that enables them to be followed even as the company changes radically along its growth path.

In the breastfeeding example, if one company knew an awful lot about all new mothers’ breastfeeding behavior, what kinds of conclusions could we start drawing from it? The data would surely tell us about population norms – what exactly, and how, are we, collectively, feeding our newborn babies? We might be able to alert mothers when we believe something is trending in the wrong direction. On its face, that’s useful information to provide.
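To make the mechanism concrete, here is a minimal sketch of norm-based alerting. Everything in it – the field names, units, thresholds, and toy numbers – is invented for this essay and does not reflect any real product’s logic.

```python
# Hypothetical illustration only: names, units, thresholds, and the toy
# numbers are invented for this essay, not taken from any real product.
from statistics import mean, stdev

def norm_alert(population_daily_sessions, mother_recent_sessions, z_cutoff=2.0):
    """Flag a mother whose recent feeding frequency diverges from the
    population norm by more than z_cutoff standard deviations."""
    mu = mean(population_daily_sessions)
    sigma = stdev(population_daily_sessions)
    # The buried assumption: the population mean is treated as the healthy
    # benchmark. Nothing in the data itself justifies that choice.
    z = (mean(mother_recent_sessions) - mu) / sigma
    return abs(z) > z_cutoff, z

# Example: a population averaging ~8 sessions/day; this mother averages ~4.
flagged, z = norm_alert([7, 8, 9, 8, 7, 8, 9, 8], [4, 4, 5, 4])
print(flagged, round(z, 1))  # True -5.0 -> an "alert" would fire
```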

At the same time, do we really know that the population norm is also the healthy thing to strive for? Maybe we are all already so chemically altered by our environment and our long-term eating (or other) choices that the population norm – and therefore the apparent benchmark – is not in fact something to strive for and recommend to mothers. If we start to make machine-learning-based recommendations to mothers based on our data, what assumptions are we using in developing the models that create those recommendations? Are they valid assumptions? Or are we, for example, hiding behind the apparent wisdom and authority of machine learning algorithms?
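The same judgment can also be delegated to a more opaque model, which hides the assumption rather than removing it. Below is a sketch – again with entirely hypothetical data – using a standard anomaly detector (scikit-learn’s IsolationForest): the model simply learns what is typical from whatever population it is fed, so “typical” quietly becomes the benchmark.

```python
# Hypothetical illustration: the feature matrix and its meaning are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Pretend each row is one mother: [sessions per day, avg minutes per session].
population = rng.normal(loc=[8.0, 20.0], scale=[1.0, 4.0], size=(500, 2))

# The detector learns what "typical" looks like from the population it is fed.
model = IsolationForest(contamination=0.05, random_state=0).fit(population)

# A mother who feeds less often, and for longer, than the population norm:
newcomer = np.array([[4.0, 45.0]])
print(model.predict(newcomer))  # [-1] -> labeled "anomalous"
# The model never asked whether the population norm is healthy; it only
# learned what is common. The benchmark assumption is now implicit.
```

Nothing in either sketch establishes that the norm is healthy; that judgment was made, silently, the moment the norm was chosen as the reference point.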

As a devil’s advocate, you might ask: how is this really different from previous generations, in which individual physicians made recommendations – as well intentioned as they may have been wrong – to individual mothers? There is a strong tendency to believe whatever the seemingly sophisticated algorithm came up with; that is not so different from prior generations believing whatever the apparently omniscient physician recommended (ordered!). In different ways, both are authority figures in their own contexts.

Two factors make this qualitatively different: one is transparency, or the lack thereof, and the second is scale. We may never be called upon to explain those assumptions – or even be capable of explaining or understanding them (a physician in years past would at least have been capable of explaining why he or she made certain diagnoses and prescribed certain treatments). Further, in our hypothetical extreme example, we may be misguiding large populations of women on a massive scale, with negative impact on them and their babies, thus potentially harming both the individuals and society as a whole.

How do we prevent this kind of outcome from developing out of our good intentions?

And if we proactively try to prevent this, how do we determine what good recommendations look like? Again, what are we basing those assumptions on, and where is the potential for hiding behind technology? How do we build in the recognition – for ourselves and for our users – that at some level the users themselves are the experts, not to be misled by peer pressure or by alleged expert input (“proven” by modern technology!)?

Again, I want to be crystal clear that there is no veiled criticism here. The company I am working with is in fact doing a great job of considering these issues. These are the kinds of questions I ask myself in designing technologies and products relying heavily on data, to increase the likelihood of positive outcomes for all stakeholders. It would be worthwhile for more people involved in digital product design to do something similar. If that’s you, I welcome your thoughts on how you approach this topic – including why you may see it completely differently.