February 27, 2012

Translucent modeling, and the future of internet marketing

There’s a growing consensus that consumers require limits on the predictive modeling that is done about them. That’s a theme of the Obama Administration’s recent work on consumer data privacy; it’s central to other countries’ data retention regulations; and it’s specifically borne out by the recent Target-pursues-pregnant-women example. Whatever happens legally, I believe this also calls for a technical response, namely:

Consumers should be shown key factual and psychographic aspects of how they are modeled, and be given the chance to insist that marketers disregard any or all of those aspects.

I further believe that the resulting technology should be extended so that

information holders can collaborate by exchanging estimates for such key factors, rather than exchanging the underlying data itself.

To some extent this happens today, for example with attribution/de-anonymization or with credit scores; but I think it should be taken to another level of granularity.
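For instance, one information holder might tell another “this shopper scores .85 on likelihood of expecting a child,” without handing over the purchase history behind that score. Here’s a minimal Python sketch of that kind of exchange; the class names, the factor name, and the stand-in scoring rule are all hypothetical, invented purely for illustration.

```python
# A minimal sketch of "exchange estimates, not underlying data".
# All names (Estimate, ScoreProvider, "expecting_parent") are hypothetical,
# invented for illustration; nothing here reflects any real vendor's API.

from dataclasses import dataclass

@dataclass
class Estimate:
    factor: str        # e.g. "expecting_parent"
    score: float       # propensity in [0, 1]
    confidence: float  # how sure the information holder is

class ScoreProvider:
    """Holds raw behavioral data privately; shares only derived estimates."""

    def __init__(self, raw_events):
        self._raw_events = raw_events  # never exposed to partners

    def estimate(self, factor: str) -> Estimate:
        # Stand-in scoring logic; a real model would go here.
        hits = sum(1 for e in self._raw_events if factor in e.get("tags", []))
        return Estimate(factor=factor, score=min(1.0, hits / 10), confidence=0.6)

# A partner asks for an estimate, not the purchase history itself.
provider = ScoreProvider(raw_events=[{"tags": ["expecting_parent"]}] * 4)
print(provider.estimate("expecting_parent"))
```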

My name for all this is translucent modeling, rather than “transparent”, the idea being that key points must be visible, but the finer details can be safely obscured.

Examples of dialog I think marketers should have with consumers include:


Some of these possibilities are essential to marketer/consumer trust. Others are just nice-to-haves, benefiting marketer and consumer alike. All — and many more like them — should be simple and routine to conduct.

There are commercial precedents to what I’m suggesting. Regulators demand high degrees of transparency from consumer financial services companies, which build their final models accordingly.* I say “final” model because sometimes they build “ideal” models using all available factors, then hold those up as a standard against which their regulation-safe production models are compared. And many simple models, including those used in academic research, are expected to be laid out for readers in full detail.

*Of course, given the credit bubble, it’s not obvious that the models actually work very well.

One thing that should make this idea workable is that it need not go to ridiculous extremes. Maybe a consumer doesn’t want his race taken into account; but if he likes 15 hip-hop artists from Atlanta, it should still be OK to recommend a 16th. Similarly, it’s not necessary that ALL factors used to differentiate among consumers be disclosed; rather, you need to check enough assumptions to avoid causing undue annoyance or offense. And when it comes to anti-fraud and so on, some of what you measure will be wholly hush-hush.
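To make the opt-out idea concrete, here’s a minimal Python sketch of honoring suppressed factors at scoring time; the feature names and the opted-out set are hypothetical examples, not anyone’s actual schema.

```python
# A minimal sketch of honoring consumer opt-outs at scoring time.
# Feature names and the opted_out set are invented examples; the point is only
# that suppressed factors are dropped before the model ever sees them, while
# ordinary behavioral signals (e.g. artists already liked) remain usable.

features = {
    "race": "<any value>",               # factual/demographic factor
    "zip_code": "30318",
    "liked_atlanta_hiphop_artists": 15,  # behavioral signal
}

opted_out = {"race"}  # factors the consumer insisted be disregarded

def model_inputs(features: dict, opted_out: set) -> dict:
    """Return only the factors the consumer has not suppressed."""
    return {k: v for k, v in features.items() if k not in opted_out}

print(model_inputs(features, opted_out))
# -> {'zip_code': '30318', 'liked_atlanta_hiphop_artists': 15}
# A recommender can still suggest a 16th Atlanta artist without touching race.
```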

So far, what I’ve emphasized is how to avert consumer and regulatory backlash. But the idea has positive, breaking-new-ground aspects as well. Envision, for each consumer, a giant matrix. The columns are a variety (perhaps hundreds or thousands) of potential demographic and psychographic facts or scores. The rows are sources of scores, with Row 1 being the consumer herself. Cells contain scores, and perhaps a little other metadata such as confidence or recency. Further imagine a reasonable permission regime for exchanging that information.
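Here’s a minimal Python sketch of that matrix, plus a toy permission check for sharing estimates; the source names, factor names, and permission rule are all hypothetical, chosen only to make the structure concrete.

```python
# A minimal sketch of the per-consumer score matrix described above.
# Source names, factor names, and the permission rule are hypothetical.

from dataclasses import dataclass
from datetime import date

@dataclass
class Cell:
    score: float        # the estimate itself
    confidence: float   # how much the source trusts it
    as_of: date         # recency

# Rows are sources of scores; "self" (the consumer herself) is Row 1.
# Columns are demographic/psychographic factors; a real matrix could have
# hundreds or thousands of them.
matrix = {
    "self":       {"expecting_parent": Cell(1.0, 1.0, date(2012, 2, 1))},
    "retailer_a": {"expecting_parent": Cell(0.85, 0.7, date(2012, 1, 15)),
                   "hiphop_affinity":  Cell(0.9, 0.6, date(2012, 2, 10))},
    "ad_network": {"hiphop_affinity":  Cell(0.8, 0.5, date(2011, 12, 3))},
}

# Consumer-granted permissions: which factors each requester may see.
permissions = {"ad_network": {"hiphop_affinity"}}

def share(matrix, permissions, requester, factor):
    """Return other sources' estimates for a factor, if the requester may see it."""
    if factor not in permissions.get(requester, set()):
        return None  # no permission: nothing leaves the matrix
    return {src: cells[factor] for src, cells in matrix.items()
            if src != requester and factor in cells}

print(share(matrix, permissions, "ad_network", "hiphop_affinity"))
print(share(matrix, permissions, "ad_network", "expecting_parent"))  # -> None
```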

I think what you’re imagining is the future of internet marketing.
