Big Data - Big Mistakes?

"Big Data will change the world." I guess so. Probably. Because Big Data makes it possible to focus on correlations that went undiscovered until now and that are simply stated, without any explanation of how they come about. That is actually one of the biggest points of criticism against Big Data: Big Data does not explain anything, because correlation is not causality.
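To see how easily a strong correlation can arise without any causal link, here is a minimal sketch of my own (a toy illustration, not taken from any real Big Data system): two completely independent random walks will often correlate strongly, even though neither influences the other.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Two independent random walks: neither series influences the other.
a = np.cumsum(rng.standard_normal(1000))
b = np.cumsum(rng.standard_normal(1000))

# Pearson correlation between the two unrelated series.
r = np.corrcoef(a, b)[0, 1]
print(f"correlation of two unrelated series: {r:+.2f}")
```

Rerunning this with different seeds regularly produces correlations far from zero, the textbook case of a spurious correlation.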

That is not the problem, in my opinion; on the contrary: computers become phenomenological machines, taking the world as it presents itself, consulting it and registering the answers. As a trained Gestalt therapist, I am delighted by the idea of phenomenological computers, of course. After all, Gestalt therapists work from phenomenological awareness: they direct attention to the things they see and hear, and leave the task of giving meaning to these things to their clients, one of the most effective methods there is. It allows true co-creation and the transition from a causal view to a holistic and systemic one. Big Data might have a similar potential.

There are two significant pitfalls, however. First: whoever wants to think systemically must have a personality strong enough to accept and bear "not knowing" and the complete absence of a causality to rely on. Those who don't will keep defending causality. Whoever wants to deal with Big Data must take a similar step. Without it there will always be a temptation to derive causality from correlation and to relapse into old patterns.

Second: Gestalt therapy works because there are at least two people in the room. The client receives input from a counterpart who stands outside of his or her individual patterns of cognition. With Big Data, that is hardly the case: somebody has to tell the machine how it is to connect the data. And computers, compared to humans, can only think more, not differently (see my article on out-of-the-box thinking). That is as if I told my therapist, consultant, or coach which questions to ask: little that is new will arise from such an approach.

Big Data is on everyone's lips, and we are fascinated to hear that a baby product company knew that one of its customers was pregnant before she knew it herself. Her buying patterns had pointed to it. Such examples make us think that we have finally made it: that at last we can predict how things will turn out using mathematical and statistical methods. And there we go, back into the causality trap. Back to square one, only with more powerful tools, and with a faith in predictability that is bigger than ever, because it is based on so much data. The motto is: "Data can be unreliable, so let's take much more data", hoping for an emergent quality like the ones found in physics: superconductivity if only it's cold enough, space warp if only gravity is strong enough, new material properties if we go down to a thickness of one atom. But it is completely unclear whether such effects will occur.
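The "just take more data" motto has a measurable downside, illustrated here with a deliberately simple sketch of my own (the sample and feature counts are arbitrary): if you scan enough candidate variables, some of them will correlate with any target purely by chance.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_samples, n_features = 50, 10_000

# Pure noise: the features have no relationship to the target at all.
X = rng.standard_normal((n_samples, n_features))
y = rng.standard_normal(n_samples)

# Correlation of every feature with the target.
corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])

print(f"strongest chance correlation among {n_features:,} noise features: "
      f"{np.abs(corrs).max():.2f}")
```

With only 50 samples, the best of 10,000 noise features typically reaches a correlation of about 0.5, which is exactly how more data can mean more opportunities to fool ourselves.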

There is a risk that we lose sight of the possibility that our conclusions might just as well be wrong: Big Data, big mistakes. We would do well to keep in mind, even more than in the past, that we are building models, and that it is we who are building them.

flying high