A few weeks ago, we had ChatGPT write our blog post to see what the AI machine knew about site selection. The results surprised us: for the most part, the AI bot wrote something we could stand behind. While it was somewhat boring and stiff, it got us thinking: will ChatGPT be making site selection decisions in the near future? Spoiler alert: we don’t think so.

When we asked ChatGPT to hand over the keys to the site selection problem, the first thing it brought up was demographic data, and of course, we agree.

Over the years, we have seen the rise and fall of many a company that believed that science, applied properly, could once and for all solve the problem. Black box models. Circular fields. Drive times. Quarterly updates. AI models. Machine learning. Mobile data. Each was latched onto like a rope thrown to a man overboard. Some of these were good innovations, some not so much, but the trouble is that each gets touted as the complete solution.

Machine learning algorithms are not new, but only in recent years has computing technology made them feasible. In theory, the computer can consider, from an unbiased perspective, far more intricacies of a problem than the hopelessly biased and frail human analyst ever could.

The machine is limited by the quality and sufficiency of the underlying data and by the range of experiences it is drawing on to predict. It must learn, by trial and error, what works and what doesn’t. How does the machine know what data to bring to the problem? What data is reliable and what isn’t? You can’t throw the entire knowledge base of humanity at the problem and let the machine sort it out. Instead, you need to selectively guide the computer toward the goal, and that means the human, biases and all, makes major decisions a priori about the scope of the model. That is to say, absent an infinite amount of money and computing power, the unbiased methods are biased from the outset.
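To make that concrete, here is a minimal, hypothetical sketch. The column names, file names, and the choice of a scikit-learn model are our own illustrative assumptions, not anyone’s production pipeline; the point is simply that before the machine ever “learns” anything, a human has already decided which handful of variables it is allowed to see.

```python
# Minimal sketch: the "unbiased" model only ever sees the features a human chose.
# All column names, file names, and the model choice below are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# The human analyst decides, a priori, what the machine gets to consider.
CHOSEN_FEATURES = [
    "population_1mi",      # demographics inside a hand-drawn ring
    "median_income",
    "drive_time_minutes",
    "competitor_count",
]
# Everything left off this list -- co-tenancy, visibility, local politics,
# the feel of the corner -- is invisible to the model by construction.

sites = pd.read_csv("historical_sites.csv")   # hypothetical training data
X = sites[CHOSEN_FEATURES]
y = sites["annual_sales"]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)

candidates = pd.read_csv("candidate_sites.csv")
candidates["predicted_sales"] = model.predict(candidates[CHOSEN_FEATURES])
```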

At the end of the day, the model will be met with great fanfare, and all will be well until the day the model doesn’t work. When the model doesn’t work, the answer is that the machine must learn from its mistakes. As we have said in the past, education is expensive, and it is better paid for on someone else’s dime than on yours.

Why? Because site selection and network planning are both art and science. Good science is necessary but not sufficient to solve the problem. What is needed is a great team of humans who can guide the science, interpret the results, and then make the decisions. Use the model as a guide for “roughly where,” use field visits to look at the intricacies and non-quantifiable aspects of the site, and then allow the all-important human intuition to overrule anything the machine says.

We know that many industries are worried about how ChatGPT and other AI learning machines will impact jobs in the future, but those in demographic data and commercial real estate should not be concerned: humans will always be needed in our lines of work.

As we have always said, we would take one crusty and opinionated old real estate analyst over machines any time. So, ChatGPT and its brethren? Sure, use them. Use any data and analytics tools you can find and afford. Then let your team of analysts decide. The real investment in this industry has always been, and will always be, those talented individuals who just seem to know when things are right. And when they aren’t.