I started my undergraduate degree in geography in the late 1970s, but lest you think I roamed free with the dinosaurs, I admit to being slightly underage for a college freshman, and most species were likely already extinct. It was an exciting time in geography, as the 1960s quantitative revolution was about to meet the newly developing field of “computer cartography”. So, two stories from my early years in what we now call the geospatial industry…

Until the early 1990s, I had a map on my office wall that I often looked at to reflect on how far the GIS world had progressed. The map, produced using SYMAP, displayed income by census tract. Shading on a dot-matrix printer was accomplished by printing a character, then repeatedly backspacing and printing a different character over top of it to achieve a shaded appearance. The program was fed by punch cards, and the card readers had a cruel tendency to randomly destroy them. It was hideous, but for its time it was a pretty impressive achievement.

By this time, we had a range of desktop mapping applications like AtlasGIS and MapInfo, which were by comparison simple to use. And, believe it or not, they even produced color maps. The map remained on the wall because every time a new employee came in to whine about how hard it was to create a nice map, I would simply point towards the wall, grunt, then point them to the door. A very effective complaint suppressor.

Sadly, I no longer have that map, but I did manage to find an image of a map produced with SYMAP, showing Connecticut by county with some unknown shading variable:

As impressive as this map may be, the contrast with today’s cartographic software needs no further comment, except to note that the ease with which one can create a map is, sadly, uncorrelated with its quality and usability.

The second example comes from an ill-fated experiment in the late 1980s. There was a new player in the GIS world called Tydac, which had an apparently impressive quadtree-based raster GIS (SPANS) running on a Sun Microsystems workstation. They were eager to see how it worked in a commercial environment and lent us the appropriate hardware, and, believe it or not, threw in one of their senior programmers for a month.

My first experiments went quite poorly. Having successfully processed and displayed the provincial boundaries, I decided to throw the entire set of several hundred thousand Canadian postal code points at it. It of course failed miserably, repeatedly, with each failure requiring a rather lengthy workstation reboot. The cycle of reboots was accompanied by groans from the borrowed programmer, who kept asking me “why would you want to do that anyways?” The fact that they didn’t understand why anyone would was probably part of the reason the software never exactly flourished.

We gladly jettisoned the Sun workstation and the software at the end of the test period, and while we tried to retain our befriended senior programmer, he eventually escaped our custody and we never saw him again. We hear he decided on a more satisfying career selling ice cream treats to small children.

On a more practical level, we eventually managed to map postal codes and eagerly set off to show retailers why they needed this newfangled wizardry. Live demonstrations have always been risky, and we quickly discovered that one ‘wet’ postal code (that is, one geocoded beautifully into the middle of Toronto harbor) is enough to allow the skeptical to entirely dismiss the technology and, worse yet, the data behind it. After that, they questioned every demographic report they got from us.

From this we learned our first major lesson, one that remains relevant to this day: displaying raw data means showing its errors, and a single, blatant error is enough to discredit an entire dataset.

To the point, then, we offer some hopefully obvious reflections on the changes in our little corner of the world over the last four decades:

First, there has been an explosion of geographically based data, both public and private, and this has reduced the cost of data by several orders of magnitude. Governments worldwide have adopted “open” data policies, which means that the very expensively created base datasets (topography, administrative boundaries, roads) are freely available. Commercial data costs have come down sharply as a result, since it is difficult to compete with free data that is only slightly out of date.

Second, the dramatic increase in computing speed and storage has made the visualization of large datasets practical and affordable. A map showing a million points no longer crashes an expensive piece of software; it renders almost instantly. Mapping applications are now snap-in components of larger business processes, often invoked with a single API call.
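
To make the “single API call” point concrete, here is a minimal Python sketch of the pattern. The endpoint URL, query parameter, and response fields are entirely hypothetical; any real geocoding or mapping provider will differ in the details.

```python
# A hypothetical example of mapping-as-a-component: one HTTP call to a
# geocoding service, with the result dropped straight into a larger process.
# The URL and response shape below are invented for illustration only.
import requests

GEOCODE_URL = "https://geocoder.example.com/v1/geocode"  # hypothetical endpoint


def geocode(address: str) -> tuple[float, float]:
    """Return (lat, lon) for an address via a single API call (assumed response shape)."""
    resp = requests.get(GEOCODE_URL, params={"q": address}, timeout=10)
    resp.raise_for_status()
    body = resp.json()
    return body["lat"], body["lon"]


if __name__ == "__main__":
    lat, lon = geocode("25 York Street, Toronto, ON")
    print(f"Geocoded to ({lat:.5f}, {lon:.5f})")
```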

Third, data quality has improved dramatically. Mapping location data has forced improvements in geocoding and base map quality, and has pushed many datasets toward greater spatial detail. A single bad coordinate pair that once went unnoticed in a database is now the first thing a user sees.
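
As an illustration of how cheaply the gross errors can now be caught, here is a small Python sketch that flags coordinate pairs falling outside a plausible bounding box before anyone maps them. The sample records and the rough extent of Canada used here are invented for the example; a bounding-box check catches a failed geocode at (0, 0), while catching a ‘wet’ point in the middle of a harbor still requires an overlay against a water polygon.

```python
# A minimal sketch of coordinate sanity-checking, assuming records carry
# latitude/longitude fields. The bounding box (a rough extent of Canada)
# and the sample records are illustrative only.

LAT_MIN, LAT_MAX = 41.0, 84.0      # approximate latitude range of Canada
LON_MIN, LON_MAX = -141.0, -52.0   # approximate longitude range of Canada

records = [
    {"postal_code": "M5J 1A7", "lat": 43.64, "lon": -79.38},  # downtown Toronto
    {"postal_code": "K1A 0A6", "lat": 0.0,   "lon": 0.0},     # failed geocode
    {"postal_code": "V6B 1A1", "lat": 49.28, "lon": 123.11},  # dropped minus sign
]


def out_of_bounds(rec: dict) -> bool:
    """True when a record's coordinates fall outside the expected extent."""
    return not (LAT_MIN <= rec["lat"] <= LAT_MAX
                and LON_MIN <= rec["lon"] <= LON_MAX)


for rec in records:
    if out_of_bounds(rec):
        print(f"check {rec['postal_code']}: ({rec['lat']}, {rec['lon']}) "
              "is outside the expected extent")
```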

Fourth, spatial analytic methods, largely developed decades ago, are now viable for practical use. Routing applications, developed in the 1970s, are commonplace and a required element of most business applications. Hot spot analysis, based on spatial statistics developed long ago, is likewise practical and nearly routine.
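
For the curious, a bare-bones version of hot spot analysis now fits in a few dozen lines. The sketch below computes the Getis-Ord Gi* statistic from scratch with numpy for a handful of invented points; the coordinates, attribute values, and the 1,500 m neighborhood distance are illustrative assumptions, not anyone’s production method.

```python
# A minimal sketch of hot spot analysis via the Getis-Ord Gi* statistic,
# computed from scratch with numpy. Points, values, and the neighborhood
# distance are invented for illustration.
import numpy as np

# Hypothetical attribute values (e.g., sales) at projected x/y coordinates in meters.
xy = np.array([[0, 0], [500, 200], [900, 100], [300, 700],
               [5000, 5000], [5200, 4800], [8000, 1000]], dtype=float)
values = np.array([120.0, 150.0, 130.0, 140.0, 20.0, 25.0, 60.0])

n = len(values)
x_bar = values.mean()
s = np.sqrt((values ** 2).mean() - x_bar ** 2)

# Binary spatial weights: neighbors within 1,500 m, self included (the * in Gi*).
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
w = (dist <= 1500.0).astype(float)

gi_star = np.empty(n)
for i in range(n):
    wi = w[i]
    sw = wi.sum()
    numerator = wi @ values - x_bar * sw
    denominator = s * np.sqrt((n * (wi ** 2).sum() - sw ** 2) / (n - 1))
    gi_star[i] = numerator / denominator

# Large positive scores indicate hot spots; large negative scores, cold spots.
for (px, py), z in zip(xy, gi_star):
    print(f"point ({px:6.0f}, {py:6.0f})  Gi* = {z:+.2f}")
```

In practice one would reach for an established spatial statistics library such as PySAL rather than hand-rolling the statistic, but the point stands: the method is decades old; only its everyday feasibility is new.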

Despite this, much of the analysis currently undertaken does not represent much of an advance; it just happens faster. We at AGS hope to continue to be a significant part of this evolution with product innovations like our site signature analysis and distance-adjusted trade areas. Geospatial analysis is about to get interesting!