OpenStreetMap is the free, open-source map of the world created by volunteers all over the globe. The US chapter, guided by its Board of Directors, supports the OpenStreetMap project in the United States through education, fostering awareness, ensuring broad availability of data, continuous quality improvement, and an active community.
Since our Board is made up of elected volunteer positions, our time to enact our larger goals is limited, as these efforts often require extensive coordination, planning, and execution. To provide the needed support to the US mapping community, we are hiring our first-ever Executive Director.
This is truly a unique time for the OpenStreetMap community, as the Executive Director will have a chance to make a difference at the local, national, and international levels. I recently asked my fellow Board Members why they are excited about this new role.
Ian Dees – Having an executive director means we can expand our ability to build the OpenStreetMap community in the US. There are so many great ideas, but none of us has the time to coordinate them. The executive director will help us with these things and keep OpenStreetMap US moving forward.
Maggie Cawley – Prior to joining the board, I had only the slightest idea of how much it took to run a small nonprofit. Board members and generous community members can only do so much with their limited volunteer time. With an Executive Director at the helm, there will be dedicated support not only for the State of the Map US conference and basic admin tasks, but, more importantly, someone to lead a broader organizational strategy and fundraising efforts, and to support local communities and more projects. It will be a change, but hopefully one that positively impacts the broader OpenStreetMap community.
Bryan Housel – OpenStreetMap is in an amazing position right now. People depend on up-to-date maps more than ever, and OpenStreetMap data is being used in popular apps like Snapchat, Pokemon Go, Tinder, and by companies like Facebook, Apple, Microsoft, IBM, and Uber. I’m inspired by what we’ve been able to create with an engaged community of volunteers who care about making the map around them as accurate as possible. Bringing on an executive director can help channel this energy into new projects to measure and improve data quality, communicate our successes, and grow and strengthen our community of mappers.
So, what are you waiting for? Are you interested? Know someone who’d be a great fit? We encourage you to share this announcement within your networks to help us find the right candidate.
The Executive Director Position
The duties of this role cover a broad scope, encompassing organizational programs and strategy, as well as fundraising, finance, and marketing. This position will require a high degree of flexibility and creativity, and a collaborative and inventive orientation.
The successful candidate will be mission-driven and passionate about the idea of creating and applying open, accurate geospatial data for the world. This is a role with ample room for growth and creativity, and we welcome candidates from a diversity of backgrounds.
Here are some helpful links with more information about the position and the application form:
Someone somewhere, with a similar addiction to being busier than humanly possible, said that when a door opens, you should walk through it. In other words, when opportunity knocks, if you’re at all interested, you should pounce. I guess that’s what I was thinking about this time last year when Atanas Entchev reached out to the GeoHipster advisory board to see if anyone was interested in undertaking an effort to make GeoHipster a business independent from his previous ventures. I immediately said yes, and convened a hangout with several other board members to go over the options.
Fortunately for me, two other board members, Jonah Adkins and Amy Smith, also expressed interest in taking on new duties, and Atanas agreed to stay on once he knew he wouldn’t have to run the entire operation himself. It took a while for us to figure out the optimal formal business structure: a single-member LLC registered in Minnesota, which allows me to take over most operational and financial duties while the others focus on communications, editorial duties, and creative efforts. And yes, I fully realize and enjoy the irony that drips from the phrase, “CEO of GeoHipster, LLC”…and the fact that our fiscal year will start on Groundhog Day.
On the outside, however, very little will change about GeoHipster as a website and a collaborative effort. Our mission remains the same, we still rely on volunteer authors to help us generate content, and our editorial policy is unchanged. By undertaking this transition behind the scenes, we hope the result is a more sustainable GeoHipster, so we can continue interviewing interesting geohipsters from around the world, and our readers can learn from their experiences.
A few of my family members and colleagues have asked me why I decided to do this. Perhaps I was inspired by my good friend and fellow dad Justin Bell, who holds down a day job, plays in two bands, owns a side business, and teaches classes at night. I figure if he can make time for all those things plus family time, I can make time for something that I enjoy. And ever since that first interview I conducted with David Bitner, I’ve very much enjoyed my involvement with GeoHipster. It’s a major change of pace from my day job, a place where I can promote my tutorial on REST endpoints, and probably the only way I’ll ever be able to use a basin wrench as a metaphor.
Or maybe it’s all just a ploy to score another GeoHipster t-shirt. Might as well look stylish when walking through that door that just opened.
Will Cadell is the founder and CEO of Sparkgeo.com, a Prince George-based business which builds geospatial technology for some of the biggest companies on Earth. Since starting Sparkgeo, he has been helping startups, large enterprises, and non-profits across North America make the most of location and geospatial technology.
Leading a team of highly specialized, deeply skilled geospatial web engineers, Will has built products, won patents, and generally broken the rules. Holding a degree in Electronic and Electrical Engineering and a Masters in Environmental Remote Sensing, Will has worked in academia, government, and in the private sector on two different continents, making things better with technology. He is on the board of Innovation Central Society, a non-profit society committed to growing and supporting technology entrepreneurs in North Central BC.
I’m not really old enough to reflect on cartography and its “nature”; however, I want to comment on a trend I see in the modern state of our art and suggest a pattern that points back to an old truism.
At Sparkgeo we have a unique position in the market. Let me clarify that position: we create web mapping products, meaning cartographic or simply geographic products built for people to consume primarily via web browsers. Additionally, we are vendor agnostic and continue to push the idea of geographic excellence & client pragmatism rather than particular brands. We work with organizations as diverse as financial institutions, startups, big tech, satellite companies, and non-profits. In essence we build a lot of geographic technology for a lot of very different organizations. We have also created paper maps, but in the last half decade I haven’t created a paper product. Not because we haven’t pursued projects of this nature, but because no one has asked us to. To be clear, we have created signage for trail networks and such, but our activity with personal mapping products has moved to the web almost completely.
Telling. But not entirely surprising given that maps are largely tools and tools evolve with available technology.
Our position in the market, therefore, is as a company creating cartographic products using the medium most pertinent to the users of those products. In the vast majority of cases, those users are on a computer or, most likely, a mobile device.
Maps are of course defined by the relationship between things and people: an art form which links people to events and things on our (or indeed any other) planet. People and places, my friends. This will be obvious to most of my readers here, but what may be less obvious is the linkage our industry must therefore have to cities. More so, that cities, and indeed urbanization, have crafted and will continue to craft the art of cartography for our still-young millennium.
I say this whilst flying from one highly urbanized place to another, but also whilst calling relative rurality home. I am a great fan of open space, but even I can see that large groups of people are sculpting the future of our industry. It could be argued that cartography was originally driven by the ideas of discovery & conquest. Conquest, or our more modern equivalent, “defense”, is still very much an industrial presence. Subsequently, it could be argued that ‘GIS’ was driven by the resource sector; indeed, much effort is still being undertaken in this space. I would have, until the last half decade, still argued that geospatial was largely the domain of those in the defense trade and the resource sector. Not so now. We have become an urban animal, and with that urbanization it is clear that the inhabitants and administrators of our cities will drive geospatial. Cities and their evolution into smart cities will determine how we understand digital geography.
Let’s take a look at some of the industrial ecology which has enabled this trend. My hope is to engender some argument and discussion. Feel free to dissent and challenge; we are all better for it. I want to talk briefly about five key features of our environment which have individually, but more so together, altered the tide of our industry.
It is clear that the general trend has been, and continues to be, for people to move toward cities (https://en.wikipedia.org/wiki/Urbanization_by_country). Now, though I dispute that this is necessary (https://www.linkedin.com/pulse/location-life-livelihood-will-cadell), I cannot ignore the evidence that clearly describes the mass migration of people of most nationalities towards the more urbanized areas of their worlds. Our pastoral days have been coming to an end for some time. We will of course always need food, but the vast majority of Earth’s population will be in or around cities. The likelihood of employment, economy, and *success*, it seems, is central to this trend.
Where there are people there is entrepreneurism, administration and now, devices. Entrepreneurism and devices mean data; administration and devices mean data.
Our world is becoming urbanized and our urbanized world is connected. Our devices, our sensors, are helping to augment our realities with extra information. The weather of the place we are about to arrive at, the result of a presidential debate, the nearest vendor of my favorite coffee and opinions disputing the quality of my favorite coffee. Ah, the Internet. My reality is now much wider than it would have been without my device. Some might argue shallower too, but that is a different discussion. The central point here is that my device detects things about my personal universe and stores those data points in a variety of places. I now travel with three devices: a laptop, tablet and phone. This would have been ludicrous to me a decade ago, but much of what I do now would have been ludicrous a decade ago. We truly live in the future. Much of that future has been enabled by devices and our subsequently connected egos.
Devices capture data. Really, all a device is, is a node attached to a variety of greater networks. Whether those networks are temperature gradients, a telephonic grid, home wifi, elevation, or a rapid transit line, the device is simply trying to record its place in our multidimensional network and relay that in some meaningful way back to you and, likely, a software vendor. Devices capture and receive data on those networks. That data could be your voice or a location, and that data could be going A N Y W H E R E.
But the fact that the data is multidimensional and likely has a location component is critical for the geospatially inclined amongst us. The crowd-sourced effect, coupled with the urbanization effect, equals enormous amounts of location data. That is the basic social contract of consumer geospatial.
Of course, the abilities of our devices are magnified by connectivity, wifi or otherwise. Although Sparkgeo is still creating online/offline solutions for data capture, these are becoming more the exception than the rule. Connectivity is a modern utility, and it is a competitive advantage that urban centers have over rurality. With increased connectivity we have greater access to data transfer; connectivity thus enables geographic data capture. Its presence encourages the use of devices, which capture data that is often geographic. Urban areas have greater access to connectivity due to the better economies of scale for the cellular and cable companies (who are quickly becoming digital media distribution companies). It is simple, really: more people in less area equals more money for less infrastructural investment. For the purposes of this article, we just need to concede that the multitude of devices discussed above are more connected for less money in cities than anywhere else.
Compute is the ability to turn the data we collect into ‘more’, whatever that might mean; perhaps some data science, or ‘analysis’ as we used to call it, perhaps some machine learning. In essence compute is joining data to a process to achieve something greater. Amazon Web Services, and subsequently Microsoft’s Azure and Google’s Cloud platforms, have provided us with amazing access to relatively inexpensive infrastructure which supports the ability to undertake meaningful compute on the web. Not enough can be said about the opportunity that increased compute on the web provides, but consider that GIS has typically been data-limited and RAM-limited. With access to robust cloud networks, those two limitations have been entirely removed.
People and devices mean data. Without doubt, lots of people and lots of devices mean lots of data, but there is also likely a multiplier effect here as we become accustomed to creating data via communication and consumption of digital services. As an example, more ride-sharing happens in urbanized locations, so more data is created in that regard. Connectivity to various networks enabled those rides. Compute will be applied to those recorded data points to determine everything from the cost of the journey to the impact on a municipal transit network and congestion. At every step in that chain of events more data was created, obviously adding more data volume, but also creating greater opportunity for understanding; what comes of that remains to be seen. Beyond consumer applications, however, city administrations and their data also play deeply into this equation.
With these supportive trends we have seen two ends of our industry grow enormously. It is a wonderful, organic symbiosis really.
On one hand we have the idea of consumer geospatial (Google Maps, Mapbox), which has put robust mapping platforms in the hands of everyone with an appropriate device. Consumer geospatial has enabled activities like location based social networks (Nextdoor), location based advertising (Pokemon Go), ride sharing (Uber, Lyft), wearables (Fitbit, Apple watch), quantified self (Strava, Life360), connected vehicles (Tesla, Uber), digital realty (Zillow), and many others.
On the other hand we have seen the rise in the availability of data, and in particular open data. Open data is the publishing of datasets describing features of potentially public interest, such as financial reports, road networks, public health records, zip-code areas, census statistics, detected earthquakes, etc.
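The value-add pattern described here is usually mechanical: pull a published dataset, filter it, and build something on top. As a loose sketch, here is what consuming a city-style open dataset (GeoJSON is a common publishing format) might look like; the feature names and values below are invented for illustration, not drawn from any real portal.

```python
import json

# A hypothetical open-data record: a tiny GeoJSON FeatureCollection of the
# kind a municipal open-data portal might publish (names/values invented).
open_data = """
{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature",
     "properties": {"name": "Station 3", "magnitude": 2.1},
     "geometry": {"type": "Point", "coordinates": [-122.42, 37.77]}},
    {"type": "Feature",
     "properties": {"name": "Station 7", "magnitude": 4.8},
     "geometry": {"type": "Point", "coordinates": [-122.27, 37.80]}}
  ]
}
"""

collection = json.loads(open_data)

# "Adding value": filter the raw data to significant events and pull out
# the location component for mapping or analysis downstream.
significant = [
    (f["properties"]["name"], f["geometry"]["coordinates"])
    for f in collection["features"]
    if f["properties"]["magnitude"] >= 4.0
]
print(significant)  # [('Station 7', [-122.27, 37.8])]
```

In practice the string above would be an HTTP response from a portal endpoint, and the filtering step is where a business layers its own value onto the public data.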
The great promises of open data are increased transparency and an enabling effect: the enabling of entrepreneurism based on the availability of data to which value can subsequently be added. Typically, bigger cities have more open data available. This is not always true, and the developing world is still approaching this problem, but in general terms a bigger population pays more tax, which supports a bigger municipal infrastructure, which therefore has the ability to do ‘more’. In recent discussions I am still asked whether those promises are being kept, and whether the investment is worth it. The idea of transparency is ‘above my pay grade’, but I can genuinely attest to the entrepreneurial benefit of open data, though that benefit might not be realized in the geographic community where the data is published. As a community of data consumers, however, we do benefit through better navigational aids, more robust consumer geospatial platforms, and ‘better technology’. At Sparkgeo we have recently built a practice around the identification, assessment, cleansing, and generalization of open data, because demand for this work never ceases. It’s clear that our open data revolution is in a somewhat chaotic phase, but it is very much here to stay.
Our geospatial technology industry has taken note too. Greater emphasis from Esri on opening municipal datasets through their http://opendata.arcgis.com/ program is an interesting way for cities that may already be using Esri products to get more data “out”. Additionally, Mapbox Cities (https://www.mapbox.com/blog/mapbox-cities/) is a program also looking at how to magnify the urban data effect. Clearly there is industrial interest in supporting cities in the management of ever-growing data resources. Consider that Uber, an overtly urban company, is building its very own map fabric.
If we combine the ideas of consumer geospatial and those of open data, what do we reveal? Amongst other things we can see that more & better data result in many benefits for the consumer, typically in the form of services and products. But we can also see that too much focus on the consumer & crowd based data can be problematic. Indeed, the very nature of the ‘crowd’ is to be less precise and more general. The ‘mob’ is not very nuanced, yet. For crowd based nuance, we can look to advances in machine learning and AI. In the meantime, it’s great to ask the crowd if something exists, but it’s terrible to ask the crowd where that thing is, precisely.
> “is there a new subdivision?” – “yes!”
> “When, exactly, should my automated vehicle start to initiate its turn to enter that new subdivision?” – “Now, no wait, now… stop, go back”
Generalization and subsequent trend determination is the domain of the crowd; precision through complex environments is something much more tangible, especially if you miss the turn. As we move towards our automated vehicle future, once that vehicle knows a new subdivision exists, then conceivably it can use on-board LiDAR to provide highly detailed data back to whomever it may concern. This is really where smart cities need to enter our purview. Smart Cities will help join the consumer web to the municipal web, and indeed numerous other webs too. Not to be too facetious, but my notion of consumer geospatial could also be a loose description of an Internet of Things (IoT) application. Smart cities are in essence an expansive IoT problem set.
It’s clear that cities with their growing populations have in-part driven our understanding of people and digital geography through greater data volume. But as we push harder into what a future smart city will look like, we will also start to see even greater multiplier effects.
Mike Dolbow is a GIS Supervisor for the State of Minnesota and the operations manager of the Minnesota Geospatial Commons. He has served on the GeoHipster Advisory Board since 2014.
This past summer, I put a new set of stairs on the end of my deck. In the grand scheme of home improvement, it was a small job, but I’d never done anything like it before, so there was a lot of cursing, achy muscles, and extraneous trips to the hardware store. But that’s how I roll: chuck the manual and learn by doing. I’ve found that understanding your own learning style is a surprisingly underrated secret to success.
In a way, I learned that secret from my dad, who also made sure I had the skills to figure out how to get a job done with the tools available. He told cautionary tales about how my uncle would often dismiss a potential project because he thought “you need a special tool”, or it simply couldn’t be done. The way my dad saw it, that was a poor excuse for not doing the job. Sure, having a “special tool” would make it easier, maybe even faster. But if you had the will, you could find a way to get it done.
My uncle passed away years ago, but my dad and I still honor his memory with a running joke about “the special tool”. You see, now that he can afford some of those tools, he takes a sick pleasure in buying them for me. That way I don’t have any excuses when it comes to my own home improvement projects.
For example, the first Christmas I was a homeowner, he gave me a basin wrench. I looked at it and was like, “Dad, what the heck is this thing?” He replied, “Mike, that’s the special tool!” He proceeded to explain how important it would be in the upcoming faucet replacement jobs I had planned. And once I looked at the old, rusty supply-line nuts under the sinks of my 1915 St. Paul home, I knew he was right: the basin wrench would save me tons of time.
But if I hadn’t had it, I would have found a way.
Years later, he gave me his old carpenter’s square shown below. Could I have drawn that angle on the 2×4 without it? Sure. Did it save me time because I had it? Absolutely. And I’m sure there are also a bunch of hyper-specific tools that might have saved me time in adding those stairs. But if I had waited around to acquire every single special tool that would possibly aid in the process, I’d probably still be working on it now.
So, what does this mean in my day job at the intersection of IT and geospatial?
Well, if it’s not obvious by this point, I’m going to be wary of anyone who says they absolutely need software X,Y, or Z in order to get a job done. Listen, I’m going to tell you my requirements for a finished product. I’ll try to get you the tools you need to get there, but ultimately I don’t care how you get there, you have to figure it out.
With developers, I cringe if I hear them say stuff like, “I need a fully loaded Eclipse IDE with a local JBoss server and plugins for Git, Maven, Spring, and Hibernate pre-configured.” Well, for one, I barely know what some of those things are, but for the small web apps we build, I’m thinking you’re only going to need about half of them. Heck, if you really know what you’re doing, you ought to be able to write a decent app with Notepad++. But I simply can’t guarantee that you’re going to have the exact suite of tools you had at your last job, and you’re going to have to adjust.
On the geospatial side, I get concerned if someone says they need the full Esri stack to get anything done. “No, I don’t just need ArcGIS Desktop, I need the Advanced level. I also need an ArcGIS Server install with enterprise ArcSDE at 10.2 and an AGOL subscription.” Well, that adds up to some pretty hefty maintenance fees very quickly. If I just need a map of our office on our website, most of those tools are overkill.
> I was looking at a lot of painful angle cuts for my floors, then my FIL was all like, “Say hello to my LIL FRIEND!” pic.twitter.com/E28tmlBCel
Ultimately, I guess I can’t blame people for wanting to work with the best tools available: whether you’re talking Esri, Microsoft, Oracle, Mapbox, or Google, there’s some amazing stuff to work with out there in the IT and spatial worlds. Having access to so many amazing tools is a tremendous modern luxury. And if you’ve had a tool before, I know it can be hard to go without it. But sometimes, your favorite “special tool” won’t be available, and I might not be in a position to change that, even if the reason is simply “bureaucracy”. So let’s make sure we know enough about the problem to fix it with whatever we have at hand. After all, we’ll never run out of problems to solve.
Besides, the best tool you’ll ever have available is the one you’ve been using since you were born: your brain. So let’s put it to work, get the job done, and learn something along the way.
Maybe next time my dad will give you the basin wrench.
Paul Ramsey is a Solutions Engineer at CartoDB. He has been working with geospatial software for over 15 years: consulting to government and industry, building a geospatial software company, and programming on open source. He co-founded the PostGIS spatial database project in 2001, and is currently an active developer and member of the project steering committee. In 2008, Paul received the Sol Katz Award for achievement in open source geospatial software. Paul speaks and teaches regularly at conferences around the world.
I’m writing this article for GeoHipster almost simultaneously with the Esri User Conference (UC) plenary session, which feels appropriate. If being a “hipster” means being in some way unconventional, then I’m missing out on the peak event of the “conventional” GIS community, and what could be more “GeoHipster” than that?
It’s been a long time since I attended the UC, probably 10 years or so, and the dominant feeling I remember coming away from the last event was one of absolute dejection and depression.
I was at the time, as I am now, a proponent of doing things “differently”, of exploring options other than the dominant enterprise mainstream, and it’s very hard to sit in a room full of over 10 thousand people applauding the dominant enterprise mainstream and still think your ideas have much merit. And as much as I enjoy GeoHipsterism and all its proponents, one of the dangers of our little echo chamber is that we forget just how fundamentally irrelevant our ideas are to the actual practice of professional GIS in the world.
The source of my dejection while sitting in the UC plenary had a lot to do with the futility of my position: here were 10K folks who would never care a whit about what I was working on. Here also was a company with so many resources that it could afford to waste the efforts of huge development teams on products and ideas that would never pan out.
That particular plenary, back in 2005, included lots of 3D technology that has never seen the light of day since, and felt like a festival of technological spaghetti throwing. There was not a wall left unfestooned with spaghetti. And it wasn’t random either. They were comprehensively going down every possible track of future technology, even though 75% of them were going to end up dead-ends, just to avoid missing out on the one track that turned out to be relevant for the future.
And this brought yet more dejection. Even, if by some amazing chance, I did hit on an idea or technology that was important enough to gain a market presence or interest, Esri would turn their vast development resources upon the problem and render it an also-ran in short order.
Why even bother?
It took me about a month to recover.
Since what I was working on then and what I’m working on now is open source, my ability to keep working on and growing it is never at issue. Open source can’t be driven out of business. What is at issue is relevance: whether the work is helpful, worthwhile, and useful to people making the world a better place. Even with 99% of the professional geospatial world locked up and working in the Esri ecosystem, the remaining 1% (pick whatever numbers you like) is still a lot of folks, and a lot of those folks can do things with open source that they could never do with Esri.
So I saw the NGOs and First Nations and academics and innovative governments still doing cool things with open source, and I got happy again and kept soldiering on.
Fast forward ten years.
Heading into this year’s UC, there was a brief Twitter storm around Esri’s use of vector tiles, which is worth following through several of the conversation chains if you have the time.
> very exciting to be working on the open source project @esri is quietly rebranding as their product
In an earlier era, it would not have been hyperbole to state that having Esri use your code/steal your idea guaranteed its relevance in ways that having them ignore it never would. Andrew Turner once told me that one of the big plusses of being acquired by (big, bad) Esri was that his ideas had a much better impact than they did when he was working in his (teeny, tiny) start-up.
But this is a new era, and the people Esri will be serving with their adoption of Tom’s vector tile technology are almost completely separate from the people Tom’s company (Mapbox) will be serving with that technology. There truly is a win-win here. There’s also lots of relevance to be had beyond the now tiny world of “professional” GIS.
And this is where the “GeoHipster” thing gets a little weird. If being a “hipster” means standing outside the mainstream, what becomes of your status when the former mainstream itself becomes marginalized? When I read the list of interviewees and their interviews, it’s clear that mostly we “geohipsters” share a history within the old mainstream and that we have to varying degrees decided to look beyond that mainstream.
But with the growth of the industry “geohipsters” are becoming a minority within a minority. The new kids can’t identify, because they’ve never had to break out of the old paradigm. Tom MacWright, whom I quoted above, and who has already built so much amazing open source geospatial software in his career, has no experience with Esri tools. Outside the solutions engineers, none of my colleagues at CartoDB have any Esri experience either.
To call Esri the dominant company in our field these days is to radically misread what our field actually is, and who is leading it. What technology has changed our field in the last ten years?
Globe visualization and ubiquitous access to imagery (Google/Keyhole)
Mass access to mobile location (Apple/Samsung)
Mobile maps and vector mapping (Google/Apple)
Oblique imagery and model extractions (Microsoft)
Esri isn’t calling the tune, and neither is open source — we’re all just fast followers now.
So I can take some comfort that — some 10 years after I sat in the Esri UC plenary and wondered why I bother to get up in the morning — some poor Esri exec is going to have to sit in the Google I/O plenary and have the same experience. The jungle is very very large, and there’s always a bigger gorilla.
A self-professed map addict, Gary Gale has worked in the mapping and location space for over 20 years through a combination of luck and occasional good judgement. He is co-founder and director of Malstow Geospatial, a consultancy firm offering bespoke consulting and services in the geospatial, geotechnology, maps and location based services fields. A Fellow of the Royal Geographical Society, he tweets about maps, writes about them, and even occasionally makes them.
This is very much an opinion piece of writing, and as such I want to start with a disclaimer. In the past I’ve worked on Yahoo’s maps, on Ovi/Nokia/HERE maps, and these days I’m freelancing, which means the Ordnance Survey — the United Kingdom’s national mapping agency — is my current employer. What follows is my opinion and views, not those of my current employer, not those of previous employers, and certainly not those of future employers. It’s just me. So with that out of the way and stated upfront, I want to opine on OpenStreetMap…
Dear OpenStreetMap, you are truly amazing. Since you started in 2004 with those first few nodes, ways, and relations, you have — to paraphrase a certain Dr. Eldon Tyrell — burned so very, very brightly. (Those of you who know your Blade Runner quotes will know that just after saying this, Tyrell was killed by Roy Baty; I’m not suggesting that anyone should take this literally.)
Just looking at the latest set of database statistics (over 4.6 billion GPS points, over 2.8 billion nodes, over 282.5 million ways, and 3.2 million relations as of today’s figures) shows how impressive all of this is.
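For readers unfamiliar with those terms, the whole of OSM is built from just three element types: nodes (points), ways (ordered lists of nodes) and relations (groupings of other elements). A minimal sketch in Python — hypothetical and deliberately simplified, since real OSM elements also carry ids, versions, changesets and timestamps:

```python
from dataclasses import dataclass, field

# Simplified, illustrative model of OSM's three core element types.
# Real OSM elements also have ids, versions, changesets and timestamps.

@dataclass
class Node:
    lat: float
    lon: float
    tags: dict = field(default_factory=dict)

@dataclass
class Way:
    nodes: list                      # ordered list of Node references
    tags: dict = field(default_factory=dict)

@dataclass
class Relation:
    members: list                    # (role, element) pairs, e.g. outer/inner rings
    tags: dict = field(default_factory=dict)

# A two-node residential street, as a mapper might model it:
a = Node(51.5074, -0.1278)
b = Node(51.5080, -0.1270)
street = Way(nodes=[a, b],
             tags={"highway": "residential", "name": "Example Street"})
```

Everything in those billions-strong statistics, from footpaths to country boundaries, is some combination of these three building blocks plus free-form key/value tags.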
The maps and data you’ve created are a key element of what’s today loosely termed the geoweb, enabling startups to create maps at little or no cost, allowing some amazing cartography to be created, stimulating research projects, and allowing businesses to spring up to monetise all of this data — some successfully such as MapBox, some less successfully, such as CloudMade.
After reading all of this amazement and adoration, you’re probably expecting the next sentence to start with “But …”, and I’m afraid you’d be right.
But times change, and the mapping and location world we live in has changed rapidly and in unexpected ways since OSM started in 2004. In just over a decade the web has gone mobile with the explosive growth of sensor-laden smartphones, and location is big business — $3.8 billion’s worth of big in 2018 if you believe Berg Insight.
In 2004 if you wanted maps or mapping data then you either went to one of the national or cadastral mapping agencies — such as the UK’s Ordnance Survey — or you went to one of the global, automotive-focused mapping providers; you went to NAVTEQ or to TeleAtlas. Maps and mapping data are expensive to make and expensive to maintain, and this expense was and continues to be reflected in the licensing charge you paid for mapping data, as well as in the restrictions around what you could and couldn’t do with that data. The high cost of data and the license restrictions were among the key drivers for the establishment of OpenStreetMap in the first place.
Eleven years on, and the mapping industry landscape is a very different one. NAVTEQ and TeleAtlas are no longer independent entities — Chicago-based NAVTEQ was acquired by Nokia in 2008 after an EU antitrust investigation gave the deal the green light, and Amsterdam-based TeleAtlas has been part of TomTom since 2008. Both companies continue to license their mapping data and their services, with NAVTEQ — now known as HERE — powering the maps for Bing and Yahoo amongst others, and TomTom licensing their data to MapQuest and to Apple as part of the relaunch of Apple Maps in 2012.
There have also been changes from the national and cadastral mapping agencies, with more and more data being released under various forms of open license — including the Ordnance Survey’s open data program, which, in direct contrast to the old licensing regime, is now under one of the most liberal of licenses.
In 2007 there was much attention paid to the NAVTEQ and TeleAtlas deals, citing uncertainty surrounding continued data supply to the maps and location industry. It was also predicted that the PND market — those personal navigation devices from TomTom and Garmin that sat on top of your car’s dashboard and announced “you have reached your destination” — would collapse rapidly as a result of the rapid growth of GPS-equipped smartphones.
These concerns and predictions got only half of the outcome right. The PND market did collapse, but the map data continued to flow, although it’s fair to say that as OSM matured and grew a reasonable chunk of revenue and strategic deals were lost — both directly and indirectly — to OSM itself, and to the organisations who act as a business-friendly face to OSM. But as Greek philosopher Heraclitus once said, everything changes and nothing stands still.
In the last few weeks it’s been widely reported that Nokia is looking to sell off HERE, the company formed by the (sometimes unwilling) union of NAVTEQ and Nokia Maps. Speculation runs rife as to who will become the new owner of HERE, with Uber seeming to be the pundits’ favourite buyer. But whoever does end up owning HERE’s mapping platform, the underlying map data, and the sizeable mapping and surveying fleet, it now seems clear that just as the days of NAVTEQ and TeleAtlas as independent mapping organisations came to an end, the days of HERE are coming to an end. This has also shone the spotlight onto TomTom, who, whilst making inroads into NAVTEQ’s share of the automotive data market, seems reliant on their deal with Apple to keep revenue flowing in.
Even before the speculation around HERE’s new owner, there are really only three major sources of global mapping, location and geospatial data: NAVTEQ in their current HERE incarnation, TeleAtlas under the mantle of TomTom, and … OpenStreetMap.
When — rather than if — HERE changes ownership, there’s a very real risk that the new owners will turn the data flow and the services built on that data inwards, for their own use and their use only, leaving just two major global map sources.
Surely now is the moment for OpenStreetMap to accelerate adoption, usage and uptake? But why hasn’t this already happened? Why hasn’t the geospatial world run lovingly into OSM’s arms?
To my mind there are two barriers to greater and more widespread adoption, both of which can be overcome if there’s sufficient will within the OSM community as a whole. These barriers are, in no particular order … licensing, and OSM not being seen as conducive to working with business.
Firstly I want to deal with making OSM more business-friendly, as this is probably an even bigger barrier to widespread adoption than licensing. For anything other than a startup or SME with substantial geospatial competency already in-house, dealing with and comprehending OSM can be a confusing proposition. What is OSM exactly? Is it the community? Is it the OpenStreetMap Foundation? Is it the Humanitarian OpenStreetMap Team? Is it one of the companies in the OSM ecosystem that offer services built on top of OSM? All of them? Some of them? None of them?
There’s no doubt that OSM has a vibrant and active map-making and developer-friendly ecosystem in the form of the OSM Wiki and mailing lists alone, even before you factor in the supporting, indirect ecosystem of individuals, community projects and organisations. But this isn’t enough. Business needs a single point of contact to liaise with; in fact, it often insists on one, and will look elsewhere if it can’t find that point of contact with anything more than the most cursory of searches. Whether it’s OSM itself in some shape or form, or a single organisation that stands for and represents OSM, this is the biggest barrier to continued OSM adoption that there is, although it may not necessarily be the one which requires the most work to overcome. For that barrier you need look no further than the ODbL, the Open Database License, under which OSM’s data is licensed.
This is a contentious issue, and one which is usually met with a deep sigh and the muttering of “not this again”. Prior to 2010, OSM data was licensed under the Creative Commons Attribution-ShareAlike license, normally shortened to CC-BY-SA. This license says that you are free to …
Share — copy and redistribute the material in any medium or format
Adapt — remix, transform, and build upon the material for any purpose, even commercially
But CC-BY-SA’s key weakness for OSM was that it is a license designed for the concept of “material”; for creative works, and not specifically for data or for databases. This is understandable; at the time OSM adopted CC-BY-SA, such a data-centric license simply didn’t exist, and CC-BY-SA was the best option available. But in 2010, after much discussion and dissent, OSM switched to the data- and database-specific Open Database License. The ODbL maintains the same attribution and share-alike clauses, but phrases them in legal language written specifically for data sets. It seems like the perfect license for OSM, but it’s not.
The attribution clauses in both CC-BY-SA and the ODbL are not at issue. Such clauses mean that the efforts of those who have made OSM what it is are formally acknowledged. The issue is the share-alike clause in both licenses, although it’s fair to say that there are subtleties at play due to the many and varied ways in which OSM data can be “consumed”.
If your consumption of OSM data is a passive one, then the share-alike clause probably has little or no impact. By passive, I mean that as a user you are consuming data from OSM via some form of service provider, and your consumption takes the form of an immutable payload from that service, such as pre-rendered map tiles.
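What makes tile consumption passive is that the client only ever requests a pre-rendered image addressed by zoom, x and y, using the well-documented slippy-map tile scheme. A sketch of that addressing, assuming the standard Web Mercator convention used by OSM’s own tile servers:

```python
import math

def tile_for(lat: float, lon: float, zoom: int) -> tuple:
    """Convert a WGS84 lat/lon to slippy-map tile x/y at a zoom level,
    per the standard Web Mercator scheme used by OSM tile servers."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Fetching the tile covering central London at zoom 12 is just a GET
# for an immutable image; no raw OSM data ever reaches the client.
x, y = tile_for(51.5074, -0.1278, 12)
url = f"https://tile.openstreetmap.org/12/{x}/{y}.png"
```

Because the payload is a finished picture rather than data, nothing the consumer does with it creates a new database over which the share-alike clause could plausibly reach.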
But what about if your consumption of OSM data still comes from a service, but takes the form of actual data — such as the results of a geocoder, or some other geospatial search? Such results are typically stored within a back-end data store, meaning the end result is a dataset which comprises the original data plus the results of a search, added to make a new dataset. Does this trigger the share-alike clause? This is still an ambiguous area, although current guidelines suggest that the resulting, aggregate data set is a produced work rather than a derived one, and so is exempt from triggering the share-alike clause. But there is also a counter-argument that such an action is indeed a derived work, and so the share-alike clause does apply. This ambiguity alone needs to be resolved, one way or another, in order to make OSM an attractive proposition for business.
The final share-alike complication rears its head when your method of consuming OSM data is to merge one or more data sets with OSM to use the resultant data for some purpose. This sort of data aggregation is often called co-mingling in licensing and legal parlance.
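To make the co-mingling scenario concrete, here is a contrived sketch — the table, column names and values are all invented for illustration. A single back-end table ends up holding both OSM-derived geocoder results and an organisation’s own proprietary records; under a share-alike reading, it is this combined dataset, not just the OSM-derived rows, that may have to be shared:

```python
import sqlite3

# In-memory database standing in for an organisation's back-end store.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE places (
    name TEXT, lat REAL, lon REAL, source TEXT)""")

# A result as it might come back from an OSM-based geocoder
# (illustrative values only).
db.execute("INSERT INTO places VALUES (?, ?, ?, ?)",
           ("Example Street", 51.5074, -0.1278, "osm-geocoder"))

# The organisation's own proprietary record, now co-mingled
# in the same table as the OSM-derived row.
db.execute("INSERT INTO places VALUES (?, ?, ?, ?)",
           ("Customer Site 42", 51.5100, -0.1200, "internal"))

# The dataset that share-alike may attach to is the whole table.
rows = db.execute("SELECT COUNT(*) FROM places").fetchone()[0]
```

The point of the sketch is that once the rows sit in one table and are queried together, there is no clean technical seam along which to separate “our data” from “ODbL data” after the fact.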
If all the datasets you are dealing with are licensed under the ODbL, then the share-alike clause potentially has little impact, as effectively ODbL plus ODbL equals … ODbL. Things are a little less certain when you co-mingle with datasets which are deemed to be licensed under a compatible license. Quite what a compatible license is hasn’t been defined. OpenDataCommons, the organisation behind the ODbL, says only that “any compatible license would, for example, have to contain similar share-alike provisions if it were to be compatible”, which, while helpful, isn’t a clear-cut list of licenses that are compatible. At the time of writing I was unable to find any such list.
But if the data you want to co-mingle with OSM, or indeed with any ODbL-licensed data, is data that you don’t want to share with the “community” — which of course will include your competitors — the only way to prevent this is not to use the ODbL-licensed data, which means not using OSM in this manner. To be blunt, mixing in any data with a share-alike clause means you can lose control of your data, which is probably part of your organisation’s intellectual property and has cost time and money to put together. It’s acknowledged that not all co-mingling of datasets will trigger the share-alike clause; there needs to be “the Extraction and Re-utilisation of the whole or a Substantial part of the Contents” in order for the share-alike, or indeed the attribution, clauses of the ODbL to kick in. The problem is that what’s classed as “substantial” isn’t defined at all, and OpenDataCommons notes that “the exact interpretation (of substantial) would remain with the courts”.
If you pause and re-read the last few paragraphs, you’ll notice that there are words and phrases such as “ambiguity”, “isn’t defined” and “exact interpretation”. All of which adds up to an unattractive proposition for businesses considering using OSM, or any open data license with a share-alike clause. For smaller businesses, finding the right path through licensing requires costly legal interpretation, and where money is tight such a path will simply be ignored. For larger businesses, often with an in-house legal team, a risk analysis will often conclude that the risk of using data under such a license is simply too great.
OSM as a community, as a data set, as a maps and map data provider, and as an entity is at a crossroads. It’s been at this metaphorical crossroads for a while now, but with the industry rapidly changing and evolving, these are two challenges that OSM should be encouraged to overcome, if there’s a concerted will to do so.
In almost every one of my previous corporate roles I’ve tried to push usage and adoption of OSM within the business, with the notable exception of my time with Lokku and OpenCage Data, where OSM is already in active use. Initially, the reaction is extremely positive: “This is amazing”, “Why didn’t we know about this before?”, and “This is just what we’re looking for” are common reactions.
But after the initial euphoria has worn off and the business looks at OSM’s proposition, the reaction is far from positive. “Who are we doing business with here? OSM or another organisation?”, “We can’t have a business relationship with a Wiki or a mailing list”, and “Legal have taken a look at the license, and the risk of using ODbL data is too great, I’m afraid” are paraphrased reactions I’ve heard so many times. To date, not one of the companies I’ve worked in has used OSM for anything other than the most trivial of base mapping tasks, which is such a loss of potential exposure for OSM to the wider geospatial and developer markets.
In short, the lack of a business-facing and business-friendly approach, coupled with the risks and ambiguity over licensing, are what is holding OSM back from achieving far more than it currently does. But it doesn’t have to be this way.
More than anything, OSM needs a business-friendly face. This doesn’t have to be provided by OSM itself; an existing organisation or a new one could provide this, hopefully with the blessing and assent of the OSMF and of enough of a majority of the OSM community. It’s also worth considering a consortium of existing OSM-based businesses, such as MapBox or GeoFabrik or OpenCage Data, getting together under an OSM For Business banner.
Coupled with this new approach to engaging with business, the licensing challenge could be solved by re-licensing OSM data under a license that retains the attribution clause but removes the share-alike clause. Unlike the transition from CC-BY-SA, which had to wait for the ODbL to be written, such a license already exists, in the form of the Open Data Commons Attribution license.
I do not claim for one second that making OSM business-friendly and re-licensing OSM are trivial matters, nor are there quick fixes to make this happen. I also do not doubt that some sections of the OSM community will be quick to explain why this isn’t needed and that OSM is doing very nicely as it stands, thank you very much. And I wouldn’t contest such views for a second. OSM is doing very nicely and will, I believe, continue to do so.
This isn’t about success or failure; OSM will continue to grow and will overcome future challenges. But OSM could be so much more than it currently is, and for that to happen there has to be change.
As Serge Wroclawski wrote in early 2014: “Place is a shared resource, and when you give all that power to a single entity, you are giving them the power not only to tell you about your location, but to shape it.”
These words rang true in early 2014 when Serge first published his post, and they ring doubly true in today’s world, where the sources of global mapping data are being acquired, where the options for getting and using mapping data are shrinking, and where there’s a very real possibility that the power to say what is on the map, and what is under the map, ends up in the hands of a very small, select group of companies and sources.
So, dear OSM, the world needs you now more than it needed you when you started out, and a lot more than it needed you in 2014. OSM will continue to be amazing, but with change OSM can achieve so much more than was ever dreamed when the first nodes, ways, and relationships were collected in 2004 — if you just get your community finger out and agree that you want to be more than you currently are.
(For non-British readers, “get your finger out” is a colloquial term for “stop procrastinating and get on with it”)