
Will Cadell: “Cities are people, and people are maps”

Will Cadell
Will Cadell is the founder and CEO of Sparkgeo.com, a Prince George-based business which builds geospatial technology for some of the biggest companies on Earth. Since starting Sparkgeo, he has been helping startups, large enterprises, and non-profits across North America make the most of location and geospatial technology.

Leading a team of highly specialized, deeply skilled geospatial web engineers, Will has built products, won patents, and generally broken the rules. Holding a degree in Electronic and Electrical Engineering and a Master’s in Environmental Remote Sensing, Will has worked in academia, government, and the private sector on two different continents, making things better with technology. He is on the board of Innovation Central Society, a non-profit society committed to growing and supporting technology entrepreneurs in North Central BC.

I’m not really old enough to reflect on cartography and its “nature”; however, I do want to comment on a trend I see in the modern state of our art, and to suggest that it traces back to an old truism.

At Sparkgeo we have a unique position in the market. Let me clarify that position: we create web mapping products, meaning cartographic or simply geographic products built for people to consume primarily via web browsers. Additionally, we are vendor-agnostic, and we continue to push the idea of geographic excellence and client pragmatism rather than particular brands. We work with organizations as diverse as financial institutions, startups, big tech, satellite companies, and non-profits. In essence, we build a lot of geographic technology for a lot of very different organizations. We have also created paper maps, but in the last half decade I haven’t created a paper product. Not because we haven’t pursued projects of this nature, but because no one has asked us to. To be clear, we have created signage for trail networks and such, but our activity with personal mapping products has moved to the web almost completely.

Telling. But not entirely surprising given that maps are largely tools and tools evolve with available technology.

Our position in the market, therefore, is as a company creating cartographic products using the medium most pertinent to the users of those products. The vast majority of those users are on a computer or, more likely, a mobile device.

Maps are of course defined by the relationship between things and people: an art form which links people to events and things on our (or indeed any other) planet. People and places, my friends. This will be obvious to most of my readers here, but what may be less obvious is the linkage our industry therefore must have to cities. More so, that cities, and indeed urbanization, have crafted and will continue to craft the art of cartography for our still young millennium.

I say this whilst flying from one highly urbanized place to another, but also whilst calling relative rurality home. I am a great fan of open space, but even I can see that large groups of people are sculpting the future of our industry. It could be argued that cartography was originally driven by the ideas of discovery and conquest. Conquest, or our more modern equivalent, “defense”, is still very much an industrial presence. Subsequently, it could be argued that ‘GIS’ was driven by the resource sector; indeed, much effort is still being undertaken in this space. Until the last half decade, I would still have argued that geospatial was mostly the domain of the defense trade and the resource sector. Not so now. We have become an urban animal, and with that urbanization it is clear that the inhabitants and administrators of our cities will drive geospatial. Cities, and their evolution into smart cities, will determine how we understand digital geography.

Let’s take a look at some of the industrial ecology which has enabled this trend. My hope is to engender some argument and discussion. Feel free to dissent and challenge; we are all better for it. I want to talk briefly about five key features of our environment which have individually, but more so together, altered the tide of our industry.

1. people

It is clear that the general trend has been, and continues to be, for people to move toward cities (https://en.wikipedia.org/wiki/Urbanization_by_country). Now, though I dispute that this is necessary (https://www.linkedin.com/pulse/location-life-livelihood-will-cadell), I cannot ignore the evidence that clearly describes the mass migration of people of most nationalities towards the more urbanized areas of their worlds. Our pastoral days have been coming to an end for some time. We will of course always need food, but the vast majority of Earth’s population will be in or around cities. The likelihood of employment, economy, and *success* seems central to this trend.

Where there are people there is entrepreneurism, administration and now, devices. Entrepreneurism and devices mean data; administration and devices mean data.

2. devices

Our world is becoming urbanized and our urbanized world is connected. Our devices, our sensors, are helping to augment our realities with extra information. The weather of the place we are about to arrive at, the result of a presidential debate, the nearest vendor of my favorite coffee and opinions disputing the quality of my favorite coffee. Ah, the Internet. My reality is now much wider than it would have been without my device. Some might argue shallower too, but that is a different discussion. The central point here is that my device detects things about my personal universe and stores those data points in a variety of places. I now travel with three devices: a laptop, tablet and phone. This would have been ludicrous to me a decade ago, but much of what I do now would have been ludicrous a decade ago. We truly live in the future. Much of that future has been enabled by devices and our subsequently connected egos.

Devices capture data. Really, a device is just a node attached to a variety of greater networks. Whether those networks are temperature gradients, a telephonic grid, home wifi, elevation, or a rapid transit line, the device is simply trying to record its place in our multidimensional network and relay that in some meaningful way back to you, and likely to a software vendor. Devices capture and receive data on those networks. That data could be your voice or a location, and that data could be going A N Y W H E R E.

But the fact that the data is multidimensional and likely has a location component is critical for the geospatially inclined amongst us. The crowd-sourced effect, coupled with the urbanization effect, equals enormous amounts of location data. That is the basic social contract of consumer geospatial.

3. connectivity

Of course, the abilities of our devices are magnified by connectivity, wifi, or whatever. Although Sparkgeo is still creating online-offline solutions for data capture, these are becoming the exception rather than the rule. Connectivity is a modern utility, and it is a competitive advantage that urban centers have over rurality. Increased connectivity gives us greater access to data transfer, and thus enables geographic data capture: its presence encourages the use of devices, which capture data, which is often geographic. Urban areas have greater access to connectivity due to the better economies of scale for the cellular and cable companies (who are quickly becoming digital media distribution companies). It is simple really; more people in less area equals more money for less infrastructural investment. For the purposes of this article, we just need to concede that the multitude of devices discussed above are more connected, for less money, in cities than anywhere else.

4. compute

Compute is the ability to turn the data we collect into ‘more’, whatever that might mean; perhaps some data science, or ‘analysis’ as we used to call it, perhaps some machine learning. In essence, compute is joining data to a process to achieve something greater. Amazon Web Services, and subsequently Microsoft’s Azure and Google’s Cloud platforms, have provided us with amazing access to relatively inexpensive infrastructure which supports meaningful compute on the web. Not enough can be said about the opportunity that increased compute on the web provides, but consider that GIS has typically been data-limited and RAM-limited. With access to robust cloud networks, those two limitations have been entirely removed.

5. data

People and devices mean data. Without doubt, lots of people and lots of devices mean lots of data, but there is likely a multiplier effect here too, as we become accustomed to creating data via communication and consumption of digital services. As an example, more ride-sharing happens in urbanized locations, so more data is created in that regard. Connectivity to various networks enabled those rides. Compute will be applied to those recorded data points to determine everything from the cost of the journey to the impact on a municipal transit network and congestion. At every step in that chain of events more data was created, obviously adding volume, but also a greater opportunity for understanding; of what, exactly, is yet to be seen. Beyond consumer applications, however, city administrations and their data also play deeply into this equation.

With these supportive trends we have seen two ends of our industry grow enormously. It is a wonderful, organic symbiosis really.

On one hand we have the idea of consumer geospatial (Google Maps, Mapbox), which has put robust mapping platforms in the hands of everyone with an appropriate device. Consumer geospatial has enabled activities like location-based social networks (Nextdoor), location-based advertising (Pokémon Go), ride sharing (Uber, Lyft), wearables (Fitbit, Apple Watch), the quantified self (Strava, Life360), connected vehicles (Tesla, Uber), digital realty (Zillow), and many others.

On the other hand we have seen the rise in the availability of data, and in particular open data. Open data is the publishing of datasets describing features of potentially public interest, such as financial reports, road networks, public health records, zip-code areas, census statistics, detected earthquakes, etc.

The great promises of open data are increased transparency and an enabling effect: the enabling of entrepreneurism based on the availability of data to which value can subsequently be added. Typically, bigger cities have more open data available. This is not always true, and the developing world is still approaching this problem, but in general terms a bigger population pays more tax, which supports a bigger municipal infrastructure, which therefore has the ability to do ‘more’. In recent discussions I am still asked whether those promises are being kept: is the investment worth it? The idea of transparency is ‘above my pay grade’, but I can genuinely attest to the entrepreneurial benefit of open data. Though that benefit might not be realized in the geographic community where the data is published, as a community of data consumers we do benefit through better navigational aids, more robust consumer geospatial platforms, and ‘better technology’. At Sparkgeo we have recently built a practice around the identification, assessment, cleansing, and generalization of open data, because demand for this work never ceases. It’s clear that our open data revolution is in a somewhat chaotic (*ref) phase, but is very much here to stay.
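As an illustration of what that practice involves, here is a minimal, hypothetical sketch of a cleansing-and-generalization pass over a municipal open dataset using geopandas; the file names and tolerance are my own illustrations, not Sparkgeo’s actual pipeline:

```python
# Hypothetical example: load an open municipal layer, repair invalid
# geometries, generalize it for web delivery, and write it back out.
import geopandas as gpd

gdf = gpd.read_file("city_parcels.geojson")  # hypothetical open dataset

# Cleansing: repair self-intersecting polygons with the buffer(0) trick.
bad = ~gdf.geometry.is_valid
gdf.loc[bad, "geometry"] = gdf.loc[bad, "geometry"].buffer(0)

# Generalization: simplify geometry for web map use. The tolerance is in
# the layer's units (degrees here, assuming the source is EPSG:4326).
gdf["geometry"] = gdf.geometry.simplify(0.0001, preserve_topology=True)

gdf.to_file("city_parcels_web.geojson", driver="GeoJSON")
```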

Our geospatial technology industry has taken note too. Greater emphasis from Esri on opening municipal datasets through its http://opendata.arcgis.com/ program is an interesting way for cities that might already be using Esri products to get more data “out”. Additionally, Mapbox Cities (https://www.mapbox.com/blog/mapbox-cities/) is a program looking at how to magnify the urban data effect. Clearly there is industrial interest in supporting cities in the management of ever-growing data resources. Consider that Uber, an overtly urban company, is building its very own map fabric.

If we combine the ideas of consumer geospatial and those of open data, what do we reveal? Amongst other things, we can see that more and better data result in many benefits for the consumer, typically in the form of services and products. But we can also see that too much focus on consumer and crowd-based data can be problematic. Indeed, the very nature of the ‘crowd’ is to be less precise and more general. The ‘mob’ is not very nuanced, yet. For crowd-based nuance, we can look to advances in machine learning and AI. In the meantime, it’s great to ask the crowd if something exists, but it’s terrible to ask the crowd where that thing is, precisely.

> “is there a new subdivision?” – “yes!”

> “When, exactly, should my automated vehicle start to initiate its turn to enter that new subdivision?” – “Now, no wait, now… stop, go back”

Generalization and subsequent trend determination are the domain of the crowd; precision through complex environments is something much more tangible, especially if you miss the turn. As we move towards our automated vehicle future, once a vehicle knows a new subdivision exists, it can conceivably use on-board LiDAR to provide highly detailed data back to whomever it may concern. This is really where smart cities need to enter our purview. Smart cities will help join the consumer web to the municipal web, and indeed numerous other webs too. Not to be too facetious, but my notion of consumer geospatial could also be a loose description of an Internet of Things (IoT) application. Smart cities are, in essence, an expansive IoT problem set.

It’s clear that cities, with their growing populations, have in part driven our understanding of people and digital geography through greater data volume. But as we push harder into what a future smart city will look like, we will also start to see even greater multiplier effects.

Maps and mappers of the 2016 calendar: Katie Kowalsky

In our series “Maps and mappers of the 2016 calendar” we will present throughout 2016 the mapmakers who submitted their creations for inclusion in the 2016 GeoHipster calendar.

***

Katie Kowalsky

Q: Tell us about yourself.

A: I’m Katie, a cartographer, hot sauce enthusiast, and recent San Francisco transplant. I work at Mapzen, where I focus on building tutorials, writing documentation, and supporting our users through improving the usability of our products. This means in a given week I can be running user research tests, answering support questions, or talking at a lot of events.

Q: Tell us the story behind your map (what inspired you to make it, what did you learn while making it, or any other aspects of the map or its creation you would like people to know).

A: I come from a family of artists, and since I was little, art museums have always felt like home to me. Some of my favorite pieces at the Milwaukee Art Museum (my hometown!) are by Roy Lichtenstein, including Crying Girl and Water Lily Pond Reflections. These two pieces have always been examples of his great use of primary colors and Ben-Day dots, and this color and texture palette has always stayed in the back of my mind. When I started learning about TileMill and basemap design, I was inspired by how creative and unique the designers from Stamen and Mapbox were. While working at the Cartography Lab at UW-Madison, I had a chance to rebuild the curriculum teaching basemap design, and my love of pop art inspired me to bring it into a basemap design used as an example for the lab tutorial.

Q: Tell us about the tools, data, etc., you used to make the map.

A: This was built entirely in Mapbox Studio (now known as Mapbox Studio Classic), using Mapbox Streets and their vector terrain source for the data. I built this interactive basemap (view it here) from zoom 1 to 22 using the glorious CartoCSS interface!

‘Roy Lichtenstein-inspired map of DC’ by Katie Kowalsky
‘Crying Girl’ by Roy Lichtenstein
‘Water Lily Pond Reflections’ by Roy Lichtenstein

Maps and mappers of the 2016 calendar: Gretchen Peterson

In our series “Maps and mappers of the 2016 calendar” we will present throughout 2016 the mapmakers who submitted their creations for inclusion in the 2016 GeoHipster calendar.

***

Gretchen Peterson

Gretchen Peterson’s most recent books are City Maps: A Coloring Book for Adults and QGIS Map Design. Peterson resides in Colorado and tweets actively about cartography via @petersongis.

A Cornell graduate in the natural resources field, Peterson can still be found spending part of the workweek absorbed in data analysis and mapping for the greater environmental good while reserving the rest of the workweek for broader mapping endeavors, which includes keeping up on the multitude of innovative map styles coming from all corners of the profession.

Peterson speaks frequently on the topic of modern cartographic design, and it was in one of these talks that the Ye Olde Pubs of London’s Square Mile map was not only shown off but created on the spot as a live demo of the cartographic capabilities of the QGIS software. The FOSS4G NA 2015 conference talk went through the process of loading and styling data and then creating a print composer map layout.

Some highlights of the demo included the custom pub data repository created just for this map, the demonstration of the relatively new shapeburst fill capabilities of QGIS, and the technique of modifying image file (SVG) code to allow icon colors to be changed within the QGIS interface.

The map was also the focus of a QGIS cartography workshop held in Boulder, Colorado. The students at that workshop followed the instructions posted on GitHub to create the map. It’s a great two-hour project for introducing the software and a few of the principles of cartographic design, and readers are encouraged to give it a try and supply any feedback they may have.

‘Historic Pubs of London’ by Gretchen Peterson

Maps and mappers of the 2016 calendar: Asger Sigurd Skovbo Petersen

In our series “Maps and mappers of the 2016 calendar” we will present throughout 2016 the mapmakers who submitted their creations for inclusion in the 2016 GeoHipster calendar.

***

Asger Sigurd Skovbo Petersen

Q: Tell us about yourself.

A: I work at a small Danish company called Septima, which I co-founded back in early 2013. I have been in the geo business since 2004, when I received my master’s degree (MScE) from the Technical University of Denmark.

I do development, consulting, and data analysis. One of my primary interests is to find new ways of utilizing existing data. This interest really took off when I worked as the sole R&D engineer at a data acquisition company which had a massive collection of data just sitting there and waiting to be upcycled. At this job I got a lot of experience working with quite big LiDAR, raster, and vector datasets, and developing algorithms to process them effectively.

Q: Tell us the story behind your map (what inspired you to make it, what did you learn while making it, or any other aspects of the map or its creation you would like people to know).

A: When processing the second Danish LiDAR-based elevation model, the producing agency released some temporary point cloud data at a very early stage.

My curiosity was too big to leave these data alone, and with a LASmoons license of Martin Isenburg’s LAStools, it was easy to process the 400 km² of LAS files into 40 cm DTM and DSM rasters. The usual open source stack then helped publish a hillshaded version as an easy-to-use web map.
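For readers curious what that stack step looks like, here is a minimal sketch using GDAL’s Python bindings to hillshade a DTM. The file names are hypothetical, and the LAStools gridding step is assumed to have already produced the input GeoTIFF:

```python
# Generate a shaded relief raster from a DTM with GDAL's Python bindings.
from osgeo import gdal

gdal.UseExceptions()

# Hypothetical file names: the 40 cm DTM in, a hillshade raster out.
gdal.DEMProcessing(
    "dtm_hillshade.tif",
    "dtm_40cm.tif",
    "hillshade",
    azimuth=315,   # light from the northwest, the cartographic default
    altitude=45,   # sun elevation angle in degrees
    zFactor=1.0,   # vertical exaggeration
)
```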

This web map was widely used and cited, as it was for quite a while the only visible example of the coming national DEM. The old model had a resolution of 1.6 m; at the new resolution of 0.4 m, a lot of details were revealed which were not visible in the old model. In the following months we actually received quite a few notes from archaeologists who had discovered exciting and previously unknown historic features just by browsing our map.

Hillshades are the go-to visualisation of DEMs, probably because they can be easily produced by almost any raster-capable software, and because they are very easily interpreted. However, they can also hide even very big structures, depending on the general direction of the structure relative to the light source.

This made me want to find a better way to visualise the data so our archaeological friends could get even more information from the new data.

I then read a heap of papers on the subject and decided to try out a visualisation based on Sky View Factor. At the time I didn’t find any implementation that I was able to use, so I ended up implementing my own. (I later discovered that SAGA had a perfectly good implementation, so I could have just used QGIS. But hey, then I wouldn’t have had the fun implementing my own 🙂 )

I did a lot of tests using the Sky View Factor on the new DTM, but I couldn’t make it work as well as I had hoped. By coincidence I ran it on the DSM in an urban area, which gave a very interesting result. This effect is basically what makes the GeoHipster map look different from most other shaded DSMs.

Q: Tell us about the tools, data, etc., you used to make the map.

A: The map consists of several layers: a standard hillshade, a Sky View Factor, building footprints, and water bodies.

The Sky View Factor layer was made using a custom algorithm implemented in Python using rasterio and optimized for speed using Cython. As mentioned this could probably just as well have been processed using SAGA, for instance, through QGIS. The hillshade layer was made using GDAL and the vector layers did not require any special processing.
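For the curious, here is a rough, unoptimised sketch of the idea (my own illustration, not Septima’s production code): sample a handful of azimuths around each cell, find the highest horizon angle within a search radius in each direction, and average. File names are hypothetical, and a tuned implementation such as SAGA’s is far faster:

```python
# Naive Sky View Factor with numpy + rasterio. For each cell, scan
# N_DIRECTIONS azimuths out to RADIUS_CELLS, take the maximum horizon
# angle in each direction, and average: SVF = 1 - mean(sin(horizon)).
import numpy as np
import rasterio

N_DIRECTIONS = 16
RADIUS_CELLS = 50

with rasterio.open("dsm_40cm.tif") as src:  # hypothetical input
    dem = src.read(1).astype("float64")
    cell = src.res[0]                       # cell size in map units
    profile = src.profile

rows, cols = dem.shape
svf = np.ones_like(dem)
azimuths = np.linspace(0, 2 * np.pi, N_DIRECTIONS, endpoint=False)

for r in range(RADIUS_CELLS, rows - RADIUS_CELLS):
    for c in range(RADIUS_CELLS, cols - RADIUS_CELLS):
        horizon = 0.0
        for a in azimuths:
            max_angle = 0.0
            for d in range(1, RADIUS_CELLS + 1):
                rr = r + int(round(d * np.cos(a)))
                cc = c + int(round(d * np.sin(a)))
                rise = dem[rr, cc] - dem[r, c]
                angle = np.arctan2(rise, d * cell)
                max_angle = max(max_angle, angle)
            horizon += np.sin(max_angle)
        svf[r, c] = 1.0 - horizon / N_DIRECTIONS

profile.update(dtype="float32")
with rasterio.open("svf.tif", "w", **profile) as dst:
    dst.write(svf.astype("float32"), 1)
```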

QGIS was used to symbolize and combine the layers using gradients, transparency and layer blending.

Data used are the national Danish DEM and the national Danish topographic map, called GeoDanmark. Both datasets are open and can be freely downloaded from Kortforsyningen. Sadly, most of these sites are in Danish only – maybe some clever hidden trade barrier.

Here is an online version of my map. For the online version I had to change the symbolization a bit, as producing tiles from QGIS Server doesn’t work very well with gradients.

Since submitting the map to the GeoHipster 2016 calendar I have been working on coloring the vegetation, to get a green component as well. There are no datasets for vegetation which include single trees, bushes, etc., so I made a Python script to extract and filter this information from the classified LiDAR point cloud.
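A sketch of how such a filter might look with laspy (assuming laspy 2.x; file names are hypothetical, and classes 3, 4, and 5 are low, medium, and high vegetation in the ASPRS LAS classification):

```python
# Extract vegetation points from a classified LiDAR tile with laspy.
import laspy
import numpy as np

VEGETATION_CLASSES = [3, 4, 5]  # ASPRS low/medium/high vegetation

las = laspy.read("tile.las")
mask = np.isin(las.classification, VEGETATION_CLASSES)

veg = laspy.LasData(las.header)       # new file with the same header
veg.points = las.points[mask].copy()  # keep only vegetation points
veg.write("tile_vegetation.las")

print(f"kept {mask.sum()} of {len(las.points)} points")
```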

This new map can be seen here in a preliminary version.

‘Copenhagen Illuminated’ by Asger Sigurd Skovbo Petersen

Maps and mappers of the 2016 calendar: Jonah Adkins

In our series “Maps and mappers of the 2016 calendar” we will present throughout 2016 the mapmakers who submitted their creations for inclusion in the 2016 GeoHipster calendar.

***

Jonah Adkins

Q: Tell us about yourself.

A: I’m a cartographer from Newport News, Virginia, and have been working in GIS since 1999. I enjoy tinkering with mapping tools, co-organizing @maptimehrva, and paid mapping. I’m most interested in map design, OpenStreetMap, and civic hacking.

Q: Tell us the story behind your map (what inspired you to make it, what did you learn while making it, or any other aspects of the map or its creation you would like people to know).

A: The Noland Trail is a 5-mile escape from reality located at Mariners’ Museum Park in Newport News, Virginia. I’ve probably run a gajillion miles out there over the last several years, and wanted to create a great map of one of my favorite spots. I started with some pen & paper field mapping, upgraded to some data collection with the Fulcrum app, and made the first version back in 2013. This second version was an exercise in simplifying and refining the first map; it required minimal data updates and a lot more cartographic heartburn.

Q: Tell us about the tools, data, etc., you used to make the map.

A: The second edition Noland Trail map was made with a combination of QGIS and Photoshop. I threw a ton of information on the first one, probably too much, and it had many ‘GIS-y’ type elements that were lost on the casual map viewer. With this second edition, I wanted to strip away the bulkiness of the original, maintain a high level of detail, and improve the original design. Since the data remained unchanged, with the exception of a few items, I was able to dedicate the majority of my time to design elements. I’ve also created some related projects like Open-Noland-Trail, an open data site for the trail, and Noland-Trail-GL, a Mapbox GL version of the map built in Mapbox Studio.

Regina Obe: “People spend too much time learning a programming language and forget what a programming language is for”

Regina Obe
Regina Obe (@reginaobe) is a co-principal of Paragon Corporation, a database consulting company based in Boston. She has over 15 years of professional experience in various programming languages and database systems, with special focus on spatial databases. She is a member of the PostGIS steering committee and the PostGIS core development team. Regina holds a BS degree in mechanical engineering from the Massachusetts Institute of Technology where she wanted to build terminator robots but decided that wasn’t the best thing to do for Humanity. She co-authored PostGIS in Action and PostgreSQL: Up and Running.

Q: Regina Obe – so where are you in the world and what do you do?

A: I’m located in Boston, Massachusetts, USA. I’m a database application web developer, consultant, member of the PostGIS project and steering committee, and technical author of PostGIS- and PostgreSQL-related books.

Q: So in my prep work I found you have a degree from MIT in Mechanical Engineering with a focus in Bioelectronics and Control Systems? What’s that about? How did you end up working in databases?

A: Hah, you just had to ask a hard one. It’s a bit long.

Bioelectronics and control was an amalgamation of all my interests and influences at that point.

My favorite shows growing up were The Six Million Dollar Man and The Bionic Woman. Like many geeks I loved tinkering with electronic and mechanical gadgets, and got into programming at the wee age of 9. I was also very attached to graph paper, and would plot out on graph paper which cells my for loops would hit.

My mother was a forensic pathologist; basically she tore dead people apart to discover how they died and what could have been done to save them. I spent a lot of time reading her books and dreaming about human augmentation and control.

When I came to MIT I had ambitions of getting a BS in Electrical Engineering or Mech E., moving on to a PhD, getting my MD, and focussing on orthopaedic limb surgery. MIT’s Mechanical Engineering department had a course track that allowed you to fashion your own major. You had to take a certain amount of Mech E. and could take any other courses you wanted, as long as you could convince your advisor it followed some sort of roadmap you set out for yourself. So, that said, what else would I fashion, given the opportunity? At the time MIT did not have a biomedical engineering major.

So my coursework included classes in bio-electrical engineering, like electrical foundations of the heart, where I built and programmed electrical models of hearts and killed and revived rabbits; basic EE courses with breadboards; a class in Scheme programming; electrophysiology of the brain, etc. On the Mech E. side, I took standard stuff like Fluid Mechanics, Dynamics, and Systems Control, and for my thesis I programmed a simulation package that allowed you to simulate systems with some pre-configured blocks. Most of it I can’t remember.

I looked around at other people who were following my dream and realized I’m way too lazy and not smart enough for that kind of life. When I got out of college, there were few engineering jobs requiring my particular skill set. I got a job as a consultant focussing on business rules management and control. Business rules were managed as data that could become actionable. There I got introduced to the big wild world of databases, then SQL, cool languages like Smalltalk, and trying to translate what people say ambiguously into what they actually mean non-ambiguously.

I found that I really enjoyed programming and thinking about programs as the rules to transition data and reason about data. It’s all about data in the end.

Q: So you dive into databases and SQL and this thing called PostGIS comes along. You’re on the Project Steering committee and Development team for PostGIS. What is PostGIS and how much work is it being a developer and a member of the project steering committee?

A: Yes I’m on the PostGIS steering committee and development team.

PostGIS is a PostgreSQL extender for managing objects in space. It provides a large body of functions and new database types for storing and querying the spatial properties of objects like buildings, property, and cities. You can ask questions like: what’s the area, what’s the perimeter, how far is this thing from these other things, what things are nearest to this thing? It also allows you to morph things into other things. With the raster support you can ask what’s the concentration of this chemical over here, or the average concentration over an arbitrary area.
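To make that concrete, here is a minimal sketch of those kinds of questions asked from Python with psycopg2. The “buildings” table, its columns, and the connection string are hypothetical; the PostGIS functions (ST_Area, ST_Perimeter, the KNN distance operator <->) are real:

```python
# Ask PostGIS "what's the area/perimeter?" and "what's nearest to this?"
import psycopg2

conn = psycopg2.connect("dbname=citydb")  # hypothetical database
with conn, conn.cursor() as cur:
    # Area and perimeter: casting to geography returns square metres
    # and metres rather than degrees.
    cur.execute("""
        SELECT name,
               ST_Area(geom::geography)      AS area_m2,
               ST_Perimeter(geom::geography) AS perimeter_m
        FROM buildings;
    """)
    for name, area_m2, perimeter_m in cur.fetchall():
        print(name, round(area_m2), round(perimeter_m))

    # The five buildings nearest a point: the <-> operator orders by
    # distance and can use the spatial index.
    cur.execute("""
        SELECT name
        FROM buildings
        ORDER BY geom <-> ST_SetSRID(ST_MakePoint(%s, %s), 4326)
        LIMIT 5;
    """, (-71.06, 42.36))
    print([row[0] for row in cur.fetchall()])
conn.close()
```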

Some people think of PostGIS as a tool for managing geographic systems, but it’s Post GIS. Space has no boundaries except the imagined. Geographic systems are just the common use case.

Remember my fondness for graph paper? It all comes full circle; space in the database. I like to think of PostGIS as a tool for managing things on a huge piece of graph paper and now it can manage things on a 3D piece of graph paper and a spheroidal graph paper too 🙂 . PostGIS is always learning new tricks.

Being a PostGIS developer and member of the PSC is a fair amount of work: some programming, but mostly keeping bots happy, lots of testing, and banging people over the head when you feel it’s time to release. I think it’s as much work as you want to put into it, though, and I happen to enjoy working with PostGIS, so I spend a fair amount of time on it.

Q: I love PostGIS. I spend a lot of time in QGIS/PostGIS these days, and people are constantly asking: “HEY, WHEN DO WE GET SOMETHING LIKE ROUTING WHERE I CAN DO TIME/DISTANCE MEASUREMENTS?” You’ve been working on a piece of software called pgRouting, which does?

A: This is a leading question to talk about our new book coming out from Locate Press – http://locatepress.com/pgrouting.

“Been working on” is an overstatement. My husband and I have been writing the book pgRouting: A Practical Guide, published by Locate Press. We hope to finish it this year. That’s probably the biggest contribution we’ve made to pgRouting, aside from the Windows stack builder packaging for pgRouting.

Most of the work for pgRouting is done by other contributors with GeoRepublic and iMapTools folks leading the direction.

My friend Vicky Vergara in particular has been doing a good chunk of work for recent versions of pgRouting (2.1-2.3), including upgrading pgRoutingLayer for QGIS to support the newest features and improving the performance of osm2pgrouting; there are some neat things coming there. She’s been super busy with it. I greatly admire her dedication.

You’ll have to read our book to find out more. Pre-release copies are available for purchase now, and are currently half off until the book reaches feature completeness. We are about 70% there.

Q: With everything you are doing for work, what do you do for fun?  

A: Work is not fun? Don’t tell me that! My goal in life is to achieve a state where I am always having fun and work is fun. I still have some unfun work I’d like to kill. Aside from work, I sleep a lot. I like to go to restaurants. I’ve never been big on hobbies, unfortunately. Going to places causes me a bit of stress, so I’m not much into travel, I’m afraid.

Q: You’ve got a few books out and about. How hard is it to write a book for a technical audience? How hard is it to keep it up to date?

A: It’s much harder than you think, and even harder to keep things up to date. Part of the issue with writing for technical audiences is you never know the right balance. I try to unlearn what I have learned so I can experience learning it again, to write about it in a way that a person new to the topic can appreciate. I fail badly and always end up cramming too much in. I’m an impatient learner.

I always thought 2nd and 3rd editions of books would be easier, but so far they have been just as hard, if not harder, than the first editions. We are also working on the 3rd edition of PostgreSQL: Up and Running. Piece of cake, right? What could have changed in two years? A fair amount, from PostgreSQL 9.3 to 9.5. PostGIS in Action, 2nd edition was pretty much a rewrite; it might have been easier if we had started from scratch on that one. So much changed between PostGIS 1.x and 2.x. That said, I think in future we’ll try to not write sequels, and maybe tackle the subject from a different angle.

Leo wants to write SQL for Kids 🙂. He thinks it’s a shame children aren’t exposed to databases and set thinking in general at an early age.

Q: Leo wanting to do a SQL for kids brings up a good question. If you had a person come up and go “What should I learn?” what would you tell them? In the geo world we get beat over the head with Python. You are neck deep in databases. Programming language? Database? What makes the well rounded individual in your opinion?

A: You should first not learn anything. You should figure out what you want to program and then just do it and do it 3 times in 3 different languages and see which ones feel more natural to your objective. Think about how the language forced you to think about the problems in 3 different ways.

You might find you want to use all 3 languages at once in the same program; then use PostgreSQL (PL/Python, SQL, PL/JavaScript 🙂). That’s okay.

Forget about all the languages people are telling you you should know. I think people spend too much time learning to be good at using a programming language and forget what a programming language is for. It’s hard to survive with just one language these days (think HTML, JavaScript, SQL, Python, R – all different languages). A language is a way of expressing ideas succinctly in terms that an idiot-savantic computer can digest. First try to be stupid so you can talk to stupid machines and then appreciate those machines for their single-minded gifts.

The most amazing developers I’ve known have not thought of themselves as programmers.

They are very focused on a problem or tool they want to create, and I was always amazed at how crappy their code was, and yet how they could achieve amazing feats of productivity in their overall user-facing design. They had a vision of what they wanted for the finished product and never lost sight of that vision.

Q: Your PostgreSQL Up and Running book is published by O’Reilly. O’Reilly books always have some artwork on the front. Did you get to pick the animal on the front? For your PostGIS book you have a person on the front. Who is that?

A: We had a little say in the O’Reilly cover, but no say on the Manning cover. I have no idea who that woman is on PostGIS in Action. She’s a woman from Croatia. She looks very much like my sister is what I thought when I first saw it.

For O’Reilly, they ran out of elephants because they had just given one to Hadoop. Can you believe it? Giving an elephant to Hadoop over PostgreSQL? So then they offered us an antelope. Leo was insulted; he wasn’t going to have some animal that people hunt on the cover of his book, and besides, antelopes look dumb and frail. I apologize to all the antelope lovers right now. Yes, antelopes are cute, cuddly, and I’m sure loving. Just not the image we care to be remembered by. We wanted something more like an elephant, something smart. So they offered us an elephant shrew (aka sengi), which is a close relative of the elephant – https://en.wikipedia.org/wiki/Afrotheria. It’s a very fast, small creature that looks like it’s made of bits and pieces of a lot of creatures. They blaze trails and are very monogamous. What could be more perfect to exemplify the traits of a database like PostgreSQL, which can do everything and is faithful in its execution, aside from having to explain “What is that rodent-looking creature on your cover?”

Q: Way back when GeoHipster started we more or less decided thanks to a poll that a geohipster does things differently, shuns the mainstream, and marches to their own beat. Are you a geohipster?

A: Yes. I definitely shun the mainstream. When the mainstream starts acting like me, it’s a signal I need to become more creative.

Q: I always leave the last question for you. Anything you want to tell the readers of GeoHipster that I didn’t cover or just anything in particular?

A: When will PostGIS 2.3 be out? I hope before PostgreSQL 9.6 comes out (slated for September). I’m hoping PostgreSQL 9.6 will be a little late to buy us more time.

Also – Where is the Free and Open Source Software for Geospatial International conference (FOSS4G) going to be held in 2017? In Boston, August 14th – 18th. Mark your calendars and bookmark the site.

Hugh Saalmans: “No amount of machine learning could solve a 999999 error!”

Hugh Saalmans
Hugh Saalmans (@minus34) is a geogeek and IT professional who heads the Location Engineering team at IAG, Australia & New Zealand’s largest general insurer. He’s also one of the founders of GeoRabble, an inclusive, zero-sales-pitch pub meetup for geogeeks to share their stories. His passions are hackfests & open data, and he’s a big fan of open source and open standards.

Q: How did you end up in geospatial?

A: A love of maths and geography is the short answer. The long answer is I did a surveying degree that covered everything spatial from engineering to geodesy.

My first experience with GIS was ArcGIS on Solaris (circa 1990) in a Uni lab with a severely underpowered server. Out of the 12 workstations, only 10 of us could log in at any one time, and then just 6 of us could actually get ArcGIS to run. Just as well, considering most of the students who could get it to work, including myself, ballsed up our first lab assignment by turning some property boundaries into chopped liver.

Besides GIS, my least favourite subjects at Uni were GPS and geodesy. So naturally I chose a career in geospatial information.

Q: You work for IAG. What does the company do?

A: Being a general insurer, we cover about $2 trillion worth of homes, motor vehicles, farms, and businesses against bad things happening.

Geospatial is a big part of what we do. Knowing where those $2tn of assets are allows us to do fundamental things like providing individualised address-level pricing — something common in Australia, but not so common in the US due to insurance pricing regulations. Knowing where assets are also allows us to help customers when something bad does happen. That goes to the core of what we do in insurance: that’s when we need to fulfill the promise we made to our customers when they took out a policy.

Q: What on Earth is Location Engineering?

A: We’re part of a movement that’s happening across a lot of domains that use geo-information: changing from traditional data-heavy, point & click delivery to scripting, automation, cloud, & APIs. We’re a team of geospatial analysts becoming a team of DevOps engineers that deliver geo-information services. So we needed a name to reflect that.

From a skills point of view — we’re moving from desktop analysis & publishing with a bit of SQL & Python to a lot of Bash, SQL, Python & JavaScript with Git, JIRA, Bamboo, Docker, and a few other tools & platforms that aren’t that well known in geo circles. We’re migrating from Windows to Linux, desktop to cloud, and licensed to open source. It’s both exciting and daunting to be doing it for an $11bn company!

Q: You’ve been working in the GIS industry for twenty years. How has that been?

A: It’s been great to be a part of 20+ years of geospatial evolutions and revolutions, witnessing geospatial going from specialist workstations to being a part of everyday life, accessible on any device. It’s also been exciting watching open source go from niche to mainstream, government data go from locked down to open, and watching proprietary standards being replaced with open ones.

It’s also been frustrating at times being part of an industry that has a broad definition, no defined start or end (“GIS is everywhere!”), and limited external recognition. In Australia we further muddy the waters by having university degrees and industry bodies that fuse land surveying and spatial sciences into a curious marriage of similar but sometimes opposing needs. Between the limited recognition of surveying as a profession and of geospatial being a separate stream within the IT industry, it’s no real surprise that our work remains a niche that needs to be constantly explained, even though what we do is fundamental to society. In the last 5 years we’ve tried to improve that through GeoRabble, creating a casual forum for anyone to share their story about location, regardless of their background or experience. We’ve made some good progress: almost 60 pub meetups in 8 cities across 3 countries (AU, NZ & SA), with 350 presentations and 4,500 attendees.

Q: How do you work in one industry for twenty years and keep innovating? Any tips on avoiding cynicism and keeping up with the trends?

A: It’s a cliche, but innovation is a mindset. Keep asking yourself and those around you two questions: Why? and Why Not? Asking why? will help you improve things by questioning the status quo or understanding a problem better, and getting focussed on how to fix or improve it. Saying why not? either gives you a reality check or lets you go exploring, researching and finding better ways of doing things to create new solutions.

Similarly, I try to beat cynicism by being curious, accepting that learning has no destination, and knowing there is information out there somewhere that can help fix the problem. Go back 15-20 years — it was easy to be cynical. If your chosen tool didn’t work the way you wanted it to, you either had to park the problem or come up with a preposterous workaround. Nowadays, you’ve got no real excuse if you put in the time to explore. There’s open source, GitHub, and StackExchange to help you plough through the problem. Here’s one of our case studies as an example: desktop brand X takes 45 mins to tag several million points with a boundary ID. Unsatisfied, we make the effort to learn Python, PostGIS, and parallel processing through blogs, posts, and online documentation. Now you’re cooking with gas in 45 seconds, not 45 minutes.
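As a sketch of that pattern (my own illustration with hypothetical table and column names, not IAG’s actual code): split the points into ID ranges and let PostGIS tag each range in parallel worker processes:

```python
# Tag millions of points with the boundary that contains them, using
# PostGIS from Python with one worker connection per process.
from multiprocessing import Pool
import psycopg2

DSN = "dbname=geodb"  # hypothetical connection string
WORKERS = 8

def tag_range(id_range):
    """Tag one slice of the points table with its containing boundary."""
    lo, hi = id_range
    conn = psycopg2.connect(DSN)  # each process opens its own connection
    with conn, conn.cursor() as cur:
        cur.execute("""
            UPDATE points p
            SET boundary_id = b.id
            FROM boundaries b
            WHERE ST_Within(p.geom, b.geom)  -- uses the spatial index
              AND p.id >= %s AND p.id < %s;
        """, (lo, hi))
    conn.close()

if __name__ == "__main__":
    # Split the ID space into chunks; workers tag one chunk at a time.
    max_id, step = 5_000_000, 100_000
    ranges = [(lo, lo + step) for lo in range(0, max_id, step)]
    with Pool(WORKERS) as pool:
        pool.map(tag_range, ranges)
```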

Another way to beat cynicism is to accept that things will change, and that they will change faster than you want them to. They will leave you with yesterday’s architecture or process, and you will be left with a choice: take the easy road and build design debt into your systems (which will cost you at some point), or take the hard road and learn as you go, future-proofing the things you’re responsible for.

Q: What are some disruptive technologies that are on your watch list?

A: Autonomous vehicles are the big disruptor in insurance. KPMG estimate the motor insurance market will shrink by 60% in the next 25 years due to a reduction in crashes. How do we offset this loss of profitable income? By getting better at analysing our customers and their other assets, especially homes. Enter geospatial to start answering complicated questions like “how much damage will the neighbour’s house do to our insured’s house during a storm?”

The Internet of Things is also going to shake things up in insurance. Your doorbell can now photograph would-be burglars or detect hail. Your home weather sensor can alert you to damaging winds. Now imagine hundreds of thousands of these sensors in each city — imagine tracking burglars from house to house, or watching a storm hit a city, one neighbourhood at a time. Real-time, location-based sensor nets are going to change the way we protect our homes and how insurers respond in a time of crisis. Not to mention that 100,000+ weather sensors could radically improve our ability to predict weather-related disasters. It’s not surprising IBM bought The Weather Channel’s online and B2B services arm last year, as they have one of the best crowdsourced weather services.

UAVs are also going to shake things up. We first used them last Christmas after a severe bushfire (wildfire) hit the Victorian coast. Due to asbestos contamination, the burnt-out area was sealed off. Using UAVs to capture the damage was the only way at the time to give customers who had lost everything some certainty about their future. Jumping to the near future again — Intel brought their 100-drone lightshow to Sydney in early June. Whilst marvelling at a new artform, watching the drones glide and dance in beautiful formations, it dawned on me what autonomous UAVs will be capable of in the next few years — swarms of them capturing entire damaged neighbourhoods just a few hours after a weather event or bushfire has passed.

Q: What is the dirtiest dataset you’ve had to ingest, and what about the cleanest?

A: The thing about working for a large corporation with a 150-year history is that your organisation knows how to put the L into legacy systems. We have systems that write 20-30 records for a single customer transaction in a non-sequential manner, so you almost need a PhD to determine the current record. There are other systems that write proprietary BLOBs into our databases (seriously, in 2016!). Fortunately, we have a simplification program to clear up a lot of these types of issues.

As far as open data goes — that’d be the historical disaster data we used at GovHack in 2014. Who knew one small CSV file could cause so much pain: date fields with a combination of standard and American dates, inconsistent and incoherent disaster classifications, lat/longs with variable precision.

I don’t know if there is such a thing as a clean dataset. All data requires some wrangling to make it productive, and all large datasets have quirks. G-NAF (Australia’s Geocoded National Address File) is pretty good on the quirk front, but at 31 tables and 39 foreign keys, it’s not exactly ready to roll in its raw form.

Q: You were very quick to release some tools to help people to work with the G-NAF dataset when it was released. What are some other datasets that you’d like to see made open?

A: It can’t be overstated how good it was to see G-NAF made open data. We’re one of the lucky few countries with an open, authoritative, geocoded national address file, thanks to 3 years of continual effort from the federal and state governments.

That said, we have the most piecemeal approach to natural peril data in Australia. Getting a national view of, say, flood risk isn’t possible due to the way the data is created and collected at the local and state government level. Being in the insurance industry, I’m obviously biased about wanting access to peril data, but having no holistic view of risk, and no data to share, doesn’t help the federal government serve the community. It’s a far cry from the availability of FEMA’s data in the US.

Q: Uber drivers have robot cars and McDonald’s workers have robot cooks; what are geohipsters going to be replaced with?

A: Who says we’re going to be replaced? No amount of machine learning could solve a 999999 error!

But if we are going to be replaced — on the data capture front it’ll probably be due to autonomous UAVs and machine learning. Consider aerial camera systems that can capture data at better than 5 cm resolution, but mounted on a winged, autonomous UAV that could fly 10,000s of sq km a day. Bung the data into an omnipotent machine learning feature extractor (like the ones Google et al have kind of got working), and entire 3D models of cities could be built regularly with only a few humans involved.

There’ll still be humans required to produce PDFs… oh sorry, you said what are geohipsters going to be replaced with. There’ll still be humans required to produce Leaflet+D3 web maps for a while before they work out how to automate it. Speaking of automation — one of the benefits of becoming a team of developers is the career future-proofing. If you’re worried about losing your job to automation, become the one writing the automation code!

Q: What are some startups (geo or non-geo) that you follow?

A: Mapbox and CartoDB are two of the most interesting geospatial companies to follow right now. Like Google before them, they’ve built a market right under the noses of the incumbent GIS vendors by focussing on the user and developer experience, not by trying to wedge as many tools or layers as they can into a single map.

In the geocoding and addressing space it’s hard to go past What3Words for ingenuity, and for the traction they’ve got in changing how people around the world communicate their location.

In the insurance space, there’s a monumental amount of hot air surrounding Insuretech, but a few startups are starting to get their business models off the ground. Peer to peer and micro insurance are probably the most interesting spaces to watch. Companies like Friendsurance and Trov are starting to make headway here.

Q: And finally, what do you do in your free time that makes you a geohipster?

A: The other day I took my son to football (soccer) training. I sat on the sideline tinkering with a Leaflet+Python+PostGIS spatio-temporal predictive analytical map that a colleague and I put together the weekend prior for an emergency services hackathon. Apart from being a bad parent for not watching my son, I felt I’d achieved geohipster certification with that effort.

How a geohipster watches football (soccer) practice

In all seriousness, being a geohipster is about adapting geospatial technology & trying something new to create something useful, something useless, something different. It’s what I love doing in my spare time. It’s my few hours a night to be as creative as I can be.

Terry Griffin: “Agricultural big data has evolved out of precision ag technology”

Terry Griffin, PhD
Dr. Terry Griffin (@SpacePlowboy) is the cropping systems economist specializing in big data and precision agriculture at Kansas State University. He earned his bachelor’s degree in agronomy and master’s degree in agricultural economics from the University of Arkansas, where he began using commercial GIS products in the late 1990s. While serving as a precision agriculture specialist for University of Illinois Extension, Terry expanded his GIS skills by adding open source software. He earned his Ph.D. in Agricultural Economics with emphasis in spatial econometrics from Purdue University. His doctoral research developed methods to analyze site-specific crop yield data from landscape-scale experiments using spatial statistical techniques, ultimately resulting in two patents regarding the automation of community data analysis, i.e. agricultural big data analytics. He has received the 2014 Pierre C. Robert International Precision Agriculture Young Scientist Award, the 2012 Conservation Systems Precision Ag Researcher of the Year, and the 2010 Precision Ag Awards of Excellence for Researchers/Educators. Terry is a member of the Site-Specific Agriculture Committees for the American Society of Agricultural and Biological Engineers. Currently Dr. Griffin serves as an advisor on the board of the Kansas Agricultural Research and Technology Association (KARTA). Terry and Dana have three wonderful children.

Q: Your background is in Agronomy and Agricultural Economics. When along this path did you discover spatial/GIS technologies, and how did you apply them for the first time?

A: During graduate school my thesis topic was in precision agriculture, or what could be described as information technology applied to the production of crops. GPS was an enabling technology, along with GIS and site-specific sensors. I was first exposed to GIS in the late 1990s, when I mapped data from GPS-equipped yield monitors. I dived into GIS in the early 2000s as a tool to help manage and analyze the geospatial data generated from agricultural equipment and farm fields.

Q: Precision Agriculture is a huge market for all sorts of devices. How do you see spatial playing a role in the overall Precision Agriculture sandbox?

A: Precision ag is a broad term, and many aspects of spatial technology have come into common use on many farms. Some technology automates the steering of farm equipment in the field, and similar technology automatically shuts off sections of planters and sprayers to prevent overlap where the equipment has already performed its task. Other forms of precision ag seem to do the opposite — rather than automate a task, they gather data that are not immediately usable until processed into decision-making information. These information-intensive technologies, which are inseparable from GIS and spatial analysis, have the greatest potential for increased utilization.

Q: What do you see as hurdles for spatial/data analytics firms who want to enter the Precision Agriculture space, and what advice would you give them?

A: One of the greatest hurdles, at least in the short run, is data privacy as it relates to ‘big data’, or aggregating farm-level data across regions. A tertiary obstacle is the lack of wireless connectivity, such as broadband internet via cellular technology, in rural areas; without this technology agricultural big data is at a disadvantage.

Q: While there have been attempts at an open data standard for agriculture (agxml, and most recently SPADE), none have seemed to catch on.  Do you think this lack of a standard holds Precision Agriculture back, or does it really even need an open standard?

A: Data must have some sort of standardization, or at least a translation system, such that each player in the industry can maintain their own system. Considerable work has been conducted in this area, and progress is being made; we can think of the MODUS project as the leading example. Standards have always been important, even when precision ag technology was isolated to an individual farm; but now, with the big data movement, the need for standardization has moved to the front burner. Big data analytics relies on the network effect, specifically what economists refer to as network externalities: the value of participating in the system is a function of the number of participants. Therefore, the systems must be welcoming to all potential participants, and must minimize barriers in order to increase participation rates.

Q: What is your preferred spatial software, or programming language?

A: All my spatial econometric analysis and modeling is in R, and R is also where a considerable amount of my GIS work is conducted. However, I use QGIS and recommend it to many agricultural clients, as it is more affordable when they are uncertain whether they are ready to make a financial investment. For teaching I use Esri ArcGIS and GeoDa in addition to R.

Q: If money wasn’t an issue, what would be your dream Spatial/Big Data project?

A: Oddly enough I think I already am doing those things. I am fortunate to be working on several aspects of different projects that I hope will make a positive difference for agriculturalists. Many of the tools that I am building or have built are very data-hungry, requiring much more data than has been available. I am anxious for these tools to become useful when the ag data industry matures.

Q: You tend to speak at a number of precision agriculture conferences, and you have spoken at a regional GIS group. Have you ever considered speaking at one of the national conferences?

A: I’m always open to speaking at national or international conferences.

Q: Lastly, explain to our audience of geohipsters what is so hip about Precision Ag, Big Data and Spatial.

A: Agricultural big data has evolved out of precision ag technology, and in its fully functional form is likely to be one of the largest global networks of data collection, processing, archiving, and automated recommendation systems the world has ever known.

Mark Iliffe: “Maps show us where to direct our resources and improve the lives of people”

Mark Iliffe
Mark Iliffe (@markiliffe) is a geographer/map geek working on mapping projects around the world. He leads Ramani Huria for the World Bank, is a Neodemographic Research Fellow at the University of Nottingham after completing his PhD at the Horizon Institute, and is a mentor for Geeks Without Bounds.

Q: Suitably for a geohipster, your OpenStreetMap profile says “I own a motorbike and have a liking to randomly spend weekends finding out ‘what is over there’”. What have you found?

A: I think I wrote that around a decade ago while getting into OSM, on a foreign exchange trip in Nancy, France! I found out a lot of things, from discovering that taking a 125cc Yamaha (a hideously small and underpowered motorcycle — think chimpanzee riding a tricycle) around Europe was slow and cold, to new friendships. Also a career path in maps and a love of all things geospatial, via counting flamingos in Kenya…

Q: Everyone has to start somewhere, and for you I believe that was mapping toilets (or places toilets should be). Indeed I think we first met when you presented your sanitation hack project Taarifa at #geomob by squatting on the table to demonstrate proper squat toilet technique. Tell us about Taarifa.

A: Taarifa is/was a platform for improving public service delivery in emerging countries. It came out of the London Water Hackathon in 2011, basically as an idea that we could do more with the data being generated by the many humanitarian mapping projects that OSM had enabled at the time, such as Map Kibera, Ramani Tandale, and Haiti earthquake mapping. As a community open-source project, it showed how feedback loops between citizens and service providers could be used to get water points or toilets fixed. We used Ushahidi as a base, adding a workflow for reports; we tried to push these changes back to the Ushahidi community, but the core developers had other objectives — fair enough. We in the Taarifa community thought we had something special regardless, but it was a hack; it wasn't planned to be deployed anywhere.
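
As a purely illustrative sketch of what "adding a workflow for reports" on top of citizen reporting can look like (the state names and fields below are hypothetical, not Taarifa's actual schema):

```python
# A minimal, purely illustrative report workflow of the kind Taarifa
# layered on top of citizen reports. State names and fields are
# hypothetical, not Taarifa's actual schema.
from dataclasses import dataclass, field

ALLOWED = {
    "submitted": {"verified", "rejected"},
    "verified": {"assigned"},
    "assigned": {"fixed"},
    "fixed": set(),
    "rejected": set(),
}

@dataclass
class Report:
    description: str
    lat: float
    lon: float
    status: str = "submitted"
    history: list = field(default_factory=list)

    def advance(self, new_status: str) -> None:
        # Enforce the feedback loop: only legal transitions allowed.
        if new_status not in ALLOWED[self.status]:
            raise ValueError(f"cannot move from {self.status} to {new_status}")
        self.history.append(self.status)
        self.status = new_status

report = Report("broken water point", lat=-6.79, lon=39.27)
report.advance("verified")  # e.g. confirmed by a community mapper
report.advance("assigned")  # routed to the responsible service provider
report.advance("fixed")     # the loop closes back to the citizen
print(report.status, report.history)
# fixed ['submitted', 'verified', 'assigned']
```

The point of such a loop is that a report is not just a dot on a map: it carries state that a service provider can act on and a community can audit.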

In January 2012 I was in a meeting with a colleague at the World Bank who'd heard that Taarifa had been suggested to fill a need: monitoring the construction of schools in Uganda. He arranged a meeting with the project manager for me; I went along, and a week later I was coding on the plane to Uganda to pilot Taarifa across 4 districts around the country. Ultimately, it was scaled to all 111 districts at the request of the Ugandan Ministry of Local Government.

From this the Taarifa community started to grow, expanding the small core of developers. In 2013 we won the Sanitation Hackathon Challenge, then received a $100K World Bank innovation award to set up Taarifa in the Iringa Region of Tanzania. Taarifa and its collaborators on that project (SNV, Geeks Without Bounds, and ITC Twente) then went on to win a DFID Human Development Innovation Fund award of £400,000. Since then it has gone in a different direction, away from a technical community focus to one that concentrates on building the local social fabric, wholly embedded and run locally in Tanzania.

I feel that this was Taarifa's most important contribution — not one of technology, but one of convening development agencies and coders to innovate a little. Now, the main developers haven't worked on the main codebase for over a year, but Taarifa's ideas of creating feedback loops in emerging countries live on, in its grants, and have been absorbed into other projects too.

Q: Actually I think I’m wrong, even before Taarifa you were an intern at Cloudmade, the first company to try to make money using OpenStreetMap. Founded by Steve Coast (and others), the VC-funded business hired many of the “famous” names of early OSM, before eventually fizzling out and moving into a different field. What was it like? Any especially interesting memories? What sort of impression did that experience leave on you? Also, what’s your take on modern VC-funded OpenStreetMap companies like Mapbox?

A: Cloudmade was fantastic; I learned a lot from each of the OSMers who worked there — Steve Coast, Andy Allan, Nick Black, Matt Amos, and Shaun McDonald. At Cloudmade, I wrote a routing engine for OSM — now-common tools like pgRouting weren't really around. I tried to build pgRouting from source, wasted three days, and so started from scratch. In hindsight, I should have persevered with pgRouting and got involved in developing the existing tool instead.
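
For readers who haven't written one: the core of such a routing engine is a shortest-path search over the road graph. A minimal Dijkstra sketch on a made-up graph (illustrative only; this is not the Cloudmade code) might look like:

```python
# Minimal Dijkstra shortest-path search over a toy road graph -- the
# core computation of an OSM routing engine. In practice the graph
# and edge costs would be derived from OSM ways; these are made up.
import heapq

def shortest_path(graph, start, goal):
    """graph: {node: [(neighbour, cost), ...]}. Returns (cost, path)."""
    queue = [(0.0, start, [start])]   # (cost so far, node, path taken)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, edge_cost in graph.get(node, []):
            if neighbour not in seen:
                heapq.heappush(queue, (cost + edge_cost, neighbour, path + [neighbour]))
    return float("inf"), []          # goal unreachable

roads = {
    "a": [("b", 2.0), ("c", 5.0)],
    "b": [("c", 1.0), ("d", 4.0)],
    "c": [("d", 1.0)],
}
print(shortest_path(roads, "a", "d"))  # (4.0, ['a', 'b', 'c', 'd'])
```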

It was my first tech company to work at; they were based in Central London and I was broke, so I had to stay with my uncle in Slough, about 30 miles away. I used to work quite late and slept on the office floor a few times. Once Nick came in early and caught me stuffing my sleeping bag back into the bottom drawer of my desk. The advice was to probably go home a bit more — advice that I've applied selectively since, but I don't sleep on my office floor anymore!

The VC situation is always going to be complex. I wasn't too surprised when Cloudmade eventually pivoted, and its ideas and creations, such as the “Map Style Editor” and Leaflet.js, live on regardless of the company. At SoTM in Girona I made the comment that OSM was going through puberty. On reflection, I think it was a crude but accurate way to describe our project at that time. We didn't know what OSM would or could become. OSM didn't know how to deal with companies like Cloudmade, and neither did the companies know how to deal with OSM; to a certain extent I think we're still learning, but getting better. At the time, though, like teenagers dealing with new hormones, emotions ran riot. All this without realising that, in the same way OSM has changed the world, OSM is also changed by it — and this is a good thing. Gary Gale has also mused extensively on this.

Now, with the generation of companies that followed — CartoDB, Mapbox, etc. — I think they are much more attuned to supporting and evolving the OSM ecosystem. Mapbox Humanitarian is one example, as is Mapbox's support for developing the iD editor. In turn, the OSM community is growing as well, especially in the humanitarian space, with the Humanitarian OpenStreetMap Team (HOT) supporting numerous projects around the world and acting as a useful interface to OSM for global institutions.

Q: Did you ever think back then that OSM would get as big and as global as it has?

A: TL;DR: Yes.

Recently, I had a discussion with a friend in a very British National Mapping Agency about the nature of exploration. Explorers of old would crisscross the world charting new things, sometimes for their own pleasure, but mostly for economic gain. These people then formed the mapping agencies that data from OSM ‘competes’ with today.

Through its vast army of volunteers, OSM embodies the same exploratory spirit, whether they are mapping their own communities or supporting disaster relief efforts. But instead of the privileged few, it's the many. OSM is now making tools, and gaining access to data, that make it easier than ever before to contribute, whether with map data or any other contribution to the community.

Q: Despite those humble beginnings I believe you are now Doctor Mark Iliffe, having very recently defended your PhD thesis in Geography at the University of Nottingham. Congrats! Nevertheless though, doesn’t fancy book lernin’ like that reduce your geohipster credibility? In the just-fucking-do-it neogeo age is a formal background in geography still relevant? Is it something you’d recommend to kids starting their geo careers?

A: Thanks! Doing a PhD was by far the worst thing I've ever done, and probably ever will do — to myself, friends, and family. But it wasn't through book learning; I did it in the field. Most of the thesis itself was written at 36,000 ft via Qatar/British Airways and not the library (nb. this was/is a stupid idea; do it in the library).

Hopefully the geohipster cred should still be strong, but I wouldn't recommend a PhD to kids starting their careers. Bed in for a few years, work out what you want to do, get comfortable, and then see if a PhD is for you. When I started my PhD, I'd done a small amount of work with Map Kibera and other places, and knew I wanted to work in the humanitarian mapping space, but full-time jobs didn't exist. Doing a PhD gave me the space (and a bit of money) to do that. Now these jobs, organisations, and career paths exist. Five years ago they didn't.

Q: Though you live in the UK, for the last few years you’ve been working a lot in Tanzania, most recently with the World Bank. A lot of the work has been about helping build the local community to map unmapped (but nevertheless heavily populated) areas like Tandale. Indeed this work was also the basis for your PhD thesis. Give us the details on what you’ve been working on, who you’ve been working with, and most of all what makes it hip?

A: Ramani Huria takes up a lot of my time… It's a community mapping project with the Government of Tanzania, universities, and civil society organisations, supported by the World Bank and the Red Cross. Ramani Huria has mapped over 25 communities in Dar es Salaam, covering around 1.3 million people. Dar es Salaam suffers from quite severe flooding, partly because it is the fastest-growing city in Africa, with a population of over 5.5 million.

https://www.youtube.com/watch?v=Lz75aHQpmf8

Ramani Huria is powered by a cadre of volunteers, pulling together 160+ university students and hundreds of community members to collect data on roads, water points, hospitals, and schools, among other features. One of the key maps is of the extent of flooding, produced by residents of flood-prone communities sketching on maps. Now that these maps exist, community leaders can put flood mitigation strategies in place — either by building new drains or by ensuring existing infrastructure is maintained. That's the hip part of Ramani Huria: the local community is leading the mapping, with us as the international community in support.

Ramani Huria — a community mapping project

Q: Over the last years there has been a big push by HOT and Missing Maps to get volunteers remote mapping in less developed areas like Tanzania. Some OSMers view this as a bad thing, as they perceive that it can inhibit the growth of a local community. As someone who’s been “in the field”, what’s your take? Is remote mapping helpful or harmful?

A: The only accurate map of the world is the world itself. With the objective of mapping the world, let’s work on doing that as fast as possible. Then we can focus on using that map to improve our world. Remote mapping is critical for that — but how can we be smarter at doing it?

A huge amount of time and effort goes into making a map of flood extents. A lot of it is basic, for example digitising roads and buildings. This is time-consuming; it doesn't matter who does it, but it has to be done. The knowledge of flooding, though, is held only by those communities, nowhere else. The faster the basic digitising happens, the faster these challenges can be mitigated. Remote mapping gives a valuable head start.

In Ramani Huria, we run “Maptime” events for the emerging local OSM community at the Buni Innovation Hub — these events grow the local community. Personally, I think we should move towards optimising our mapping as much as possible — whether that's through remote mapping or image recognition — though the latter may be a step too far for the time being. I'd love to see interfaces for digitising Mapillary street-view data; it's something we've collected a lot of over the past year. Can we start to digitise drains from Mapillary imagery in the same way Missing Maps uses satellite imagery?

Q: You’ve recently been in Dunkirk in the refugee camps with Mapfugees, what was it like?

A: Mapfugees is a project to help map the La Linière refugee camp near Dunkirk, France. Jorieke Vyncke and I met up in Dunkirk to discuss with the refugees' council — made up of the refugees themselves — and the camp administrators how maps could help. The refugees wished to have maps of the local area for safe passage in and out of the camp, which is surrounded by a motorway and a railway, making passage quite dangerous. Other ‘Mapfugees’ volunteers worked on mapping the surrounding areas with the refugees, and local amenities and safe routes were identified.

At the same time, the camp itself was mapped, providing an understanding of camp amenities so that services to the camp can be improved. This is very similar to my experience of community mapping elsewhere — the map is a good way of discussing what needs to be done and can empower people to make changes.

Q: As you no doubt know, here at GeoHipster we’re not scared to ask the real questions. So let’s get into it. On Twitter you’re not infrequently part of a raging debate — which is better: #geobeers or #geosambuca? How will we ever settle this?

A: #Geobeer now has my vote. I’m way too old for #geobuccas as the hangovers are getting worse!

Q: So what’s next Mark? I mean both for you personally now that you’ve crossed the PhD off the list and also for OSM in places like Africa and in organizations like the World Bank.

A: For me, in a few months I’m going to take a long holiday and work out what’s next. I’m open to suggestions on a postcard!

Looking back, OSM is just past a decade old and is still changing the world for the better. Within OSM, projects like Ramani Huria, but also mapping projects in Indonesia and elsewhere, are at the forefront of this, but more can be done. I believe that organisations like the UN and the World Bank need to move away from individual projects towards supporting a global geospatial ecosystem. This isn't a technical problem, but a societal and policy-based concern.

This doesn’t sound sexy and isn’t. But at the moment, there are over a billion people that live in extreme poverty. Maps show us where to direct our resources and improve the lives of people, the human and financial resources required to map our world will be immense, moving well past the hundreds of thousands of dollars and spent on mapping cities like Dar es Salaam and Jakarta. To build this, we need to work at a high policy level to really embed geo and maps at the core of the Global Development Agenda with the Sustainable Development Goals. Projects like UN GGIM are moving in that direction, but will need support from geohipsters to make it happen.

Maps and geo are crucial to resolving the problems our world faces, and to do that we should use our natural geohipster instincts… JFDI.

Q: Any closing thoughts for all the geohipsters out there?

A: Get out there — you never know where you’ll go.

Maps and mappers of the 2016 calendar: Stephen Smith

In our series “Maps and mappers of the 2016 calendar” we will present throughout 2016 the mapmakers who submitted their creations for inclusion in the 2016 GeoHipster calendar.

***

Stephen Smith

Q: Tell us about yourself.

A: I’m a cartographer by night and a GIS Project Supervisor by day. I work for the Vermont Agency of Transportation where I help our rail section use GIS to manage state-owned rail assets and property. Most of the time my work entails empowering users to more easily access and use their GIS data. I’ve used Esri tools on a daily basis since 2008, but recently I’ve been playing with new tools whenever I get the chance. I attended SOTMUS 2014 in DC (my first non-Esri conference) and was really excited about everything happening around the open source geo community. I got some help installing “Tilemill 2” from GitHub and I haven’t looked back. Since then the majority of the maps I’ve made have been using open source tools and data. Lately I’ve been heavily involved in The Spatial Community, a Slack community of 800+ GIS professionals who collaborate to solve each other’s problems and share GIFs. I’m also starting a “mastermind” for GIS professionals who want to work together and help one another take their careers to the next level.

Q: Tell us the story behind your map (what inspired you to make it, what did you learn while making it, or any other aspects of the map or its creation you would like people to know).

A: This map was a gift for my cousin, who is part Native American and works in DC as an attorney for the National Indian Gaming Commission. His wife told me that he really liked my Natural Resources map, and she wanted me to make him something similar to the US Census American Indian maps but in a “retro” style. I took the opportunity to explore the cartographic capabilities of QGIS and was very impressed.

Q: Tell us about the tools, data, etc., you used to make the map.

A: I’ve done a full writeup of the creation of the map including the data, style inspirations, fonts, challenges, and specific QGIS settings used on my website. You can also download a high resolution version perfect for a desktop wallpaper.

‘Native American Lands’ by Stephen Smith