Monthly Archives: June 2016

Hugh Saalmans: “No amount of machine learning could solve a 999999 error!”

Hugh Saalmans
Hugh Saalmans (@minus34) is a geogeek and IT professional who heads the Location Engineering team at IAG, Australia & New Zealand’s largest general insurer. He’s also one of the founders of GeoRabble -- an inclusive, zero-sales-pitch pub meetup for geogeeks to share their stories. His passion is hackfests & open data, and he’s a big fan of open source and open standards.

Q: How did you end up in geospatial?

A: A love of maths and geography is the short answer. The long answer is I did a surveying degree that covered everything spatial from engineering to geodesy.

My first experience with GIS was ArcGIS on Solaris (circa 1990) in a Uni lab with a severely underpowered server. Out of the 12 workstations, only 10 of us could log in at any one time, and then just 6 of us could actually get ArcGIS to run. Just as well, considering most of the students who could get it to work, including myself, ballsed up our first lab assignment by turning some property boundaries into chopped liver.

Besides GIS, my least favourite subjects at Uni were GPS and geodesy. So naturally I chose a career in geospatial information.

Q: You work for IAG. What does the company do?

A: Being a general insurer, we cover about $2 trillion worth of homes, motor vehicles, farms, and businesses against bad things happening.

Geospatial is a big part of what we do. Knowing where those $2tn of assets are allows us to do fundamental things like providing individualised address level pricing — something common in Australia, but not so common in the US due to insurance pricing regulations. Knowing where assets are also allows us to help customers when something bad does happen. That goes to the core of what we do in insurance. That’s when we need to fulfill the promise we made to our customers when they took out a policy.

Q: What on Earth is Location Engineering?

A: We’re part of a movement that’s happening across a lot of domains that use geo-information: changing from traditional data-heavy, point & click delivery to scripting, automation, cloud, & APIs. We’re a team of geospatial analysts becoming a team of DevOps engineers that deliver geo-information services. So we needed a name to reflect that.

From a skills point of view — we’re moving from desktop analysis & publishing with a bit of SQL & Python to a lot of Bash, SQL, Python & JavaScript with Git, JIRA, Bamboo, Docker and a few other tools & platforms that aren’t that well known in geo circles. We’re migrating from Windows to Linux, desktop to cloud, and licensed to open source. It’s both exciting and daunting to be doing it for an $11bn company!

Q: You’ve been working in the GIS industry for twenty years, how has that been?

A: It’s been great to be a part of 20+ years of geospatial evolutions and revolutions, witnessing geospatial going from specialist workstations to being a part of everyday life, accessible on any device. It’s also been exciting watching open source go from niche to mainstream, government data go from locked down to open, and watching proprietary standards being replaced with open ones.

It’s also been frustrating at times being part of an industry that has a broad definition, no defined start or end (“GIS is everywhere!”), and limited external recognition. In Australia we further muddy the waters by having university degrees and industry bodies that fuse land surveying and spatial sciences into a curious marriage of similar but sometimes opposing needs. Between the limited recognition of surveying as a profession and of geospatial being a separate stream within the IT industry, it’s no real surprise that our work remains a niche that needs to be constantly explained, even though what we do is fundamental to society. In the last 5 years we’ve tried to improve that through GeoRabble, creating a casual forum for anyone to share their story about location, regardless of their background or experience. We’ve made some good progress: almost 60 pub meetups in 8 cities across 3 countries (AU, NZ & SA), with 350 presentations and 4,500 attendees.

Q: How do you work in one industry for twenty years and keep innovating? Any tips on avoiding cynicism and keeping up with the trends?

A: It’s a cliché, but innovation is a mindset. Keep asking yourself and those around you two questions: “Why?” and “Why not?” Asking “why?” helps you improve things by questioning the status quo or understanding a problem better, and gets you focussed on how to fix or improve it. Asking “why not?” either gives you a reality check or lets you go exploring, researching and finding better ways of doing things to create new solutions.

Similarly, I try to beat cynicism by being curious, accepting that learning has no destination, and knowing there is information out there somewhere that can help fix the problem. Go back 15-20 years — it was easy to be cynical. If your chosen tool didn’t work the way you wanted it to, you either had to park the problem or come up with a preposterous workaround. Nowadays, you’ve got no real excuse if you put in the time to explore. There’s open source, GitHub and StackExchange to help you plough through the problem. Here’s one of our case studies as an example: desktop brand X takes 45 mins to tag several million points with a boundary id. Unsatisfied, we make the effort to learn Python, PostGIS and parallel processing through blogs, posts and online documentation. Now you’re cooking with gas in 45 seconds, not 45 minutes.
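The case study above, tagging millions of points with the id of the boundary polygon that contains them, boils down to a point-in-polygon test fanned out over chunks of points. Here is a minimal pure-Python sketch of that pattern; the ray-casting test, the square boundaries and the chunk size are illustrative assumptions, and in practice you would let PostGIS’s ST_Contains do the geometry and parallelise the chunks across database connections:

```python
from concurrent.futures import ThreadPoolExecutor

def point_in_polygon(x, y, polygon):
    """Ray-casting test: does (x, y) fall inside polygon (a list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray from (x, y)
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def tag_points(points, boundaries):
    """Tag each point with the id of the first boundary polygon containing it (or None)."""
    return [next((bid for bid, poly in boundaries.items()
                  if point_in_polygon(x, y, poly)), None)
            for x, y in points]

# Two illustrative square "boundaries" and a handful of points
boundaries = {
    "A": [(0, 0), (10, 0), (10, 10), (0, 10)],
    "B": [(10, 0), (20, 0), (20, 10), (10, 10)],
}
points = [(5, 5), (15, 5), (25, 5)]

# Fan the work out over chunks, as you would with millions of points
# (threads shown for brevity; CPU-bound work wants processes or PostGIS)
chunks = [points[i:i + 2] for i in range(0, len(points), 2)]
with ThreadPoolExecutor() as pool:
    results = [tag for chunk_tags in pool.map(lambda c: tag_points(c, boundaries), chunks)
               for tag in chunk_tags]
print(results)  # ['A', 'B', None]
```

The chunking is what makes the 45-minutes-to-45-seconds jump possible: each chunk is independent, so the work scales out across however many workers or database connections you can throw at it.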

Another way to beat cynicism is to accept that things will change, and they will change faster than you want them to. They will leave you with yesterday’s architecture or process and you will be left with a choice to take the easy road and build up design debt into your systems (which will cost you at some point), or you take the hard road and learn as you go to future-proof the things you’re responsible for.

Q: What are some disruptive technologies that are on your watch list?

A: Autonomous vehicles are the big disruptor in insurance. KPMG estimate the motor insurance market will shrink by 60% in the next 25 years due to a reduction in crashes. How do we offset this loss of profitable income? By getting better at analysing our customers and their other assets, especially homes. Enter geospatial to start answering complicated questions like “how much damage will the neighbour’s house do to our insured’s house during a storm?”

The Internet of Things is also going to shake things up in insurance. Your doorbell can now photograph would-be burglars or detect hail. Your home weather sensor can alert you to damaging winds. Now imagine hundreds of thousands of these sensors in each city — imagine tracking burglars from house to house, or watching a storm hit a city, one neighbourhood at a time. Real-time, location-based sensor nets are going to change the way we protect our homes and how insurers respond in times of crisis. Not to mention 100,000+ weather sensors could radically improve our ability to predict weather-related disasters. It’s not surprising IBM bought The Weather Channel’s online and B2B services arm last year, as they have one of the best crowdsourced weather services.

UAVs are also going to shake things up. We first used them last Christmas after a severe bushfire (wildfire) hit the Victorian coast. Due to asbestos contamination, the burnt-out area was sealed off. Using UAVs to capture the damage was the only way at the time to give customers who had lost everything some certainty about their future. Jumping to the near future again — Intel brought their 100-drone lightshow to Sydney in early June. Whilst marvelling at a new artform, watching the drones glide and dance in beautiful formations, I realised what autonomous UAVs will be capable of in the next few years: swarms of them capturing entire damaged neighbourhoods just a few hours after a weather event or bushfire has passed.

Q: What is the dirtiest dataset you’ve had to ingest, and what about the cleanest?

A: The thing about working for a large corporation with a 150-year history is that your organisation knows how to put the L into legacy systems. We have systems that write 20-30 records for a single customer transaction in a non-sequential manner, so you almost need a PhD to determine the current record. There are other systems that write proprietary BLOBs into our databases (seriously, in 2016!). Fortunately, we have a simplification program to clear up a lot of these types of issues.

As far as open data goes — that’d be the historical disaster data we used at GovHack in 2014.  Who knew one small CSV file could cause so much pain. Date fields with a combination of standard and American dates, inconsistent and incoherent disaster classifications, lat/longs with variable precisions.

I don’t know if there is such a thing as a clean dataset. All data requires some wrangling to make it productive, and all large datasets have quirks. G-NAF (Australia’s Geocoded National Address File) is pretty good on the quirk front, but at 31 tables and 39 foreign keys, it’s not exactly ready to roll in its raw form.
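G-NAF’s normalised layout is exactly why it isn’t ready to roll as-is: a usable address only emerges after walking several foreign keys. The sketch below shows the flattening idea on a toy three-table schema in SQLite; the table and column names are simplified stand-ins inspired by G-NAF, not the real 31-table schema:

```python
import sqlite3

# A toy schema in the spirit of G-NAF's normalised layout
# (names here are illustrative, not the real schema)
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE locality (locality_pid TEXT PRIMARY KEY,
                           locality_name TEXT, state TEXT);
    CREATE TABLE street_locality (street_locality_pid TEXT PRIMARY KEY,
                                  street_name TEXT, street_type TEXT,
                                  locality_pid TEXT REFERENCES locality);
    CREATE TABLE address_detail (address_detail_pid TEXT PRIMARY KEY,
                                 number_first INTEGER, postcode TEXT,
                                 street_locality_pid TEXT REFERENCES street_locality);
    INSERT INTO locality VALUES ('loc1', 'SYDNEY', 'NSW');
    INSERT INTO street_locality VALUES ('st1', 'GEORGE', 'STREET', 'loc1');
    INSERT INTO address_detail VALUES ('addr1', 100, '2000', 'st1');
""")

# Flatten the joins into the single address string most consumers actually want
row = conn.execute("""
    SELECT a.number_first || ' ' || s.street_name || ' ' || s.street_type
           || ', ' || l.locality_name || ' ' || l.state || ' ' || a.postcode
    FROM address_detail a
    JOIN street_locality s ON s.street_locality_pid = a.street_locality_pid
    JOIN locality l ON l.locality_pid = s.locality_pid
""").fetchone()
print(row[0])  # 100 GEORGE STREET, SYDNEY NSW 2000
```

Materialising joins like this once, into a flat address table, is the usual way to make the raw data usable so downstream users never have to touch the foreign keys themselves.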

Q: You were very quick to release some tools to help people to work with the G-NAF dataset when it was released. What are some other datasets that you’d like to see made open?

A: It can’t be overstated how good it was to see G-NAF being made open data. We’re one of the lucky few countries with an open, authoritative, geocoded national address file, thanks to 3 years of continual effort from the federal and state governments.

That said, we have the most piecemeal approach to natural peril data in Australia. Getting a national view of, say, flood risk isn’t possible due to the way the data is created and collected at the local and state government level. Being in the insurance industry, I’m obviously biased about wanting access to peril data, but having no holistic view of risk, and no data to share, doesn’t help the federal government serve the community either. It’s a far cry from the availability of FEMA’s data in the US.

Q: Uber drivers have robot cars, McDonald’s workers have robot cooks, what are geohipsters going to be replaced with?  

A: Who says we’re going to be replaced? No amount of machine learning could solve a 999999 error!

But if we are going to be replaced — on the data capture front it’ll probably be due to autonomous UAVs and machine learning. Consider aerial camera systems that can capture data at better than 5 cm resolution, but mounted on a winged, autonomous UAV that could fly 10,000s of sq km a day. Bung the data into an omnipotent machine learning feature extractor (like the ones Google et al have kind of got working), and entire 3D models of cities could be built regularly with only a few humans involved.

There’ll still be humans required to produce PDFs… oh sorry, you said what are geohipsters going to be replaced with. There’ll still be humans required to produce Leaflet+D3 web maps for a while before they work out how to automate it. Speaking of automation — one of the benefits of becoming a team of developers is the career future-proofing. If you’re worried about losing your job to automation, become the one writing the automation code!

Q: What are some startups (geo or non-geo) that you follow?

A: Mapbox and CartoDB are two of the most interesting geospatial companies to follow right now. Like Google before them, they’ve built a market right under the noses of the incumbent GIS vendors by focussing on the user and developer experience, not by trying to wedge as many tools or layers as they can into a single map.

In the geocoding and addressing space it’s hard to go past What3Words for ingenuity and for the traction they’ve got in changing how people around the world communicate their location.

In the insurance space, there’s a monumental amount of hot air surrounding insurtech, but a few startups are starting to get their business models off the ground. Peer-to-peer and micro insurance are probably the most interesting spaces to watch. Companies like Friendsurance and Trov are starting to make headway here.

Q: And finally, what do you do in your free time that makes you a geohipster?

A: The other day I took my son to football (soccer) training. I sat on the sideline tinkering with a Leaflet+Python+PostGIS spatio-temporal predictive analytical map that a colleague and I put together the weekend prior for an emergency services hackathon. Apart from being a bad parent for not watching my son, I felt I’d achieved geohipster certification with that effort.

How a geohipster watches football (soccer) practice

In all seriousness, being a geohipster is about adapting geospatial technology & trying something new to create something useful, something useless, something different. It’s what I love doing in my spare time. It’s my few hours a night to be as creative as I can be.

Terry Griffin: “Agricultural big data has evolved out of precision ag technology”

Terry Griffin, PhD
Dr. Terry Griffin (@SpacePlowboy) is the cropping systems economist specializing in big data and precision agriculture at Kansas State University. He earned his bachelor’s degree in agronomy and master’s degree in agricultural economics from the University of Arkansas, where he began using commercial GIS products in the late 1990s. While serving as a precision agriculture specialist for University of Illinois Extension, Terry expanded his GIS skills by adding open source software. He earned his Ph.D. in Agricultural Economics with emphasis in spatial econometrics from Purdue University. His doctoral research developed methods to analyze site-specific crop yield data from landscape-scale experiments using spatial statistical techniques, ultimately resulting in two patents regarding the automation of community data analysis, i.e. agricultural big data analytics. He has received the 2014 Pierre C. Robert International Precision Agriculture Young Scientist Award, the 2012 Conservation Systems Precision Ag Researcher of the Year, and the 2010 Precision Ag Awards of Excellence for Researchers/Educators. Terry is a member of the Site-Specific Agriculture Committees for the American Society of Agricultural and Biological Engineers. Currently Dr. Griffin serves as an advisor on the board of the Kansas Agricultural Research and Technology Association (KARTA). Terry and Dana have three wonderful children.

Q: Your background is in Agronomy and Agricultural Economics. When along this path did you discover spatial/GIS technologies, and how did you apply them for the first time?

A: During graduate school my thesis topic was in precision agriculture, or what could be described as information technology applied to the production of crops. GPS was an enabling technology along with GIS and site-specific sensors. I was first exposed to GIS in the late 1990s when I mapped data from GPS-equipped yield monitors. I dived into GIS in the early 2000s as a tool to help manage and analyze the geospatial data generated from agricultural equipment and farm fields.

Q: Precision Agriculture is a huge market for all sorts of devices. How do you see spatial playing a role in the overall Precision Agriculture sandbox?

A: Precision Ag is a broad term, and many aspects of spatial technology have become common use on many farms. Some technology automates the steering of farm equipment in the field, and similar technology automatically shuts off sections of planters and sprayers to prevent overlap where the equipment has already performed its task. Other forms of precision ag seem to do the opposite — rather than automate a task, they gather data that are not immediately usable until processed into decision-making information. These information-intensive technologies, inseparable from GIS and spatial analysis, have the greatest potential for increased utilization.

Q: What do you see as hurdles for spatial/data analytics firms who want to enter the Precision Agriculture space, and what advice would you give them?

A: One of the greatest hurdles, at least in the short run, is data privacy as it relates to ‘big data’, or aggregating farm-level data across regions. Another obstacle is the lack of wireless connectivity, such as broadband internet via cellular technology, in rural areas; without it, agricultural big data is at a disadvantage.

Q: While there have been attempts at an open data standard for agriculture (agxml, and most recently SPADE), none have seemed to catch on.  Do you think this lack of a standard holds Precision Agriculture back, or does it really even need an open standard?

A: Data must have some sort of standardization, or at least a translation system such that each player in the industry can maintain their own system. Considerable work has been conducted in this area, and progress is being made; we can think of the MODUS project as the leading example. Standards have always been important, even when precision ag technology was isolated to an individual farm; but now, with the big data movement, the need for standardization has moved to the front burner. Big data analytics relies on the network effect, specifically what economists refer to as network externalities: the value of participating in the system is a function of the number of participants. Therefore, the systems must be welcoming to all potential participants, and must minimize the barriers to entry to increase participation rates.

Q: What is your preferred spatial software, or programming language?

A: All my spatial econometric analysis and modeling is in R, which is also where a considerable amount of my GIS work is conducted. However, I use QGIS and recommend it to many agricultural clients, as it is the more affordable option when they are uncertain whether they are ready to make a financial investment. For teaching I use Esri ArcGIS and GeoDa in addition to R.

Q: If money wasn’t an issue, what would be your dream Spatial/Big Data project?

A: Oddly enough, I think I already am doing those things. I am fortunate to be working on several aspects of different projects that I hope will make a positive difference for agriculturalists. Many of the tools that I am building or have built are very data-hungry, requiring much more data than has been available. I am eager for these tools to become useful as the ag data industry matures.

Q: You speak at a number of Precision Agriculture conferences, and you have spoken at a regional GIS group. Have you ever considered speaking at one of the national conferences?

A: I’m always open to speaking at national or international conferences.

Q: Lastly, explain to our audience of geohipsters what is so hip about Precision Ag, Big Data and Spatial.

A: Agricultural big data has evolved out of precision ag technology, and in its fully functional form is likely to be one of the largest global networks of data collection, processing, archiving, and automated recommendation systems the world has ever known.

Mark Iliffe: “Maps show us where to direct our resources and improve the lives of people”

Mark Iliffe
Mark Iliffe (@markiliffe) is a geographer/map geek working on mapping projects around the world. He leads Ramani Huria for the World Bank, is Neodemographic Research Fellow at the University of Nottingham after completing his PhD at the Horizon Institute, and a mentor for Geeks Without Bounds.

Q: Suitably for a geohipster, your OpenStreetMap profile says “I own a motorbike and have a liking to randomly spend weekends finding out ‘what is over there’”. What have you found?

A: I think I wrote that around a decade ago while getting into OSM, on a foreign exchange trip in Nancy, France! I found out a lot of things, from the discovery that taking a 125cc Yamaha (a hideously small and underpowered motorcycle; think chimpanzee riding a tricycle) around Europe was slow and cold, to new friendships. Also, a career path in maps and a love of all things geospatial, via counting flamingos in Kenya…

Q: Everyone has to start somewhere, and for you I believe that was mapping toilets (or places toilets should be). Indeed I think we first met when you presented your sanitation hack project Taarifa at #geomob by squatting on the table to demonstrate proper squat toilet technique. Tell us about Taarifa.

A: Taarifa is/was a platform for improving public service delivery in emerging countries. It came out of the London Water Hackathon in 2011, basically as an idea that we could do more with the data being generated by the many humanitarian mapping projects that OSM had enabled at the time, such as Map Kibera, Ramani Tandale and the Haiti earthquake mapping. As a community open-source project, it showed the potential of how feedback loops between citizens and service providers could be used to fix water points or toilets. We used Ushahidi as a base, adding workflow for reports; we tried to push these changes back to their community, but the core developers had other objectives — fair enough. We as the Taarifa community thought we had something special regardless, but it was a hack; it wasn’t planned to be deployed anywhere.

In January 2012 I was in a meeting with a colleague at the World Bank who’d heard that Taarifa had been suggested to fill a need around monitoring the construction of schools in Uganda. He arranged a meeting with the project manager for me; I went along, and a week later I was coding on the plane to Uganda to pilot Taarifa across 4 districts around the country. Ultimately, it ended up being scaled to all 111 districts at the request of the Ugandan Ministry of Local Government.

From this the Taarifa community started to grow, expanding the small core of developers. In 2013 we won the Sanitation Hackathon Challenge, then received a $100K World Bank innovation award to set up Taarifa in the Iringa Region of Tanzania. Taarifa and its collaborators on that project (SNV, Geeks Without Bounds and ITC Twente) then went on to win a DFID Human Development Innovation Fund award of £400,000. Since then it’s gone in a different direction, away from a technical community focus to one that concentrates on building the local social fabric, wholly embedded and run locally in Tanzania.

I feel that this was Taarifa’s most important contribution — not one of technology, but one of convening development agencies and coders to innovate a little. Now, the main developers haven’t worked on the main codebase for over a year, but Taarifa’s ideas of creating feedback loops in emerging countries live on in its grants, and have been absorbed into other projects too.

Q: Actually I think I’m wrong, even before Taarifa you were an intern at Cloudmade, the first company to try to make money using OpenStreetMap. Founded by Steve Coast (and others), the VC-funded business hired many of the “famous” names of early OSM, before eventually fizzling out and moving into a different field. What was it like? Any especially interesting memories? What sort of impression did that experience leave on you? Also, what’s your take on modern VC-funded OpenStreetMap companies like Mapbox?

A: Cloudmade was fantastic; I learned a lot from each of the OSMers who worked there — Steve Coast, Andy Allen, Nick Black, Matt Amos, and Shaun McDonald. At Cloudmade, I wrote a routing engine for OSM — now-common tools like pgRouting weren’t really around. I tried to build pgRouting from source, wasted three days, so started from scratch. In hindsight, I should have persevered with pgRouting and got involved in developing the existing tool instead of starting from scratch.

It was the first tech company I worked at; they were based in Central London and I was broke, so I had to stay with my uncle in Slough, about 30 miles away. I used to work quite late and slept on the office floor a few times. Once Nick was in early and caught me stuffing my sleeping bag back into the bottom drawer of my desk. The advice was to probably go home a bit more — advice that I’ve used selectively since, but I don’t sleep on my office floor anymore!

The VC situation is always going to be complex. I wasn’t too surprised when Cloudmade eventually pivoted, and their ideas and creations, such as the Map Style Editor and Leaflet.js, live on regardless of the company. At SoTM in Girona I made the comment that OSM was going through puberty. On reflection, I think it was a crude but accurate way to describe our project at that time. We didn’t know what OSM would or could become. OSM didn’t know how to deal with companies like Cloudmade, and neither did the companies know how to deal with OSM; to a certain extent I think we’re still learning, but getting better. At the time, though, like teenagers dealing with new hormones, emotions ran riot. All this without realising that in the same way OSM has changed the world, OSM is also changed by it — and this is a good thing. Gary Gale has also mused extensively on this.

Now, with the generation of companies after — CartoDB, Mapbox, etc. — I think they are much more receptive to supporting and evolving the OSM ecosystem. Mapbox Humanitarian is one example, as is their support for developing the iD editor. In turn, the OSM community is growing as well, especially in the humanitarian space, with the Humanitarian OpenStreetMap Team (HOT) supporting numerous projects around the world and acting as a useful interface to OSM for global institutions.

Q: Did you ever think back then that OSM would get as big and as global as it has?

A: TL;DR: Yes.

Recently, I had a discussion with a friend in a very British National Mapping Agency about the nature of exploration. Explorers of old would crisscross the world charting new things, sometimes for their own pleasure, but mostly for economic gain. These people then formed the mapping agencies that data from OSM ‘competes’ with today.

By working with its vast army of volunteers, OSM embodies the same exploratory spirit — whether people are mapping their own communities or supporting disaster relief efforts. But instead of the privileged few, it’s the many. Now OSM is building tools and gaining access to data that make it easier than ever before to contribute, whether map data or any other contribution to the community.

Q: Despite those humble beginnings I believe you are now Doctor Mark Iliffe, having very recently defended your PhD thesis in Geography at the University of Nottingham. Congrats! Nevertheless though, doesn’t fancy book lernin’ like that reduce your geohipster credibility? In the just-fucking-do-it neogeo age is a formal background in geography still relevant? Is it something you’d recommend to kids starting their geo careers?

A: Thanks! Doing a PhD was by far the worst thing I’ve ever done, and will ever probably do — to myself, friends, and family. But it wasn’t through book learning; I did it in the field. Most of the thesis itself was written at 36,000 ft via Qatar/British Airways and not the library (nb. this was/is a stupid idea; do it in the library).

Hopefully the geohipster cred should still be strong, but I wouldn’t recommend a PhD to kids starting their careers. Bed in for a few years, work out what you want to do, get comfortable, and then see if a PhD is for you. When I started my PhD, I’d done a small amount of work with Map Kibera and other places, and knew I wanted to work in the humanitarian mapping space but full time jobs didn’t exist. Doing a PhD gave the space (and a bit of money) to do that. Now these jobs, organisations, and career paths exist. Five years ago they didn’t.

Q: Though you live in the UK, for the last few years you’ve been working a lot in Tanzania, most recently with the World Bank. A lot of the work has been about helping build the local community to map unmapped (but nevertheless heavily populated) areas like Tandale. Indeed this work was also the basis for your PhD thesis. Give us the details on what you’ve been working on, who you’ve been working with, and most of all what makes it hip?

A: Ramani Huria takes up a lot of my time… It’s a community mapping project with the Government of Tanzania, universities, and civil society organisations, supported by the World Bank and Red Cross. Ramani Huria has mapped over 25 communities in Dar es Salaam, covering around 1.3 million people. Dar es Salaam suffers from quite severe flooding, partly because it is the fastest growing city in Africa, with a population of over 5.5 million.

https://www.youtube.com/watch?v=Lz75aHQpmf8

Ramani Huria is powered by a cadre of volunteers, pulling together 160+ university students and hundreds of community members to collect data on roads, water points, hospitals, and schools, among other attributes. One of the key maps is of flood extent; this is produced by residents of flood-prone communities sketching on maps. Now that these maps exist, flood mitigation strategies can be put in place by community leaders — either by building new drains or by ensuring existing infrastructure is maintained. That’s the hip part of Ramani Huria: the local community is leading the mapping, with ourselves as the international community in support.

Ramani Huria -- a community mapping project

Q: Over the last years there has been a big push by HOT and Missing Maps to get volunteers remote mapping in less developed areas like Tanzania. Some OSMers view this as a bad thing, as they perceive that it can inhibit the growth of a local community. As someone who’s been “in the field”, what’s your take? Is remote mapping helpful or harmful?

A: The only accurate map of the world is the world itself. With the objective of mapping the world, let’s work on doing that as fast as possible. Then we can focus on using that map to improve our world. Remote mapping is critical for that — but how can we be smarter at doing it?

A huge amount of time and effort goes into making a map of flood extents, but a lot of it is basic work, for example digitising roads and buildings. This is time-consuming — it doesn’t matter who does it, but it has to be done. The knowledge of flooding, though, is held only by those communities, nowhere else. The faster the basic digitising can be done, the faster these challenges can be mitigated. Remote mapping gives a valuable head-start.

In Ramani Huria, we run “Maptime” events for the emerging local OSM community at the Buni Innovation Hub — these events grow the local community. Personally, I think we should move towards optimising our mapping as much as possible — whether that’s through remote mapping or image recognition — but that may be a step too far for the time being. I’d love to see interfaces to digitise Mapillary street view data, it’s something we’ve collected a lot of over the past year. Can we start to digitise drains from Mapillary imagery in the same way Missing Maps uses satellite imagery?

Q: You’ve recently been in Dunkirk in the refugee camps with Mapfugees, what was it like?

A: Mapfugees is a project to help map the La Linière refugee camp near Dunkirk, France. Jorieke Vyncke and I met up in Dunkirk to talk with the refugees’ council — made up of the refugees themselves — and the camp administrators to see how maps could help. The refugees wished to have maps of the local area for safe passage in and out of the camp, which is surrounded by a motorway and a railway, making passage quite dangerous. Other ‘Mapfugees’ volunteers worked with the refugees to map the surrounding areas, so that local amenities and safe routes could be identified.

At the same time, the camp itself was mapped, providing an understanding of camp amenities, so services to the camp can be improved. This is very similar to my experience of community mapping elsewhere — the map is a good way of discussing what needs to be done and can empower people to make changes.

Q: As you no doubt know, here at GeoHipster we’re not scared to ask the real questions. So let’s get into it. On Twitter you’re not infrequently part of a raging debate — which is better: #geobeers or #geosambuca? How will we ever settle this?

A: #Geobeer now has my vote. I’m way too old for #geobuccas as the hangovers are getting worse!

Q: So what’s next Mark? I mean both for you personally now that you’ve crossed the PhD off the list and also for OSM in places like Africa and in organizations like the World Bank.

A: For me, in a few months I’m going to take a long holiday and work out what’s next. I’m open to suggestions on a postcard!

Looking back, OSM is just past a decade old and is still changing the world for the better. Within OSM, projects like Ramani Huria, along with mapping projects in Indonesia and elsewhere, are at the forefront of this, but more can be done. I believe that organisations like the UN and World Bank need to move away from one-off projects towards supporting a global geospatial ecosystem. This isn’t a technical problem, but a societal and policy-based concern.

This doesn’t sound sexy, and it isn’t. But at the moment there are over a billion people who live in extreme poverty. Maps show us where to direct our resources to improve people’s lives, and the human and financial resources required to map our world will be immense, moving well past the hundreds of thousands of dollars already spent on mapping cities like Dar es Salaam and Jakarta. To build this, we need to work at a high policy level to really embed geo and maps at the core of the Global Development Agenda and the Sustainable Development Goals. Initiatives like UN-GGIM are moving in that direction, but they will need support from geohipsters to make it happen.

Maps and geo are crucial to resolving the problems our world faces; to get there, we should use our natural geohipster instincts… JFDI.

Q: Any closing thoughts for all the geohipsters out there?

A: Get out there — you never know where you’ll go.

Maps and mappers of the 2016 calendar: Stephen Smith

In our series “Maps and mappers of the 2016 calendar” we will present throughout 2016 the mapmakers who submitted their creations for inclusion in the 2016 GeoHipster calendar.

***

Stephen Smith

Q: Tell us about yourself.

A: I’m a cartographer by night and a GIS Project Supervisor by day. I work for the Vermont Agency of Transportation where I help our rail section use GIS to manage state-owned rail assets and property. Most of the time my work entails empowering users to more easily access and use their GIS data. I’ve used Esri tools on a daily basis since 2008, but recently I’ve been playing with new tools whenever I get the chance. I attended SOTMUS 2014 in DC (my first non-Esri conference) and was really excited about everything happening around the open source geo community. I got some help installing “Tilemill 2” from GitHub and I haven’t looked back. Since then the majority of the maps I’ve made have been using open source tools and data. Lately I’ve been heavily involved in The Spatial Community, a Slack community of 800+ GIS professionals who collaborate to solve each other’s problems and share GIFs. I’m also starting a “mastermind” for GIS professionals who want to work together and help one another take their careers to the next level.

Q: Tell us the story behind your map (what inspired you to make it, what did you learn while making it, or any other aspects of the map or its creation you would like people to know).

A: This map was a gift for my cousin who is part Native American and works in DC as an attorney for the National Indian Gaming Commission. His wife told me that he really liked my Natural Resources map and she wanted me to make him something similar to the US Census American Indian maps but in a “retro” style. I took the opportunity to explore the cartographic capabilities of QGIS and was very impressed.

Q: Tell us about the tools, data, etc., you used to make the map.

A: I’ve done a full writeup of the creation of the map including the data, style inspirations, fonts, challenges, and specific QGIS settings used on my website. You can also download a high resolution version perfect for a desktop wallpaper.

'Native American Lands' by Stephen Smith