Maps and mappers of the 2016 calendar: Asger Sigurd Skovbo Petersen

In our series “Maps and mappers of the 2016 calendar” we will present throughout 2016 the mapmakers who submitted their creations for inclusion in the 2016 GeoHipster calendar.

***

Asger Sigurd Skovbo Petersen

Q: Tell us about yourself.

A: I work at a small Danish company called Septima, which I co-founded back in early 2013. I have been in the geo business since 2004, when I received my master's degree (MScE) from the Technical University of Denmark.

I do development, consulting, and data analysis. One of my primary interests is finding new ways of utilizing existing data. This interest really took off when I worked as the sole R&D engineer at a data acquisition company that had a massive collection of data just sitting there, waiting to be upcycled. In that job I gained a lot of experience working with quite large LiDAR, raster, and vector datasets, and developing algorithms to process them efficiently.

Q: Tell us the story behind your map (what inspired you to make it, what did you learn while making it, or any other aspects of the map or its creation you would like people to know).

A: While processing the second Danish LiDAR-based elevation model, the producing agency released some temporary point cloud data at a very early stage.

My curiosity was too great to leave these data alone, and with a LASmoons license for Martin Isenburg's LAStools, it was easy to process the 400 km² of LAS files into 40 cm DTMs and DSMs. The usual open source stack then made it easy to publish a hillshaded version as an easy-to-use web map.
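That processing step is simpler than it sounds. Here is a rough, hypothetical sketch of such a run, with flags and paths assumed from the LAStools documentation rather than taken from the actual project:

```python
# Illustrative only: rasterize ground-classified LAS tiles into a
# 40 cm DTM with LAStools' las2dem (paths and tiling are invented).
import subprocess

subprocess.run(
    ["las2dem",
     "-i", "tiles/*.las",       # LAStools expands wildcards itself
     "-keep_class", "2",        # class 2 = ground returns, i.e. a DTM
     "-step", "0.4",            # 0.4 m output cell size
     "-odir", "dtm", "-otif"],  # one GeoTIFF per input tile
    check=True,
)
```

A DSM run would keep first returns (for example with the `-first_only` filter) instead of ground points.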

This web map was widely used and cited, as for quite a while it was the only visible example of the coming national DEM. The old model had a resolution of 1.6 m; at 0.4 m, the new model revealed a lot of detail that simply wasn't visible before. In the following months we actually received quite a few notes from archaeologists who had discovered exciting and previously unknown historic stuff just by browsing our map.

Hillshades are the go-to visualisation for DEMs, probably because they can be produced by almost any raster-capable software and are very easy to interpret. However, they can also hide even very big structures, depending on the orientation of the structure relative to the light source.

This made me want to find a better way to visualise the data, so that our archaeological friends could get even more information from it.

I then read a heap of papers on the subject and decided to try out a visualisation based on the Sky View Factor. At the time I didn't find any implementation that I was able to use, so I ended up implementing my own. (I later discovered that SAGA had a perfectly good implementation, so I could have just used QGIS. But hey, then I wouldn't have had the fun of implementing my own 🙂 )

I did a lot of tests using the Sky View Factor on the new DTM, but I couldn’t make it work as well as I had hoped. By coincidence I ran it on the DSM in an urban area, which gave a very interesting result. This effect is basically what makes the GeoHipster map look different from most other shaded DSMs.
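For readers curious what a Sky View Factor computation involves, here is a minimal NumPy sketch; it is my own illustration of the general idea, not Asger's rasterio/Cython implementation. For each cell, it estimates the horizon angle in a number of azimuths within a search radius and averages how much of the sky stays open:

```python
import numpy as np
import rasterio

def sky_view_factor(dem, cellsize, n_dirs=16, max_dist=50.0):
    """Approximate SVF: mean over n_dirs azimuths of 1 - sin(horizon angle)."""
    rows, cols = dem.shape
    svf = np.zeros(dem.shape)
    n_steps = int(max_dist / cellsize)
    for az in np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False):
        horizon = np.zeros(dem.shape)  # steepest slope (tangent) seen so far
        for s in range(1, n_steps + 1):
            oy = int(round(s * np.sin(az)))
            ox = int(round(s * np.cos(az)))
            # shifted[r, c] = dem[r + oy, c + ox], edges padded with -inf
            shifted = np.full(dem.shape, -np.inf)
            shifted[max(0, -oy):rows - max(0, oy),
                    max(0, -ox):cols - max(0, ox)] = \
                dem[max(0, oy):rows + min(0, oy),
                    max(0, ox):cols + min(0, ox)]
            horizon = np.maximum(horizon, (shifted - dem) / (s * cellsize))
        # sky left open in this azimuth
        svf += 1.0 - np.sin(np.arctan(horizon))
    return svf / n_dirs

# Hypothetical usage on a 40 cm DSM tile:
with rasterio.open("dsm_40cm.tif") as src:
    svf = sky_view_factor(src.read(1).astype("float64"), abs(src.transform.a))
```

On a nationwide 40 cm grid, the per-cell loop over directions and steps is exactly the kind of hot path that motivates a Cython (or SAGA) implementation.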

Q: Tell us about the tools, data, etc., you used to make the map.

A: The map consists of several layers: a standard hillshade, a Sky View Factor, building footprints, and water bodies.

The Sky View Factor layer was made with a custom algorithm implemented in Python using rasterio and optimized for speed with Cython. As mentioned, this could probably just as well have been processed with SAGA, for instance through QGIS. The hillshade layer was made with GDAL, and the vector layers did not require any special processing.
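The hillshade step, at least, reduces to a single call. Here is a sketch using GDAL's Python bindings, equivalent to the gdaldem hillshade command-line tool (file names invented, defaults shown explicitly):

```python
from osgeo import gdal

# Standard hillshade from the DTM; 315/45 are gdaldem's default
# sun azimuth and altitude.
gdal.DEMProcessing(
    "hillshade.tif", "dtm_40cm.tif", "hillshade",
    azimuth=315, altitude=45, zFactor=1,
)
```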

QGIS was used to symbolize and combine the layers using gradients, transparency and layer blending.

The data used are the national Danish DEM and the national Danish topographic map, GeoDanmark. Both datasets are open and can be freely downloaded from Kortforsyningen. Sadly, most of these sites are in Danish only – maybe a clever hidden trade barrier.

Here is an online version of my map. For the online version I had to change the symbolization a bit as producing tiles from QGIS Server doesn’t work very well with gradients.

After submitting the map to the GeoHipster 2016 calendar, I have been working on coloring the vegetation to add a green component as well. There are no vegetation datasets that include single trees, bushes, etc., so I wrote a Python script to extract and filter this information from the classified LiDAR point cloud.
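Such a script might look like the sketch below, an illustration using laspy rather than the actual code. In the ASPRS LAS specification, classes 3, 4, and 5 denote low, medium, and high vegetation:

```python
# Sketch: keep only vegetation returns from a classified LAS tile
# (file names invented; pattern follows the laspy 2.x documentation).
import laspy
import numpy as np

las = laspy.read("tile.las")
cls = np.asarray(las.classification)
mask = np.isin(cls, [3, 4, 5])  # low / medium / high vegetation

veg = laspy.create(point_format=las.header.point_format,
                   file_version=las.header.version)
veg.points = las.points[mask]
veg.write("tile_vegetation.las")
```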

This new map can be seen here in a preliminary version.

‘Copenhagen Illuminated’ by Asger Sigurd Skovbo Petersen

Maps and mappers of the 2016 calendar: Jonah Adkins

In our series “Maps and mappers of the 2016 calendar” we will present throughout 2016 the mapmakers who submitted their creations for inclusion in the 2016 GeoHipster calendar.

***

Jonah Adkins

Q: Tell us about yourself.

A: I'm a cartographer from Newport News, Virginia, and have been working in GIS since 1999. I enjoy tinkering with mapping tools, co-organizing @maptimehrva, and paid mapping. I'm most interested in map design, OpenStreetMap, and civic hacking.

Q: Tell us the story behind your map (what inspired you to make it, what did you learn while making it, or any other aspects of the map or its creation you would like people to know).

A: The Noland Trail is a 5-mile escape from reality located at Mariners' Museum Park in Newport News, Virginia. I've probably run a gajillion miles out there over the last several years, and wanted to create a great map of one of my favorite spots. I started with some pen & paper field mapping, upgraded to data collection with the Fulcrum app, and made the first version back in 2013. This second version was an exercise in simplifying and refining the first map; it required minimal data updates and a lot more cartographic heartburn.

Q: Tell us about the tools, data, etc., you used to make the map.

A: The second edition Noland Trail map was made with a combination of QGIS and Photoshop. I threw a ton of information on the first one, probably too much, and it had many 'GIS-y' elements that were lost on the casual map viewer. With this second edition, I wanted to strip away the bulkiness of the original, maintain a high level of detail, and improve the original design. Since the data remained unchanged, with the exception of a few items, I was able to dedicate the majority of my time to design elements. I've also created some related projects like Open-Noland-Trail, an open data site for the trail, and Noland-Trail-GL, a Mapbox GL version of the map built in Mapbox Studio.

Regina Obe: “People spend too much time learning a programming language and forget what a programming language is for”

Regina Obe
Regina Obe (@reginaobe) is a co-principal of Paragon Corporation, a database consulting company based in Boston. She has over 15 years of professional experience in various programming languages and database systems, with special focus on spatial databases. She is a member of the PostGIS steering committee and the PostGIS core development team. Regina holds a BS degree in mechanical engineering from the Massachusetts Institute of Technology where she wanted to build terminator robots but decided that wasn’t the best thing to do for Humanity. She co-authored PostGIS in Action and PostgreSQL: Up and Running.

Q: Regina Obe – so where are you in the world and what do you do?

A: I'm located in Boston, Massachusetts, USA. I'm a database application and web developer, consultant, PostGIS project and steering committee member, and technical author of PostGIS- and PostgreSQL-related books.

Q: So in my prep work I found you have a degree from MIT in Mechanical Engineering with a focus in Bioelectronics and Control Systems? What's that about? How did you end up working in databases?

A: Hah, you just had to ask a hard one. It's a bit of a long story.

Bioelectronics and control was an amalgamation of all my interests and influences at that point.

My favorite shows growing up were The Six Million Dollar Man and The Bionic Woman. Like many geeks I loved tinkering with electronic and mechanical gadgets, and I got into programming at the wee age of 9. I was also very attached to graph paper and would plot out on it which cells my for loops would hit.

My mother was a forensic pathologist; basically she tore dead people apart to discover how they died and what could have been done to save them. I spent a lot of time reading her books and dreaming about human augmentation and control.

When I came to MIT I had ambitions of getting a BS in Electrical Engineering or Mech E., moving on to a PhD, getting my MD, and focusing on orthopaedic limb surgery. MIT's Mechanical Engineering department had a course track that allowed you to fashion your own major. You had to take X amount of Mech E. and could take any other courses you wanted, as long as you could convince your advisor they followed some sort of roadmap you set out for yourself. So, that said, what else would I fashion, given the opportunity? At the time MIT did not have a biomedical engineering major.

So my coursework included classes in bio-electrical engineering, like Electrical Foundations of the Heart, where I built and programmed electrical models of hearts and killed and revived rabbits; basic EE courses with breadboards; a class in Scheme programming; electrophysiology of the brain, etc. On the Mech E. side, I took standard stuff like Fluid Mechanics, Dynamics, and Systems Control, and for my thesis I programmed a simulation package that let you simulate systems with some pre-configured blocks. Most of which I can't remember.

I looked around at other people who were following my dream and realized I'm way too lazy and not smart enough for that kind of life. When I got out of college, there were few engineering jobs requiring my particular skill set. I got a job as a consultant focusing on business rules management and control. Business rules were managed as data that could become actionable. There I got introduced to the big wild world of databases, then SQL, cool languages like Smalltalk, and trying to translate what people say ambiguously into what they actually mean unambiguously.

I found that I really enjoyed programming and thinking about programs as the rules to transition data and reason about data. It’s all about data in the end.

Q: So you dive into databases and SQL, and this thing called PostGIS comes along. You're on the Project Steering Committee and development team for PostGIS. What is PostGIS, and how much work is it being a developer and a member of the project steering committee?

A: Yes I’m on the PostGIS steering committee and development team.

PostGIS is a PostgreSQL extender for managing objects in space. It provides a large body of functions and new database types for storing and querying the spatial properties of objects like buildings, properties, and cities. You can ask questions like: what's the area, what's the perimeter, how far is this thing from these other things, what things are nearest to this thing? It also allows you to morph things into other things. With the raster support you can ask what the concentration of a chemical is over here, or its average concentration over an arbitrary area.
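To make that concrete, those questions translate almost word for word into SQL. Here is a small illustrative sketch, run from Python with psycopg2 (the `things` table and its columns are invented):

```python
import psycopg2

conn = psycopg2.connect("dbname=spatialdb")
cur = conn.cursor()

# What's the area and perimeter of each thing?
cur.execute("SELECT id, ST_Area(geom), ST_Perimeter(geom) FROM things;")

# Which five things are nearest to thing 42? (KNN index operator)
cur.execute("""
    SELECT t.id
    FROM things t
    ORDER BY t.geom <-> (SELECT geom FROM things WHERE id = 42)
    LIMIT 5;
""")
print(cur.fetchall())
conn.close()
```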

Some people think of PostGIS as a tool for managing geographic systems, but it’s Post GIS. Space has no boundaries except the imagined. Geographic systems are just the common use case.

Remember my fondness for graph paper? It all comes full circle: space in the database. I like to think of PostGIS as a tool for managing things on a huge piece of graph paper, and now it can manage things on a 3D piece of graph paper and a spheroidal piece of graph paper too 🙂 . PostGIS is always learning new tricks.

Being a PostGIS developer and PSC member is a fair amount of work: some programming, but mostly keeping bots happy, lots of testing, and banging people over the head when you feel it's time to release. It's as much work as you want to put into it, though, and I happen to enjoy working on PostGIS, so I spend a fair amount of time on it.

Q: I love PostGIS. I spend a lot of time in QGIS/PostGIS these days, and people are constantly asking: "HEY, WHEN DO WE GET SOMETHING LIKE ROUTING WHERE I CAN DO TIME/DISTANCE MEASUREMENTS?" You've been working on a piece of software called pgRouting, which does what?

A: This is a leading question to talk about our new book coming out from Locate Press – http://locatepress.com/pgrouting.

"Been working on" is an overstatement. My husband and I have been writing the book pgRouting: A Practical Guide, published by Locate Press. We hope to finish it this year. That's probably the biggest contribution we've made to pgRouting, aside from the Windows Stack Builder packaging for pgRouting.

Most of the work on pgRouting is done by other contributors, with the GeoRepublic and iMapTools folks leading the direction.

My friend Vicky Vergara in particular has been doing a good chunk of the work for recent versions of pgRouting (2.1-2.3), including upgrading pgRoutingLayer for QGIS to support the newest features and improving the performance of osm2pgrouting; some neat things are coming there. She's been super busy with it. I greatly admire her dedication.

You'll have to read our book to find out more. Pre-release copies are available for purchase now, currently half off until the book reaches feature completeness. We are about 70% there.
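In the meantime, for readers wondering what pgRouting looks like in use, here is a hypothetical shortest-path query with pgr_dijkstra (the `ways` table and vertex ids are invented; column names follow the pgRouting documentation):

```python
import psycopg2

conn = psycopg2.connect("dbname=routingdb")
cur = conn.cursor()
cur.execute("""
    SELECT seq, node, edge, cost
    FROM pgr_dijkstra(
        'SELECT id, source, target, cost FROM ways',
        101, 202,              -- start and end vertex ids
        directed := false);
""")
print(cur.fetchall())
conn.close()
```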

Q: With everything you are doing for work, what do you do for fun?  

A: Work is not fun? Don't tell me that! My goal in life is to achieve a state where I am always having fun and work is fun. I still have some unfun work I'd like to kill. Aside from work, I sleep a lot. I like to go to restaurants. I've never been big on hobbies, unfortunately. Going places causes me a bit of stress, so I'm not much into travel, I'm afraid.

Q: You’ve got a few books out and about. How hard is it to write a book for a technical audience? How hard is it to keep it up to date?

A: It's much harder than you think, and even harder to keep things up to date. Part of the issue with writing for technical audiences is that you never know the right balance. I try to unlearn what I have learned so I can experience learning it again, and write about it in a way that a person new to the topic can appreciate. I fail badly and always end up cramming in too much. I'm an impatient learner.

I always thought 2nd and 3rd editions of books would be easier, but so far they have been just as hard, if not harder, than the first editions. We are also working on the 3rd edition of PostgreSQL: Up and Running. Piece of cake, right? What could have changed in 2 years? A fair amount, from PostgreSQL 9.3 to 9.5. PostGIS in Action, 2nd edition was pretty much a rewrite; it might have been easier if we had started from scratch on that one. So much changed between PostGIS 1.x and 2.x. That said, I think in the future we'll try not to write sequels, and maybe tackle the subject from a different angle.

Leo wants to write SQL for Kids 🙂. He thinks it's a shame children aren't exposed to databases and set thinking in general at an early age.

Q: Leo wanting to do a SQL for kids brings up a good question. If a person came up to you and asked "What should I learn?", what would you tell them? In the geo world we get beat over the head with Python. You are neck deep in databases. Programming language? Database? What makes the well-rounded individual, in your opinion?

A: You should first not learn anything. Figure out what you want to program, then just do it, and do it 3 times in 3 different languages, and see which ones feel more natural for your objective. Think about how each language forced you to think about the problem in 3 different ways.

You might find you want to use all 3 languages at once in the same program; then use PostgreSQL (PL/Python, SQL, PL/JavaScript 🙂). That's okay.

Forget about all the languages people are telling you you should know. I think people spend too much time learning to be good at using a programming language and forget what a programming language is for. It’s hard to survive with just one language these days (think HTML, JavaScript, SQL, Python, R – all different languages). A language is a way of expressing ideas succinctly in terms that an idiot-savantic computer can digest. First try to be stupid so you can talk to stupid machines and then appreciate those machines for their single-minded gifts.

The most amazing developers I’ve known have not thought of themselves as programmers.

They were very focused on a problem or tool they wanted to create, and I was always amazed at how crappy their code was, and yet how they could achieve amazing feats of productivity in their overall user-facing design. They had a vision of what they wanted for the finished product and never lost sight of that vision.

Q: Your PostgreSQL Up and Running book is published by O’Reilly. O’Reilly books always have some artwork on the front. Did you get to pick the animal on the front? For your PostGIS book you have a person on the front. Who is that?

A: We had a little say in the O'Reilly cover, but no say on the Manning cover. I have no idea who that woman is on PostGIS in Action; she's a woman from Croatia. My first thought when I saw it was that she looks very much like my sister.

For O'Reilly, they ran out of elephants because they had just given one to Hadoop. Can you believe it? Giving an elephant to Hadoop over PostgreSQL? So then they offered us an antelope. Leo was insulted; he wasn't going to have some animal that people hunt on the cover of his book. Besides, antelopes look dumb and frail. I apologize to all the antelope lovers right now: yes, antelopes are cute, cuddly, and I'm sure loving; just not the image we care to be remembered for. We wanted something more like an elephant, something smart. So they offered us an elephant shrew (aka sengi), which is a close relative of the elephant – https://en.wikipedia.org/wiki/Afrotheria. It's a very fast, small creature that looks like it's made of bits and pieces of a lot of creatures. They blaze trails and are very monogamous. What could be more perfect to exemplify the traits of a database like PostgreSQL, which can do everything and is faithful in its execution – aside from having to explain "What is that rodent-looking creature on your cover?"

Q: Way back when GeoHipster started we more or less decided thanks to a poll that a geohipster does things differently, shuns the mainstream, and marches to their own beat. Are you a geohipster?

A: Yes. I definitely shun the mainstream. When the mainstream starts acting like me, it's a signal that I need to become more creative.

Q: I always leave the last question for you. Anything you want to tell the readers of GeoHipster that I didn’t cover or just anything in particular?

A: When will PostGIS 2.3 be out? I hope before PostgreSQL 9.6 comes out (slated for September). I’m hoping PostgreSQL 9.6 will be a little late to buy us more time.

Also: where is the Free and Open Source Software for Geospatial international conference (FOSS4G) going to be held in 2017? In Boston, August 14th-18th. Mark your calendars and bookmark the site.

Hugh Saalmans: “No amount of machine learning could solve a 999999 error!”

Hugh Saalmans
Hugh Saalmans (@minus34) is a geogeek and IT professional who heads the Location Engineering team at IAG, Australia and New Zealand's largest general insurer. He's also one of the founders of GeoRabble, an inclusive, zero-sales-pitch pub meetup for geogeeks to share their stories. His passions are hackfests and open data, and he's a big fan of open source and open standards.

Q: How did you end up in geospatial?

A: A love of maths and geography is the short answer. The long answer is I did a surveying degree that covered everything spatial from engineering to geodesy.

My first experience with GIS was ArcGIS on Solaris (circa 1990) in a Uni lab with a severely underpowered server. Out of the 12 workstations, only 10 of us could log in at any one time, and then just 6 of us could actually get ArcGIS to run. Just as well, considering most of the students who could get it to work, including myself, ballsed up our first lab assignment by turning some property boundaries into chopped liver.

Besides GIS, my least favourite subjects at Uni were GPS and geodesy. So naturally I chose a career in geospatial information.

Q: You work for IAG. What does the company do?

A: Being a general insurer, we cover about $2 trillion worth of homes, motor vehicles, farms, and businesses against bad things happening.

Geospatial is a big part of what we do. Knowing where those $2tn of assets are allows us to do fundamental things like individualised, address-level pricing — something common in Australia, but not so common in the US due to insurance pricing regulations. Knowing where assets are also allows us to help customers when something bad does happen. That goes to the core of what we do in insurance: fulfilling the promise we made to our customers when they took out a policy.

Q: What on Earth is Location Engineering?

A: We’re part of a movement that’s happening across a lot of domains that use geo-information: changing from traditional data-heavy, point & click delivery to scripting, automation, cloud, & APIs. We’re a team of geospatial analysts becoming a team of DevOps engineers that deliver geo-information services. So we needed a name to reflect that.

From a skills point of view — we're moving from desktop analysis & publishing with a bit of SQL & Python to a lot of Bash, SQL, Python & JavaScript with Git, JIRA, Bamboo, Docker, and a few other tools & platforms that aren't that well known in geo circles. We're migrating from Windows to Linux, desktop to cloud, and licensed to open source. It's both exciting and daunting to be doing it for an $11bn company!

Q: You've been working in the GIS industry for twenty years. How has that been?

A: It’s been great to be a part of 20+ years of geospatial evolutions and revolutions, witnessing geospatial going from specialist workstations to being a part of everyday life, accessible on any device. It’s also been exciting watching open source go from niche to mainstream, government data go from locked down to open, and watching proprietary standards being replaced with open ones.

It’s also been frustrating at times being part of an industry that has a broad definition, no defined start or end (“GIS is everywhere!”), and limited external recognition. In Australia we further muddy the waters by having university degrees and industry bodies that fuse land surveying and spatial sciences into a curious marriage of similar but sometimes opposing needs. Between the limited recognition of surveying as a profession and of geospatial being a separate stream within the IT industry, it’s no real surprise that our work remains a niche that needs to be constantly explained, even though what we do is fundamental to society. In the last 5 years we’ve tried to improve that through GeoRabble, creating a casual forum for anyone to share their story about location, regardless of their background or experience. We’ve made some good progress: almost 60 pub meetups in 8 cities across 3 countries (AU, NZ & SA), with 350 presentations and 4,500 attendees.

Q: How do you work in one industry for twenty years and keep innovating? Any tips on avoiding cynicism and keeping up with the trends?

A: It's a cliche, but innovation is a mindset. Keep asking yourself and those around you two questions: why? and why not? Asking "why?" will help you improve things by questioning the status quo or understanding a problem better, and getting focused on how to fix or improve it. Asking "why not?" either gives you a reality check or lets you go exploring, researching, and finding better ways of doing things to create new solutions.

Similarly, I try to beat cynicism by being curious, accepting that learning has no destination, and knowing there is information out there somewhere that can help fix the problem. Go back 15-20 years — it was easy to be cynical. If your chosen tool didn’t work the way you wanted it to, you either had to park the problem or come up with a preposterous workaround. Nowadays, you’ve got no real excuse if you put in the time to explore. There’s open source, GitHub and StackExchange to help you plough through the problem. Here’s one of our case studies as an example: desktop brand X takes 45 mins to tag several million points with a boundary id. Unsatisfied, we make the effort to learn Python, PostGIS and parallel processing through blogs, posts and online documentation. Now you’re cooking with gas in 45 seconds, not 45 minutes.
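The PostGIS side of that case study is essentially one spatial join. Here is a hedged sketch with invented table names; the parallel-processing part, which provided much of the speed-up by splitting the points into batches, is omitted:

```python
import psycopg2

conn = psycopg2.connect("dbname=spatialdb")
cur = conn.cursor()
# A GiST index on the polygon geometries is what makes the join fast.
cur.execute("CREATE INDEX IF NOT EXISTS boundaries_geom_gix "
            "ON boundaries USING GIST (geom);")
# Tag every point with the id of the boundary polygon containing it.
cur.execute("""
    UPDATE points p
    SET boundary_id = b.id
    FROM boundaries b
    WHERE ST_Contains(b.geom, p.geom);
""")
conn.commit()
conn.close()
```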

Another way to beat cynicism is to accept that things will change, and that they will change faster than you want them to. They will leave you with yesterday's architecture or process, and you will face a choice: take the easy road and build design debt into your systems (which will cost you at some point), or take the hard road and learn as you go to future-proof the things you're responsible for.

Q: What are some disruptive technologies that are on your watch list?

A: Autonomous vehicles are the big disruptor in insurance. KPMG estimates the motor insurance market will shrink by 60% in the next 25 years due to a reduction in crashes. How do we offset this loss of profitable income? By getting better at analysing our customers and their other assets, especially homes. Enter geospatial, to start answering complicated questions like "how much damage will the neighbour's house do to our insured's house during a storm?"

The Internet of Things is also going to shake things up in insurance. Your doorbell can now photograph would-be burglars or detect hail. Your home weather sensor can alert you to damaging winds. Now imagine hundreds of thousands of these sensors in each city — imagine tracking burglars from house to house, or watching a storm hit a city, one neighbourhood at a time. Real-time, location-based sensor nets are going to change the way we protect our homes and how insurers respond in a time of crisis. Not to mention 100,000+ weather sensors could radically improve our ability to predict weather-related disasters. It's not surprising IBM bought The Weather Channel's online and B2B services arm last year, as they have one of the best crowdsourced weather services.

UAVs are also going to shake things up. We first used them last Christmas after a severe bushfire (wildfire) hit the Victorian coast. Due to asbestos contamination, the burnt-out area was sealed off. Using UAVs to capture the damage was the only way at the time to give customers who had lost everything some certainty about their future. Jumping to the near future again — Intel brought their 100-drone lightshow to Sydney in early June. Whilst marvelling at a new artform, watching the drones glide and dance in beautiful formations, it dawned on me what autonomous UAVs will be capable of in the next few years — swarms of them capturing entire damaged neighbourhoods just a few hours after a weather event or bushfire has passed.

Q: What is the dirtiest dataset you’ve had to ingest, and what about the cleanest?

A: The thing about working for a large corporation with a 150-year history is that your organisation knows how to put the L into legacy systems. We have systems that write 20-30 records for a single customer transaction in a non-sequential manner, so you almost need a PhD to determine the current record. There are other systems that write proprietary BLOBs into our databases (seriously, in 2016!). Fortunately, we have a simplification program to clear up a lot of these types of issues.

As far as open data goes — that'd be the historical disaster data we used at GovHack in 2014. Who knew one small CSV file could cause so much pain? Date fields with a combination of standard and American formats, inconsistent and incoherent disaster classifications, and lat/longs of variable precision.

I don’t know if there is such a thing as a clean dataset. All data requires some wrangling to make it productive, and all large datasets have quirks. G-NAF (Australia’s Geocoded National Address File) is pretty good on the quirk front, but at 31 tables and 39 foreign keys, it’s not exactly ready to roll in its raw form.

Q: You were very quick to release some tools to help people work with the G-NAF dataset when it was released. What are some other datasets that you'd like to see made open?

A: It can't be overstated how good it was to see G-NAF made open data. We're one of the lucky few countries with an open, authoritative, geocoded national address file, thanks to 3 years of continual effort from the federal and state governments.

That said, we have the most piecemeal approach to natural peril data in Australia. Getting a national view of, say, flood risk isn't possible due to the way the data is created and collected at the local and state government level. Being in the insurance industry, I'm obviously biased about wanting access to peril data, but having no holistic view of risk, and no data to share, doesn't help the federal government serve the community. It's a far cry from the availability of FEMA's data in the US.

Q: Uber drivers have robot cars, McDonald’s workers have robot cooks, what are geohipsters going to be replaced with?  

A: Who says we’re going to be replaced? No amount of machine learning could solve a 999999 error!

But if we are going to be replaced — on the data capture front it'll probably be due to autonomous UAVs and machine learning. Consider aerial camera systems that can capture data at better than 5 cm resolution, mounted on winged, autonomous UAVs that could fly tens of thousands of square kilometres a day. Bung the data into an omnipotent machine-learning feature extractor (like the ones Google et al have kind of got working), and entire 3D models of cities could be built regularly with only a few humans involved.

There’ll still be humans required to produce PDFs… oh sorry, you said what are geohipsters going to be replaced with. There’ll still be humans required to produce Leaflet+D3 web maps for a while before they work out how to automate it. Speaking of automation — one of the benefits of becoming a team of developers is the career future-proofing. If you’re worried about losing your job to automation, become the one writing the automation code!

Q: What are some startups (geo or non-geo) that you follow?

A: Mapbox and CartoDB are two of the most interesting geospatial companies to follow right now. Like Google before them, they’ve built a market right under the noses of the incumbent GIS vendors by focussing on the user and developer experience, not by trying to wedge as many tools or layers as they can into a single map.

In the geocoding and addressing space it's hard to go past What3Words for ingenuity, and for the traction they've got in changing how people around the world communicate their location.

In the insurance space, there's a monumental amount of hot air surrounding insurtech, but a few startups are starting to get their business models off the ground. Peer-to-peer and micro insurance are probably the most interesting spaces to watch. Companies like Friendsurance and Trov are starting to make headway here.

Q: And finally, what do you do in your free time that makes you a geohipster?

A: The other day I took my son to football (soccer) training. I sat on the sideline tinkering with a Leaflet+Python+PostGIS spatio-temporal predictive analytical map that a colleague and I put together the weekend prior for an emergency services hackathon. Apart from being a bad parent for not watching my son, I felt I’d achieved geohipster certification with that effort.

How a geohipster watches football (soccer) practice

In all seriousness, being a geohipster is about adapting geospatial technology & trying something new to create something useful, something useless, something different. It’s what I love doing in my spare time. It’s my few hours a night to be as creative as I can be.

Terry Griffin: “Agricultural big data has evolved out of precision ag technology”

Terry Griffin, PhD
Dr. Terry Griffin (@SpacePlowboy) is the cropping systems economist specializing in big data and precision agriculture at Kansas State University. He earned his bachelor’s degree in agronomy and master’s degree in agricultural economics from the University of Arkansas, where he began using commercial GIS products in the late 1990s. While serving as a precision agriculture specialist for University of Illinois Extension, Terry expanded his GIS skills by adding open source software. He earned his Ph.D. in Agricultural Economics with emphasis in spatial econometrics from Purdue University. His doctoral research developed methods to analyze site-specific crop yield data from landscape-scale experiments using spatial statistical techniques, ultimately resulting in two patents regarding the automation of community data analysis, i.e. agricultural big data analytics. He has received the 2014 Pierre C. Robert International Precision Agriculture Young Scientist Award, the 2012 Conservation Systems Precision Ag Researcher of the Year, and the 2010 Precision Ag Awards of Excellence for Researchers/Educators. Terry is a member of the Site-Specific Agriculture Committees for the American Society of Agricultural and Biological Engineers. Currently Dr. Griffin serves as an advisor on the board of the Kansas Agricultural Research and Technology Association (KARTA). Terry and Dana have three wonderful children.

Q: Your background is in Agronomy and Agricultural Economics. When along this path did you discover spatial/GIS technologies, and how did you apply them for the first time?

A: During graduate school my thesis topic was in precision agriculture, or what could be described as information technology applied to crop production. GPS was an enabling technology, along with GIS and site-specific sensors. I was first exposed to GIS in the late 1990s when I mapped data from GPS-equipped yield monitors. I dove into GIS in the early 2000s as a tool to help manage and analyze the geospatial data generated from agricultural equipment and farm fields.

Q: Precision Agriculture is a huge market for all sorts of devices. How do you see spatial playing a role in the overall Precision Agriculture sandbox?

A: Precision ag is a broad term, and many aspects of spatial technology have become commonplace on many farms. Some technology automates the steering of farm equipment in the field, and similar technology automatically shuts off sections of planters and sprayers to prevent overlap where the equipment has already performed its task. Other forms of precision ag seem to do the opposite — rather than automating a task, they gather data that are not immediately usable until processed into decision-making information. These information-intensive technologies, which are inseparable from GIS and spatial analysis, have the greatest potential for increased utilization.

Q: What do you see as hurdles for spatial/data analytics firms who want to enter the Precision Agriculture space, and what advice would you give them?

A: One of the greatest hurdles, at least in the short run, is data privacy as it relates to 'big data', or aggregating farm-level data across regions. A secondary obstacle is the lack of wireless connectivity, such as broadband internet via cellular technology, in rural areas; without it, agricultural big data is at a disadvantage.

Q: While there have been attempts at an open data standard for agriculture (agxml, and most recently SPADE), none have seemed to catch on. Do you think this lack of a standard holds Precision Agriculture back, or does it really even need an open standard?

A: Data must have some sort of standardization, or at least a translation system, such that each player in the industry can maintain their own system. Considerable work has been conducted in this area, and progress is being made; we can think of the MODUS project as the leading example. Standards have always been important, even when precision ag technology was isolated to an individual farm; but now, with the big data movement, the need for standardization has moved to the front burner. Big data analytics relies on the network effect, specifically what economists refer to as network externalities: the value of participating in a system is a function of the number of participants. Therefore, systems must be welcoming to all potential participants and must minimize barriers in order to increase participation rates.

Q: What is your preferred spatial software, or programming language?

A: All my spatial econometric analysis and modeling is in R, and R is also where I do a considerable amount of GIS work. However, I use QGIS with many agricultural clients, and recommend it to them, because it is more affordable when they are uncertain whether they are ready to make a financial investment. For teaching I use Esri ArcGIS and GeoDa in addition to R.

Q: If money wasn’t an issue, what would be your dream Spatial/Big Data project?

A: Oddly enough I think I already am doing those things. I am fortunate to be working on several aspects of different projects that I hope will make a positive difference for agriculturalists. Many of the tools that I am building or have built are very data-hungry, requiring much more data than has been available. I am anxious for these tools to become useful when the ag data industry matures.

Q: You tend to speak at a number of Precision Agriculture conferences, and you have spoken at a regional GIS group. Have you ever considered speaking at one of the national conferences?

A: I’m always open to speaking at national or international conferences.

Q: Lastly, explain to our audience of geohipsters what is so hip about Precision Ag, Big Data and Spatial.

A: Agricultural big data has evolved out of precision ag technology, and in its fully functional form is likely to be one of the largest global networks of data collection, processing, archiving, and automated recommendation systems the world has ever known.

Mark Iliffe: “Maps show us where to direct our resources and improve the lives of people”

Mark Iliffe
Mark Iliffe (@markiliffe) is a geographer/map geek working on mapping projects around the world. He leads Ramani Huria for the World Bank, is a Neodemographic Research Fellow at the University of Nottingham (after completing his PhD at the Horizon Institute), and is a mentor for Geeks Without Bounds.

Q: Suitably for a geohipster, your OpenStreetMap profile says “I own a motorbike and have a liking to randomly spend weekends finding out ‘what is over there’”. What have you found?

A: I think I wrote that around a decade ago while getting into OSM, on a foreign exchange trip in Nancy, France! I found out a lot of things, from the fact that taking a 125cc Yamaha (a hideously small and underpowered motorcycle — think chimpanzee riding a tricycle) around Europe is slow and cold, to new friendships. Also, a career path in maps and a love of all things geospatial, via counting flamingos in Kenya…

Q: Everyone has to start somewhere, and for you I believe that was mapping toilets (or places toilets should be). Indeed I think we first met when you presented your sanitation hack project Taarifa at #geomob by squatting on the table to demonstrate proper squat toilet technique. Tell us about Taarifa.

A: Taarifa is/was a platform for improving public service delivery in emerging countries. It came out of the London Water Hackathon in 2011, basically as an idea that we could do more with the data being generated by the many humanitarian mapping projects that OSM had enabled at the time, such as Map Kibera, Ramani Tandale, and the Haiti earthquake mapping. As a community open-source project, it showed the potential of feedback loops between citizens and service providers for fixing water points or toilets. We used Ushahidi as a base, adding a workflow for reports; we tried to push these changes back to their community, but the core developers had other objectives — fair enough. We as the Taarifa community thought we had something special regardless, but it was a hack; it wasn't planned to be deployed anywhere.

In January 2012 I was in a meeting with a colleague at the World Bank who'd heard that Taarifa had been suggested to fill a need in monitoring the construction of schools in Uganda. He arranged a meeting with the project manager for me; I went along, and a week later I was coding on the plane to Uganda to pilot Taarifa across 4 districts around the country. Ultimately, it ended up being scaled to all 111 districts at the request of the Ugandan Ministry of Local Government.

From this the Taarifa community started to grow, expanding the small core of developers. In 2013 we won the Sanitation Hackathon Challenge, then received a $100K World Bank innovation award to set up Taarifa in the Iringa Region of Tanzania. Taarifa and its collaborators on that project (SNV, Geeks Without Bounds, and ITC Twente) then went on to win a DFID Human Development Innovation Fund award of £400,000. Since then it's gone in a different direction, away from a technical community focus to one that concentrates on building the local social fabric, wholly embedded and run locally in Tanzania.

I feel that this was Taarifa's most important contribution: not one of technology, but one of convening development agencies and coders to innovate a little. The main developers haven't worked on the main codebase for over a year now, but Taarifa's ideas of creating feedback loops in emerging countries live on, in its grants, and have been absorbed into other projects too.

Q: Actually I think I’m wrong, even before Taarifa you were an intern at Cloudmade, the first company to try to make money using OpenStreetMap. Founded by Steve Coast (and others), the VC-funded business hired many of the “famous” names of early OSM, before eventually fizzling out and moving into a different field. What was it like? Any especially interesting memories? What sort of impression did that experience leave on you? Also, what’s your take on modern VC-funded OpenStreetMap companies like Mapbox?

A: Cloudmade was fantastic; I learned a lot from each of the OSMers who worked there — Steve Coast, Andy Allan, Nick Black, Matt Amos, and Shaun McDonald. At Cloudmade, I wrote a routing engine for OSM — now-common tools like pgRouting weren't really around in usable form. I tried to build pgRouting from source, wasted three days, so started from scratch. In hindsight, I should have persevered with pgRouting and got involved in developing the existing tool instead of starting from scratch.

It was the first tech company I worked at; they were based in Central London and I was broke, so I had to stay with my uncle in Slough, about 30 miles away. I used to work quite late and slept on the office floor a few times. Once Nick was in early and caught me stuffing my sleeping bag back into the bottom drawer of my desk. The advice was to probably go home a bit more — advice that I've used selectively since, but I don't sleep on my office floor anymore!

The VC situation is always going to be complex. I wasn't too surprised when Cloudmade eventually pivoted, and their ideas and creations, such as the Map Style Editor and Leaflet.js, live on regardless of the company. At SoTM in Girona I made the comment that OSM was going through puberty. On reflection, I think it was a crude but accurate way to describe our project at that time. We didn't know what OSM would or could become. OSM didn't know how to deal with companies like Cloudmade, and neither did the companies know how to deal with OSM; to a certain extent I think we're still learning, but getting better. At the time, though, like teenagers having to deal with new hormones, emotions ran riot. All this without realising that in the same way OSM has changed the world, OSM is also changed by it — and this is a good thing. Gary Gale has also mused extensively on this.

Now, with the generation of companies that came after — CartoDB, Mapbox, etc. — I think they are much more perceptive about supporting and evolving the OSM ecosystem. Mapbox Humanitarian is one example, as is their support for developing the iD editor. In turn, the OSM community is growing as well, especially in the humanitarian space, with the Humanitarian OpenStreetMap Team (HOT) supporting numerous projects around the world and acting as a useful interface to OSM for global institutions.

Q: Did you ever think back then that OSM would get as big and as global as it has?

A: TL;DR: Yes.

Recently, I had a discussion with a friend in a very British National Mapping Agency about the nature of exploration. Explorers of old would crisscross the world charting new things, sometimes for their own pleasure, but mostly for economic gain. These people then formed the mapping agencies that data from OSM ‘competes’ with today.

By working with its numerous volunteers, OSM embodies the same exploratory spirit — whether they are mapping their communities or supporting disaster relief efforts. But instead of the privileged few, it's the many. Now OSM is building tools and gaining access to data that make it easier than ever before to contribute, whether map data or any other contribution to the community.

Q: Despite those humble beginnings, I believe you are now Doctor Mark Iliffe, having very recently defended your PhD thesis in Geography at the University of Nottingham. Congrats! Nevertheless, doesn't fancy book lernin' like that reduce your geohipster credibility? In the just-fucking-do-it neogeo age, is a formal background in geography still relevant? Is it something you'd recommend to kids starting their geo careers?

A: Thanks! Doing a PhD was by far the worst thing I've ever done, and will ever probably do — to myself, friends, and family. But it wasn't through book learning; I did it in the field. Most of the thesis itself was written at 36,000 ft via Qatar/British Airways and not in the library (nb. this was/is a stupid idea; do it in the library).

Hopefully the geohipster cred is still strong, but I wouldn't recommend a PhD to kids starting their careers. Bed in for a few years, work out what you want to do, get comfortable, and then see if a PhD is for you. When I started my PhD, I'd done a small amount of work with Map Kibera and in other places, and knew I wanted to work in the humanitarian mapping space, but full-time jobs didn't exist. Doing a PhD gave me the space (and a bit of money) to do that. Now these jobs, organisations, and career paths exist. Five years ago they didn't.

Q: Though you live in the UK, for the last few years you’ve been working a lot in Tanzania, most recently with the World Bank. A lot of the work has been about helping build the local community to map unmapped (but nevertheless heavily populated) areas like Tandale. Indeed this work was also the basis for your PhD thesis. Give us the details on what you’ve been working on, who you’ve been working with, and most of all what makes it hip?

A: Ramani Huria takes up a lot of my time… It's a community mapping project with the Government of Tanzania, universities, and civil society organisations, supported by the World Bank and the Red Cross. Ramani Huria has mapped over 25 communities in Dar es Salaam, covering around 1.3 million people. Dar es Salaam suffers from quite severe flooding, partly because it is the fastest-growing city in Africa, with a population of over 5.5 million.

https://www.youtube.com/watch?v=Lz75aHQpmf8

Ramani Huria is powered by a cadre of volunteers, pulling together 160+ university students and hundreds of community members to collect data on roads, water points, hospitals, and schools, among other attributes. One of the key maps is of the extent of flooding, drawn by residents of flood-prone communities sketching on maps. Now that these maps exist, flood mitigation strategies can be put in place by community leaders — either by building new drains or by ensuring existing infrastructure is maintained. That's the hip part of Ramani Huria: the local community is leading the mapping, with ourselves, the international community, in support.

Ramani Huria — a community mapping project

Q: Over the last years there has been a big push by HOT and Missing Maps to get volunteers remote mapping in less developed areas like Tanzania. Some OSMers view this as a bad thing, as they perceive that it can inhibit the growth of a local community. As someone who’s been “in the field”, what’s your take? Is remote mapping helpful or harmful?

A: The only accurate map of the world is the world itself. With the objective of mapping the world, let’s work on doing that as fast as possible. Then we can focus on using that map to improve our world. Remote mapping is critical for that — but how can we be smarter at doing it?

A lot of time and effort goes into making a map of flood extents, and much of it is basic work, for example digitising roads and buildings. This is time-consuming — it doesn't matter who does it, but it has to be done. The knowledge of flooding, however, is held only by those communities, nowhere else. The faster the basic digitising is done, the faster these challenges can be mitigated. Remote mapping gives a valuable head start.

In Ramani Huria, we run "Maptime" events for the emerging local OSM community at the Buni Innovation Hub — these events grow the local community. Personally, I think we should move towards optimising our mapping as much as possible — whether through remote mapping or image recognition — though the latter may be a step too far for the time being. I'd love to see interfaces to digitise Mapillary street view data; it's something we've collected a lot of over the past year. Can we start to digitise drains from Mapillary imagery in the same way Missing Maps uses satellite imagery?

Q: You’ve recently been in Dunkirk in the refugee camps with Mapfugees, what was it like?

A: Mapfugees is a project to help map the La Linière refugee camp near Dunkirk, France. Jorieke Vyncke and I met up in Dunkirk to talk with the refugees' council — made up of the refugees themselves — and the camp administrators to see how maps could help. The refugees wished to have maps of the local area for safe passage in and out of the camp, which is surrounded by a motorway and a railway, making passage quite dangerous. Other Mapfugees volunteers worked on mapping the surrounding areas with the refugees, and local amenities and safe routes were identified.

At the same time, the camp itself was mapped, providing an understanding of camp amenities, so services to the camp can be improved. This is very similar to my experience of community mapping elsewhere — the map is a good way of discussing what needs to be done and can empower people to make changes.

Q: As you no doubt know, here at GeoHipster we’re not scared to ask the real questions. So let’s get into it. On Twitter you’re not infrequently part of a raging debate — which is better: #geobeers or #geosambuca? How will we ever settle this?

A: #Geobeer now has my vote. I’m way too old for #geobuccas as the hangovers are getting worse!

Q: So what’s next Mark? I mean both for you personally now that you’ve crossed the PhD off the list and also for OSM in places like Africa and in organizations like the World Bank.

A: For me, in a few months I’m going to take a long holiday and work out what’s next. I’m open to suggestions on a postcard!

Looking back, OSM is just past a decade old and is still changing the world for the better. Projects like Ramani Huria, and mapping projects in Indonesia and elsewhere, are at the forefront of this, but more can be done. I believe that organisations like the UN and the World Bank need to move away from individual projects towards supporting a global geospatial ecosystem. This isn't a technical problem, but a societal and policy-based concern.

This doesn't sound sexy, and it isn't. But at the moment there are over a billion people living in extreme poverty. Maps show us where to direct our resources and improve the lives of people. The human and financial resources required to map our world will be immense, moving well past the hundreds of thousands of dollars already spent on mapping cities like Dar es Salaam and Jakarta. To build this, we need to work at a high policy level to really embed geo and maps at the core of the Global Development Agenda with the Sustainable Development Goals. Projects like UN-GGIM are moving in that direction, but will need support from geohipsters to make it happen.

Maps and geo are crucial to resolving the problems our world faces. To do that, we should use our natural geohipster instincts… JFDI.

Q: Any closing thoughts for all the geohipsters out there?

A: Get out there — you never know where you’ll go.

Maps and mappers of the 2016 calendar: Stephen Smith

In our series “Maps and mappers of the 2016 calendar” we will present throughout 2016 the mapmakers who submitted their creations for inclusion in the 2016 GeoHipster calendar.

***

Stephen Smith

Q: Tell us about yourself.

A: I'm a cartographer by night and a GIS Project Supervisor by day. I work for the Vermont Agency of Transportation, where I help our rail section use GIS to manage state-owned rail assets and property. Most of the time my work entails empowering users to more easily access and use their GIS data. I've used Esri tools on a daily basis since 2008, but recently I've been playing with new tools whenever I get the chance. I attended SOTMUS 2014 in DC (my first non-Esri conference) and was really excited about everything happening in the open source geo community. I got some help installing "Tilemill 2" from GitHub and I haven't looked back. Since then, the majority of the maps I've made have been made with open source tools and data. Lately I've been heavily involved in The Spatial Community, a Slack community of 800+ GIS professionals who collaborate to solve each other's problems and share GIFs. I'm also starting a "mastermind" group for GIS professionals who want to work together and help one another take their careers to the next level.

Q: Tell us the story behind your map (what inspired you to make it, what did you learn while making it, or any other aspects of the map or its creation you would like people to know).

A: This map was a gift for my cousin who is part Native American and works in DC as an attorney for the National Indian Gaming Commission. His wife told me that he really liked my Natural Resources map and she wanted me to make him something similar to the US Census American Indian maps but in a “retro” style. I took the opportunity to explore the cartographic capabilities of QGIS and was very impressed.

Q: Tell us about the tools, data, etc., you used to make the map.

A: I’ve done a full writeup of the creation of the map including the data, style inspirations, fonts, challenges, and specific QGIS settings used on my website. You can also download a high resolution version perfect for a desktop wallpaper.

‘Native American Lands’ by Stephen Smith

Achim Tack & Patrick Stotz: “More and more nonsensical things are mapped just for the sake of mapping”

Achim Tack (top) and Patrick Stotz
Among other engagements, Achim and Patrick work as data journalists at Germany-based Spiegel Online, one of the most widely read German-language news sites. They're also the creative duo behind the information visualization blog mappable.info, where they share new tools and spare-time projects around their passion for maps and geospatial data. Here they speak about their background, their interests, and their inspirations.

Q: Hi Achim and Patrick, where are you based and what do(es each of) you do?

A: Patrick: We both live and work in Hamburg, Germany, and feel lucky to have jobs where we spend a fair amount of our time working with geospatial data and making maps. I joined Spiegel Online (a major German news site) last year and work there as a data journalist. We're still a very small team, and my responsibilities range from gathering, cleaning, and analyzing data to making maps and other kinds of (mostly interactive) visualizations.

Achim: In my main job I work as an analyst for a consulting firm (GGR) that focuses on topics like the accessibility of public services and communities' adaptation to demographic change. Applying our models, we produce a lot of datasets, and while looking for appealing ways to present them I got in touch with the local data-driven journalism (DDJ) community. Journos know a lot about storytelling that researchers and analysts often don't. About a year ago, I was offered the chance to join the Spiegel Online data journalism team alongside my job at GGR.

Q: I see. How does your two-job situation work for you, Achim? I imagine it can be stressful.

A: Achim: Better than I expected, to be quite honest. I work on a fixed schedule of four days at GGR and one day at Spiegel Online. On the technical level, data journalism and urban analytics have a lot in common: both fields center on the generation, cleansing, and analysis of data. I am very grateful that both employers show some flexibility, but I also believe that they both benefit from the knowledge transfer between the jobs.

Q: Of course, I also had a glance at your Twitter bios: Among other things, Achim’s says “a passion for #maps, #geodata and #ddj”, Patrick’s says “data journalism & dataviz” and “map nerd”. Is the spatial theme a unifying one for you, and if so, how come?

A: Patrick: Definitely, it’s our common interests that got the two of us in touch in the first place. We share a deep fascination for understanding the geography of places, as well as for the beauty of maps. At least that’s what led both of us to study urban planning at university (not in the same city though) and later on to start our blog mappable.info.

Achim and I got to know each other around four years ago. Back then, Achim already worked at GGR, and I was a research fellow at HafenCity University Hamburg. We were working together on a joint research project building a GIS-based tool for predicting the (fiscal, ecological, social…) consequences of urban planning projects. The final version was a set of custom-built ArcGIS Toolboxes programmed with Python (ArcPy) scripts. The whole approach of programmatically controlling a GIS, designing your own interfaces (within the given limitations), and working towards a final product fit for use in public administration was new to us and presented a lot of challenges.
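(For readers unfamiliar with ArcPy: a minimal sketch of the kind of script tool that might sit behind such a toolbox is shown below. The layer names, buffer distance, and the analysis itself are hypothetical illustrations, not the project’s actual code.)

    import arcpy

    def blocks_near_facilities(facilities_fc, blocks_fc, out_fc, distance="500 Meters"):
        """Extract the census blocks within a given distance of any facility."""
        # Buffer all facilities and dissolve them into one reach polygon.
        buffers = arcpy.Buffer_analysis(facilities_fc, "in_memory/buffers",
                                        distance, dissolve_option="ALL")
        # Clip the census blocks to that polygon.
        arcpy.Clip_analysis(blocks_fc, buffers, out_fc)
        return out_fc

    if __name__ == "__main__":
        # Parameters as exposed in the toolbox dialog.
        blocks_near_facilities(arcpy.GetParameterAsText(0),
                               arcpy.GetParameterAsText(1),
                               arcpy.GetParameterAsText(2))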

Achim: While working on the project, we quickly realized that we share the same passion and interest for maps and geospatial data. Thinking about it now, starting mappable.info was maybe a counter-reaction to working on often complicated tasks. It was our fun place to explore new tools and simply publish small projects online without many restrictions.

Q: To what degree can each of you mix and match the skills that you acquire in your various activities?

A: Achim: Starting from relatively similar university educations, we each developed different skillsets. I gained experience in scraping and data handling, whereas Patrick deepened his knowledge of data visualization and front-end design with D3.js, etc. But at least from my point of view, these days you can’t train for just one job anymore; you have to broaden and adapt your skillset on a per-project basis. My urban analytics job benefits from skills learned in DDJ and vice versa. It’s also becoming more common to consult with universities or other research institutions when doing DDJ.

Patrick: I agree. I’d say that the lines between DDJ and cartography (and other disciplines) have become quite blurry recently. In my eyes some of the best cartography nowadays is done by the New York Times graphics department. At the same time, bloggers, cartographers, and geohipsters (or whatever you want to call them) might analyze and visualize data and turn it into a story that’s far superior to a lot of the things we’re used to seeing in journalism.

Mappable — Limited accessibility

Q: What, then, is your take on the interplay between more recent developments like the open data movement and the digital humanities on the one hand, and more traditional fields such as journalism and GIS on the other?

A: Achim: I’m excited to see the changes in both fields — traditional GIS and journalism — in recent years. Today open (geo)data is used in most of our projects. And I’m not only talking about “fun projects” or civic hacking, but about serious consulting projects. Open data is valuable not only because it is free, but because it is quickly and easily accessible.

On the software side, I think in the coming years we will see a trend toward more open software such as QGIS in public administration. Mostly for budget reasons, but I also believe the influx of younger employees will mean them bringing their open source toolstacks with them.

Q: What are some projects you’re excited about or working on right now (if this is not giving away too much)?

A: Achim: We have to differentiate between professional projects and private “afterwork projects”. At GGR for example, we just finished a multi-year federal research project about tradable land-use certificates. We developed a WebGIS platform to conduct a semi-automated fiscal impact analysis for close to a hundred clients.

Like every geo-nerd, I still have a few datasets sitting around that I always wanted to play with. One idea is a spatio-temporal analysis of business locations and the transfer of this data into a predictive model for urban retail areas… But I might need some more free weekends for that 😉

Patrick: Oh yes, that idea has been on our list for a long time. I have to admit, though, that after turning my hobby into my everyday job, I’ve become a bit reluctant towards ‘just for fun’ mapping projects. I’d definitely like to keep mappable.info, as well as its little spin-off project travel score, running as before; it’s just hard to find the time alongside a full-time job and the other things I want to do in my spare time.

Patrick at the world’s longest-named place

Q: You’ve mentioned your shared website mappable.info a few times: is that URL an implicit mission statement (as in “map all the info!”)?

A: Patrick: No, that’s not quite it. We started the blog at the beginning of 2013. At least from our perspective, that was a time when publishing maps online got a lot easier. Mapbox and CartoDB skyrocketed, and we were thrilled about all the new possibilities that were coming up. Of course, there had been tons of great examples before, but for us, coming from an urban planning perspective and a rather narrow Esri ArcGIS-centered view and education, this was like a small revolution. Our first project was mapping all hotels, hostels, and Airbnb apartments in Hamburg. We scraped the data from various sources; putting it on a map and styling it in CartoDB was super easy. That’s what mappable was about in the beginning: trying out new tools, playing with the aesthetics of maps, and bringing data onto a map that you hadn’t seen mapped before.

Achim: I’d like to add one more point of view: Recent years have seen the generation of very large datasets which have a direct or indirect spatial reference and are therefore “mappable”. Previously, when thinking about spatial data, classic things like land-use parcels or streets came to mind. But today it also means live tracking data of taxis, whales, or planes; retail sales by store location; or the opening date of every Starbucks in the country. What’s new is that a number of those datasets – while clearly having a spatial component – were not generated to be spatially analyzed. I like to speak of recyclable datasets, since what we deal with are often residues from other processes, stored away in databases. Their spatial relevance becomes clear only at second or third glance. Analyzing and mapping those datasets can lead to completely new insights.

Mappable car-sharing timelapse

Q: What do you make of skeptics like Brian Timoney (and me) who keep surfacing things that maybe shouldn’t have been mapped? Do you have a good rule to go by that you could share when it comes to maps or other forms of information visualization?

A: Achim: Yeah, we quickly found out that not everything should be mapped – openly available but legally protected car sharing data, for example 😉 But seriously: I agree that there is a danger that more and more nonsensical things are mapped just for the sake of mapping. Currently, interactive maps in most cases guarantee quite high user interaction rates. From the journalism standpoint this poses a challenge: you always have to ask yourself whether the nice, well-designed map you could roll out really adds to the story, or is simply there to gain more page impressions.

That’s why I expect to see a lot of maps coming from PR departments in the coming years. You could compare this situation to the field of infographics. They got a lot of attention in recent years, so now it feels like more or less every advertising agency publishes 2-3 infographics per week. But at least from my personal view the effect is wearing off; I see far fewer infographics being shared in my social media feeds compared to maybe a year ago. I fear the same will happen with maps if too many nonsensical ones are published.

Patrick: Sure, just because making online maps nowadays is easier than ever doesn’t mean that everything that’s mappable should be mapped. The first question we always ask when someone wants a map visualization is whether the spatial component of the dataset is actually the most important thing. Sounds pretty straightforward, but obviously isn’t understood by everyone. We actually put together a small checklist on making geodata visualizations when we first got invited to speak about making maps at a journalism conference. We must admit, though, that in our everyday work we don’t really follow a fixed set of rules. One thing we always try to achieve is to keep our visualizations as simple as possible. That’s probably the influence of reading some books by Edward Tufte.

Q: Speaking of Tufte, who else would you consider a source of inspiration for your work, and how did you learn about them?

A: Patrick: Difficult question, there are just so many. I think we both draw a lot of inspiration from our Twitter feeds and from quite a long list of blogs we follow. To name a few (at the risk of forgetting a lot more people who do awesome work): Alberto Cairo, Nathan Yau, Gregor Aisch, Lynn Cherny, Andy Kirk, Mike Bostock, Maarten Lambrechts, and John Burn-Murdoch when it comes to dataviz, and Andy Woodruff, Lyzi Diamond, Hannes Kröger, and Alan McConchie when it comes to mapping. It amazes me again and again how much their openness, and the openness of the whole mapping and dataviz community, helps in learning things and keeping up to date. It’s not that I don’t appreciate my university education, but it’s that openness that enabled me to learn new tools and points of view, and finally to switch fields and get a job where I can do what I’m passionate about.

Q: And do you know already where the path will take the two of you? Will we see more maps by you?

A: Achim: Although it has been very quiet on mappable in recent months, we still create lots of maps, of course. Unfortunately, the ones I create for our customers at GGR are in most cases not open to the public. But on the other hand, hundreds of thousands of Spiegel Online visitors have already viewed our maps – that’s a good compromise that makes me quite happy 🙂

Q: Finally, any words of advice for us geohipsters or the world at large out there?

A: Patrick: To the geohipsters: Read this blog post on the future(s) of OpenStreetMap by Alan McConchie and help OSM move towards what he calls singularity. To the rest, and this might sound awkward considering I’m now a journalist myself: don’t let all the bad news cloud your world view. Check out the work of Hans Rosling and Max Roser and acknowledge the long-term positive trends, too.

Achim: I can only agree with this, and would like to add just one thought: given that we see so many technical improvements to mapping, like HD satellite video or high-precision maps for autonomous driving, we should sometimes take a moment to appreciate the craftsmanship of mapping, like the globes from Bellerby Globe Makers, or even pick up a pencil and a piece of paper and start doodling.

Vasile Cotovanu: “Better to publish a (not perfect) map than having a masterpiece … unpublished”

Vasile Cotovanu
Vasile Cotovanu is the author of the “SwissTrains” railway map -- one of the first animated public transport maps on the web. He has worked as a software engineer on the mobile team of local.ch, one of the most used websites in Switzerland.

Q: Hi Vasile, I’ve looked up your Twitter bio: “Husband and father, neogeographer, hacker and hiker, mobile software engineer, curious, never stop exploring”. I understand you’re currently between jobs and indeed exploring. What is it that you did in your old job?

A: The Twitter bio is outdated, but at the same time current 🙂 About the exploration bit: I am always doing that, whether I’m hiking in the Swiss mountains or playing with new technologies. With regard to the job, yes, I am taking a few months off to explore some pet projects that I’ve never had time to work on 🙂 Before this, my previous position was as a software engineer in the mobile team of local.ch, one of the most used websites in Switzerland.

Q: You have an education as a geomatics engineer. How did you get interested in that specifically?

A: My passion for geography started in early childhood, when I read Jules Verne books side-by-side with the world atlas 🙂 Geomatics came along in 1997, just before I graduated from high school, when I had to choose which university to attend. I stumbled upon a technology newspaper with an article on GIS software, which presented a classic solution for siting a new company branch (I think it was a bank) based on criteria like target population, existing branch offices, easy access, etc.

So I wanted to find a university that dealt with such problems, and the closest I could find was studying Geodesy at the Technical University of Civil Engineering in Bucharest, Romania. After graduating in 2002, I worked as a technical analyst on surveying and photogrammetry projects with a strong focus on the GIS-related programming part. It was the time of learning and discovering GeoMedia, ArcGIS, and FME.

However, in 2005, with the online maps revolution that Google Maps started, I too jumped on that “train”: I began working as a freelancer and created various map mashups. Since then I haven’t worked in the classic field of geomatics but have followed a rather different path: web programming with a focus on geo-related projects – which I’m still doing today. Except that I’ve ditched the “web” prefix, because over time I gained experience across the whole production pipeline, from raw geodata to presenting it on a map, whether on the web or (more recently) on mobile.

Q: Did your work for your old employer also relate in any way to maps, geo data or GIS, or have these things been hobbies for you?

A: One of the goals of my former company was to provide the best local search experience in Switzerland, so you can imagine it dealt with a lot of geodata! I joined them when they needed someone to help migrate to a new online maps provider, and also to grab (read: crawl) the Swiss public transport data for the local search directory.

My colleagues there called me “Mr. Geo”, as most of the geo-related questions were routed to me, but also because I had this passion, or obsession even, for geo-quizzes. So they were always testing me by showing me photos from their trips and asking me to guess the location 🙂

Over my last two years at local.ch I wasn’t exposed to maps and geo-related topics as much as I wanted, but more to mobile technologies. So this was a perfect opportunity for me to learn native development on mobile platforms (iOS and Android) and apply my geo-related knowledge there too.

Q: You’re famous for your “SwissTrains” railway map. In my mind you were one of the first, if not the first, to build animated public transport maps on the web. When did you start this project, and why this fascination with trains? Are you actually what we call a ‘ferrophile’ in German, i.e. a train lover, rather than a geohipster?

A: Actually both! In the railway department, call me trainbuff, railfan, trainspotter, whatever fits 🙂 Yes, I love trains! As a little child I used to go to a friend and play with his model trains. Later I studied building tracks and railway models. And today I am still playing with trains, except that I am drawing and animating them on web and mobile platforms 🙂

I started the SwissTrains project in 2007 as a challenge to visualize the impressive network of Swiss Railways, which consists of 13,000 trains on 5,000 km of tracks, generating 150,000 timetable stops across 1,800 stations. In the early years I did a lot of the heavy lifting myself, because no vector data were available; I manually digitized the whole Swiss rail network and mined the timetable data from the official providers.

After some really nice press coverage and a lot of feedback from not only train enthusiasts, I pushed the project to the next level: I open-sourced it, helped others to implement it for their regions, and from time to time I improved its codebase by updating the APIs or adding new tools to it.

Q: I imagine your railway maps have experienced quite some technological development over time. What technologies did you use, do you use, and why?

A: Back in 2007 I was still working on a Windows machine, and for my projects I used commercial GIS tools like GeoMedia (to digitize and analyse the network topology) and MySQL (to store and query the timetable data). The two “databases” were integrated with a pair of custom-made scripts and FME workspaces that didn’t always run smoothly, so there was also a bit of manual work involved, which obviously wasn’t great. On the (web) client, things weren’t any smoother: the old GMaps v2 wasn’t too fast, and I was doing a lot of rendering on the client, e.g. the network polylines were loaded by the client instead of being deferred to a tile service of some sort. I don’t even want to mention some aspects of the UX or the performance in mobile browsers at the time 🙂

So, as you can imagine, I needed to do something to make SwissTrains scale and perform nicely. I cut all commercial software dependencies and started relying exclusively on open source tools. For instance, for capturing and analysing network topology changes I started using a custom online editor based on the GMaps API and later the GeoAdmin API. For managing timetable information I started using SQLite and custom-made integration scripts written in Ruby or Python. On the client side, I took advantage of GMaps v3, which had a mobile-first approach, so it rendered nicely on small screens. Additionally, I delegated the loading of extra layers to tools like Fusion Tables.
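(As an illustration only: a toy version of such an SQLite-backed timetable lookup might look like the sketch below. The schema, file name, and station code are invented here; they are not the actual SwissTrains code.)

    import sqlite3

    def departures(db_path, station_id, after="08:00"):
        """List (trip_id, departure_time) pairs for a station, ordered by time."""
        conn = sqlite3.connect(db_path)
        try:
            return conn.execute(
                """SELECT trip_id, departure_time
                   FROM stop_times
                   WHERE station_id = ? AND departure_time >= ?
                   ORDER BY departure_time""",
                (station_id, after),
            ).fetchall()
        finally:
            conn.close()

    # Hypothetical usage:
    # for trip, dep in departures("swisstrains.sqlite", "ZUE"):
    #     print(trip, dep)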

However, any time I revisit the project, I am tempted to refactor big parts of the code. But I try to keep this urge that every programmer has (right?) in check and focus on what is really broken. Or I add wholly new features like realtime imagery updates of the trains, because, remember, my ultimate goal is to play with model trains, but in the browser 🙂

Q: What are you currently working on? I think I’ve seen a tweet of yours where you were looking for beta testers recently?

A: SwissTrains, what else? Jokes aside, yes, I plan to release it on native mobile platforms like iOS and Android. As a matter of fact, I’m testing it in a beta phase for iOS, so anyone can enroll here via TestFlight.

SwissTrains iOS v2

Besides that, as I said in the beginning, I have some other pet projects in the idea or alpha phase, focused on geo and mobile platforms. They will certainly keep me busy. Plus I have a 4-year-old toddler who is slowly but surely developing into a trainspotter like his dad 🙂

Vlad — future trainspotter

Q: And do you know already where the path will take you? Will we see more web maps by you?

A: That’s not yet decided, but I know one thing: I want to work on more geo-related projects, ideally at a company that also deals with native mobile technologies. I’m therefore currently building up my iOS and Android experience and preparing myself for my next employer.

However, I will still have a soft spot for web and maps in general, so I will not stop publishing those completely. On the contrary, like in the past, I plan to continue rolling out maps on my personal website. Most of them are listed here: http://www.vasile.ch/work

Q: On closing, what words of advice would you like to give to all the geohipsters out there, or to the world in general?

A: I usually don’t give advice to strangers, but since you asked, here we go: Keep exploring and have fun with your map projects. If you get blocked for whatever reason, let’s say technical, you can park the idea for later, but not for too long, because better to publish a (not perfect) map than having a masterpiece … unpublished. And please, don’t call it an MVP 🙂

Maps and mappers of the 2016 calendar: Andrew Zolnai

In our series “Maps and mappers of the 2016 calendar” we will present throughout 2016 the mapmakers who submitted their creations for inclusion in the 2016 GeoHipster calendar.

***

Andrew Zolnai

Q: Tell us about yourself.

A: I’m a geologist who turned to computer mapping 30 years ago and GIS 20 years ago – high school Latin helped me transition to coding, just short of programming – and I have now started my third business and assisted two others. I take a ‘business process first’ approach, using mind mapping as a ‘talking point’ to help firms help themselves; that process determines workflows in resource planning which may invoke web maps. My Volunteered Geographic Information work also helps individuals and academics put themselves on the map.

Q: Tell us the story behind your map (what inspired you to make it, what did you learn while making it, or any other aspects of the map or its creation you would like people to know).

A: Ken Field’s hexagon maps, featured on the BBC during the UK elections this spring, inspired me to do the same for the US Gulf of Mexico: 50K oil wells taxed arcgis.com, so binning the data points allowed me to show progressively more detail as you zoom in to larger scales. The map clearly shows, for example, the march of wells further offshore over time, in a way that speaks to stakeholders and the public as well as to engineers and mappers.
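(The map itself was built with Esri tools, as described below, but the core idea of hexagonal binning, aggregating many points into hexagonal cells and symbolizing each cell by its count, can be sketched in a few lines of Python. The coordinates below are random stand-ins, not the actual well data.)

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(42)
    # ~50K fake "well" locations, loosely spanning the Gulf of Mexico.
    lon = rng.uniform(-97.5, -88.0, 50_000)
    lat = rng.uniform(26.0, 30.0, 50_000)

    # Aggregate the points into hexagonal cells and colour each cell by count.
    fig, ax = plt.subplots(figsize=(10, 5))
    hb = ax.hexbin(lon, lat, gridsize=60, cmap="viridis", mincnt=1)
    fig.colorbar(hb, ax=ax, label="wells per hexagon")
    ax.set_xlabel("longitude")
    ax.set_ylabel("latitude")
    fig.savefig("hexbin_wells.png", dpi=150)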

Q: Tell us about the tools, data, etc., you used to make the map.

A: Esri ArcGIS for Desktop Standard and ModelBuilder, with scripts adapted from Esri’s Ken Field applied to US Gulf of Mexico wells; posted on ArcGIS Online.

‘Hexagon binning, US Gulf of Mexico oilwells’ by Andrew Zolnai