
Will Cadell: “Cities are people, and people are maps”

Will Cadell
Will Cadell is the founder and CEO of Sparkgeo.com, a Prince George-based business which builds geospatial technology for some of the biggest companies on Earth. Since starting Sparkgeo, he has been helping startups, large enterprises, and non-profits across North America make the most of location and geospatial technology.

Leading a team of highly specialized, deeply skilled geospatial web engineers, Will has built products, won patents, and generally broken the rules. Holding a degree in Electronic and Electrical Engineering and a Masters in Environmental Remote Sensing, Will has worked in academia, government, and in the private sector on two different continents, making things better with technology. He is on the board of Innovation Central Society, a non-profit society committed to growing and supporting technology entrepreneurs in North Central BC.

I’m not really old enough to reflect on cartography and its “nature”, but I do want to comment on a trend I see in the modern state of our art and suggest that it points back to an old truism.

At Sparkgeo we have a unique position in the market. Let me clarify that position: we create web mapping products, meaning cartographic or simply geographic products built for people to consume primarily via web browsers. Additionally, we are vendor agnostic and continue to push the idea of geographic excellence & client pragmatism rather than particular brands. We work with organizations as diverse as financial institutions, startups, big tech, satellite companies and non-profits. In essence we build a lot of geographic technology for a lot of very different organizations. We have also created paper maps, but in the last half decade I haven’t created a paper product. Not because we haven’t pursued projects of this nature, but because no one has asked us to. To be clear, we have created signage for trail networks and such, but our activity with personal mapping products has moved to the web almost completely.

Telling. But not entirely surprising given that maps are largely tools and tools evolve with available technology.

Our position in the market, therefore, is as a company creating cartographic products using the medium most pertinent to the users of that product. In the vast majority of cases, those users are on a computer or, most likely, a mobile device.

Maps are of course defined by the relationship between things and people: an art form which links people to events and things on our (or indeed any other) planet. People and places, my friends. This will be obvious to most of my readers here, but what may be less obvious is the linkage our industry must therefore have to cities. More so, that cities, and indeed urbanization, have crafted and will continue to craft the art of cartography for our still young millennium.

I say this whilst flying from one highly urbanized place to another, but also whilst calling relative rurality home. I am a great fan of open space, but even I can see that large groups of people are sculpting the future of our industry. It could be argued that cartography was originally driven by the ideas of discovery & conquest. Conquest, or our more modern equivalent, “defense”, is still very much an industrial presence. Subsequently, it could be argued that ‘GIS’ was driven by the resource sector; indeed much effort is still being undertaken in this space. I would have, until the last half decade, still argued that geospatial was in the majority the domain of those in the defense trade and the resource sector. Not so now. We have become an urban animal, and with that urbanization it is clear that the inhabitants and administrators of our cities will drive geospatial. Cities and their evolution into smart cities will determine how we understand digital geography.

Let’s take a look at some of the industrial ecology which has enabled this trend. My hope is to engender some argument and discussion. Feel free to dissent and challenge; we are all better for it. I want to talk briefly about five key features of our environment which have individually, but more so together, altered the tide of our industry.

1. people

It is clear that the general trend has been, and continues to be, for people to move toward cities (https://en.wikipedia.org/wiki/Urbanization_by_country). Now, though I dispute that this is necessary (https://www.linkedin.com/pulse/location-life-livelihood-will-cadell), I cannot ignore the evidence that clearly describes the mass migration of people of most nationalities towards the more urbanized areas of their worlds. Our pastoral days have been coming to an end for some time. We will of course always need food, but the vast majority of Earth’s population will be in or around cities. The likelihood of employment, economy, and *success*, it seems, is central to this trend.

Where there are people there is entrepreneurism, administration and now, devices. Entrepreneurism and devices mean data; administration and devices mean data.

2. devices

Our world is becoming urbanized and our urbanized world is connected. Our devices, our sensors, are helping to augment our realities with extra information. The weather of the place we are about to arrive at, the result of a presidential debate, the nearest vendor of my favorite coffee and opinions disputing the quality of my favorite coffee. Ah, the Internet. My reality is now much wider than it would have been without my device. Some might argue shallower too, but that is a different discussion. The central point here is that my device detects things about my personal universe and stores those data points in a variety of places. I now travel with three devices: a laptop, tablet and phone. This would have been ludicrous to me a decade ago, but much of what I do now would have been ludicrous a decade ago. We truly live in the future. Much of that future has been enabled by devices and our subsequently connected egos.

Devices capture data. Really, a device is just a node attached to a variety of greater networks. Whether those networks are temperature gradients, a telephonic grid, home wifi, elevation or a rapid transit line, the device is simply trying to record its place in our multidimensional network and relay that in some meaningful way back to you, and likely to a software vendor. Devices capture and receive data on those networks. That data could be your voice or a location, and that data could be going A N Y W H E R E.
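
To make that concrete, here is a minimal, hypothetical sketch (in Python, with invented identifiers and coordinates) of the kind of payload a phone might assemble and relay to a vendor:

```python
import json
import time

# A hypothetical reading a phone might capture: its place in several networks
# at once (GPS, wifi, cellular), plus a timestamp. All values are invented.
reading = {
    "device_id": "example-device-001",
    "timestamp": time.time(),
    "location": {"lat": 53.917, "lon": -122.750},  # roughly Prince George, BC
    "wifi_ssid": "home-network",
    "cell_id": "example-cell-1234",
}

# Serialised, this is the sort of payload that heads off to a software vendor.
print(json.dumps(reading, indent=2))
```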

But the fact that the data is multidimensional and likely has a location component is critical for the geospatially inclined amongst us. The crowd-sourcing effect, coupled with the urbanization effect, equals enormous amounts of location data. That is the basic social contract of consumer geospatial.

3. connectivity

Of course, the abilities of our devices are magnified by connectivity, wifi or otherwise. Although Sparkgeo is still creating online–offline solutions for data capture, these are becoming the exception rather than the rule. Connectivity is a modern utility, and it is a competitive advantage that urban centers hold over rurality. With increased connectivity we have greater access to data transfer; connectivity is thus enabling geographic data capture. Its presence encourages the use of devices, which capture data that is often geographic. Urban areas have greater access to connectivity due to the better economies of scale for the cellular and cable companies (who are quickly becoming digital media distribution companies). It is simple really: more people in less area equals more money for less infrastructural investment. For the purposes of this article we just need to concede that the multitude of devices talked about above are more connected for less money in cities than anywhere else.

4. compute

Compute is the ability to turn the data we collect into ‘more’, whatever that might mean; perhaps some data science, or ‘analysis’ as we used to call it, perhaps some machine learning. In essence, compute is joining data to a process to achieve something greater. Amazon Web Services, and subsequently Microsoft’s Azure and Google’s Cloud platforms, have provided us with amazing access to relatively inexpensive infrastructure which supports the ability to undertake meaningful compute on the web. Not enough can be said about the opportunity that increased compute on the web provides, but consider that GIS has typically been data-limited and RAM-limited. With access to robust cloud networks, those two limitations have been entirely removed.
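
As a toy illustration of what “joining data to a process” can mean, here is a minimal Python sketch that takes a few invented GPS points and computes the distance travelled along them. The coordinates and the trivial analysis are my own example, not anything specific to a particular cloud platform:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p1, p2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p1, *p2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# An invented GPS trace: the kind of data points a connected device records.
trace = [(53.9171, -122.7497), (53.9203, -122.7441), (53.9260, -122.7389)]

# The "compute" step: join the raw points to a process and get something more,
# here simply the total distance travelled along the trace.
total_km = sum(haversine_km(a, b) for a, b in zip(trace, trace[1:]))
print(f"distance travelled: {total_km:.2f} km")
```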

5. data

People and devices mean data. Without doubt, lots of people and lots of devices mean lots of data, but there is also likely a multiplier effect here as we become accustomed to creating data via communication and consumption of digital services. As an example, more ride-sharing happens in urbanized locations, so more data is created in that regard. Connectivity to various networks enabled those rides. Compute will be applied to those recorded data points to determine everything from the cost of the journey to the impact on a municipal transit network and congestion. At every step in that chain of events more data was created, obviously adding more data volume, but also a greater opportunity for understanding; of what, is yet to be seen. Beyond consumer applications, however, city administrations and their data also play deeply into this equation.

With these supportive trends we have seen two ends of our industry grow enormously. It is a wonderful, organic symbiosis really.

On one hand we have the idea of consumer geospatial (Google Maps, Mapbox), which has put robust mapping platforms in the hands of everyone with an appropriate device. Consumer geospatial has enabled activities like location based social networks (Nextdoor), location based advertising (Pokemon Go), ride sharing (Uber, Lyft), wearables (Fitbit, Apple watch), quantified self (Strava, Life360), connected vehicles (Tesla, Uber), digital realty (Zillow), and many others.

On the other hand, we have seen a rise in the availability of data, and in particular open data. Open data is the publishing of data sets describing features of potentially public interest such as financial reports, road networks, public health records, zip-code areas, census statistics, detected earthquakes, etc.

The great promises of open data are increased transparency and an enabling effect: the enabling of entrepreneurism based on the availability of data to which value can subsequently be added. Typically, bigger cities have more open data available. This is not always true, and the developing world is still approaching this problem, but in general terms a bigger population pays more tax, which supports a bigger municipal infrastructure, which therefore has the ability to do ‘more’. In recent discussions I am still asked whether those promises are being kept: is the investment worth it? The question of transparency is ‘above my pay grade’, but I can genuinely attest to the entrepreneurial benefit of open data. Though that benefit might not be realized in the geographic community where the data is published, as a community of data consumers we do benefit through better navigational aids, more robust consumer geospatial platforms and ‘better technology’. As a company, we at Sparkgeo have recently built a practice around the identification, assessment, cleansing and generalization of open data, because demand for this work never ceases. It’s clear that our open data revolution is in a somewhat chaotic (*ref) phase, but it is very much here to stay.
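
As an illustration of what an identification, assessment, cleansing and generalization pipeline can look like, here is a minimal sketch using geopandas against a hypothetical municipal GeoJSON endpoint. The URL, the attribute assumptions and the tolerance are placeholders, and this is not a description of Sparkgeo’s actual practice:

```python
import geopandas as gpd  # assumes geopandas is installed

# Hypothetical open-data endpoint; many municipal portals publish GeoJSON like this.
URL = "https://data.example-city.gov/roads.geojson"  # placeholder, not a real portal

# Identification and assessment: load the layer and see what we actually have.
roads = gpd.read_file(URL)
print(roads.crs, len(roads), list(roads.columns))

# Cleansing: drop features with missing or empty geometries.
roads = roads[roads.geometry.notna() & ~roads.geometry.is_empty]

# Generalization: reproject to metres and simplify geometries for web delivery.
roads = roads.to_crs(epsg=3857)
roads["geometry"] = roads.geometry.simplify(tolerance=5)  # ~5 m tolerance

roads.to_file("roads_clean.geojson", driver="GeoJSON")
```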

Our geospatial technology industry has taken note too. Greater emphasis from Esri on opening municipal datasets through its http://opendata.arcgis.com/ program is an interesting way for cities which might already be using Esri products to get more data “out”. Additionally, Mapbox Cities (https://www.mapbox.com/blog/mapbox-cities/) is a program which is also looking at how to magnify the urban data effect. Clearly there is industrial interest in supporting cities in the management of ever-growing data resources. Consider that Uber, an overtly urban company, is building its very own map fabric.

If we combine the ideas of consumer geospatial and those of open data, what do we reveal? Amongst other things we can see that more & better data result in many benefits for the consumer, typically in the form of services and products. But we can also see that too much focus on consumer & crowd-based data can be problematic. Indeed, the very nature of the ‘crowd’ is to be less precise and more general. The ‘mob’ is not very nuanced, yet. For crowd-based nuance, we can look to advances in machine learning and AI. In the meantime, it’s great to ask the crowd if something exists, but it’s terrible to ask the crowd where that thing is, precisely.

> “is there a new subdivision?” – “yes!”

> “When, exactly should my automated vehicle start to initiate its turn to enter that new subdivision?” – “Now, no wait, now… stop, go back”

Generalization and subsequent trend determination are the domain of the crowd; precision through complex environments is something much more tangible, especially if you miss the turn. As we move towards our automated vehicle future, once that vehicle knows a new subdivision exists, it can conceivably use on-board LiDAR to provide highly detailed data back to whomever it may concern. This is really where smart cities need to enter our purview. Smart cities will help join the consumer web to the municipal web, and indeed numerous other webs too. Not to be too facetious, but my notion of consumer geospatial could also be a loose description of an Internet of Things (IoT) application. Smart cities are in essence an expansive IoT problem set.

It’s clear that cities with their growing populations have in-part driven our understanding of people and digital geography through greater data volume. But as we push harder into what a future smart city will look like, we will also start to see even greater multiplier effects.

Gary Gale: “Dear OSM, it’s time to get your finger out”

Gary Gale
A self-professed map addict, Gary Gale has worked in the mapping and location space for over 20 years through a combination of luck and occasional good judgement. He is co-founder and director of Malstow Geospatial, a consultancy firm offering bespoke consulting and services in the geospatial, geotechnology, maps and location based services fields. A Fellow of the Royal Geographical Society, he tweets about maps, writes about them, and even occasionally makes them.

This is very much an opinion piece of writing, and as such I want to start with a disclaimer. In the past I’ve worked on Yahoo’s maps, on Ovi/Nokia/HERE maps, and these days I’m freelancing, which means the Ordnance Survey — the United Kingdom’s national mapping agency — is my current employer. What follows is my opinion and views, not those of my current employer, not those of previous employers, and certainly not those of future employers. It’s just me. So with that out of the way and stated upfront, I want to opine on OpenStreetMap.

***

Dear OpenStreetMap, you are truly amazing. Since you started in 2004 with those first few nodes, ways and relationships, you have — to paraphrase a certain Dr. Eldon Tyrell — burned so very, very brightly. (Those of you who know your Blade Runner quotes will know that just after saying this, Tyrell was killed by Roy Baty; I’m not suggesting that anyone should take this literally.)

Just looking at the latest set of database statistics (over 4.6 billion GPS points, over 2.8 billion nodes, over 282.5 million ways, and 3.2 million relationships as of today’s figures) shows how impressive all of this is.

The maps and data you’ve created are a key element of what’s today loosely termed the geoweb, enabling startups to create maps at little or no cost, allowing some amazing cartography to be created, stimulating research projects, and allowing businesses to spring up to monetise all of this data — some successfully, such as MapBox, and some less successfully, such as CloudMade.

After reading all of this amazement and adoration, you’re probably expecting the next sentence to start with “But …”, and I’m afraid you’d be right.

But times change, and the mapping and location world we live in has changed rapidly and in unexpected ways since OSM started in 2004. In just over a decade the web has gone mobile with the explosive growth of sensor-laden smartphones, and location is big business — $3.8 billion’s worth of big in 2018 if you believe Berg Insight.

United Nations of smartphones

In 2004 if you wanted maps or mapping data then you either went to one of the national or cadastral mapping agencies — such as the UK’s Ordnance Survey — or you went to one of the global, automotive-focused mapping providers; you went to NAVTEQ or to TeleAtlas. Maps and mapping data are expensive to make and expensive to maintain, and this expense was, and continues to be, reflected in the licensing charge you paid for mapping data, as well as in the restrictions around what you could and couldn’t do with that data. The high cost of data and the license restrictions were among the key drivers for the establishment of OpenStreetMap in the first place.

Eleven years on, and the mapping industry landscape is a very different one. NAVTEQ and TeleAtlas are no longer independent entities — Chicago-based NAVTEQ was acquired by Nokia in 2008 after an EU antitrust investigation gave the deal the green light, and Amsterdam-based TeleAtlas has been part of TomTom since 2008. Both companies continue to license their mapping data and their services, with NAVTEQ — now known as HERE — powering the maps for Bing and Yahoo amongst others, and TomTom licensing their data to MapQuest and to Apple as part of the relaunch of Apple Maps in 2012.

There have also been changes from the national and cadastral mapping agencies, with more and more data being released under various forms of open license — including the Ordnance Survey’s open data program, which, in direct contrast to the old licensing regime, is now under one of the most liberal of licenses.

In 2007 there was much attention paid to the NAVTEQ and TeleAtlas deals, citing uncertainty surrounding continued data supply to the maps and location industry. It was also predicted that the PND market — those personal navigation devices from TomTom and Garmin that sat on top of your car’s dashboard and announced “you have reached your destination” — would collapse rapidly as a result of the rapid growth of GPS-equipped smartphones.

These concerns and predictions got only half of the outcome right. The PND market did collapse, but the map data continued to flow, although it’s fair to say that as OSM matured and grew a reasonable chunk of revenue and strategic deals were lost — both directly and indirectly — to OSM itself, and to the organisations who act as a business-friendly face to OSM. But as Greek philosopher Heraclitus once said, everything changes and nothing stands still.

In the last few weeks it’s been widely reported that Nokia is looking to sell off HERE, the company formed by the (sometimes unwilling) union of NAVTEQ and Nokia Maps. Speculation runs rife as to who will become the new owner of HERE, with Uber seeming to be the pundits’ favourite buyer. But whoever does end up owning HERE’s mapping platform, the underlying map data, and the sizeable mapping and surveying fleet, it now seems clear that just as the days of NAVTEQ and TeleAtlas as independent mapping organisations came to an end, the days of HERE are coming to an end. This has also shone the spotlight onto TomTom, who, whilst making inroads into NAVTEQ’s share of the automotive data market, seems reliant on their deal with Apple to keep revenue flowing in.

Even before the speculation around HERE’s new owner, there were really only three major sources of global mapping, location and geospatial data: NAVTEQ in its current HERE incarnation, TeleAtlas under the mantle of TomTom, and … OpenStreetMap.

When — rather than if — HERE changes ownership, there’s a very real risk that the new owners will turn the data flow and services built on that data inwards, for their own use and their use only, leaving just two major global map sources.

Surely now is the moment for OpenStreetMap to accelerate adoption, usage and uptake? But why hasn’t this already happened? Why hasn’t the geospatial world run lovingly into OSM’s arms?

Acceleration

To my mind there are two barriers to greater and more widespread adoption, both of which can be overcome if there’s sufficient will to overcome them within the OSM community as a whole. These barriers are, in no particular order … licensing, and OSM not being seen as (more) conducive to working with business.

Firstly I want to deal with making OSM more business-friendly, as this is probably a bigger barrier to more widespread adoption than licensing. For anything other than a startup or SME with substantial geospatial competency already in-house, dealing with OSM and comprehending OSM can be a confusing proposition. What is OSM exactly? Is it the community? Is it the OpenStreetMap Foundation? Is it the Humanitarian OpenStreetMap Team? Is it one of the companies in the OSM ecosystem that offers services built on top of OSM? All of them? Some of them? None of them?

There’s no doubt that OSM has a vibrant and active map-making and developer-friendly ecosystem in the form of the OSM Wiki and mailing lists alone, even before you factor in the supporting, indirect ecosystem of individuals, community projects and organisations. But this isn’t enough. Business needs a single point of contact to liaise with; indeed, it often insists on one, and will look elsewhere if it can’t find that point of contact with anything more than the most cursory of searches. Whether it’s OSM itself in some shape or form, or a single organisation that stands for and represents OSM, this is the biggest barrier to continued OSM adoption that there is, although it may not necessarily be the one which requires the most work to overcome. For that barrier you need look no further than the ODbL, the Open Database License, under which OSM’s data is licensed.

This is a contentious issue and one which is usually met with a deep sigh and the muttering of “not this again“. Prior to 2010, OSM data was licensed under the Creative Commons Attribution ShareAlike license, normally shortened to CC-BY-SA. This license says that you are free to …

Share — copy and redistribute the material in any medium or format

Adapt — remix, transform, and build upon the material for any purpose, even commercially

With hindsight, adoption of CC-BY-SA made a lot of sense, preserving and acknowledging the stupendous input made by all OSM contributors. It’s not for nothing that the credits for using OSM are “© OpenStreetMap contributors“.

But CC-BY-SA’s key weakness for OSM was that it is a license designed for the concept of “material“; for creative works and not specifically for data or for databases. This is understandable; at the time OSM adopted CC-BY-SA, such a data-centric license simply didn’t exist, and CC-BY-SA was the best option available. But in 2010, after much discussion and dissent, OSM switched to the data- and database-specific Open Database License. The ODbL maintains the same attribution and share-alike clauses, but phrased in legal language specifically for data sets. It seems like the perfect license for OSM, but it’s not.

The attribution clauses in both CC-BY-SA and the ODbL are not at issue. Such clauses mean that the efforts of those who have made OSM what it is are formally acknowledged. The issue is the share-alike clause in both licenses, although it’s fair to say that there are subtleties at play due to the many and varied ways in which OSM data can be “consumed“.

Be Afraid. Consume.

If your consumption of OSM data is a passive one, then the share-alike clause probably has little or no impact. By passive, I mean that as a user you are consuming data from OSM via some form of service provider, and your consumption takes the form of an immutable payload from that service, such as pre-rendered map tiles.

But what if your consumption of OSM data still comes from a service, but takes the form of actual data — such as the results of a geocoder, or some other geospatial search? Such results are typically stored within a back-end data store, meaning the end result is a new dataset which comprises the original data plus the results of the search. Does this trigger the share-alike clause? This is still an ambiguous area, although current guidelines suggest that the resulting, aggregate data set is a produced work rather than a derived one, and so is exempt from triggering the share-alike clause. But there is also a counter-argument that such an action is indeed a derived work, and so the share-alike clause does apply. This ambiguity alone needs to be resolved, one way or another, in order to make OSM an attractive proposition for business.
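
To make the scenario concrete, here is a small Python sketch of exactly this kind of consumption: querying OSM’s Nominatim geocoder and storing the response in a local database. The query string and the table schema are invented for illustration; whether the resulting table is a produced work or a derived work is precisely the ambiguity described above:

```python
import json
import sqlite3
import requests

# Query OSM's Nominatim geocoder (an identifying User-Agent is required by its usage policy).
resp = requests.get(
    "https://nominatim.openstreetmap.org/search",
    params={"q": "Ordnance Survey, Southampton", "format": "json", "limit": 1},
    headers={"User-Agent": "odbl-example/0.1 (illustration only)"},
    timeout=10,
)
results = resp.json()

# Persist the result in a back-end store: the original query plus OSM-derived
# coordinates now sit together in a new dataset.
conn = sqlite3.connect("geocoded.sqlite")
conn.execute("CREATE TABLE IF NOT EXISTS places (query TEXT, lat REAL, lon REAL, raw TEXT)")
if results:
    hit = results[0]
    conn.execute(
        "INSERT INTO places VALUES (?, ?, ?, ?)",
        ("Ordnance Survey, Southampton", float(hit["lat"]), float(hit["lon"]), json.dumps(hit)),
    )
conn.commit()
conn.close()
```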

The final share-alike complication rears its head when your method of consuming OSM data is to merge one or more data sets with OSM to use the resultant data for some purpose. This sort of data aggregation is often called co-mingling in licensing and legal parlance.

If all the datasets you are dealing with are licensed under the ODbL, then the share-alike clause potentially has little impact, as effectively ODbL plus ODbL equals … ODbL. Things are a little less certain when you co-mingle with datasets which are deemed to be licensed under a compatible license. Quite what a compatible license is hasn’t been defined. Open Data Commons, the organisation behind the ODbL, only says that “any compatible license would, for example, have to contain similar share-alike provisions if it were to be compatible“, which, while helpful, isn’t a clear-cut list of licenses that are compatible. At the time of writing I was unable to find any such list.

But if the data you want to co-mingle with OSM, or indeed with any ODbL-licensed data, is data that you don’t want to share with the “community” — which of course will include your competitors — the only way to prevent this is not to use the ODbL-licensed data, which means not using OSM in this manner. To be blunt, mixing any data with a share-alike clause means you can lose control of your data, which is probably part of your organisation’s intellectual property and has cost time and money to put together. It’s acknowledged that not all co-mingling of datasets will trigger the share-alike clause; there needs to be “the Extraction and Re-utilisation of the whole or a Substantial part of the Contents” in order for the share-alike, or indeed the attribution, clauses of the ODbL to kick in. The problem is that what’s classed as “substantial” isn’t defined at all, and Open Data Commons notes that “the exact interpretation (of substantial) would remain with the courts“.
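
For readers who prefer to see it rather than read it, here is a toy Python sketch, with entirely invented values, of what co-mingling looks like in practice: an OSM-derived table merged with a proprietary one, producing a combined dataset whose licensing status is the open question above.

```python
import pandas as pd

# Invented example data: a handful of OSM-derived points of interest (ODbL) ...
osm_pois = pd.DataFrame({
    "name": ["Cafe A", "Cafe B"],
    "lat": [51.501, 51.503],
    "lon": [-0.142, -0.139],
})

# ... and a proprietary dataset you would rather keep to yourself.
internal = pd.DataFrame({
    "name": ["Cafe A", "Cafe B"],
    "avg_spend_gbp": [4.20, 3.80],
})

# Co-mingling: the merged table now holds both ODbL-licensed and proprietary
# columns; whether share-alike attaches to it depends on the questions of
# "derived work" and "substantial" discussed in the text.
combined = osm_pois.merge(internal, on="name")
print(combined)
```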

If you pause and re-read the last few paragraphs, you’ll notice words and phrases such as “ambiguity“, “isn’t defined” and “exact interpretation“. All of which adds up to an unattractive proposition for businesses considering using OSM, or any open data license with a share-alike clause. For smaller businesses, finding the right path through licensing requires costly legal interpretation, and where money is tight such a path will simply be ignored. For larger businesses, often with an in-house legal team, a risk analysis will often result in an assessment that precludes using data with such a license, as the risk is deemed too great.

Crossroads

OSM as a community, as a data set, as a maps and map data provider, and as an entity is at a crossroads. It’s been at this metaphorical crossroads for a while now, but the way in which the industry is rapidly changing and evolving means there are two challenges that OSM should be encouraged to overcome, if there’s a concerted will to do so.

In almost every one of my previous corporate roles I’ve tried to push usage and adoption of OSM to the business, with the notable exception of my time with Lokku and OpenCage Data, where OSM is already in active use. Initially, reaction is extremely positive: “This is amazing“, “Why didn’t we know about this before?“, and “This is just what we’re looking for” are common reactions.

But after the initial euphoria has worn off and the business looks at OSM’s proposition, the reaction is far from positive. “Who are we doing business with here? OSM or another organisation?“, “We can’t have a business relationship with a Wiki or a mailing list“, and “Legal have taken a look at the license, and the risk of using ODbL data is too great, I’m afraid” are paraphrased reactions I’ve heard so many times. To date, not one of the companies I’ve worked in has used OSM for anything other than the most trivial of base mapping tasks, which is such a loss of potential exposure for OSM to the wider geospatial and developer markets.

In short, the lack of a business-facing and business-friendly approach, coupled with the risks and ambiguity over licensing, are what is holding OSM back from achieving far more than it currently does. But it doesn’t have to be this way.

More than anything, OSM needs a business-friendly face. This doesn’t have to be provided by OSM itself; an existing organisation or a new one could provide this, hopefully with the blessing and assent of the OSMF and of enough of a majority of the OSM community. It’s also worth considering a consortium of existing OSM-based businesses, such as MapBox or GeoFabrik or OpenCage Data, getting together under an OSM For Business banner.

Coupled with a new approach to engaging with business, the licensing challenge could be solved by re-licensing OSM data under a license that retains the attribution clause but removes the share-alike clause. Unlike the transition from CC-BY-SA, which had to wait for the ODbL to be created, such a license already exists in the form of the Open Data Commons Attribution license.

I do not claim for one second that making OSM business-friendly and re-licensing OSM are trivial matters, nor are there quick fixes to make this happen. I also do not doubt that some sections of the OSM community will be quick to explain why this isn’t needed and that OSM is doing very nicely as it stands, thank you very much. And I wouldn’t contest such views for a second. OSM is doing very nicely and will, I believe, continue to do so.

This isn’t about success or failure; OSM will continue to grow and will overcome future challenges. But OSM could be so much more than it currently is, and for that to happen there has to be change.

In his widely shared and syndicated post Why The World Needs OpenStreetMap, Serge Wroclawski wrote …

Place is a shared resource, and when you give all that power to a single entity, you are giving them the power not only to tell you about your location, but to shape it

These words rang true in early 2014 when Serge first published his post, and they ring doubly true in today’s world, where sources of global mapping data are being acquired, where the number of options available for getting and using mapping data is shrinking, and where there’s a very real possibility that the power to say what is on the map and what is under the map ends up in the hands of a very small, select group of companies and sources.

So, dear OSM, the world needs you now more than it needed you when you started out, and a lot more than it needed you in 2014. OSM will continue to be amazing, but with change OSM can achieve so much more than was ever dreamed when the first nodes, ways, and relationships were collected in 2004 — if you just get your community finger out and agree that you want to be more than you currently are.

(For non-British readers, “get your finger out” is a colloquial term for “stop procrastinating and get on with it”)

Image credits: Acceleration by Alexander Granholm, CC-BY. United Nations of smartphone operating systems by Jon Fingas, CC-BY-ND. Be Afraid. Consume by What What, CC-BY-NC-SA. Crossroads by Lori Greig, CC-BY-NC-ND.