The Ethical Navigator
For the directionally challenged among us, the advent of GPS has proven revolutionary. Never before has it been so easy to figure out how to get somewhere and, with apps like Google Maps, how long it will take to do so. In this light, such apps provide a crucial source of information for millions of users. But what if these navigational apps could provide an ethical benefit as well?
For the creators of the navigational app Safetipin, it would appear that is already the case. Created to address high levels of sexual violence in India, the mobile app aggregates user reviews of particular neighborhoods and overlays them onto a navigational map, allowing users to avoid areas deemed unsafe. This approach, the creators have said, is meant to help keep women safe, especially in a country where a rape is reported every 22 minutes. By including an emergency contact button and information on nearby emergency services, the app holds significant promise in combating the safety issues that many Indian women face.
In the case of Safetipin, democratized navigation acts as a powerful tool for combating a social ill. However, the ethical benefits of crowdsourced apps like Safetipin are not always so clear-cut. Take, for example, Microsoft’s choice to implement a similar feature for GPS products in 2012. Like Safetipin, Microsoft’s app used violent crime statistics to reroute users away from unsafe neighborhoods. However, the app quickly came under fire for essentially acting as an “avoid ghetto” measure. Similar concerns have been raised about apps like SketchFactor, a 2014 app that allowed users to rate the “sketchiness” of areas and aggregated perceptions about certain neighborhoods. Many were concerned that such apps would produce racist results, with users steered away from areas predominantly inhabited by people of color.
In light of such ethical missteps, where does the blame lie? On one hand, it is the developers who provide the architecture for the data in the first place. On the other, in cases like SketchFactor’s, much of that data is provided by users, so any unethical data on an app could ultimately be traced back to a user’s contribution. Who, then, is responsible for the unethical results of an app like SketchFactor?
While both hold some responsibility, I would argue that the blame ultimately lies with developers. Certainly, users have a responsibility to make ethical contributions to such services. However, developers are responsible for creating the architecture that houses user data. If a given architecture is built in a way that highlights discriminatory data, then the developer bears more ethical blame than the users who supplied that data in the first place.
Ultimately, such democratized forms of navigation rely on their undemocratic foundations to avoid ethical pitfalls. Only the developers of an app control how it aggregates and represents data, with results as ethically variable as the geography depicted. Those results can be positive, as in the case of Safetipin’s aims, or negative, as the controversy around SketchFactor has shown.
As such, developers must consider the ethical implications of their projects, especially when they stand to shape users’ interactions with the world so fundamentally. This argument is not only relevant for those crowdsourcing navigation; tech developers as a whole must be cognizant that, despite their attempts to foster democratic technologies, the ethical ramifications of such technologies will ultimately be set and maintained by them.