A version of this article appeared on the Tow Center for Digital Journalism blog on 10/29/14.
On October 20, 2014, Creative Commons Science convened a workshop that brought together open hardware/software developers, lawyers, funders, researchers, entrepreneurs, and grassroots science activists to discuss the certification of open sensors.
To clarify some terminology, a sensor can either be “closed” or “open.” Whereas closed technologies are constrained by an explicitly stated intended use and design (e.g. an arsenic sensor you buy at Home Depot), open technologies are intended for modification and not restricted to a particular use or environment (e.g. a sensor you can build at home, based on a schematic you find online).
Over the course of the workshop, attendees listened to sessions led by practitioners who are actively thinking about whether and how a certification process for open hardware might mitigate some of the tensions that have arisen within the field, namely around the reliability of open sensor tools and the current challenges of open licensing. As the Tow Center’s Sensors & Journalism report suggests, these tensions are especially relevant to newsrooms considering adopting open sensors for collecting data in support of journalistic inquiry. Anxieties about data provenance, sensor calibration, and best practices for reporting on sensor data also permeate this discussion; the workshop provided a space to begin articulating what sensor journalism needs in order to move forward.
Below, I’ll try to locate the key points of discussion around open sensor certification, especially as it relates to the evolution of sensor journalism.
Challenges of open sensors
How, when, and why do we “trust” a sensor? For example, when we use a thermometer, do we think about how well or how often it has been tested, who manufactured it, or what standards were used to calibrate it? Most of the time, the answer is no. The division of labor that brings the thermometer to you is mostly invisible, yet you inherently trust that its reading is an accurate reflection of what you seek to measure. So what instantiates this automatic trust, and what would have to happen for people to extend the same trust to open sensors?
At the workshop, Sonaar Luthra of Water Canary led a session about the complexities and challenges that accompany open sensors today. Most concerns revolve around accuracy, both of the sensor itself and of the data it produces. One reason is that the manufacture and the integration of a sensor are separate processes (e.g. InvenSense manufactures an accelerometer and Apple integrates it into the iPhone). Similarly, within the open source community, the development and design of a sensor and its software can be a process entirely separate from an end user’s assembly of it (e.g. a person looks up the open schematic online, buys the necessary parts, and builds the sensor at home). This division of labor erodes the boundaries between hardware, software, and data, creating a need to recast how trust is established in sensor-based data.
For journalists, a chief concern about sensor data is being able to ensure, with some degree of confidence, that the data collected from the sensor are not erroneous and will not add misinformation to the public sphere if published. Of course, this depends entirely on the purpose the sensor serves. If we think of accuracy as a continuum, then the degree of accuracy required can vary with context. For a project whose aim is to gather a lot of data and look at general trends, as with the Air Quality Egg, an open sensor that measures air quality, accuracy is less of a concern because engagement is the end goal. Different purposes and paradigms, however, require different metrics. In the case of StreetBump, a mobile app that uses accelerometer data to help identify potential potholes, accuracy is a much more salient issue, as direct intervention from the city would mean allocating resources and labor toward something a sensor suggests. Thus, creating a model to work toward shared parameters, metrics, resources, and methods might be useful to generate consensus and to alleviate factors that threaten data integrity.
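To make the stakes concrete, consider a toy version of the StreetBump scenario. The sketch below is not StreetBump’s actual algorithm; the function name and the 1.5g threshold are illustrative assumptions, chosen only to show how a single tuning parameter shifts the balance between false alarms and missed road damage.

```python
# Hypothetical sketch of threshold-based pothole flagging -- NOT
# StreetBump's actual method. The threshold value is an assumption.

def flag_pothole_candidates(z_accel, threshold_g=1.5):
    """Return sample indices where vertical acceleration exceeds a cutoff.

    z_accel: vertical acceleration samples, in g's (baseline ~1g at rest).
    threshold_g: a low cutoff floods the city with false positives
    (speed bumps, jostled phones); a high one misses real potholes.
    """
    return [i for i, z in enumerate(z_accel) if abs(z) > threshold_g]

readings = [1.0, 1.1, 2.3, 1.0, 0.9, 1.8]
print(flag_pothole_candidates(readings))  # -> [2, 5]
```

A lone spike could be a dropped phone rather than a pothole, which is one reason a real deployment might require corroborating readings from multiple vehicles at the same location before a reading triggers a city work order.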
There may also be alternative methods for verification and for accounting for known biases in sensor data. Ushahidi’s Crowdmap is an open platform used internationally to crowdsource crisis information; its reports depend on verification by other users for an assessment of accuracy. One can imagine a similar system for sensor data, pre-publication or even in real time. And if a sensor is known to be biased in a certain direction, it is possible to compare its data against an established standard (e.g. EPA data) and account for the bias when reporting on the data.
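As a minimal sketch of that last idea, the hypothetical code below estimates a constant offset between an open sensor and a co-located reference monitor, then subtracts it. Real sensor bias is often multiplicative or drifts over time, so this assumes the simplest case; the variable and function names are illustrative, not drawn from any real toolkit.

```python
# Minimal sketch, assuming paired readings taken over the same period:
# correcting a known, constant bias in an open sensor by calibrating
# against a trusted reference (e.g. a nearby EPA monitor).

def estimate_offset(sensor_readings, reference_readings):
    """Estimate a constant bias as the mean difference between the
    open sensor and the reference."""
    diffs = [s - r for s, r in zip(sensor_readings, reference_readings)]
    return sum(diffs) / len(diffs)

def correct(sensor_readings, offset):
    """Subtract the estimated bias before reporting on the data."""
    return [s - offset for s in sensor_readings]

sensor = [42.0, 45.5, 39.8]  # open sensor values (e.g. PM2.5, ug/m^3)
epa = [40.1, 43.4, 38.0]     # co-located reference values
offset = estimate_offset(sensor, epa)
print(correct(sensor, offset))
```

Disclosing the offset and the reference used, alongside the corrected numbers, is the kind of provenance detail that could travel with sensor data into a published story.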
To further investigate these questions, we can look toward extant models of verification in open science and technology communities. The Open Geospatial Consortium provides a way of thinking about interoperability among sensors, which requires that a consensus around standards or metrics be established. The Open Sensor Platform also suggests ways of thinking about data acquisition, communication, and interpretation across various sensor platforms.
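One way to picture what such interoperability might require is a shared observation record that any open sensor could emit, so that downstream tools agree on units and provenance. The sketch below is an invented illustration, not an actual OGC or Open Sensor Platform schema; all field names are assumptions.

```python
# Illustrative sketch of a standardized sensor observation record.
# The schema is hypothetical, meant only to show the kind of metadata
# (units, time, calibration provenance) interoperability would demand.

from dataclasses import dataclass

@dataclass
class Observation:
    sensor_id: str        # which physical device produced the reading
    quantity: str         # what was measured, e.g. "PM2.5"
    value: float
    unit: str             # agreed-upon unit, e.g. "ug/m^3"
    timestamp: str        # ISO 8601, so feeds can be merged
    calibration_ref: str  # how and when the sensor was last calibrated

obs = Observation("egg-0042", "PM2.5", 41.7, "ug/m^3",
                  "2014-10-20T14:05:00Z", "co-located EPA monitor, 2014-09")
print(obs)
```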
Challenges of open licensing for sensors
A handful of licensing options exist for open hardware, including the CERN Open Hardware License, the Open Compute Project License, and the Solderpad License. Other intellectual property strategies include copyright (easily circumvented and of questionable applicability to circuits), patents (difficult and costly to obtain), and trademark (a lower barrier to entry that may best meet the needs of open source approaches). However, whether formal licensing should be applied to open hardware at all remains an open question, as licensing would inevitably impose restrictions on a design or version of hardware that, within the realm of open source, is still susceptible to modification by the original developer or the open source community writ large. In other words, a licensing or certification process would turn what is now an ongoing project into a final product.
Also, in contrast to open software, where the use of someone’s open code is clearly demarcated and can be tracked through copying and pasting, it is less clear at what point a user “agrees” to using open hardware, since designs often involve a multitude of components and sometimes even come with companion software.
A few different approaches to assessing open sensors emerged during the workshop:
- Standards. A collaborative body establishes interoperable standards among open sensors, allowing for independent but overlapping efforts. Targeted toward the sensor.
- Certification / licensing. A central body controls a standard, facilitates testing, and manages intellectual property. Targeted toward the sensor.
- Code of conduct. A suggestion of uses and contexts for the sensor, i.e. how to use it and how not to use it. Targeted toward people using the sensor.
- Peer assessment. Self-defined communities test and provide feedback on sensors (see Public Lab model). Targeted toward the sensor but facilitated by people using the sensor.
In the case of journalism, the appropriate model would depend on how much (or how little) granularity of data is needed to effectively tell the story. In the long run, it may be that the means of assessing a sensor is largely contextual, creating a need to develop a multiplicity of models for these methods.
Sidebar: Feminism and Open Access Law
While reading “What’s Feminist about Open Access?” by Rosemary J. Coombe, I realized how old these conversations about certification really are. Her framing of open access around feminist theory might be a useful one to address the tensions that populate this conversation with regard to sanctioning individualism (e.g. through copyright/patent/trademark) while still wanting to preserve the tenets of participatory, open culture.
Preliminary conclusions
While there is certainly interest from newsrooms and individual journalists in engaging with sensor tools as a valid means of collecting data about the environment, it remains to be seen what newsrooms and journalists expect from open sensors and in which contexts open sensor data would be most appropriate. The products of this workshop are relevant to evaluating what standards — if any — might need to be established before open sensors can be more widely adopted by newsrooms.
Henceforth, we must keep some important questions in mind:
- What role would, could, or should standards/certification play for open sensors — economically, ethically, legally? Is there a need for a certification model? If so, why? If not, what alternative models could be implemented?
- Whom would a certification process best serve, and who would, could, should facilitate the process? In other words, what are the power structures/implications behind a certification framework?
- What matters most to newsrooms and journalists when it comes to trusting, selecting, and using a sensor tool for reporting? Which sensor assessment models would be most useful, and for which context(s)?
With regard to certification of open sensors, it would behoove all stakeholders to further tease out a way to move the discourse forward.
References
Coombe, Rosemary J., with Carys Craig and Joseph F. Turcotte. “What’s Feminist about Open Access? A Relational Approach to Copyright in the Academy.” feminists@law: an open access journal of feminist legal scholarship, 2011.
Open Source Sensors: Promoting Access to Knowledge and Data Reliability. Google Slides. Accessed October 23, 2014.
Open Source Hardware Association (OSHWA), http://www.oshwa.org/definition.
Pantelopoulos, Alexandros, and Nikolaos G. Bourbakis. “A Survey on Wearable Sensor-Based Systems for Health Monitoring and Prognosis.” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 40, no. 1, January 2010. Accessed September 19, 2014.
Pitt, Fergus. Sensors & Journalism. Tow Center for Digital Journalism, May 2014.