July 30th, 2014 by Lee Colleton
In January 2013, the West Seattle Blog reported on the surveillance cameras being installed along Alki Beach. Their continued coverage of the cameras and wireless mesh radios is well worth a read and provides detailed background for this post.
While attending a protest outside the King County courthouse in City Hall Park, near 3rd Ave and Yesler Way, I recently noticed that one of the wireless mesh nodes was transmitting, contradicting the City’s repeated assurances that the network was “turned off.” My post to Twitter caught the attention of the Seattle Police Department, which promptly shut off the node, posted a blog entry, and tweeted about it. The following tweets appeared on Twitter that day, with much more commentary on the original post (which you can see if you click on the date stamp below).
Seattle Police officer Sean Whitcomb’s reply on the SPD blotter makes a misleading claim: “The rogue node, while producing a visible signal, was not being operated.” This is misleading on several counts. Radio waves are invisible, to start with. More to the point, the Service Set IDentifier (SSID, or “network name”) of these mesh nodes gives no indication that they’re operated by the police department. It’s not the sort of thing a nontechnical person would notice, even if they saw it listed on a computer or mobile device while trying to find a wireless network. And it’s misleading to claim that the node was “not being operated.”
The device may not have been switched on intentionally, and it may not have carried any traffic from SPD vehicles or those of other city departments while it was powered on and transmitting, but the claim that it wasn’t operating belongs to the same category as the “non-operational” SPD cameras installed throughout the city. The glowing blue light indicates that power is applied to the cameras, just as the blinking orange and green lights indicate that the mesh network nodes have power and some sort of activity. By the Seattle Police’s definition of “operating,” these networked surveillance cameras aren’t “in use” because the digital video recording system to which they’re attached isn’t capturing any of their video feeds.
However, as Mayor Murray opined in an interview on the matter, the cameras and their mesh network could be switched on if the City decided they were needed for some sort of emergency (the Boston Marathon bombing was mentioned, but any emergency could do). Now, this mayor may have no intention of using these cameras and Seattle’s current police force might not intend to use their mesh network to monitor the movements of every active WiFi and Bluetooth device in the city (see The Stranger’s article You Are A Rogue Device), but we’re a country of laws, not of men.
Seattle should revise its ordinance regarding the installation and use of surveillance equipment. We made recommendations to the city council regarding Ordinance 124142 in March and this matter still needs to be addressed.
July 23rd, 2014 by Jan Bultmann
OK, first this, in case you don’t make it through the blog post: “The Boundaries of Privacy Harm,” by Ryan Calo. Wonky but well worth a read, especially if you’re interested in privacy and policymaking.
This week we went to Seattle TA3M, which we try to do every month, because they are close allies of ours and they put on some great talks.
This month was outstanding, with speakers from the Wikimedia Foundation and the University of Washington’s Tech Policy Lab.
Jonathan T. Morgan from the Wikimedia Foundation talked about open, collaborative groups, with a specific emphasis on Wikipedia, and about what works, what doesn’t, and how interested people can help. As probably most Seattle Privacy readers know, Wikipedia is a fabulous resource that is written primarily by people from a relatively narrow demographic group, and it could and should have a lot more input from a wider variety of people. Because we struggle with reaching out effectively to people outside of our immediately familiar zone here in Seattle, it was useful to hear about some of the Wikimedia Foundation’s ways of measuring engagement and reaching out. (I immediately wanted to sign my 93-year-old father up for the “Seniors Write Wikipedia” effort, for example.) We also talked about how some of those outreach efforts can introduce new problems — particularly in the realm of privacy. The Wikipedia Zero project, for example, allows people to edit Wikipedia by mobile phone, which is great in the sense that editing becomes available to people who don’t have access to a desktop, and not great in that it makes their interests and concerns immediately obvious to their cell providers (Saudi Telecom Company, for example) and anyone those providers choose to share the information with. (Editing the “Anarchist” entry, were we…?) The Wikipedia Zero project started off with a system based on the ubiquitous text message (SMS) but is now moving towards hypertext as web browsers become nearly universal on mobile devices. Some countries have enabled zero-cost routing for encrypted hypertext traffic to Wikipedia Zero, which protects the privacy of readers, but this process is not complete, and access over HTTPS should be the norm.
The second speaker, Ryan Calo, prof at the UW Law School and Faculty Director of the Tech Policy Lab, talked about the Facebook “emotional manipulation” experiment (a popular name that wrongly conflates two pieces of the study’s name, it turns out), and why he considered it not a big deal, but how it points to a VERY big deal: what Calo calls Digital Market Manipulation. Since the man has written a paper on the subject, I’m not going to try to recap the issue here, except to note that the gathering of big data by corporations and governments creates a very scary information asymmetry and more or less blows the concept of “informed consumers” out of the water, with many implications for price manipulation and for introducing inefficiencies into economic transactions, and, well, go read the paper. In group discussion we talked at length about the meaning of the public response to Facebook’s experiment and how symbolic that reaction seems to be of our larger and growing unease with knowing that people we don’t know anything about know everything about us.
An audience member noted that while right now we talk in terms of corporate control of data as primarily an issue of economics, in fact it has huge political implications as well, particularly if the roles of certain gigantic corporations shift in relation to governing, which it seems quite possible they might.
At Seattle Privacy, we’ve been working on connecting various groups and institutions with an interest in privacy in hopes of putting together public informational events directed in particular at City of Seattle employees and elected officials. The Tech Policy Lab has definitely been on our list of important groups to connect with. Prof Calo kindly agreed to come help us talk to the city about privacy and our Proposal for Seattle.
We’re especially excited about this because we still haven’t exactly refined an “elevator pitch” for the value of privacy. Calo, however, has given the issue a great deal of attention. He offers two categories of “privacy harms” — objective and subjective.
Calo describes subjective privacy harm as “the perception of unwanted observation, which results in unwelcome mental states—anxiety, embarrassment, fear—that stem from the belief that one is being watched or monitored”, whether by a landlord or an ex, or a massive government surveillance project.
He describes objective privacy harms as “the unanticipated or coerced use of information concerning a person against that person. These are negative, external actions justified by reference to personal information.” This could range from identity theft to redlining to having your blood samples used against you at a DUI stop.
The subjective and objective categories represent the anticipation and consequence of a loss of control over personal information. Here’s what makes this approach so valuable:
It uncouples privacy harm from privacy violations, demonstrating that no person need commit a privacy violation for privacy harm to occur (and vice versa). It creates a “limiting principle” capable of revealing when another value—autonomy or equality, for instance—is more directly at stake. It also creates a “rule of recognition” that permits the identification of a privacy harm when no other harm is apparent. Finally, this approach permits the measurement and redress of privacy harm in novel ways.
In other words, Calo is talking about a methodology that makes the harm of privacy violations testable and rankable — an approach that courts and regulators can use to investigate privacy harms and determine their severity. Finally, it takes into account the increasing automation of surveillance by addressing the perception of privacy violation as a separate harm, and eliminating the requirement that “human sensing” be involved for privacy to be harmed. (I don’t know privacy law well enough to elaborate on this point and my apologies to those of you who do and are banging your foreheads on your keyboards right now. The point is, we look forward to getting Calo together with our city’s lawmakers, and seeing where their conversations lead.)
July 22nd, 2014 by Jan Bultmann
Our allies over at the Privacy Project publish a weekly update of privacy issues in the news broken out by government, tech, international issues, and general interest. They are also here in the Pacific Northwest and often list issues of interest to Seattle Privacy. For example, this week they point to:
To see the complete update and sign up for their RSS feed, visit the Privacy Project.
July 9th, 2014 by David Robinson
As Seattle Privacy discusses the need for privacy oversight in City Hall, we are interested in both the big policy and governance questions and in the technical details of privacy-sensitive technology. Here is an example of the latter, drawn from city paperwork involving Cascade Networks, Inc., the contractor that installed the police surveillance cameras and mesh radio network in 2012-2013. The radios that make up the mesh network are basically tricked-out, weather-proofed versions of normal Wi-Fi access points. Before the city “turned off” the radios last year, each of them was broadcasting a network ID that you could have seen on your laptop or cell phone alongside Starbucks or the name of your home wireless router. The specs for the project included requirements about network access and logging:
In bland technical language, we learn that the network has the following capabilities.
- It can limit logins to a list of approved users stored in a database.
- It can identify potential users based on username/password or hardware device IDs.
- It will keep detailed logs (time, duration, identity, etc.) of client connections.
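Those three capabilities can be sketched in a few lines of Python. This is purely an illustration of the behavior the specs describe; every name, credential, and device ID below is a made-up assumption, not drawn from the actual Cascade Networks equipment or city documents:

```python
# Hypothetical sketch of the mesh network's access-control and logging
# behavior as described in the specs. All names here are illustrative.
APPROVED_USERS = {"fielduser": "hunter2"}    # approved username -> password
APPROVED_DEVICES = {"aa:bb:cc:dd:ee:ff"}     # approved hardware (MAC) IDs
connection_log = []                          # detailed client-connection log

def authenticate(username=None, password=None, mac=None):
    """Admit a client by username/password or by hardware device ID."""
    if username is not None:
        return APPROVED_USERS.get(username) == password
    return mac in APPROVED_DEVICES

def log_connection(identity, start, end):
    """Record the identity, start time, and duration of a client session."""
    connection_log.append(
        {"identity": identity, "start": start, "duration": end - start}
    )
```

The point of the sketch is simply that each log record ties an identity to a time and a duration, which is exactly why the retention and access policies for these logs matter.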
However, these details raise questions that still have not been answered by the Seattle Police Department or any other city office.
- What happens if a random passerby with a laptop or cell phone attempts to “associate” with a city access point? The answer to this could have privacy and security implications for both parties.
- Wi-Fi devices broadcast uniquely identifiable radio beacons; does the city equipment record these beacons, or can it be configured to do so? Authorities in Chicago are planning just such a capability in a potentially intrusive Big Data collection scheme.
- How long will logs be kept, and who will have access to them? Will they be subject to public records requests?
These are questions that should have been asked and publicly debated at early stages of the planning process. They also quickly become issues of general policy: If data is collected, it will be used by any legal or illegal branch of government whose agents can pick up a phone. To protect privacy, don’t collect sensitive information in the first place.
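For the curious: the “uniquely identifiable radio beacons” mentioned above are, in the Wi-Fi case, management frames such as probe requests, which carry the sending device’s MAC address in the clear. A minimal sketch (pure Python, assuming a well-formed raw 802.11 frame with no radiotap header) shows how little work it takes to extract that identifier:

```python
def probe_request_mac(frame: bytes):
    """Return the transmitter MAC from a raw 802.11 probe request, else None.

    Simplified sketch: assumes a well-formed management frame with no
    radiotap header. In the 802.11 MAC header, the first frame-control
    byte encodes type (bits 2-3) and subtype (bits 4-7); a probe request
    is type 0, subtype 4. Addr2 (the transmitter) occupies bytes 10-15.
    """
    if len(frame) < 16:
        return None
    fc = frame[0]
    if (fc >> 2) & 0b11 != 0 or (fc >> 4) & 0b1111 != 4:
        return None  # not a probe request
    return ":".join(f"{b:02x}" for b in frame[10:16])
```

Because devices send probe requests whenever they search for known networks, any receiver in range can log these addresses passively. Whether the city’s equipment does so, or can be configured to, is precisely the unanswered question.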
Below is a link to the source documents, courtesy of Tacoma-based Infowars reporter Mikael Thalen, who discovered them on the Seattle.gov Web site:
Or download the document.
July 7th, 2014 by Jan Bultmann
Seattle City Councilmember Mike O’Brien asked us to find some examples of ways other cities are tackling privacy issues. We’re just getting started, but we found Redlands, CA’s Citizens Privacy Council right away.
Here’s their mission statement, with their capitalization but possibly different line breaks:
“To Promote Transparency in the City of Redlands’ use of surveillance technology through citizen involvement and input
With the Purpose of Creating and Maintaining a Balance between Public Safety Requirements and Citizen Privacy Needs when Utilizing Surveillance Tools for Crime Prevention,
Detection, Investigation, and Prosecution”
You can download a PDF of their bylaws here.