Security Draft


For the past few weeks I’ve intensified my effort to understand what I can do to move towards private, anonymous, digital communication. In the following text I describe some basic concepts, then survey current software and hardware options from a few perspectives, and then discuss the wider context. If you want to go straight to my main sources of information, see Sources at the end.

What information is this about: Social graphs, documents, messages, and records. Records include my bank records (what did I buy from whom, when, and where, with card or check), phone call records, medical records, cell phone location records, and more (EFF, Reasonable Expectation of Privacy). Documents include whatever I record, such as my diary or journal, or an essay or film I’m composing on a computer. Messages are documents that I send to other people, whether a one-word text message, a phone call, or a video. A social graph is a set of data that shows who does what with whom, and it tends to be easier to understand graphically/visually. Putting it all together: if USA Law Enforcement wants to know what I’m doing, they might ask the US Postal Service for a “mail cover” — a record of who I send mail to and receive mail from — and ask my bank for my transaction records and ask my phone service for a record of my calls and cell phone location records. They look at my records and message addresses (headers) to build a picture of my social graph and my habits.
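To make the social-graph idea concrete, here is a toy sketch (all names and records are invented) of how an observer holding nothing but message headers, i.e. sender and recipient, can reconstruct who talks to whom and how often:

```python
from collections import Counter

# Invented metadata records: (sender, recipient) pairs, as an observer
# might extract them from mail covers, call logs, or email headers.
records = [
    ("alice", "bob"), ("alice", "bob"), ("alice", "carol"),
    ("bob", "alice"), ("dave", "alice"), ("alice", "bob"),
]

# Build an undirected social graph: each edge's weight is the number of
# contacts between that pair, in either direction.
edges = Counter(frozenset(pair) for pair in records)

for pair, count in edges.most_common():
    print(sorted(pair), count)
```

Note that no message content is needed: the headers alone reveal that alice and bob are each other's main contacts, which is exactly the picture a "mail cover" or call-record request produces.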

Anonymity, privacy, security (what do I mean by these terms): Here are a few definitions of privacy on SecuShare’s site. From there:..[insert quote from that page].. Anonymity is useful when I want to publish something publicly without anyone knowing who published it (such as publishing a corporate memo that demonstrates intentional human rights abuses), and privacy is useful when I want to publish something to a select person or people and only those people (such as sharing photos of my birthday party with family and friends, sharing documents from a business meeting with co-workers, or sharing my medical records between three doctors). Security is, perhaps, the set of practices and properties that produce anonymity or privacy.

Practices and Tools: No matter how tightly my software and hardware manage my data, I can break my anonymity by writing my name on the document I publish. And no matter how much care I take to remove all personally identifying details from the document I publish, my anonymity and privacy are broken if my software and hardware leave a digital trail of evidence pointing back to my computer.

Example 1: Disk encryption tools such as TrueCrypt, combined with the practice of turning off my laptop when travelling: while the machine is on, the decryption key sits in RAM, where it can be found by someone who gets access to the running computer.

Example 2: “Formally verifying code” is a practice in hardware and software design, enabled by tools such as mathematical theorem-provers. One way to verify code is to write the software as a logical mathematical statement, and then prove that statement, the way mathematics is used to prove a theorem (Formal verification, Wikipedia). Special software is then used to check the proof of the newly written software. “Proving” the code means showing that it does what it says it does, and nothing else. I’ve heard that while this adds a great deal of security, it isn’t a panacea, because the verification is only as trustworthy as the verifier, and an error might someday be discovered in the verifier itself. At any rate, the practice of formally verifying software seems to be slowly gaining popularity (I have no source for this, just an impression I’ve formed): examples include the Redphone company that works for the USA Navy, the distributed revision control system Darcs/Camp (maybe just Camp, not Darcs), and a handful of formally verified operating systems.
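As a toy illustration of the idea (this example is mine, written in the Lean proof assistant, and is not from any of the projects mentioned above), one states a property of a function as a theorem and lets the machine check the proof:

```lean
-- A trivially "verified" function: doubling a natural number.
def double (n : Nat) : Nat := n + n

-- The machine-checked claim: double really computes n + n.
-- For this definition the proof is by reflexivity: it holds by computation.
theorem double_spec (n : Nat) : double n = n + n := rfl

-- Richer properties (loops, overflow, security invariants) need real
-- proof steps, but the checker verifies each one mechanically, which is
-- why an error in the checker itself is the remaining weak point.
```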

Data Situations: Data on my computer; data on the wire, and data on 3rd party computers (this is the frame from the EFF’s Surveillance Self-Defense guide).

Network topologies (this is designed into the hardware and software): Central server; federated servers (a.k.a. server-to-server, federated social web, decentralized social network); peer-to-peer (or user-to-user); and friend-to-friend (or web-of-trust). While we can use federated server software as peer-to-peer software, each running our own server on our own computer, this isn’t the use case that server/client software is designed for. Central servers include Facebook, Google+, and the NSA’s database of our telecommunication. Federated servers, or s2s, include Friendica, Diaspora*, StatusNet, and email (anonymous remailers, such as MixMaster, are a special use of the email network). Peer-to-peer includes BitTorrent and the opennet mode of Freenet. Friend-to-friend includes the darknet mode of Freenet, GNUnet, and, I think, Secure Share (which isn’t ready to use yet). I don’t know what topology the FreedomBox or Tor implements. Note that f2f networks can use servers to accelerate their traffic, so long as the server doesn’t really know what it’s doing (SecuShare). In terms of hardware and software, I’m finding that the most private, secure, anonymous tools are friend-to-friend, and remailers enable strong anonymity.

Scope (in terms of network layers, or something like network layers): Entire network on all layers; software network; overlay network. FNF (the Free Network Foundation) focuses, I think, on all layers (or maybe more on the first four). BitTorrent, and anything else that runs over TCP/IP (Internet Protocol), is an overlay network (my understanding here is pretty superficial).

Distribution / tool format (what is the product): Network hardware (accessory to node; node; network); operating system; application; plug-in; app or plugin on someone else’s server (a.k.a. web service). The Free Network Foundation focuses on network hardware, operating systems, and applications. focuses on apps and plugins that run on someone else’s server (maybe their focus is broader, I’m not sure).


The MixMaster remailer got recent publicity when the FBI seized a server running MixMaster. The FBI agents alleged that the server, run by Riseup Collective, May First/People Link, and European Counter Network, was used to send bomb-threat emails received at a university. However, MixMaster keeps no logs, because it’s designed to provide anonymity, which means that seizing the server cannot reveal any information useful to the FBI in that investigation. So, I think this is a case of FBI/USA gov’t bullying. If anyone took the FBI’s server under such apparently false pretenses, I think the FBI would call it terrorism.

Based on my current understanding, Secure Share seems the most realistic and intelligent path, with the FreedomBox as a possible vehicle for Secure Share, and, longer term, the FreedomLink and FreedomTower providing a hardware network controlled by users and difficult to shut down. If I’m going to use computers for telecommunication, I want to use Secure Share. However, it’s not ready to use yet, so, what to do? Well, right now, I think the Crabgrass instance hosted at We.Riseup is my most sensible choice. Why? Riseup has earned my trust, and I’d rather use one trusted central server than all the federated servers my friends use, which I don’t trust. The tradeoff of We.Riseup versus Secure Share is that using a single server as the hub for our communication means adversaries only have to compromise one target in order to get our messages and social graph. Riseup knows this, and makes decisions about software design, hardware setup, legal arrangements, and practices with the aim of strong, multi-layer security. (The Lorea instance known as N-1 might be on par with We.Riseup, though I haven’t yet looked into N-1 or Lorea enough to satisfy my curiosity.)

Behind Secure Share: Carlo v.Loesch, nickname lynX, stands as the public face of Secure Share. He runs the business Symlynx, which provides Internet chat, audio, and video conferencing services. If you want to hear and see him, start with his presentation of Secure Share at the Unlike Us #2 conference in March 2012. In that video, he says Secure Share consists of three programmers and a few dozen people who play with it, and that it started about a year earlier. I wonder if anyone else joined the team after seeing his presentation.

Why does any of this matter:

Why server/client architectures cannot provide privacy in the USA: In USA law, giving data to any third party forfeits one’s “reasonable expectation of privacy” and thus makes warrantless search by Law Enforcement legal (source: EFF; I recommend reading it) — yes, that’s the Fourth Amendment going out the window when we send any data to any third party, no matter what that third party’s privacy policy is (EFF mentions that in some cases legislation provides more privacy, but that’s not at the foundational level of the constitution). Encrypting the data before I send it to the third party would mean that Law Enforcement would have warrantless access to my encrypted data instead of plaintext data, so at least it’ll cost them some computing power to crack my encryption (though they spend billions of dollars on computing power) — unless I let my encryption key or passphrase slip somewhere, or they otherwise have access to it. For Secure Share’s take on this, see Federated Web Servers are Never Private Enough.
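To illustrate the “encrypt before handing data to a third party” idea, here is a toy one-time pad in Python (the function names and scheme are mine, purely illustrative; in practice use real tools like GnuPG or TrueCrypt). The point is the data flow: only the ciphertext goes to the server, while the key stays on my machine:

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt with a fresh random key as long as the message (a one-time pad).
    The key must never leave my machine; only the ciphertext is uploaded."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """XOR with the same key recovers the plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, stored_on_server = otp_encrypt(b"my private journal entry")
assert otp_decrypt(key, stored_on_server) == b"my private journal entry"
```

Without the key, the ciphertext reveals nothing except the message length; with the key (a slipped passphrase, or compelled disclosure), everything is readable, which is exactly the caveat in the paragraph above.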

What about the NSA intercepting my encrypted messages and storing all of them until they can decrypt them? As I understand it, forward secrecy is a way of transmitting messages such that decrypting them requires temporary keys that exist only at the time the messages are exchanged and are then discarded, so that even if the NSA (or anyone else) later gets my long-term private key, the recorded past messages stay undecryptable. We can enjoy forward secrecy today in our instant messaging / chatting by using the Off-the-Record protocol, currently available in quite a few popular computer apps, and a few phone apps. (This means that we can achieve forward secrecy when using Google/Facebook/MSN Messenger chat via Pidgin with the OTR plugin. Just remember, anything involving Facebooglesoft is only a coping mechanism on our way to something more secure.) The EFF’s explanation of forward secrecy offers a clear, brief intro (though it commends Google for data privacy, which I think is a dangerous joke).
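A rough sketch of the mechanism behind forward secrecy (as used by protocols like OTR) is an ephemeral Diffie-Hellman exchange: each session uses throwaway key pairs that are deleted afterward, so later compromise of long-term keys cannot recover past session secrets. This toy Python version uses deliberately tiny, insecure parameters just to show the shape of the exchange:

```python
import secrets

# Toy public parameters (far too small for real use; real protocols
# use 2048-bit-plus groups or elliptic curves).
P = 0xFFFFFFFB  # a prime modulus
G = 5           # a generator

def ephemeral_keypair():
    """Generate a throwaway private/public pair for one session only."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Each party makes an ephemeral key pair for this session.
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# Only the public values cross the wire; both sides derive the same
# session secret from the other's public value and their own private one.
a_secret = pow(b_pub, a_priv, P)
b_secret = pow(a_pub, b_priv, P)
assert a_secret == b_secret

# After the session, both sides delete the private values and the secret;
# recorded ciphertext can no longer be decrypted, even if long-term
# identity keys later leak.
del a_priv, b_priv, a_secret, b_secret
```

An eavesdropper who recorded the traffic saw only the public values; reconstructing the session secret from those requires solving the discrete-log problem, and the material that would shortcut it was deleted when the session ended.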

How this relates to my other thinking and doing: In bioregionalism, I think about [inclusive] subsidiarity: each decision made at the most local level practical (local meaning close to those affected by the decision) [and welcoming all affected to participate in the dialogue and decision]. In computing and telecommunications, that seems to mean storing and processing data as locally as possible to the person responsible for the data. It means local control: individual or community control (some ways of seeing count individuals as communities). And, today, that seems to mean Freenet in darknet mode rather than Faceboogle or Farcebork or entrusting private data to any third party.

Other people’s ideas about why privacy and anonymity matter and how they relate to a long-term process of creating a peaceful, just, sustaining culture: Dyne Foundation’s project Weaver Birds; Markus Sabadello’s videos and Project Danube that he launched; May First / People Link (more about community access to telecommunications than privacy and anonymity, though I know they support anonymity since they run a MixMaster anonymous remailer on one of their servers); Secure Share and PSYC [insert something relevant here]; “One of the main points in Rogers’ speech was that lack of privacy and the subsequent paranoia can lead to retreat from political action, which needs a certain level of anonymity in order to fully express its potential.” (referring to Michael Rogers at Unlike Us conference)

Quote from Secure Share’s Censorship page: “In the ideal case, Ghareiba would like to no longer need a phone line or Internet connection to defend his democratic rights, just a power plug should be enough, and should that power plug be disconnected, a solar panel would do.”

Upcoming posts:

A) What is the Internet? Alfredo Lopez at MF/PL says it’s all the people connected to each other, in the essay “The Organic Internet.” Eben Moglen writes, “The growth of the network rendered the non-propertarian alternative even more practical. What scholarly and popular writing alike denominate as a thing (“the Internet”) is actually the name of a social condition: the fact that everyone in the network society is connected directly, without intermediation, to everyone else. The global interconnection of networks eliminated the bottleneck that had required a centralized software manufacturer to rationalize and distribute the outcome of individual innovation in the era of the mainframe.”

B) SecureShare in detail.

Things I didn’t mention in this post, that are relevant: OwnCloud; OneSwarm.


