interference 15.08/17.08
three days of critical approaches to technology

Program

The program of interference is designed to be printed out from the browser (Firefox or Iceweasel to obtain the planned outcome, any other browser for unexpected results...).
It is made out of three different parts: a timetable for the event, a collection of all the abstracts of the sessions, and a reader made out of texts submitted by the participants.

To print, just press CTRL + P. Remove all default headers and footers in the print dialog and please be patient, as rendering all the pages of the reader together can take more than a minute.

15.08
11:00
Brunch
15.08
12:00
Hello
Interference
Callout

Interference, n:

    preventing (a process or activity) from continuing or being carried out properly.
    the combination of two or more electromagnetic waveforms to form a resultant wave in which the displacement is either reinforced or cancelled.

Interference is a gathering of people, perspectives, theories, and actions that share a critical approach to society and technology. It will take place at the Binnenpret in Amsterdam, NL from the 15th to the 17th of August 2014. It will be a space where we can meet, debate, share, learn, and find our affinities and oppositions. The event comes as a response to the lack of a common ground for confrontation and discussion over themes like hacking, technology, art and politics that could break out of the existing containers and roles for such concepts and practices.

Interference is not a hacker conference. From being a threat to so-called national security, hacking has become an instrument for reinforcing the status quo. Fed up with yet another recuperation, the aim is to re/contextualize hacking as a conflictual praxis and release it from its technofetishist boundaries. Bypassing the cultural filters, Interference wants to take the technical expertise of the hacking scene out of its isolation to place it within the broader perspective of the societal structures it shapes and is part of.

Interference tries not to define itself. Interference challenges hackers' identity, the internal dynamics of hacker culture and its ethical values. It undermines given identities and rejects given definitions. Interference is a hacking event from an anarchist perspective: it doesn't seek uniformity on the level of skills or interests, but rather focuses on a shared basis of intuitive resistance and critical attitude towards the techno-social apparatus.

Interference is three days of exploring modes of combining theory and practice, breaking and (re)inventing systems and networks, and playing with the art and politics of everyday life. Topics may or may not include philosophy of technology, spectacle, communication guerrilla, temporary autonomous zones, cybernetics, bureaucratic exploits, the illusions of liberating technologies, speculative software, the creative capitalism joke, the maker society and its enemies, hidden- and self-censorship, and the refusal of the binarity of gender, life, and logic.

Interference welcomes discordians, intervention artists, artificial lifeforms, digital alchemists, oppressed droids, luddite hackers and critical engineers to diverge from the existent, dance with fire-spitting robots, hack the urban environment, break locks, perform ternary voodoo, decentralise and disconnect networks, explore the potential of noise, build botnets, and party all night.

The event is intended to be as self-organised as possible which means you are invited to contribute on your own initiative with your skills and interests. Bring your talk, workshop, debate, performance, opinion, installation, project, critique, the things you're interested in, the things you want to discuss. Especially those not listed above.






by the way, text is full of academic/NGO talking, you must concentrate to follow it. some people don't have contact with low educated people and they are excluded from society. NGO people are in NGO sector (with high salaries) and they are separated from society. They will never make revolution, they enjoy in capitalism.
— Response to the interference callout in the comments section of http://anarchistnews.org
15.08
14:00
Transparency critiques – Lonneke van der Velden
15.08
15:00
The DCP Bay – taziden
15.08
16:00
A,N and D collective
The map is not the territory
Future interactions in hyperspace

Now, despite all the techniques for appropriating space, despite the whole network of knowledge that enables us to delimit or to formalize it, contemporary space is perhaps still not entirely desanctified (apparently unlike time, it would seem, which was detached from the sacred in the nineteenth century). To be sure a certain theoretical desanctification of space (the one signaled by Galileo’s work) has occurred, but we may still not have reached the point of a practical desanctification of space. And perhaps our life is still governed by a certain number of oppositions that remain inviolable, that our institutions and practices have not yet dared to break down. These are oppositions that we regard as simple givens: for example between private space and public space, between family space and social space, between cultural space and useful space, between the space of leisure and that of work. All these are still nurtured by the hidden presence of the sacred.

Bachelard’s monumental work and the descriptions of phenomenologists have taught us that we do not live in a homogeneous and empty space, but on the contrary in a space thoroughly imbued with quantities and perhaps thoroughly fantasmatic as well. The space of our primary perception, the space of our dreams and that of our passions hold within themselves qualities that seem intrinsic: there is a light, ethereal, transparent space, or again a dark, rough, encumbered space; a space from above, of summits, or on the contrary a space from below of mud; or again a space that can be flowing like sparkling water, or space that is fixed, congealed, like stone or crystal. Yet these analyses, while fundamental for reflection in our time, primarily concern internal space. I should like to speak now of external space.

The space in which we live, which draws us out of ourselves, in which the erosion of our lives, our time and our history occurs, the space that claws and gnaws at us, is also, in itself, a heterogeneous space. In other words, we do not live in a kind of void, inside of which we could place individuals and things. We do not live inside a void that could be colored with diverse shades of light, we live inside a set of relations that delineates sites which are irreducible to one another and absolutely not superimposable on one another.
— Michel Foucault
Of Other Spaces: Utopias and Heterotopias

 

 

 

And what is interesting here is that evolution now becomes an individually centered process, emanating from the needs and desires of the individual, and not an external process, a passive process where the individual is just at the whim of the collective. So, you produce a neo-human, okay, with a new individuality and a new consciousness. But that’s only the beginning of the evolutionary cycle because as the next cycle proceeds, the input is now this new intelligence. As intelligence piles on intelligence, as ability piles on ability, the speed changes. Until what? Until we reach a crescendo in a way could be imagined as an enormous instantaneous fulfillment of human? human and neo-human potential. It could be something totally different. It could be the amplification of the individual, the multiplication of individual existences. Parallel existences now with the individual no longer restricted by time and space.

And the manifestations of this neo-human-type evolution, manifestations could be dramatically counter-intuitive. That’s the interesting part. The old evolution is cold. It’s sterile. It’s efficient.
— Eamon Healy

Because the essence of technology is nothing technological, essential reflection upon technology and decisive confrontation with it must happen in a realm that is, on the one hand, akin to the essence of technology and, on the other, fundamentally different from it.
— Martin Heidegger
The Question Concerning Technology

To
The Inhabitants of SPACE IN GENERAL
And H.C. IN PARTICULAR
This Work is Dedicated
By a Humble Native of Flatland
In the Hope that
Even as he was Initiated into the Mysteries
Of THREE Dimensions
Having been previously conversant
With ONLY TWO
So the Citizens of that Celestial Region
May aspire yet higher and higher
To the Secrets of FOUR FIVE OR EVEN SIX Dimensions
Thereby contributing
To the Enlargement of THE IMAGINATION
And the possible Development
Of that most rare and excellent Gift of MODESTY
Among the Superior Races
Of SOLID HUMANITY

— Edwin Abbott Abbott
Flatland: A Romance of Many Dimensions

15.08
16:00
15.08
17:00
Novelty against sovereignty – Johan Söderberg
Johan Söderberg
Legal Highs and Piracy
Labouring in the legal grayzone

Introduction

The economy of drug trafficking holds up a mirror image to the official economy (Ruggiero & South, 1997). It can therefore help us to catch sight of phenomena which have grown too familiar or are clouded behind euphemisms. For instance, the organisation of local drug trafficking in the Baltimore district in the US in the 1990s closely resembled an idealised notion of a “cottage industry” (Eck & Gersh, 2000). Likewise, the wheeling-and-dealing pusher can be said to personify the entrepreneurial subject avant-la-lettre (South, 2004). This goes to suggest that one can learn much about how officially recognised, white markets operate by studying illegal drug markets. In a similar vein, I call upon legal highs to throw a new light on a phenomenon that has variously been labelled “open innovation” (Chesbrough, 2003), “democratisation of innovation” (von Hippel, 2005), or “research in the wild” (Callon & Rabeharisoa, 2003). Although the scholars mentioned above belong to different intellectual traditions and their objectives diverge a great deal, they are all trying to encircle the same phenomenon. The object of study is an economy where the tools and know-how to innovate have been dispersed beyond the confines of firms and state institutions, and, subsequently, beyond the confines of experts and professionals. All the aforementioned scholars describe a trend which they consider, on average, to be benevolent. The relation between firm and user is assumed to be consensual and cooperative. As a consequence, with the exception of licensing regimes and intellectual property rights, questions about regulation and law enforcement have rarely been evoked in relation to ‘open innovation’ or lay expertise (cf. Söderberg, 2010). Moving away from mainstream academic writing to more activist thinking about the subject, such as, for instance, the idea of a peer-to-peer society, it remains the case that the role of the state and the law in this transition has been given little reflection. This lacuna is consistent with the underlying hope that the state will fall apart when users withdraw from it into autonomous practices, such as darknets, crypto-currencies, etc.

A new light falls on these assumptions when the users in question are tweaking molecule structures for the sake of circumventing legal definitions. The state and the law are not absent from, but constitutive of, these practices, even as the users seek to avoid their gravitational field. The purpose of this paper is to present an empirical case which compels us to adopt a more antagonistic perspective on the market economy, thus mandating a different theoretical apparatus than the ones now commonly drawn upon in studies of open innovation and/or user innovation. In order to foreground aspects of antagonism and contradiction in the regulation of (open) innovation processes, I point to Carl Schmitt, the infamous legal theorist in Nazi Germany, and two of his contemporary critics, Franz Neumann and Otto Kirchheimer. The latter, associates of the Frankfurt School and emigrants, anchored their reflections on law and legal order in transformations of the economy. Like them, my inquiry into the regulation of legal highs falls back on an analysis of the economy. By making a comparison with activists and entrepreneurs developing filesharing tools, typically with the intent of violating copyright law, I hope to demonstrate that legal highs are not a stand-alone case. Rather, they give an indication of contradictions at the heart of an economy centred on fostering innovation and ‘creative destruction’.

Brief overview of legal highs

The defining trait of ‘legal highs’ is that the substance has not yet been defined in law as a controlled substance. Hence the production, possession and sale of the substance are not subject to law enforcement. Everything hinges on timing and novelty. When a substance has been prohibited, a small change to the molecule structure might suffice to circumvent the legal definition. What kind of changes are required depends on the legal procedures in the country in question. A recurrent finding in Innovation Studies is that lead users often are ahead of firms in discovering new products and emerging markets. Quite so: for decades, legal highs were a marginal phenomenon chiefly engaged in by a subculture of “psychonauts”. Pioneers in underground chemistry like Nicholas Sand started in the 1960s to synthesise DMT and LSD, and they have since been followed by generations of aspiring chemistry students. However, again confirming a wisdom from Innovation Studies, instances of one-off innovation by individuals for the sake of satisfying intellectual curiosity, personal consumption habits, or an urge to win recognition from one’s peers, take on a different significance when the market grows bigger. Some of the chemistry students decided at one point to become full-time entrepreneurs, producing drugs not primarily for use but for sale. A major inflow came with the rave scene in the UK in the 1980s and early 1990s (McKay, 1994). The clampdown on ecstasy use triggered the quest for novel substances among a larger section of the population. Initially, information about how to synthesise or extract substances was disseminated in fanzines such as the Journal of Psychedelic Drugs, High Times and The Entheogen Review, to mention three of the most renowned, and reached an audience of a few thousand readers. With the spread of the Internet in the 1990s, information about fungi and herbs from all corners of the world could be broadcast to a global audience. Thanks to the legally uncertain status of legal highs, the products can be advertised and sold by webshops that ship internationally. According to a recent survey, more than half of the shops were registered in the UK, and more than a third in the Netherlands (Hillebrand, Olszewski & Sedefov, 2010). In Ireland, drugs were sold in brick-and-mortar retail stores, so-called ‘head shops’, until a law was passed in 2010 banning this practice (Ryall & Butler, 2011). Globalisation has reshaped this market like any other. Most synthetic substances today are believed to have been produced in China, and, to a lesser extent, in India (Vardakou, Pistos & Spiliopoulou, 2011).

To provide an exhaustive taxonomy of something as ephemeral as legal highs is self-defeating from the start. To get a rough overview of the phenomenon under discussion, however, some highlights need to be given. A major group of legal highs is classified as synthetic cathinones. The source of inspiration comes from khat, a plant traditionally used in East African countries. One derivative of this substance that has made it to the headlines is mephedrone. The first known instance of its use was in 2007, but it became widespread in 2009, in response to new legislation in the UK that banned some other designer drugs. Subsequently, mephedrone was banned in the UK in 2010, as well as in the Netherlands and the Nordic countries (Winstock et al., 2010). Just a few months later, however, a new synthetic cathinone called naphyrone took its place (Vardakou et al., 2011). Another major class of drugs is the synthetic cannabinoids. On the street they go under the name “Spice” and are marketed as a legal alternative to marijuana. The synthetic extract of cannabis has been sprayed on herbal leaves. It took a long time for drug prevention authorities to realise that the active substance did not stem from the plant mixture but from added chemicals. In fact, it appears that some of the chemicals have been added to the compound simply to lead researchers astray and avoid detection (Griffiths et al., 2010). Piperazines, finally, have effects that are said to mimic ecstasy. One version of this substance, 1-benzylpiperazine (BZP), became a celebrity cause after New Zealand recognised its legal status. From 2005 to 2008, it was permitted to sell BZP if some restrictions on advertisement and age limits were respected. The drug could be obtained from all kinds of outlets: corner shops, petrol stations and convenience stores (Sheridan & Butler, 2010).

Drugs and definitions

Definitions have always been key in discussions about drugs and addiction. The ambiguities start with the binary separation between legal drugs (tobacco, alcohol, pharmaceuticals) and illegal ones. It has often been commented on that the harm caused by a drug relates only remotely to the legal status of that substance. All the major drugs, opium, cocaine, cannabis and amphetamine, were initially considered to be therapeutic and still have some medical uses. Consequently, the intoxicating effects of a drug do not by themselves give an exhaustive answer to the question of why it has been banned. Conventions, public perceptions and entrenched interests carry a heavy weight in defining what belongs to the right or the wrong side of the law. The centrality of definitions in this discussion is old hat among the scholars studying misuse and addiction (Klaue, 1999; Derrida, 2003). However, in the case of legal highs, the inherent limitations of definitions and language take on a heightened importance. For instance, to avoid health and safety regulations, the drugs are often labelled ‘research chemicals’ or ‘bath salts’ and the containers carry the warning “not for human consumption”. The drawback with this strategy is that no instructions can be given on the container about dosages or how to minimise risks when administering the drug.

Legal highs thrive on phony definitions that reflect an ambiguity in the law as such. Ultimately, what legal highs point to are the limits and contradictions of modern sovereignty in one of its incarnations, the rule of law. At the heart of the legal order lies a mismatch between, on the one hand, general and universal concepts of rights, and, on the other, the singularity in which those rights must be defined and enforced. Examples abound of how this gap can be exploited to turn the law against itself. Tax evasion and off-shore banking come to mind as examples from an altogether different field. This is to say that the case with legal highs is not exceptional. Nor is it novel. Perhaps the urge to play out the letter against the spirit of the law and find loopholes is as old as the law’s origin in divine commandments. However, if the aim is to escape prosecution by the state, then the effectiveness of such practices presupposes a society bound by the rule of law. Rule-bending preys on the formalistic character of the law, which is specific to the secular, democratic and liberal society. Some core principles of the rule of law are as follows: the effects of a new law may only take place after the law has been passed; the law must be made known to the subjects that are ruled by it; and what counts as a violation of the law must be clearly defined, as must the degree of enforcement and the punitive measures that are merited by a violation. In addition to the principles for how laws must be formulated, considerable time-lags are imposed by the recognised, democratic process for passing laws.

The original 1961 United Nations Single Convention on Narcotic Drugs laid down that unauthorised trade in a controlled substance should be made a criminal offence in signatory countries. It included a list of substances that were from then on to be held as illegal. Ten years later, more substances had been identified as problematic and were added to the list drawn up in the Convention on Psychotropic Substances. In recent years, however, the number of intoxicating and psychoactive substances has been snowballing. According to the annual report by the European Monitoring Centre for Drugs and Drug Addiction, almost one new substance was discovered every week during 2011, and the trend is pointing steadily upward (ECNN, 2012). It has become untenable to proceed along the default option of classifying each new substance individually. There is wide variation among European countries in how long it takes to make a drug controlled (from a few weeks to more than a year), depending on what legislative procedures are required. The time-lags in the different jurisdictions are made the most of by the web-shops selling legal highs to the “EU common market”. Hence, pressure is building up for changing the procedures by which new substances are classified. The ordinary, parliamentary route of passing laws needs to be sidestepped if legislators are to keep pace with developments in the field. Already in 1986, the United States introduced an analogue clause that by default includes substances said to be structurally similar to classified substances (Kau, 2008). Everything hinges on what is meant by “similar”, an ambiguity that has never found an adequate answer. A middle road between individually listed substances and the unspecified definitions in the US analogue act was pioneered by the UK, and has of late been followed in many other countries, where control measures are extended to a cluster of modifications of a classified molecule. Other countries have hesitated to follow suit, out of fear of introducing too much ambiguity into a law which carries heavy penalties and extensive investigative powers (EMCDDA, 2009). An alternative route has been to introduce fast-track systems whereby individual substances can be temporarily listed in a few weeks only, without requiring advice from a scientific board, as has typically been required in normal listing procedures. Another practice that has developed more or less spontaneously is that middle-range public authorities use consumer safety and/or medicine regulation in creative ways to prevent the sale of drugs that have not yet been listed (Hughes & Winstock, 2011).

The fact that legal highs are not prohibited in law might give the impression that such drugs are less dangerous than known and illegal substances. The case is often the opposite. The toxicity of amphetamine, cocaine etc. is well known to medical experts. One doctor specialised in anaesthetics told me that he receives almost one patient a week at his hospital in Göteborg, Sweden, unconscious from the intake of a novel substance. It is hard to treat those cases, as the chemical content is unknown to the medical experts. In 2012, for instance, it was reported in Swedish media that 14 people had died from just one drug, called 5-IT. There is a tendency for less dangerous, and hence more popular, substances, old and novel, to be banned, as they quickly come to the attention of public authorities, pushing users to take ever newer and riskier, but unclassified, substances.

A theoretical excursion: the sovereign and law

While the medical risks of legal highs are easy enough to appreciate, other kinds of risk stem from the responses by legislators and law enforcement agencies. The collateral damage of international drug prevention has been thoroughly documented by scholars in the field. Especially in developing countries, the war on drugs has contributed to human rights abuses, corruption, political instability, and the list goes on and on (Barrett, 2010). Given this shoddy history, it is worth asking what negative consequences a ‘war on legal highs’ might bring. Almost every country in the European Union has revised its laws on drug prevention in the last couple of years, or is in the process of doing so, in direct response to the surge of legal highs. The laws need to be made more agile to keep pace with the innovativeness of users and organised crime. Otherwise, law enforcement will be rendered toothless. Such flexibility is bought at a high price, though. The constraints and time-lags are part and parcel of the rule of law of liberal societies, which, however imperfectly they are upheld, are arguably preferable to the arbitrariness of law enforcement in a police state.

It is in this light that it becomes relevant to recall Carl Schmitt’s reflections on the sovereign, alluded to in the introduction of the paper. Schmitt identified the punitive system as the nexus where the self-image of pluralistic, liberal democracy has to face its own contradictions. Thus he called attention to the fact that peaceful deliberation always presupposes the violent suppression of hostile elements. The delimitation of the state’s monopoly on violence laid down by the rule of law consists, at the end of the day, of limits that the sovereign chooses to impose on himself (or not) (Schmitt, 2007; Zizek, 1999). Carl Schmitt’s radical challenge to the liberal and formalist legal tradition has been extensively commented on in recent years. Many thinkers on the left are attracted to his ideas as an antidote to what they consider to be an appeasing, post-political self-understanding in liberal societies (Mouffe, 2005).

Here I am less interested in present-day appropriations of Carl Schmitt’s thinking than in using him as an entry point to discuss the works of two of his contemporary critics, Franz Neumann and Otto Kirchheimer, both associates of the Frankfurt School. They lived through the same convulsions of the Weimar Republic as Carl Schmitt did, but drew very different conclusions from that experience. Before saying anything more, let me first make clear that I am not comparing the historical situation in Germany during the 1930s with the current one, a claim that would be hyperbolic and gravely misleading. What interests me about their writings is that they anchored the rule of law in a transformation of the capitalist economy. In a ground-breaking essay, Franz Neumann argued that formalistic modes of law and legal reasoning had enjoyed broad support from privileged business interests in an era of competitive capitalism, epitomised by nineteenth-century Britain. Competing firms wanted the state to act as an honest broker. Neumann did not take the self-descriptions of the rule of law at face value. He knew full well that the law did not apply equally to all the subjects of the land. Still, he also recognised that subjugated groups had something to lose if this pretence was given up. As a labour organiser in the Weimar Republic, he had seen first-hand how the German business elite began to cede its commitment to strict, clear, public and prospective forms of general law. Neumann explained this change of heart with an ongoing transition from competitive to monopolistic capitalism. Monopolies did not rely on the state as a broker. Rather, universally applicable laws were perceived as an encumbrance and a source of inflexibility (Neumann, 1996).

If Neumann’s reasoning is pushed too far, it turns into crude economism. It is worth bringing him up, nonetheless, because his ideas provide a missing piece of the puzzle to discussions about open innovation. Recently, William Scheuerman defended the actuality of Neumann’s thinking on law in light of globalisation. Multinational companies do not depend on national legislation to the same extent as before, while national parliaments are struggling to keep up and pass laws in response to developments in financial trading and global markets (Scheuerman, 2001). If the word “globalisation” is replaced with “innovation”, then Scheuerman’s argument concurs with the case I am trying to make here. The actors involved in developing legal highs stand as examples of how the speed of innovation places demands on legislators and parliaments that are difficult to reconcile with the principles of the rule of law and democratic decision making. Furthermore, the example of legal highs should not be understood as an isolated phenomenon. As I will argue in the next section, legal highs are indicative of broader transformations in an economy mandating innovation and technological change, including, crucially, open innovation. It can be worthwhile to recall that in 1950, Carl Schmitt too expressed concerns about how the legal system would cope with the acceleration of society, which he claimed to see evidence of on all fronts. He divined that this would result in a ‘motorization of law’:

“Since 1914 all major historical events and developments in every European country have contributed to making the process of legislation ever faster and more summary, the path to realizing legal regulation ever shorter, and the role of legal science ever smaller.” (Schmitt, 2009, p.65)

Innovation: the last frontier

The ongoing efforts to circumvent legal definitions are carried out through innovation. The property looked for in a novel string of molecules or a new family of plants is the quality of not having been classified by regulators. Innovation is here turned into a game of not-yet and relative time-lags. Legislation is the planet around which this frenetic activity gravitates. This contrasts sharply with the mainstream discourse about the relation between legal institutions and innovation. There it is assumed that the institutions of law and contractual agreement serve to foster innovative firms by providing stability and predictability for investors (Waarden, 2001). However, a closer examination will reveal that legal highs are not such an odd one out after all. A lot of the innovation going on in corporate R&D departments is geared towards circumventing one specific kind of legal definition, that is to say, patents. The drive to increase productivity, lower costs and create new markets is only part of the history of innovation. As important is the drive to invent new ways of achieving the same old thing, simply for the sake of avoiding a legal entitlement held by a competitor. Perhaps this merits a third category in the taxonomy of innovations, besides radical and incremental innovations, which I elect to call “phony innovations”. Note that I do not intend this term to be derogatory. What “phony” refers to is something specific: innovation that aims to get as close as possible to a pre-existing function or effect, while being at variance with how that function is defined and described in a legal text. A case in point is naphyrone, made to simulate the experience that users previously had with mephedrone.

To regulate innovation, something non-existent and unknown must be made to conform with what already exists: the instituted, formalised and rule-bound. Activist-minded members of the “psychonaut” subculture, as well as entrepreneurs selling legal highs, seize on this opportunity to circumvent state regulation, which they tend to perceive as a hostile, external force. The image of an underdog inventor who outsmarts an illegitimate state power through technical ingenuity is a trope which seamlessly extends to engineering subcultures engaged in publicly accepted activities, such as building wireless community networks (Söderberg, 2011) and open source 3D printing (Söderberg, 2014). In a time when the whole of Earth has been mapped out and fenced in by nation states, often with science as a handmaid, innovation and science turn out to be the final frontier, just out of reach of the instituted. In popular writings about science in the ordinary press, science is portrayed as the inexhaustible continent, offering land-grabs for everyone, this time without any native Indians being expropriated from their lands. The colonial and scientistic undertones of such rhetoric hardly need to be pointed out. What is more interesting to note is that the frontier rhetoric also calls to mind folktales about the outlaw (and, occasionally, a social bandit or two, in Hobsbawm’s sense of the word). Science and innovation are the last hide-out from the sheriff, as it were. The official recognition granted to this cultural imagination is suggested by the term “shanzhai innovation”. Shanzhai used to describe marshlands in China where bandits evaded state authorities, but nowadays it designates product innovations made by small manufacturers of counterfeited goods, which then develop into new and legal products. Lying at a tangent to open innovation, shanzhai innovation too has become a buzzword among business leaders and policy makers (Lindtner & Li, 2012).

Twenty years ago, the no-man’s-land of innovation found a permanent address, “cyberspace”. John Perry Barlow declared the independence of cyberspace vis-à-vis the governments of the industrial world. These days, of course, cyberspace enjoys as much independence from states as an encircled Indian reservation. That being said, temporality is built into the frontier notion from the start. New windows open up as the old ones are closed down. Nonetheless, a red thread connects the subcultures dedicated to cryptography, filesharing, crypto-currencies (such as BitCoin) etc., all of them thriving on the Internet, on the one hand, and the psychonaut subculture experimenting with legal highs, on the other. Both technologies were pioneered by the same cluster of people, stemming from the same 1960s American counter-culture. Hence, the subcultures associated with either technology have inherited many of the same tropes. A case in point is the shared hostility towards state authority and state intervention, experienced as an unjust restriction of the freedom of the individual. There is also a practical link, as the surge of legal highs would be unthinkable without the Internet. Discussion forums are key for sharing instructions on how to administer a drug, while reviews of retailers and novel substances on the Internet provide a minimum of consumer power (Walsh & Phil, 2011; Thoër & Millerand, 2012).

More to the point, the crypto-anarchic subculture and the psychonaut subculture both put into relief the troubled relation between innovation and law. The relentless search for unclassified drugs is mirrored in the creation of new filesharing protocols aimed at circumventing copyright law. To take one well-known example out of the hat: the Swedish court case against one of the world’s largest filesharing sites, the Pirate Bay (Andersson, 2011). Swedish and international copyright law specifies that an offence consists in the unauthorised dissemination of a file containing copyrighted material. Strictly speaking, however, not a single file that violated this definition of copyright law was up- or downloaded on the Pirate Bay site. The website only contained links to files that had been broken up into a torrent of thousands of fragments scattered all over the network. This qualifies as a “phony innovation” in the sense defined above, because when the fragments are combined by the end-user’s computer, an effect is produced on the screen (an image, a sound, etc.) indistinguishable from what would have happened had a single file been transmitted to the user. Technically speaking, the Pirate Bay provided a search engine service similar to Google. Such technical niceties create a dilemma for the entire juridical system: either stick with legal definitions and risk having the court procedure grind to a halt, or arbitrarily override the principles of the rule of law.

Labouring in the legal no-man's-land

The case of legal highs is exceptional on at least one account, in that governments try hard to prevent innovation from happening in this field. When those efforts fail, the response from policy makers and legislators has been to try again. Changes are introduced in legislative procedures at an accelerating rate, international cooperation is strengthened, and more power is invested in law enforcement agencies. This stands in contrast to most other cases of innovation, even potentially dangerous ones, where public authorities and regulators tend to adopt a laissez-faire attitude. The underlying assumption is that whatever unfortunate side-effects a new technology might bring with it (unemployment, health risks, environmental degradation, etc.), these are due to a technical imperfection, something that can be set right through more innovation. This argument, only occasionally confirmed in experience, has purchase because it is ultimately bound up with the imperative to stay competitive in global markets. Everyone must bow to this imperative, be it a worker, a firm or a nation state. Subsequently, firms are forced by competition to follow the most innovative lead users, no matter where that leads. This is the official story, told and retold by heaps of Innovation Studies scholars, but there is a twist to the tale. Some lead users find themselves on the wrong side of the law, or at least in a grey zone between legality and illegality. The important point to stress here is that, even if the motives of the delinquent innovator are despicable and self-serving, judged by society’s own standards, he or she is also very, very productive.

The appropriation of filesharing methods by the culture industry is a pointer. The distributed method for storing and indexing files in a peer-to-peer network has proven to be advantageous over older, centralised forms of data retrieval. The technique has become an industrial standard. Even the practice of filesharing itself has been incorporated in the marketing strategies of some content providers, including those who are pressing charges against individual filesharers (MediaDefender being the celebrity case). While filesharers and providers of filesharing services are fined or sent to prison, the innovations stemming from their (illegal) activities are greasing the wheels of the culture industry.

By the same token, it is predictable that discoveries made by clandestine chemists and psychonauts will end up in the patent portfolios of pharmaceutical companies. It suffices to recall how methamphetamine cooking in the US has driven up over-the-counter sales of cold medicine (in which a key precursor for methamphetamine, ephedrine, can be found), far beyond what any known cold epidemic could account for (Reding, 2009). As for the psychonauts, the user-generated databases with trip reports, dosages and adverse drug effects from the intake of novel psychedelic substances that have accumulated on the Internet over the last decades are currently being mined by pharmaceutical researchers. Thus is the psychonaut subculture enrolled in the drug discovery process of the pharmaceutical industry. The added benefit for the industry is that it has no liability towards its test subjects.

In conclusion, the legal grey zone has become an incubator for innovation. In the same way as parts of the culture industry have become structurally dependent on unwaged, volunteer labour by fans and hobbyists, the computer industry is structurally dependent on the illegal practices of filesharers and hackers, and the pharmaceutical industry on the psychonauts. The outlawing of their practices lays down a negative exchange rate of labour. The established practice of appropriating ideas from communities and political movements by rewarding key individuals with access to venture capital is matched with a withheld threat of prison sentences. Concurrently, arbitrariness in the legal grey zone becomes the condition for labouring in the Schumpeterian-shanzhai innovation economy.


References

  • A. Alexander & M. Roberts (Eds.). (2003). High Culture — Reflections on Addiction and Modernity. Albany, State University of New York Press.
  • Andersson, J. (2011). The origins and impacts of Swedish filesharing: A case study. Journal of Peer Production. retrieved from http://peerproduction.net/issues/issue-0/peer-reviewed-papers/
  • Barrett, D. (2010). Security, development and human rights: Normative, legal and policy challenges for the international drug control system. International Journal of Drug Policy, 21, 140–144 .
  • Callon, M. & Rabeharisoa, V. (2003). Research “in the wild” and the shaping of new social identities. Technology in Society, 25 (2), 193–204.
  • Chesbrough, H. (2003). The era of open innovation. MIT Sloan Management Review, 44 (3), 35-41.
  • Derrida, J. (2003). The rhetoric of drugs. In: A. Alexander & M. Roberts (Eds.), High Culture— Reflections on Addiction and Modernity (pp. 19-44) Albany: State University of New York Press.
  • Eck, J & Gersh, J. (2000). Drug trafficking as a cottage industry. Crime Prevention Studies, 11, 241-271.
  • EMCDDA (2009) Legal responses to new psychoactive substances in Europe, available: http://eldd.emcdda.europa.eu/
  • Flowers, S. (2008) Harnessing the hackers: The emergence and exploitation of Outlaw Innovation. Research Policy, 37 (2), 177–193.
  • Griffiths, P, Sedefov, R., Gallegos, A. & Lopez, D. (2010). How globalization and market innovation challenge how we think about and respond to drug use: ‘Spice’ a case study. Addiction, 105, 951–953.
  • Hillebrand, J., Olszewski, D. & Sedefov, R. (2010) Legal Highs on the Internet. Substance Use & Misuse, 45, 330–340.
  • Hughes, B. & Winstock, A. (2011). Controlling new drugs under marketing regulations. Addiction, 107 (11), 1894-1899.
  • Kau, G. (2008) Flashback to the Federal Analogue Act of 1986: Mixing rules and standards in the Cauldron, University of Pennsylvania Law Review, 156 (4), 1077-1115.
  • Klaue, K. (1999). Drugs, addictions, deviance and disease as social constructs. Bulletin on Narcotics, 1–2.
  • Lettl, C., C. Herstatt, and H. Gemuenden (2006). Users’ contributions to radical innovation: Evidence from four cases in the field of medical equipment technology. R&D Management, 36, 251-72.
  • Lindtner, S. & Li, D. (2012) Created in China: the makings of China’s hackerspace community. Interactions, 19 (6), 18-22.
  • Luthje, C., C. Herstatt, and E. Hippel (2005). User-innovators and ‘‘local’’ information: The case of mountain biking. Research Policy, 34, 951-956.
  • McKay, G. (1994). Senseless acts of beauty: Cultures of resistance . London: Verso.
  • Mouffe, C. (2005). On the Political. London: Routledge.
  • Neumann, F. (1996) The change in the function of law in modern society. In: Scheuerman, W. The rule of law under siege: Selected essays of Franz Neumann and Otto Kirchheimer (101-141) Los Angeles: California University Press.
  • Reding, N. (2009) Methland: The death and life of an American small town. New York: Bloomsbury.
  • Ruggiero, V. & South, N. (1997). The late-modern city as a bazaar: Drug markets, illegal enterprise and the ‘barricades’. The British Journal of Sociology, 48 (1), 54-70.
  • Ryall, G. & Butler, S. (2011). The great Irish head shop controversy. Drugs: education, prevention and policy. 18 (4), 303–311.
  • Scheuerman, W. (1996). The rule of law under siege: Selected essays of Franz L. Neumann and Otto Kirchheimer. Berkeley: University of California Press.
  • Scheuerman, W. (2001) Franz Neumann: Legal Theorist of Globalization? Constellations, 8 (4), 503-520.
  • Schmitt, C. (2007). The Concept of the Political. Chicago: University of Chicago Press.
  • Schmitt, C. (2009) The Motorized Legislator. In: Rosa, H. & Scheuerman, W. (Eds.) High-speed society: social acceleration, power, and modernity. (65 – 76) Pennsylvania: The Pennsylvania State University
  • Shah, S., & Tripsas, M. (2007). The accidental entrepreneur: The emergent and collective process of user entrepreneurship. Strategic Entrepreneurship Journal 1, 123-140.
  • South, N. (2004). Managing work, hedonism and ‘the borderline’ between the legal and illegal markets: Two case studies of recreational heavy drug users. Addiction Research and Theory, 12 (6), 525–538.
  • Sheridan, J. & Butler, R. (2010). “They’re legal so they’re safe, right?” What did the legal status of BZP-party pills mean to young people in New Zealand? International Journal of Drug Policy, 21 (1), 77–81.
  • Söderberg, J. (2010). Misuser Inventions and the Invention of the Misuser: Hackers, Crackers and Filesharers. Science as Culture, 19 (2), 151-179.
  • Söderberg, J. (2011). Free Space Optics in the Czech Wireless Community: Shedding Some Light on the Role of Normativity for User-Initiated Innovations. Science, Technology & Human Values, 36 (4), 423-450.
  • Thoër, C. & Millerand, F. (2012). Enjeux éthiques de la recherche sur les forums Internet portant sur l’utilisation des médicaments à des fins non médicales. Revue International: Communication Sociale et Publique, 7, 1-22
  • Vardakou, I., Pistos, C., Dona, A., Spiliopoulou, C. & Athanaselis, S. (2011). Naphyrone: a “legal high” not legal any more. Drug and Chemical Toxicology, 1–5.
  • Vardakou, C., Pistos, C. and Spiliopoulou, C. (2011). Drugs for youth via Internet and the example of mephedrone. Toxicology Letters, 201, 191-195.
  • von Hippel, E. (2005). Democratizing innovation. MIT Press.
  • Waarden, F. (2001). Institutions and Innovation: The Legal Environment of Innovating Firms. Organisation Studies, 22 (5), 765-795.
  • Walsh, C. & Phil, M. (2011). Legal Highs on the Internet. Journal of Psychoactive Drugs, 43 (1), 55-63.
  • Whalen, J. (2010). Keeping it legal. Chemistry & Industry 24 (5).
  • Winstock, A., Mitcheson, L., Deluca, P., Davey, Z., Corazza, O. & Schifano, F. (2010). New kid for the chop? Addiction, 106, 154–161.
  • Zizek, S. (1999). Carl Schmitt in the age of post-politics. In: C. Mouffe (Ed.), The Challenge of Carl Schmitt (18-37). London: Verso.
  • Zizek, S. (2000). The Ticklish Subject: The Absent Centre of Political Ontology. New York: Verso.

 

15.08
18:00
OpenMirage – Hannes
Hannes Mehnert
Resilient and Decentralised Autonomous Infrastructure

Current Situation

Large companies run the Internet, and take advantage of the fact that most Internet users are not specialists. In exchange for well-designed frontends for common services, those companies snitch personal data from the users, and by presenting users with long-winded terms of service they legalise their snitching. Companies are mainly interested in increasing their profit, and they will always provide data to investigators if the alternative is to switch off their service.

Interlude: Originally the Internet was designed as a decentralised communication network, and all participants were specialists who understood it and could publish data. Nowadays, thanks to commercialisation, users are presented with colourful frontends and only a minority understands the technology behind them.

Since the revelations by whistleblowers, awareness of these facts has been rising. People looking for alternatives turn to hosting providers run by autonomous collectives which are not interested in profit. These collectives are great, but unfortunately the collateral damage cannot be prevented if investigators seize their equipment (as seen in the Lavabit case). While the collectives work very hard on the administrative side, they can barely invest time to develop new solutions from scratch.

The configuration and administrative complexity of running your own services is huge. For example, in order to run your own mail server you have to, at least, have a protocol-level understanding of the domain name system (DNS), SMTP and IMAP. Furthermore, you need to be familiar with basic system administration tasks, know how a UNIX filesystem is organised and what permissions are, and keep up to date with security advisories. The complexity stems from the fact that general purpose operating systems and huge codebases are used even for simple services. Best practice in system administration compartmentalises separate services into separate containers (either virtual machines, or lightweight approaches such as FreeBSD jails) to isolate the damage if one service has a security vulnerability.

To overcome the complexity, make it feasible for people to run their own services, and decentralise our infrastructure, we either need to patch the existing tools and services with glue to make them easy to use for non-specialists, or we need to simplify the complete setup. The first approach is prone to errors, and increases the trusted code base. We focus on the latter approach.

We will build an operating system from scratch, and think about which user demands we can meet with technology. By doing so we can learn from the failures of current systems (both security- and configuration-wise). We will enable people to publish data on their own, without the need for a centralised server infrastructure. Furthermore, people get back control over their private data, such as address books and emails.

Vision

All your devices connect to each other over secure channels. Apart from the initial deployment of keypairs there is no configuration required. Your personal overlay network reuses existing Internet infrastructure. You can easily manage communication channels to other people.

Technology

Core technology

We use Mirage OS [2, 3], a library operating system. Mirage executes OCaml applications directly as Xen guest operating systems, with no UNIX layer involved. Each service contains only the required libraries (e.g. a name server will not include a file system, user management, etc.). The use of OCaml has immediate advantages: memory corruptions are mitigated at the language level, the expressive static type system catches problems at compile time, and modules allow compilation of the same source to either UNIX for development or Xen for deployment. Mirage OS is completely open source, under BSD/MIT licenses. The trusted code base of each service is easily two orders of magnitude smaller than common operating system kernels and libraries.
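
To give a feel for how a service and its dependencies are declared, here is a minimal sketch of a Mirage config.ml, purely as an illustration (not taken from the talk). It assumes the combinator names of the Mirage 1.x/2.x build DSL that was current at the time (foreign, console, job, register, default_console); exact names and signatures may differ between releases.

    (* config.ml -- illustrative sketch; combinator names follow the published
       Mirage 1.x/2.x examples and may differ in later releases. *)
    open Mirage

    (* The unikernel functor "Unikernel.Main" depends on exactly one device, a
       console; no filesystem, user management or other subsystem is linked in. *)
    let main = foreign "Unikernel.Main" (console @-> job)

    let () =
      (* `mirage configure --unix` builds an ordinary process for development;
         `mirage configure --xen` builds a standalone Xen guest from the same source. *)
      register "hello" [ main $ default_console ]

Running the mirage tool against such a file generates the build glue, and only the libraries implied by the declared devices end up in the resulting image, which is what keeps the trusted code base small.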

Performance

As for performance, each service consists only of a single process, which means a single address space and no context switching between the kernel and userland, and received byte arrays are transferred to the respective service using zero-copy.

Reduced configuration complexity

Instead of using configuration files with an ad-hoc syntax in a general purpose operating system, the configuration is done directly in the application before it is compiled. Each service only requires a small amount of configuration, and most of the configuration can be checked at compile time. No further reconfiguration is required at run time. Updating services is straightforward: each service is contained in a virtual machine, which can be stopped and replaced; on-the-fly updates and testing of new versions are easily done by running both the old and the new service in parallel.
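
To make the point about configuration living in the application concrete, the following hedged sketch shows a matching unikernel module whose "settings" are ordinary typed OCaml values, so a malformed value is rejected by the compiler rather than discovered at run time. The interface names (V1_LWT.CONSOLE, OS.Time.sleep) are assumptions based on the Mirage 1.x/2.x conventions of the time.

    (* unikernel.ml -- illustrative sketch; interface names follow Mirage 1.x/2.x
       conventions and may differ in later releases. *)
    open Lwt

    (* The whole "configuration": typed OCaml constants, checked at compile time
       and baked into the image.  There is no file to edit or reload at run time. *)
    let greeting = "hello from a unikernel"
    let interval = 5.0  (* seconds between log lines *)

    module Main (C : V1_LWT.CONSOLE) = struct
      (* Log the greeting forever; "updating" this service means building a new
         image with new constants and swapping the virtual machine. *)
      let rec loop c =
        C.log_s c greeting >>= fun () ->
        OS.Time.sleep interval >>= fun () ->
        loop c

      let start c = loop c
    end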

Existing libraries

Libraries already available for Mirage OS include a TCP/IP stack, an HTTP implementation, a distributed persistent branchable git-like store (Irmin), a DNS implementation, a TLS implementation [6], and more.

Future

On top of Mirage OS we want to develop an authenticated and encrypted network between your devices, using DNSSEC and DNSCurve for setting up the communication channels. Legacy UNIX devices will need a custom DNS resolver and a routing engine which sets up the encrypted channels. The routing engine will use various tactics: NAT punching, Tor hidden services, VPN, direct TLS connection, etc.

In conclusion, the properties of our proposed network are:

  • Resilience - small code base developed with rigorous methods
  • Decentralisation - taking back tight control over your data
  • Autonomy - no control is imposed by any organisation or company

Further future

In the first stage only data will be decentralised, but the communication will still rely on the centralised DNS. Later on, this can be replaced by a distributed hash table or services such as GNUnet.

About the Author

Hannes talked in 2005 at What the Hack (a Dutch hacker camp) together with Andreas Bogk about Phasing Out UNIX before 2038-01-19, and implemented TCP/IP in Dylan [1, 5]. In 2013 he received a PhD in formal verification of the correctness of imperative programs [4], during which he discovered that shared mutable state in particular is tedious to reason about. He thus concluded to smash the state, one field at a time. Hannes works on reimplementing desired Internet protocols (e.g. TLS [6]) in a functional world, on top of Mirage OS [2, 3].


References

  • Bogk, A., and Mehnert, H. Secure networking. In 23rd Chaos Communication Congress (2006).
  • Madhavapeddy, A., Mortier, R., Rotsos, C., Scott, D., Singh, B., Gazagnaire, T., Smith, S., Hand, S., and Crowcroft, J. Unikernels: Library operating systems for the cloud. In Proceedings of the Eighteenth International Conference on Architectural Support for Programming Languages and Operating Systems (New York, NY, USA, 2013), ASPLOS ’13, ACM, pp. 461–472.
  • Madhavapeddy, A., and Scott, D. J. Unikernels: The rise of the virtual library operating system. Commun. ACM 57, 1 (Jan. 2014), 61–69.
  • Mehnert, H. Incremental Interactive Verification of the Correctness of Object-Oriented Software. PhD thesis, IT University of Copenhagen, July 2013.
  • Mehnert, H., and Bogk, A. A domain-specific language for manipulation of binary data in Dylan. In International Lisp Conference (2007).
  • Mehnert, H., and Meršinjak, D. K. Transport layer security purely in OCaml. In OCaml Users and Developers Workshop (2014).

 

15.08
18:00
Michael Anthony C. Dizon
Rules of a networked society
Here, there and everywhere

Impact of technological rules and actors on society

At the height of the dot-com boom in the late 1990s, Lawrence Lessig expressed and popularized the idea that technical code regulates (Lessig 2006). Since then, together with rapid technological developments and the widespread dissemination of technologies in all aspects of people’s lives, the growing influence on the behavior of people and things of technological rules and actors other than and beyond the law and the state has become all the more apparent. The behaviors of people online are to a certain extent determined by the technical protocols and standards adopted by the Internet Engineering Task Force (IETF) and other non-governmental bodies that design the architecture of the internet (see Murray 2007, 74; Bowrey 2005, 47). The Arab Spring has proven that the use of internet-based communications and technologies can enable or have a role in dramatic political and social change (Howard 2011; Stepanova 2012). Making full use of their technical proficiency, the people behind the whistleblower website Wikileaks and the hacktivist group Anonymous have become influential albeit controversial members of civil society who push the boundaries of what democracy, freedom and civil disobedience mean in a digital environment (Ludlow 2010; Hampson 2012, 512-513; Chadwick 2006, 114). Technology-related rules even had a hand in one of the most significant events of the new millennium, the Global Financial Crisis. It is claimed that misplaced reliance by financial institutions on computer models based on a mathematical algorithm (the Gaussian copula function) was partly to blame for the miscalculation of risks that led to the crisis (Salmon 2009; MacKenzie & Spears 2012). The fact that a formula (i.e., a rule expressed in symbols) can affect the world in such a momentous way highlights the critical role of rules in a ‘networked information society’ (Cohen 2012, 3) or ‘network society’ (Castells 2001, 133).

This paper argues that, in order to better understand how a networked society is organized and operates now and in the near future, it is important to develop and adopt a pluralist, rules-based approach to law and technology research. This means coming to terms with and seriously examining the plural legal and extra-legal rules, norms, codes, and principles that influence and govern behavior in a digital and computational universe (see Dyson 2012), as well as the persons and things that create or embody these rules. Adopting this rules-based framework for technology law research is valuable since, with the increasing informatization and technologization of society (Pertierra 2010, 16), multiple actors and rules – both near and far – do profoundly shape the world we live in.

From ‘what things regulate’ to ‘what rules influence behavior’

A pluralist and distributed approach to law and technology is not new (see Schiff Berman 2006; Hildebrandt 2008; Mifsud Bonnici 2007, 21-22 (‘mesh regulation’); Dizon 2011). Lessig’s theory of the four modalities of regulation (law, social norms, the market, and architecture) serves as the foundation and inspiration for many theories and conceptualizations about law, regulation and technology within and outside the field of technology law (Lessig 2006, 123; Dizon 2011). Modifying Lessig’s model, Murray and Scott (2002, 492) come up with what they call the four bases of regulation: hierarchy, competition, community, and design. Similarly, Morgan and Yeung (2007, 80) advance the five classes of regulatory instruments that control behavior, namely: command, competition, consensus, communication, and code.

Lessig’s conception of “what things regulate” (Lessig 2006, 120) is indeed insightful and useful for thinking about law and technology in a networked society. I contend, however, that his theory can be remade and improved by: (1) modifying two of the modalities; (2) moving away from the predominantly instrumentalist concerns of ‘regulation’ and how people and things can be effectively steered and controlled to achieve stated ends (Morgan & Yeung 2007, 3-5; Black 2002, 23, 26); and (3) focusing more on how things actually influence behavior rather than just how they can be regulated. I prefer to use the term ‘technology’ rather than ‘architecture’ since the former is a broader concept that subsumes architecture and code within its scope. By technology, I mean the ‘application of knowledge to production from the material world. Technology involves the creation of material instruments (such as machines) used in human interaction with nature’ (Giddens 2009, 1135). The regulatory modality of ‘the market’ is slightly narrow in its scope since it pertains primarily to the results of people’s actions and interactions. This modality can be expanded to also cover the ‘natural and social forces and occurrences’ (including economic ones) that are present in the physical and material world. In the same way that market forces have been the classic subject of law and economics, it makes sense for those studying law and technology issues to also examine other scientific phenomena. As will be explained in more detail below, social norms are distinguished from natural and social phenomena since the former are composed of prescriptive (ought) rules while the latter are expressed as descriptive (is) rules. Based on the above conceptual changes to Lessig’s theory, the four things that influence behavior are: (1) law, (2) norms, (3) technology, and (4) natural and social forces and occurrences.

The four things that influence behavior can be further described and analyzed in terms of the rules that constitute them. From a rules-based perspective, law can be conceived of as being made up of legal rules, and norms are composed of specific social norms. On their part, technology consists of technical codes and instructions, while natural and social forces and occurrences are expressed in scientific principles and theories. By looking at what rules influence behavior, one can gain a more detailed, systematic and interconnected picture of who and what governs a networked society. Lessig’s well-known diagram of what constrains an actor can be reconfigured according to the four types of rules that influence behavior. The four rules of a networked society therefore are legal rules, social norms, technical codes, and scientific principles (see Figure 1).

 

 

Figure 1: Rules of a networked society



Plurality of rules

How the various rules of a networked society relate to and interact with each other is very important to understanding how the informational and technological world works. Having a clear idea of how and why the rules are distinct yet connected to one another is paramount given that, in most cases, there are multiple overlaps, connections, intersections and even conflicts among these rules. More often than not, not one but many rules are present and impact behavior in any given situation. The presence of two or more rules or types of rules that influence behavior in a given situation gives rise to a condition of plurality of rules. Plurality of rules resembles the concept of ‘legal pluralism’, which is described by John Griffiths as “that state of affairs, for any social field, in which behavior pursuant to more than one legal order occurs” (Griffiths 1986, 2; von Benda-Beckmann & von Benda-Beckmann 2006, 14). Legal pluralism generally considers both descriptive and normative/prescriptive rules as falling within the ambit of the term law (von Benda-Beckmann 2002, 48). As a result, legal pluralism has been subject to the perennial criticism that, due to its more expansive conception of law, the distinction between law and other forms of social control has been blurred (Merry 1988, 871, 878-879, 858; Griffiths 1986, 307; von Benda-Beckmann 2002, 47, 54, 56). In order to avoid a similar critique, while still maintaining a pluralist perspective, I deliberately use the term ‘rule’ (regula) rather than ‘law’ (lex) to characterize and describe the things that influence and govern behavior. Unlike law, which is inherently normative, a rule has greater flexibility and can cover both is and ought statements. In this way, the important distinction between descriptive rules and normative rules is retained, and the term rule can still be used in two discrete senses: (1) as an observed regularity2 and (2) as a standard that must be observed. The concept of rules is thus sufficiently robust and nuanced that it can serve as the basis for constructing a new way of perceiving the state and degree of normativity in the networked information society (Riesenfeld 2010). Far from conflating the four things that influence behavior, a rules-based perspective is able to integrate and find important interconnections between and among them while, at the same time, preserving and taking into account their uniqueness.

The different types of rules of a networked society and how they connect with each other are explained in greater detail below.

Legal rules and social norms

Legal rules and social norms are both prescriptive types of rules. A social norm has been defined in a number of ways: as ‘a statement made by a number of members of a group, not necessarily by all of them, that the members ought to behave in a certain way in certain circumstances’ (Opp 2001, 10714), as ‘a belief shared to some extent by members of a social unit as to what conduct ought to be in particular situations or circumstances’ (Gibbs 1966, 7), and as ‘generally accepted, sanctioned prescriptions for, or prohibitions against, others’ behavior, belief, or feeling, i.e. what others ought to do, believe, feel – or else’ (Morris 1956, 610). Dohrenwend proffers a more detailed definition:

A social norm is a rule which, over a period of time, proves binding on the overt behavior of each individual in an aggregate of two or more individuals. It is marked by the following characteristics: (1) Being a rule, it has content known to at least one member of the social aggregate. (2) Being a binding rule, it regulates the behavior of any given individual in the social aggregate by virtue of (a) his having internalized the rule; (b) external sanctions in support of the rule applied to him by one or more other individuals in the social aggregate; (c) external sanctions in support of the rule applied to him by an authority outside the social aggregate; or any combination of these circumstances.
— Dohrenwend 1959

From the above definitions, the attributes of social norms are: ‘(1) a collective evaluation of behavior in terms of what it ought to be; (2) a collective expectation as to what behavior will be; and/or (3) particular reactions to behavior, including attempts to apply sanctions or otherwise induce a particular kind of conduct’ (Gibbs 1965, 589).

Norms are a key element of a rules-based approach to law and technology. This is especially evident when one recognizes that law is ‘a type of norm’ and ‘a subset of norms’ (Gibbs 1966, 315; Opp 2001, 10715; Posner 1997, 365)3. Laws may be deemed to be more formal norms. Galligan holds the inverse to be true: ‘some rule-based associations are mirrors of law and as such may lay some claim to be considered orders of informal laws’ (Galligan 2007, 188). Norms and laws can therefore be imagined as forming a continuum, and the degree of formality, generality, certainty and importance (among other things) is what moves a rule of behavior from the side of norms to the side of law (see Ruby 1986, 591).

Legal rules and social norms have a close and symbiotic relationship. Cooter explains one of the basic dynamics of laws and norms, ‘law can grow from the bottom up by [building upon and] enforcing social norms’ (Cooter 1996, 947-948), but it can also influence social norms from the top down – “law may improve the situation by enforcing a beneficial social norm, suppressing a harmful social norm, or supplying a missing obligation” (ibid 1996, 949). Traditional legal theory has settled explanations of how laws and norms interact. Social norms can be transformed into legal norms or accorded legal status by the state through a number of ways: incorporation (social norms are transformed or codified into law by way of formal legislative or judicial processes), deference (the state recognizes social norms as facts and does not interfere with certain private transactions and private orderings), delegation (the state acknowledges acts of self-regulation of certain groups) (Michaels 2005, 1228, 1231, 1233 and 1234), and recognition (the state recognizes certain customary or religious norms as state law) (van der Hof & Stuurman 2006, 217).

Technical codes and instructions

Technical codes like computer programs consist of descriptive rather than prescriptive instructions. However, the value of focusing on rules of behavior is that the normative effects of technical codes can be fully recognized and appreciated. Despite the title of his seminal book and his famous pronouncement that ‘code is law’, Lessig (2006, 5) does not actually consider technical or computer code to be a category or form of law, and the statement is basically an acknowledgement of code’s law-like properties. As Leenes (2011, 145) points out, ‘Although Lessig states that ‘code is law’, he does not mean it in a literal sense’. Even Reidenberg’s earlier concept of Lex Informatica, which inspired Lessig, is law in name only (Lessig 2006, 5). Reidenberg explicitly states that Lex Informatica can be ‘a useful extra-legal instrument that may be used to achieve objectives that otherwise challenge conventional laws and attempts by government to regulate across jurisdictional lines’ (1997, 556, emphasis added). While Lex Informatica ‘has analogs for the key elements of a legal regime’, it is still ‘a parallel rule system’ that is ‘distinct from legal regulation’ (Reidenberg 1997, 569, 580, emphasis added). But if ‘code is not law’, as some legal scholars conclude (Dommering 2006, 11, 14), what exactly is the relationship between technical codes and law, and what is the significance of code in the shaping of behavior in a networked society?

Using the above definition and characterization of norms, technical code in and of itself (i.e., excluding any norms and values that an engineer or programmer intentionally or unintentionally implements or embodies in the instructions) is neither a legal rule nor a social norm because: (1) it is not a shared belief among the members of a unit; (2) there is no oughtness (must or should) or a sense of obligation or bindingness in the if-then statements of technical code (they are simply binary choices of on or off, true or false, is versus not); (3) there is no ‘or else’ element that proceeds from the threat of sanctions (or the promise of incentives) for not conforming (or conforming) with the norm; (4) the outcome of an if-then statement does not normally call for the imposition of external sanctions by an authority outside of the subject technology; and (5), generally, there is no collective evaluation or expectation within the technical code itself of what the behavior ought to be or will be (a matter of is and not ought).
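To make the contrast concrete, the following hypothetical sketch (Python; the names and scenario are invented for illustration) shows the kind of if-then instruction described above: the code states only what is the case and what follows from it, yet it still constrains what a user can do:

    from dataclasses import dataclass, field

    @dataclass
    class User:
        name: str

    @dataclass
    class Document:
        contents: str
        allowed: set = field(default_factory=set)

    def read(user: User, document: Document) -> str:
        # A binary is/is-not test: no shared belief, no "ought", no external sanction,
        # only a condition and its consequence.
        if user.name not in document.allowed:
            raise PermissionError("access denied")
        return document.contents

    # The conditional nonetheless shapes behavior: only listed users can read the file.
    doc = Document(contents="minutes of the meeting", allowed={"alice"})
    print(read(User("alice"), doc))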

Even though technical codes and instructions are not per se norms, they can undoubtedly have normative effects (van der Hof & Stuurman 2006, 218, 227). Furthermore, technologies are socially constructed and can embody various norms and values (Pinch & Bijker 1984, 404). A massively multiplayer online role-playing game (MMORPG) such as World of Warcraft has its own rules of play and can, to a certain extent, have normative effects on the persons playing it (they must abide by the game’s rules and mechanics). A virulent computer program such as the ‘ILOVEYOU’ virus/worm (or the Love Bug) that caused great damage to computers and IT systems around the world in 2000 can have a strong normative impact; it can change the outlooks and behaviors of various actors and entities (Cesare 2001, 145; Grossman 2000). As a result of the outbreak of the Love Bug, employees and private users were advised through their corporate computer policies or in public awareness campaigns not to open email attachments from untrustworthy sources. In the Philippines, where the alleged author of the Love Bug resided, the Philippine Congress finally enacted the long-awaited Philippine Electronic Commerce Act, which contained provisions that criminalized hacking and the release of computer viruses4. The Love Bug, which is made up of technical instructions, definitely had a strong normative impact on computer use and security.

Digital rights management (DRM) is an interesting illustration of the normative effects of technology since, in this case, technical code and legal rules act together to enforce certain rights for the benefit of content owners and limit what users can do with digital works and devices (see Brown 2006). DRM on a computer game, for example, can prevent users from installing or playing it on more than one device. Through the making of computer code, content owners are able to create and grant themselves what are essentially new and expanded rights over intellectual creations that go beyond the protections provided to them under intellectual property laws (van der Hof & Stuurman 2006, 215; McCullagh 2005). Moreover, supported by both international and national laws5, DRM acts as a set of hybrid techno-legal rules that not only restrict the ability of users to access and use digital works wrapped in copy protection mechanisms, but also subject the circumvention of DRM and the dissemination of information about circumvention techniques to legal sanctions (see Dizon 2010).6 When users across the world play a computer game with ‘always-on’ DRM like Ubisoft’s Driver, it is tantamount to people’s behavior being subjected to a kind of transnational, technological control, which users have historically revolted against (Hutchinson 2012; Doctorow 2008).

Another example of hybrid techno-legal rules is the so-called Great Firewall of China. This computer system that monitors and controls what users and computers within China can access and connect to online clearly has normative effects since it determines the actions and communications of an entire population (Karagiannopoulos 2012, 155). In fact, it does not only control what can be done within China but it also affects people and computers all over the world (e.g., it can prevent a Dutch blogger or a U.S. internet service such as YouTube from communicating with Chinese users and computers). In light of their far-reaching normative impact, technical codes and instructions should not be seen as mere instruments or tools of law (Dommering 2006, 13-14), but as a distinct type of rule in a networked society. Code deserves serious attention and careful consideration in its own right.

Scientific principles and theories

There is a whole host of scientific principles, theories and rules from the natural, social and formal sciences that describe and explain various natural and social phenomena that influence the behavior of people and things. Some of these scientific principles are extremely relevant to understanding the inner workings of a networked society. For example, Moore’s Law is the observation-cum-prediction of Intel’s co-founder Gordon Moore that ‘the number of transistors on a chip roughly doubles every two years’,7 and can be expressed in the mathematical formula n = 2^((y - 1959) ÷ d) (Ceruzzi 2005, 585).8 Since 1965, this principle and the things that it represents have shaped and continue to profoundly influence all aspects of the computing industry and digital culture, particularly what products and services are produced and what people can or cannot do with computers and electronic devices (Ceruzzi 2005, 586; Hammond 2004; see Anderson 2012, 73, 141). Ceruzzi rightly claims, ‘Moore’s law plays a significant role in determining the current place of technology in society’ (2005, 586). However, it is important to point out that Moore’s Law is not about physics; it is a self-fulfilling prophecy that is derived from ‘the confluence and aggregation of individuals’ expectations manifested in organizational and social systems which serve to self-reinforce the fulfillment of Moore’s prediction’ for some doubling period (Hammond 2004, citations omitted) and is ‘an emergent property of the highly complex organization called the semiconductor industry’ (ibidem). This statement reveals an important aspect of Moore’s Law and other scientific principles and rules – that they are also subject to social construction. As Jasanoff eruditely explains:

science is socially constructed. According to a persuasive body of work, the “facts” that scientists present to the rest of the world are not simple reflections of nature; rather these “facts” are produced by human agency, through the institutions and processes of science, hence they invariably contain a social component. Facts, in other words, are more than merely raw observations made by scientists exploring the mysteries of nature. Observations achieve the status of “facts” only if they are produced in accordance with prior agreements about the rightness of particular theories, experimental methods, instrumental techniques, validation procedures, review processes, and the like. These agreements, in turn, are socially derived through continual negotiation and renegotiation among relevant bodies of scientists.
— Jasanoff 1991, see also Polanyi 2000

Since the construction of scientific rules and facts is undertaken by both science and non-science institutions, “what finally counts as ‘science’ is influenced not only by the consensus views of scientists, but also by society’s views of what nature is like – views that may be conditioned in turn by deep-seated cultural biases about what nature should be like” (Jasanoff 1991, 347). Due to “the contingency and indeterminacy of knowledge, the multiplicity and non-linearity of ‘causes’, and the importance of the narrator’s (or the scientific claims-maker’s) social and cultural standpoint in presenting particular explanations as authoritative” (Jasanoff 1996, 411-412), science is without doubt a ‘deeply political’ and ‘deeply normative’ activity (Jasanoff 1996, 409, 413; see Geertz 1983, 189; Hoppe 1999). It reveals as much about us as it does the material world.

The ‘constructedness’ (Jasanoff 1991, 349) of science can also be seen in the history of the use of and meaning ascribed to the term ‘scientific law’. The use of the term ‘law’ in reference to natural phenomena has been explained as ‘a metaphor of divine legislation’, which ‘combined the biblical ideas of God’s legislating for nature with the basic idea of physical regularities and quantitative rules of operation’ (Ruby 1986, 341, 342, 358 (citations omitted)). Ruby argues, however, that the origin of the use of the term law (lex) for scientific principles is not metaphorical but is inherently connected to the use and development of another term, rule (regula) (Ruby 1986, 347, 350). Through the changing uses and meanings of lex and regula and their descriptive and/or prescriptive application to the actions of both man and nature throughout the centuries, law in the field of science became more commonly used to designate a more fundamental or forceful type of rule, which nevertheless pertains to some ‘regularity’ in nature. At its core, a scientific principle or law is about imagining ‘nature as a set of intelligible, measurable, predictable regularities’ (Ruby 1986, 350 (emphasis added)).

Another important characteristic of scientific principles is that they act as signs, and consist of both signifier and signified. Moore’s Law is both the expression and the embodiment of the natural and social forces and occurrences that it describes. As Ruby (1986, 347) explains, “in the case of natural phenomena it is not always possible to distinguish the use of lex for formulated principles from that for regularities in nature itself”. Thus, from a practical standpoint, natural and social phenomena are reified and can be referred to by the labels and formulations of the relevant scientific principles that they are known by. For example, rather than saying, ‘the forces described by Moore’s Law affected the computer industry’, it can be stated simply as ‘Moore’s Law affected the computer industry’.
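To make the formula quoted earlier concrete, here is a minimal sketch (Python, with illustrative years and the conventional two-year doubling period; the function name is not from the cited sources) that evaluates n = 2^((y - 1959) ÷ d):

    def moores_law_circuits(year: int, doubling_period_years: float = 2.0) -> float:
        # n = 2 ** ((y - 1959) / d), with d the doubling period in years.
        return 2 ** ((year - 1959) / doubling_period_years)

    # The doubling period d is not a physical constant but a negotiated expectation,
    # which is the point made in the surrounding discussion.
    for y in (1965, 1985, 2005):
        print(y, moores_law_circuits(y))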

There is much that can be learned about how the world works if we take as seriously the influence of scientific rules on behavior as we do legal ones. Scientific principles are descriptive and not prescriptive rules, but, like technical codes, they too are socially constructed and have significant normative effects on society and technology, and are thus worth studying in earnest (see Jasanoff 1996, 397 ‘co-production of scientific and social order’). People who are aware of the descriptive ‘theory of the Long Tail’ (Anderson 2007, 52)9 would conform their actions to this rule and build businesses that answer the demands of niche markets. Scientists and engineers are obviously cognizant of the “law of gravity” and they know that they ought to design rockets and airplanes based on this important descriptive principle. Competition authorities know that they must take into account market forces and relevant economic principles before imposing prescriptive rules on a subject entity or market. These and many other examples show how descriptive rules can also give rise to or be the basis of ought actions and statements.10 Descriptive rules as such can influence behavior and have normative effects.

Being able to incorporate scientific principles within the purview of law and technology research is important since it creates connections that bring the fields of law and science and technology ever closer together. If there is value in a sociologist of science and technology studying law-making processes (Latour 2010), there is equal merit in law and technology researchers examining scientific principles and technical codes since rules that govern the networked society can similarly be made and found in laboratories and workshops (Latour 2010; Callon 1987, 99).

Significance of rules

On a theoretical level, a rules-based approach is very useful and valuable to law and technology research in a number of ways. First, it distinguishes but does not discriminate between normative and descriptive rules. While the key distinction between is and ought is maintained, the role and impact of descriptive rules on behavior and order is not disregarded but is, in fact, fully taken into account. By focusing as well on descriptive rules and regularities that are not completely subject to human direction, a rules-based framework can complement and support the more instrumentalist, cybernetic and state-centric theories and methods of law and technology (Morgan & Yeung 2007, 3-5; Black 2002, 23, 26). Rather than concentrating solely or mainly on how state actors and the law directly or indirectly regulate behavior, a rules-based approach creates an awareness that problems and issues brought about by social and technological changes are often not completely solvable through man-made, top-down solutions alone, and more organic and bottom-up approaches should also be pursued. By placing equal emphasis on descriptive rules such as technical codes and scientific principles and their normative effects, the complexity and unpredictability of reality can be better understood, and the people, things and phenomena within and beyond our control are properly considered, addressed or, in some cases, left alone.

Second, conceptualizing the networked society in terms of is and ought rules makes evident the ‘duality of structure’ that recursively constitutes and shapes our world (Giddens 1984, 25, 375). As Giddens explains,

The constitution of agents and structures are not two independently given sets of phenomena, a dualism, but represent a duality. According to the notion of duality of structure, the structural properties of social systems are both medium and outcome of the practices they recursively organize. Structure is not “external” to individuals…. Structure is not to be equated with constraint but is always both constraining and enabling.
— Giddens 1984

Applying Giddens’ ‘theory of structuration’, a networked society is thus not constituted solely by one dimension to the exclusion of another – agency versus structure, human against machine, man versus nature, instrumentalism or technological determinism, society or technology – but it is the action-outcome of the mutual shaping of any or all of these dualities (Giddens 1984).

Finally, a rule can be a key concept for an interdisciplinary approach to understanding law, technology and society. A rule can serve as a common concept, element or interface that connects and binds different academic fields and disciplines (Therborn 2002, 863). With the increasing convergence of different technologies and various fields of human activity (both inter and intra) and the multidisciplinary perspectives demanded of research today, a unifying concept can be theoretically and methodologically useful. The study of rules (particularly norms) has received serious attention from such diverse fields as law (see Posner 2000; Sunstein 1996; Lessig 1995; Cooter 2000), sociology (Hechter 2001), economics (McAdams & Rasmusen 2007; Posner 1997), game theory (Bicchieri 2006; Axelrod 1986), and even information theory (Boella et al. 2006; Floridi 2012). The study of ‘normative multiagent systems’ illustrates the interesting confluence of issues pertaining to law, technology and society under the rubric of rules (Boella et al. 2006; Savarimuthu & Cranefield 2011).

Rules of hacking

In addition to its conceptual advantages, a rules-based approach can be readily applied to analyze real world legal and normative problems that arise from technical and social changes. There can be greater clarity in determining what issues are involved and what possible actions to take when one perceives the world as being ‘normatively full’ (Griffiths 2002, 34) and replete with rules. For instance, the ‘problem’ of computer hacking11 is one that legislators and other state actors have been struggling with ever since computers became widely used. Using the rules of a networked society as a framework for analysis, it becomes evident that hacking is not simply a problem to be solved but a complex, techno-social phenomenon that needs to be properly observed and understood.

Laws on hacking

Early attempts to regulate hacking seemingly labored under the impression that the only rules that applied were legal rules. Thus, despite the absence of empirical data showing that hacking was an actual and serious threat to society, legislators around the world enacted computer fraud and misuse statutes that criminalized various acts of hacking, particularly unauthorized access to a computer (Hollinger 1991, 8). Some studies have shown, however, that these anti-hacking statutes have mostly been used against disloyal and disgruntled employees and only seldom in relation to anonymous outsiders who break into a company’s computer system, the oft-cited bogeyman of computer abuse laws (Hollinger 1991, 9; Skibbel 2003, 918). Not all laws though are opposed to all forms of hacking. The Software Directive upholds the rights to reverse engineer and to decompile computer programs to ensure interoperability subject to certain requirements12. The fair use doctrine and similar limitations to copyright provide users and developers with a bit of (but clearly not much) space and freedom to hack and innovate (see Rogers & Szamosszegi 2011).

Norms of hackers

Another thing that state actors fail to consider when dealing with hacking is that computer hackers belong to a distinct culture with its own set of rules. Since the social norms and values of hackers are deeply held, the simple expedient of labeling hacking as illegal or deviant is not sufficient to deter hackers from engaging in these legally prohibited activities. In his book Hackers, Levy codified some of the most important norms and values that make up the hacker ethic:

  • Access to computers should be unlimited and total.
  • All information should be free.
  • Mistrust Authority – Promote Decentralization.
  • Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position.
  • You can create art and beauty on a computer.
  • Computers can change your life for the better (Levy 2010, 28-34).

These norms and values lie at the very heart of hacker culture and are a source from which hackers construct their identity. Therborn (2002, 869) explains the role of norms in identity formation, “This is not just a question of an ‘internalization’ of a norm, but above all a linking of our individual sense of self to the norm source. The latter then provides the meaning of our life”. While hacker norms have an obviously liberal and anti-establishment inclination, the main purposes of hacking are generally positive and socially acceptable (e.g., freedom of access, openness, freedom of expression, autonomy, equality, personal growth, and community development). It is not discounted that there are hackers who commit criminal acts and cause damage to property. However, the fear or belief that hacking is intrinsically malicious or destructive is not supported by hacker norms. In truth, many computer hackers adhere to the rule not to cause damage to property and to others (Levy 2010, 457). Among the original computer hackers at the Massachusetts Institute of Technology (MIT) and the many other types and groups of hackers, there is a ‘cultural taboo against malicious behavior’ (Williams 2002, 178). Even the world-famous hacker Kevin Mitnick, who has been unfairly labeled as a ‘dark-side hacker’ (Hafner & Markoff 1991, 15), never hacked for financial or commercial gain (Mitnick & Simon 2011; Hollinger 2001, 79; Coleman & Golub 2008, 266).

It is not surprising then that the outlawing and demonization of hacking inflamed rather than suppressed the activities of hackers. After his arrest in 1986, a hacker who went by the pseudonym of The Mentor wrote a hacker manifesto that was published in Phrack, a magazine for hackers, and became a rallying call for the community.13 The so-called ‘hacker crackdown’ in 1990 (also known as Operation Sun Devil, where U.S. state and federal law enforcement agencies attempted to shut down rogue bulletin boards run by hackers that were allegedly “trading in stolen long distance telephone access codes and credit card numbers”) (Hollinger 2001, 79) had the unintended effect of spurring the formation of the Electronic Frontier Foundation, an organization of digital rights advocates (Sterling 1992, 12). Similarly, the suicide of a well-known hacker, Aaron Swartz, who at the time of his death was being prosecuted by the US Justice Department for acts of hacktivism, has spurred a campaign to finally reform problematic and excessively harsh US anti-hacking statutes that have been in force for decades.14

It may be argued that Levy’s book, which is considered by some to be the definitive account of hacker culture and its early history, was a response of the hacker community (with the assistance of a sympathetic journalist) to counteract the negative portrayal of hackers in the mass media and to set the record straight about the true meaning of hacking (Levy 2010, 456-457, 464; Sterling 1992, 57, 59; Coleman & Golub 2008, 255). Through Levy’s book and most especially his distillation of the hacker ethic, hackers were able to affirm their values and establish a sense of identity and community (Williams 2002, 177-178). According to Jordan and Taylor,

 

Rather than hackers learning the tenets of the hacker ethic, as seminally defined by Steven Levy, they negotiate a common understanding of the meaning of hacking of which the hacker ethic provides a ready articulation. Many see the hacker ethic as a foundation of the hacker community.
— Jordan & Taylor 1998

To illustrate the importance of Levy’s book as a statement for and about hacker culture, the well-known German hacker group Chaos Computer Club uses the hacker ethic as its own standard of behavior (with a few additions).15

Technologies of hacking

Hackers do not only practice and live out their social norms and values, but the latter are embodied and upheld in the technologies and technical codes that hackers make and use. This is expected since hackers possess the technical means and expertise to route around, deflect or defeat the legal and extra-legal rules that challenge or undermine their norms. Sterling notes, “Devices, laws, or systems that forbid access, and the free spread of knowledge, are provocations that any free and self-respecting hacker should relentlessly attack” (1992, 66). The resort of hackers to technological workarounds is another reason why anti-hacking laws have not been very successful in deterring hacking activities.

Despite the legal prohibition against different forms of hacking, there is a whole arsenal of tools and techniques that are available to hackers for breaking and making things. There is not enough space in this paper to discuss in detail all of these hacker tools, but the following are some technologies that clearly manifest and advance hacker norms. Free and open source software (FOSS) is a prime example of value-laden hacker technology (Coleman & Golub 2008, 262). FOSS is a type of software program that is covered by a license that allows users and third-party developers the rights to freely run, access, redistribute, and modify the program (especially its source code).16 FOSS such as Linux (computer operating system), Apache (web server software), MySQL (database software), WordPress (content management system), and Android (mobile operating system) are market leaders in their respective sectors and they exert a strong influence on the information technology industry as a whole. The freedoms or rights granted by FOSS licenses advance the ideals of free access to computers and freedom of information, which are also the first tenets of the hacker ethic. What is noteworthy about FOSS and its related licenses is that they too are a convergence of legal rules (copyright and contract law), social norms (hacker values), technical codes (software) and scientific principles (information theory) (Coleman 2009; Benkler 2006, 60). In order to grasp the full meaning and impact of FOSS on society, one must engage with the attendant plurality of rules. Other noteworthy examples of hacking technologies that hackers use with higher socio-political purposes in mind are Pretty Good Privacy (PGP, an encryption program for secret and secure communications) (Coleman & Golub 2008, 259), BackTrack (security auditing software that includes penetration testing of computer systems), Low Orbit Ion Cannon (LOIC, network stress testing software that can also be used to perform denial-of-service attacks), and circumvention tools such as DeCSS (a computer program that can decrypt content that is protected by a technology protection measure).

Technical codes are an important consideration in the governance of a networked society since “technology is not a means to an end for hackers, it is central to their sense of self – making and using technology is how hackers individually create and how they socially make and reproduce themselves” (Coleman & Golub 2008, 271). While technical codes are not themselves norms, they can embody norms and have normative effects. As such, technical codes too are essential to understanding normativity in a networked society.

Science of hacking

The norms and normative effects of hacking tend to be supported and often magnified by scientific principles and theories. Hackers, for instance, can rely on Moore’s Law and the principle of ‘economies of scale’ (Lemley & McGowan 1998, 494) to plan for and develop technologies that are exponentially faster and cheaper, which can receive the widest distribution possible. Being cognizant of Schumpeter’s ‘process of creative destruction’ (Schumpeter 1962) and Christensen’s related ‘theory of disruptive innovation’ (Christensen 2006), hackers, as innovators and early adopters of technology, are in an ideal position to take advantage of these principles and create new technologies or popularize the use of technologies that can potentially challenge or upend established industries. Creative destruction is Schumpeter’s theory that capitalist society is subject to an evolutionary process that “incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one” (Schumpeter 1962, 82). Schumpeter argues that today’s monopolistic industries and oligopolistic actors will naturally and inevitably be destroyed and replaced as a result of competition from new technologies, new goods, new methods of production, or new forms of industrial organization (Schumpeter 1962, 82-83). The revolutionary Apple II personal computer, the widely used Linux open source operating system, the controversial BitTorrent file-sharing protocol, and the ubiquitous World Wide Web are some notable technologies developed by hackers,17 which through the process of creative destruction profoundly changed not just the economic but the legal, social and technological structures of the networked society as well.

Furthermore, because of their proclivity for open standards, resources and platforms that anyone can freely use and build on, hackers can naturally benefit from principles of network theory such as network effects. According to Lemley,

“Network effects” refers to a group of theories clustered around the question whether and to what extent standard economic theory must be altered in cases in which “the utility that a user derives from consumption of a good increases with the number of other agents consuming the good.” (Lemley & McGowan 1998, 483 (citations omitted))

This means that the more people use a technology, the greater the value they receive from it and the less likely they will use another competing technology. A consequence of network effects is a

natural tendency toward de facto standardization, which means everyone using the same system. Because of the strong positive-feedback elements, systems markets are especially prone to ‘tipping,’ which is the tendency of one system to pull away from its rivals in popularity once it has gained an initial edge.
— Lemley & McGowan 1998
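A toy simulation can make the tipping dynamic tangible. The sketch below (Python, with made-up parameters; it is not a model taken from Lemley & McGowan) lets each new adopter favor the system with the larger installed base, so the positive feedback pushes the market toward a de facto standard:

    import random

    def simulate_tipping(new_adopters: int = 10_000, feedback: float = 2.0, seed: int = 1):
        random.seed(seed)
        base_a, base_b = 1, 1  # both systems start with a single user
        for _ in range(new_adopters):
            # The chance of joining A grows superlinearly with A's installed base,
            # standing in for the strong positive-feedback elements described above.
            weight_a, weight_b = base_a ** feedback, base_b ** feedback
            if random.random() < weight_a / (weight_a + weight_b):
                base_a += 1
            else:
                base_b += 1
        return base_a, base_b

    print(simulate_tipping())  # typically ends heavily skewed toward one system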

Network effects and the openness of the Android open source mobile operating system may partly explain how Android became dominant in the smartphone market despite the early lead of Apple’s iPhone and iOS. While Apple’s iOS operating system is proprietary, closed and can only be used on Apple’s own devices, developers are free to use, modify and improve the open source software components of Android, and manufacturers can use Android on their devices subject to certain limitations. The success of Android confirms a view that hackers will have no trouble agreeing with – “open always wins… eventually” (Downes 2009).

Creative destruction and network effects are two of the important scientific principles and theories that influence the networked information society and from which hackers are able to benefit. These principles do not merely remain in the background, quietly establishing the conditions and contexts of action, but, as cognitive statements about observed phenomena in nature and the market, they have strong normative effects in their own right and people tend to conform their behavior to these principles.

Rules, rules everywhere

As illustrated in the case of hacking, a pluralist and rules-based approach can be very useful in describing and analyzing legal problems and normative issues brought about by new or disruptive technologies. Attempts by state and non-state actors to adapt to the changing digital environment or to change people’s behaviors can derive much benefit from knowing how the world works. The workings of the networked society can be framed in infinite ways, but, as explained above, seeing these operations in relation to the presence, action and interaction of rules is extremely helpful in making sense of reality. The formation and implementation of laws must therefore take into account the social, technical and scientific rules that govern a subject area or field. This is necessary because behavior in a technology-mediated and scientifically validated world is not only shaped by laws, but equally by norms, technologies, and natural and social phenomena. These four rules, whether as norms as such or through their normative effects, determine the state and degree of normativity in a networked society.

This paper has enlarged the domain of technology law to cover not just legal rules but also extra-legal rules such as social norms, technical code, and scientific principles. The expanded scope should not be bemoaned but instead embraced as a challenge since there are now more interesting people, things and phenomena which technology lawyers and legal scholars can and ought to study. There is nothing wrong with perceiving the networked society in relation to rules. Other academic fields have no problem seeing the world through their own distinct and widely encompassing disciplinary lenses – for anthropologists it is all about culture, evolutionary biologists focus on the gene, physicists perceive the universe in terms of matter and energy, and information theorists unabashedly see everything as bits. Technology law researchers should not hesitate to say that everything could potentially be about rules. In a world where normative and descriptive rules pervade all aspects of our lives and we have to constantly negotiate all sorts of rules, norms, codes and principles, a pluralist and rules-based approach brings law and technology study much closer to the messy reality that it seeks to understand and explain. Just look around and it is evident that the world is truly normatively complex and full of rules.

 


Notes

1. But see Morozov (2011a; 2011b) for more somber assessments of technology’s role in political change.
2. In relation to social norms, behavioral regularities that lack a normative element are also called “conventions” (see McAdams & Rasmusen 2007, 1576).
3. “law is older than political society, which means that it originates as a set of norms” Posner (2000, 365).
4. Philippine Electronic Commerce Act, s 33(a); Disini, Jr. & Toral 2000, 36-37; Sprinkel 2001, 493-494; Pabico & Chua 2001.
5. See WIPO Copyright Treaty and the WIPO Performances and Phonograms Treaty (together the WIPO Internet Treaties).
6. Other noteworthy examples of hybrid techno-legal rules are those relating to “privacy by design”, which are being advanced in the privacy and data protection regulations of Canada and the European Union (see Cavoukian 2009; see European Commission COM(2010) 245 final/2 and COM(2010) 609 final).
7. http://download.intel.com/museum/Moores_Law/Printed_Materials/Moores_Law... accessed 7 September 2012; see also Ceruzzi (2005, 584).
8. n is the number of circuits, y is the current year and d is the doubling time.
9. The Long Tail is the phenomenon where consumption and production “are increasingly shifting away from a focus on a relatively small number of hits (mainstream products and markets) at the head of the demand curve, and moving towards a huge number of niches in the tail”.
10. Hume’s law, which states that one cannot derive ought from is, is not applicable since the examples do not involve morality but conclusions based on experience and empirical data (see Hume (1739, Book III, Part I, Section 1)).
11. To “hack” is to produce a surprising result through deceptively simple means, which belies the impressive mastery or expertise possessed by an actor who is neither bound nor excluded by the rules of the subject technology or technological system. This is a paraphrasing and refinement of Turkle’s definition of a hack (see Turkle 2005, 208).
12. Directive 2009/24/EC of 23 April 2009 on the legal protection of computer programs [2009] OJ L111/416, art 5(3), 6, and 8.
13. The Mentor, “The Hacker Manifesto” http://www.phrack.org/issues.html?issue=7&id=3 accessed 4 December 2012
14. Electronic Frontier Foundation, “Computer Fraud And Abuse Act Reform” https://www.eff.org/issues/cfaa accessed 4 March 2013; see Olivenbaum (1996).
15. See Chaos Computer Club, "hackerethics" http://www.ccc.de/hackerethics accessed 7 November 2012
16. Free Software Foundation, “What is free software?” http://www.gnu.org/philosophy/free-sw.html accessed 5 December 2012; Open Source Initiative, “The Open Source Definition” http://opensource.org/osd accessed 5 December 2012.
17. Steve Wozniak, Linus Torvalds, Bram Cohen, and Tim Berners-Lee, creators of the Apple II, Linux, BitTorrent and the World Wide Web, respectively, view themselves as hackers and participate in hacker culture (see Levy 2010, 249; see Himanen 2001; see Thompson 2005; see Berners-Lee 2013).


References

  • Anderson, Chris (2007). The Long Tail: How Endless Choice is Creating Unlimited Demand, Random House Business Books
  • Anderson, Chris (2012). Makers: The New Industrial Revolution, Random House Business Books.
  • Axelrod, Robert (1986). “An Evolutionary Approach to Norms”, 80 American Political Science Review, 1095.
  • Benkler, Yochai (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom, Yale University Press.
  • Berman, Paul Schiff (2006). “Global Legal Pluralism”, 80 Southern California Law Review 1155.
  • Berners-Lee, Tim (2013). “Aaron is dead” W3C mailing list <http://lists.w3.org/Archives/Public/www-tag/2013Jan/0017.html> accessed 4 March 2013.
  • Bicchieri, Cristina (2006). The Grammar of Society: The Nature and Dynamics of Social Norms, Cambridge University Press.
  • Black, Julia (2002). “Critical Reflections on Regulation”, 27 Australian Journal of Legal Philosophy 1.
  • Boella, Guido, Leendert van der Torre and Harko Verhagen (2006). “Introduction to normative multiagent systems”, 12 Computation & Mathematical Organization Theory 71.
  • Bowrey, Kathy (2005). Law and Internet Cultures, Cambridge University Press.
  • Brown, Ian (2006). “The Evolution of Anti-Circumvention Law”, 20 International Review of Law, Computers & Technology 239.
  • Callon, Michel (1987). “Society in the Making: The Study of Technology as a Tool for Sociological Analysis” in WE Bijker, TP Hughes and TJ Pinch (eds), The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology, The MIT Press.
  • Castells, Manuel (2001). The Internet Galaxy: Reflections on the Internet, Business, and Society, Oxford University Press.
  • Cavoukian, Ann (2009). “Privacy by Design: The 7 Foundational Principles” <http://www.privacybydesign.ca/content/uploads/2009/08/7foundationalprinc... accessed 4 March 2013.
  • Ceruzzi, Paul E. (2005). “Moore's Law and Technological Determinism”, 46 Technology and Culture 584.
  • Cesare, Kelly (2001). “Prosecuting Computer Virus Authors: The Need for an Adequate and Immediate International Solution”, 14 The Transnational Lawyer 135.
  • Chadwick, Andrew (2006). Internet Politics: States, Citizens, and New Communication Technologies, Oxford University Press.
  • Chaos Computer Club, “hackerethics” <http://www.ccc.de/hackerethics> accessed 7 November 2012.
  • Christensen, Clayton M. (2006). “The Ongoing Process of Building a Theory of Disruption”, 23 The Journal of Product Innovation Management 39.
  • Cohen, Julie E. (2012). Configuring the Networked Self: Law, Code, and the Play of Everyday Practice, Yale University Press.
  • Coleman, Gabriella (2009). “Code is Speech: Legal Tinkering, Expertise, and Protest among Free and Open Source Software Developers”, 24 Cultural Anthropology 420.
  • Coleman, E. Gabriella and Alex Golub (2008). “Hacker Practice: Moral genres and the cultural articulation of liberalism”, 8 Anthropological Theory 255.
  • Cooter, Robert (1996). “Normative Failure Theory of Law”, 82 Cornell Law Review 947.
  • Cooter, Robert D. (2000). “Three Effects of Social Norms on Law: Expression, Deterrence, and Internalization”, 79 Oregon Law Review 1.
  • Directive 2009/24/EC of 23 April 2009 on the legal protection of computer programs [2009] OJ L111/416.
  • Disini, Jr., Jesus M. and Janette C. Toral (2000). Annotations on the Electronic Commerce Act and its Implementing Regulations, Philexport.
  • Dizon, Michael Anthony C. (2011). “Laws and Networks: Legal Pluralism in Information and Communications Technology”, 15 Journal of Internet Law 1.
  • Dizon, Michael Anthony C. (2010). “Participatory democracy and information and communications technology: A legal pluralist perspective”, European Journal of Law and Technology, Vol. 1, Issue 3.
  • Doctorow, Cory (2008). “Amazon reviewers clobber Spore DRM” <http://boingboing.net/2008/09/07/amazon-reviewers-clo.html> accessed 6 December 2012.
  • Dohrenwend, Bruce P. (1959). “Egoism, Altruism, Anomie, and Fatalism: A Conceptual Analysis of Durkheim’s Types”, 24 American Sociological Review 466.
  • Dommering, Egbert (2006). “Regulating Technology: Code is not Law” in E Dommering and L Asscher (eds), Coding Regulations: Essays on the Normative Role of Information Technology, TMC Asser Press.
  • Downes, Larry (2009). The Laws of Disruption: Harnessing the New Forces that Govern Life and Business, Audible.
  • Dyson, George (2012). Turing’s Cathedral: The Origins of the Digital Universe, Random House Audio.
  • Electronic Frontier Foundation, “Computer Fraud And Abuse Act Reform” <https://www.eff.org/issues/cfaa> accessed 4 March 2013.
  • European Commission, “Communication on a comprehensive approach on personal data protection in the European Union” COM(2010) 609 final.
  • European Commission, “Communication on a Digital Agenda for Europe” COM(2010) 245 final/2.
  • Floridi, Luciano (2012). “Norms as Informational Agents and the Problem of their Design”, SCRIPT Conference: Law and Transformation, Edinburgh, June 2012.
  • Free Software Foundation, “What is free software?” <http://www.gnu.org/philosophy/free-sw.html> accessed 5 December 2012.
  • Galligan, D.J. (2007). Law in Modern Society, Oxford University Press.
  • Geertz, Clifford (1983). Local Knowledge: Further Essays in Interpretative Anthropology, Basic Books, Inc.
  • Gibbs, Jack P. (1981). Norms, Deviance, and Social Control: Conceptual Matters, Elsevier.
  • Gibbs, Jack P. (1965). “Norms: The Problem of Definition and Classification”, 70 American Journal of Sociology 586.
  • Gibbs, Jack P. (1966). “The Sociology of Law and Normative Phenomena”, 31 American Sociological Review 315.
  • Giddens, Anthony (2009). Sociology, 6th edn, Polity Press.
  • Giddens, Anthony (1984). The Constitution of Society: Outline of the Theory of Structuration, Polity Press.
  • Griffiths, Anne (2002). “Legal Pluralism” in R Banakar and M Travers (eds), An Introduction to Law and Social Theory, Hart.
  • Griffiths, John (1986). “What is Legal Pluralism?”, 24 Journal of Legal Pluralism & Unofficial Law 1.
  • Grossman, Lev (2000). “Attack of the Love Bug”, TIME (15 May 2000).
  • Hafner, Katie and John Markoff (1991). Cyberpunk: Outlaws and Hackers of the Computer Frontier, Corgi Books.
  • Hammond, Martin L. (2004). “Moore's Law: The First 70 Years”, Semiconductor International (1 April 2004).
  • Hampson, Noah C.N. (2012). “Hacktivism: A New Breed of Protest in a Networked World”, 35 Boston College International and Comparative Law Review 511.
  • Hechter, Michael and Karl-Dieter Opp (eds) (2001). Social Norms, Russell Sage Foundation.
  • Hildebrandt, Mireille (2008). “A Vision of Ambient Law” in R. Brownsword and K. Yeung (eds), Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes, Hart Publishing.
  • Himanen, Pekka (2001). The Hacker Ethic and the Spirit of the Information Age, Secker & Warburg.
  • Hollinger, Richard C. (2001). “Computer Crime” in David Luckenbill and Denis Peck (eds), Encyclopedia of Crime and Juvenile Delinquency (Vol. II), Taylor and Francis.
  • Hollinger, Richard C. (1991). “Hackers: Computer Heroes or Electronic Highwaymen”, 21 Computers & Society 6.
  • Hoppe, Robert (1999). “Policy analysis, science and politics: from ‘speaking truth to power’ to ‘making sense together’”, 26 Science and Public Policy 201.
  • Howard, Philip N. and others (2011). “Opening Closed Regimes: What Was the Role of Social Media During the Arab Spring?” <http://pitpi.org/index.php/2011/09/11/opening-closed-regimeswhat-was-the... accessed 7 December 2012.
  • Hume, David (1739). A Treatise of Human Nature <http://www.gutenberg.org/files/4705/4705h/4705-h.htm> accessed 7 March 2013.
  • Hutchinson, Lee (2012). “Ubisoft backtracks on PC DRM, citing customer feedback”, Ars Technica <http://arstechnica.com/gaming/2012/09/ubisoft-backtracks-on-pc-drm-citin... accessed 6 December 2012.
  • Intel, “Moore’s Law” <http://download.intel.com/museum/Moores_Law/Printed_Materials/Moores_Law... accessed 7 September 2012.
  • Jasanoff, Sheila (1996). “Beyond Epistemology: Relativism and Engagement in the Politics of Science”, 26 Social Studies of Science 393.
  • Jasanoff, Sheila (1991). “What Judges Should Know About the Sociology of Science”, 32 Jurimetrics 345.
  • Jordan, Tim and Paul Taylor (1998). “A sociology of hackers”, 46 The Sociological Review 757, 774-775.
  • Karagiannopoulos, Vasileios (2012). “China and the Internet: Expanding on Lessig’s Regulation Nightmares”, 9:2 SCRIPTed 150 <http://script-ed.org/?p=478> accessed 2 October 2012.
  • Latour, Bruno (2010). The Making of Law: An Ethnography of the Conseil D’Etat, Polity Press.
  • Leenes, Ronald (2011). “Framing Techno-Regulation: An Exploration of State and Non-state Regulation by Technology”, 5 Legisprudence 143.
  • Lemley, Mark A. and David McGowan (1998). “Legal Implications of Network Economic Effects”, 86 California Law Review 479.
  • Lessig, Lawrence (2006). Code: version 2.0, Basic Books.
  • Lessig, Lawrence (1995). “Social Meaning and Social Norms”, 144 University of Pennsylvania Law Review 2181.
  • Levy, Steven (2010). Hackers: Heroes of the Computer Revolution, O’Reilly Media, Inc.
  • Ludlow, Peter (2010). “Wikileaks and Hacktivist Culture”,The Nation (New York, 4 October 2010).
  • MacKenzie, Donald and Taylor Spears (2012). “‘The Formula That Killed Wall Street’?: The Gaussian Copula and the Material Cultures of Modelling” <http://www.sps.ed.ac.uk/__data/assets/pdf_file/0003/84243/Gaussian14.pdf> accessed 7 December 2012.
  • McAdams, Richard H., and Eric B. Rasmusen (2007). “Norms and the Law” in A Polinsky and S Shavell (eds), Handbook of Law and Economics, Volume 2, Elsevier.
  • McCullagh, Declan and Milana Homsi (2005). “Leave DRM Alone: A Survey of Legislative Proposals Relating to Digital Rights Management Technology and Their Problems”, Michigan State Law Review 317.
  • Merry, Sally Engle (1988). “Legal Pluralism”, 22 Law & Society Review 869.
  • Michaels, Ralf (2005). “The Re-State-Ment of Non-State Law: The State, Choice of Law, and the Challenge from Global Legal Pluralism”, 51 The Wayne Law Review 1209.
  • Mifsud Bonnici, Jeanne Pia (2007). Self-Regulation in Cyberspace, TMC Asser Press.
  • Mitnick, Kevin and William L. Simon (2011). Ghost in the Wires: My Adventures as the World’s Most Wanted Hacker, Blackstone Audio.
  • Morgan, Bronwen and Karen Yeung (2007). An Introduction to Law and Regulation: Text and Materials, Cambridge University Press 2007.
  • Morozov, Evgeny (2011a). “Facebook and Twitter are just places revolutionaries go”, The Guardian (7 March 2011) <http://www.guardian.co.uk/commentisfree/2011/mar/07/facebook-twitterrevo... accessed 5 March 2013.
  • Morozov, Evgeny (2011b). The Net Delusion: How Not to Liberate the World, Penguin Books.
  • Morris, Richard T. (1956). “A Typology of Norms”, 21 American Sociological Review 610.
  • Murray, Andrew D. (2007). The Regulation of Cyberspace: Control in the Online Environment, Routledge-Cavendish.
  • Murray, Andrew and Colin Scott (2002). “Controlling the New Media: Hybrid Responses to New Forms of Power”, 65 The Modern Law Review 491.
  • Olivenbaum, Joseph M. (1996). “<CTRL><ALT><DEL>: Rethinking Federal Computer Crime Legislation”, 27 Seton Hall Law Review 574.
  • Open Source Initiative, “The Open Source Definition” <http://opensource.org/osd> accessed 5 December 2012.
  • Opp, K.D. (2001). “Norms” in N Smelser and P Baltes (eds), International Encyclopedia of the Social & Behavioral Sciences, Elsevier.
  • Pabico, Alecks P. and Yvonne T. Chua (2001). “Cyberspace has become the playground of hightech criminals”<http://pcij.org/imag/Online/cybercrimes.html> accessed 2 October 2012.
  • Pertierra, Raul (2010). “The Anthropology of New Media in the Philippines”, Institute of Philippine Culture 2010. Philippine Electronic Commerce Act, adopted 14 June 2000.
  • Pinch, Trevor J. and Wiebe E. Bijker (1984). “The Social Construction of Facts and Artefacts: or How the Sociology of Science and the Sociology of Technology might Benefit Each Other” 14 Social Studies of Science 399.
  • Polanyi, Michael (2000). “The Republic of Science: Its Political and Economic Theory”, 38 Minerva 1.
  • Posner, Eric A. (2000). Law and Social Norms, Harvard University Press.
  • Posner, Richard A. (1997). “Social Norms and the Law: An Economic Approach”, 87 AEA Papers and Proceedings 365.
  • Reidenberg, Joel R. (1997). “Lex Informatica: The Formulation of Information Policy Rules Through Technology”, 76 Texas Law Review 553.
  • Riesenfeld, Dana (2010). The Rei(g)n of “Rule”, Ontos verlag.
  • Rogers, Thomas and Andrew Szamosszegi (1986). “Fair use in the U.S. Economy: Economic Contribution of Industries Relying on Fair Use”, Computer & Communications Industry Association.
  • Ruby, Jane E. (1986). “The Origins of Scientific ‘Law’”, 47 Journal of the History of Ideas 341.
  • Salmon, Felix (2009). “Recipe for Disaster: The Formula That Killed Wall Street”, Wired Magazine Issue 17.03.
  • Savarimuthu, Bastin Tony Roy and Stephen Cranefield (2011). “Norm creation, spreading and emergence: A survey of simulation models of norms in multi-agent systems”, 7 Multiagent and Grid Systems 21.
  • Schumpeter, Joseph A. (1962). “The Process of Creative Destruction” in Capitalism, Socialism and Democracy, Harper Torchbooks.
  • Skibell, Reid (2003). “Cybercrimes & Misdemeanors: A Reevaluation of the Computer Fraud and Abuse Act”, 18 Berkeley Technology Law Journal 909.
  • Sprinkel, Shannon C. (2001). “Global Internet Regulation: The Residual Effects of the ‘ILOVEYOU’ Computer Virus and the Draft Convention on Cyber-Crime”, 25 Suffolk Transnational Law Review 491.
  • Stepanova, Ekaterina (2012). “The Role of Information Communication Technologies in the ‘Arab Spring’” <http://www.gwu.edu/~ieresgwu/assets/docs/ponars/pepm_159.pdf> accessed 7 December 2012.
  • Sterling, Bruce (1992). The Hacker Crackdown, Bantam Books.
  • Sunstein, Cass R. (1996). “Social Norms and Social Roles”, 96 Columbia Law Review 903.
  • The Mentor, “The Hacker Manifesto” <http://www.phrack.org/issues.html?issue=7&id=3> accessed 4 December 2012.
  • Therborn, Goran (2002). “Back to Norms! On the Scope and Dynamics of Norms and Normative Action”, 50 Current Sociology 863.
  • Thompson, Clive (2005). “The BitTorrent Effect”, Wired Magazine Issue 13.01.
  • Turkle, Sherry (2005). The Second Self: Computers and the Human Spirit (Twentieth anniversary edition), the MIT Press.
  • Van der Hof, Simone and Kees Stuurman (2006). “Code as Law” in BJ Koops and others (eds), Starting Points for ICT Regulation: Deconstructing Prevalent Policy One-Liners, TMC Asser Press.
  • Von Benda-Beckmann, Franz (2002). “Who’s Afraid of Legal Pluralism”, 47 Journal of Legal Pluralism 37.
  • Von Benda-Beckmann, Franz and Keebet von Benda-Beckmann (2006). “The Dynamics of Change and Continuity in Plural Legal Orders”, 53-54 Journal of Legal Pluralism 1, 14.
  • Williams, Sam (2002). Free As In Freedom: Richard Stallman’s Crusade for Free Software, O'Reilly.
  • WIPO Copyright Treaty, adopted on 20 December 1996.
  • WIPO Performances and Phonograms Treaty, adopted on 20 December 1996.

 

15.08
19:00
Dinner – Rampenplan
15.08
20:00
XLterrestrials
Digital
Oculus
Ripped

Hello CiTiZENs, we are the psychomedia analysts from the XLterrestrials, an arts and praxis group currently based in the last remaining tunnels of Btropolis, still under siege, stocking up on brainfood items, counter-cultural maps, re-embodied detournement and freshly-picked perspectives to make it through to the next autonomous zone alive and intact.

We were once an extremely naive project which landed in the midst of the Silicon Valley screams – the Cali-technotopian 90s – in the belly of the IT beast, and we began to hatch an aberration species of insurrectionist events to keep awake the pre-mediated consciousness, like little pre-Apple organic community pods, keeping a record of living organisms bobbing up and down in the shitstorm, before the next wave of gadgeted colonies could entirely erase the collective memory of rent control, fair wages, job security, health care, world peace, anarchist horizons and fleshy fun in the physical world.

We initially called it “The Transmigration Of Cinema” taking head-on The Spectacle 2.0.


Living in the future means we can read instant missives from teens living under siege instead of diaries years later, yet cruelty persists.
— tweeted by Astra Taylor, documentary maker and author of The People’s Platform.


Here we are now, living a strange fruit future in the digital settlements, where Some people get access to all the information, some to All You Can’t Eat, and others pile high their profits and power and virtual orgy-time scraped from your hourly un-earthed and uncompensated labor to run the world further into the deepest glory-hole ditch. And others still are getting their asses droned into the stone age like the sadist’s best and realest and spectacular-est video game ever. And we all watch!

A sharp Internet Combatant like Astra Taylor may be even too polite! Reformers beware! The digital occupation is showing all the signs of an absolute catastrophe, unbound silicon caliphates (like Amazon’s Bezos, Google’s Eric Schmidt, and their media mouthpieces like Tim O’Reilly and Wired’s Kevin Kelly), a fundamentalist neo-liberal feeding frenzy, technotopian controller extremism holding all the techno-cards… or so it seems. She provides a few glimmers of hope in correcting the balance of empowerments, fighting bravely the feminist and multi-culturalist fronts, but w/ pacifist-like arms tied behind her back, pointing mainly w/ nods to all the ways the civic-minded might strengthen the institutions that can help redeem our creativity and our world from the gaping maw of advertisers and winner-take-all monopole black holes.

Let’s not forget that all these communications and tech tools exist in a predatory corporate-capitalist environment; practically nothing will be used by the masses as any passionate citizen would desire. They’ll more often than not be turned upside down on their heads and implemented to suck you dry. Might we need to focus on changing the operating system before powerful new tech can be applied?!!


But just last night as we were surfing the net like the frozen cat being hoisted toward the militarized and interconnected stars of imperial horizons, unable to focus in the flood of information, and all the images of flattened ancient and sacred cities, the fresh-splattered blood of disemboweled children of the net, and the broken-necked indigenous bodies hanging from all the accelerated money trees…  we stumbled upon a little dusty cultural treasure, a fire really. Yeah, the dust caught fire !

On the Youtubes, an old Killing Joke song at the peak of punk dissent and self-discovering euphoria , a song called “Pssyche”, a b-side of “Wardance” from around 1980. Fortunately, it was not another mesmerizing eye-candy video, just an image of the 7″ single:  Gene Kelly in a tux and top hat dancing over a body of some previous world war corpses. And just that raw thrashing sound of a young band at a pinnacle of their expressive fury ( no doubt fueled by Thatcherism / Snatcherism ). An inspired music project fresh out of the nest, before the popdom factories even had a chance to boil them down into little shrunken product dolls.


You’re alone in the pack,
you’re feeling like you wanna go home
You’re feeling life’s finished but you keep on going,
the reason is there
You won’t find it till you’ve been and gone
because you’re living a hoax!
Someone’s got you sussed!
Dull your brain or seek inspiration
You feel illusion and then you finally say Transfer
Transform a machine to play with your head
So you can stand back and watch
or take part and learn.
If you don’t know the game,
then you’re still part of it...


Jaz Coleman / Killing Joke, almost a prescient and situationist nudge towards a hacker manifesto (1980). Listen at https://www.youtube.com/watch?v=-RnYhOxz_Zs


It was just enough to shake us from the surfing stupor. We were back on our feet.


“As Lacan used to say about Big Data, it’s far more dangerous for man than the atom bomb,” tweeted Evgeny Morozov about a year ago, the feisty take-no-technotopian-prisoners author of The Net Delusion and To Save Everything, Click Here.


We are lost without, at the very least, some punk attitude to these all-seeing, all-sucking, all-managing operating systems now. We can’t meander through the endless “Oil of the 21st century”* and “communicative capitalism”*, through the poppy fields and/or bottomless pits of the disembodied showtime.

We must rise up before the soma poison hits the central nervous system of the collective consciousness. Or else we’ll all lie paralyzed in the web of corporate endgames, and the motherload of transnational black spider data-reaches coming to devour us and our neighborhoods, one by one in our cocooned and misconnected cubicles.


XLterrestrials present CiTiZEN KINO #40 : All You Can’t Eat

All our psychomedia + psychoactive agents have got to offer currently is a desperate but blazing tactical maneuver: going back in mediated history times to re-route the basic Spectacle-code!

Taking cues from Charles Pathé’s day of the newsreels, that 2nd door of the epic new cinema tools, the one that didn’t just lead to 2-hour dreamy avatar-fantasia and phantasm/escapism, but to tactical engagements w/ current events. Enter here: a hyper-linked, sub-poppied tunnel to the days of the dark room public sphere gatherings.

To hack it! To ingest it! To detourn and reconstruct it! To analyze, and plot ahead! Absurd maybe yes, but consider this: here was the moment when the Eyewars* began, and it could have evolved into some platform for open-sourced, collective intelligence; instead we got the short end of the stick, a downloaded dangling feed from the overlords and infowar profiteers. Like a facepluck, ripped out from a whole head, mined and extracted from the whole communing social body. So, can we go back and re-route the game for true citizen upgrades, from its original pre-militarized mass media sources?!! Can we still slip out the back door of this burning digital theater?!!

15.08
20:00
Apatris – ΑΠΑΤΡΙΣ
15.08
23:00
Paralyzing Device – Paralyzing Device
16.08
11:00
Brunch
16.08
12:00
Signal to Noise – Tincuta Heinzel and Lasse Scherffig
Tincuta Heinzel, Lasse Scherffig, Ioana Macrea-Toma
Reference and
Interference

References

In Ceausescu’s Romania, where media was not only censored but also strictly limited (there were only four hours of TV broadcasting daily, dominated by propagandist ceremonies, and no transmissions of football world championships), people found tactics to counteract the opacity of information and the lack of entertainment. They built antennas to catch the TV broadcasts of neighbouring countries (Hungarian, Bulgarian, Serbian or Soviet ones) or tuned their radios to foreign frequencies. As interesting as these technical bricolages were, no less striking were the strategic actions of the two political camps of the Cold War era to control and reinforce their ideological positions. While the “West” sought to influence the opinion of the Eastern bloc’s inhabitants by transmitting “un-made-up” news and trying to establish its cultural productions as an alternative norm, Eastern authorities answered by technically interfering with Western radio broadcasting and by allowing mass cultural movements that mixed national ideological content with Westernized cultural forms (as was the case of Cenaclul Flacara in Romania).

The communist context exhibits, from this point of view, all the faces of an authoritarian system: it employs censorship and surveillance, as well as offensive and defensive mechanisms, and it balances between internal control and external defense.

On the other hand, though liberal in its statements, the Western media knew its own torments. The war in Vietnam, the European movements of 1968, and the Watergate scandal tested the foundations of press freedom. At the same time, the overall mediatic disclosures were about to install what Guy Debord had called the “society of the spectacle”.


Since the fluidity of radio (or “Hertzian”) space remains relevant today, in the age of wireless communication and coded information, the Cold War context is far from being historically dated. Offering a fresh perspective on the old debate about “East” and “West”, our project investigates the arsenals of producing and reiterating the “Other”.

“Signal to Noise”

“Signal to Noise” is part of the project “Repertories of (in)discreetness” – an art research project that has its starting point in the archives of Radio Free Europe (RFE) from the Open Society Archives in Budapest, considered one of the most important archives of the Cold War period. The project questions the act and mechanisms of archiving the “Other”, with a focus on the European East. It discusses the ways in which information is collected and transferred, the ways in which the East has gained an epistemic body through refraction, as well as the ways in which this body is reiterated today.

“Signal to Noise” is a radio installation dealing with the concreteness of ideological discourses and the imaginary of the “Other”. The minimal, non-visual set-up is composed of two radio stations using the same frequency, which intersect and neutralize each other along a virtual line crossing the space of the exhibition. These two voices consist of archived material of the Cold War era from (Western) Radio Free Europe and (Socialist) Radio Romania. Carrying mobile radios, listeners move through the space in which both broadcasters jam each other and with every motion enact an ever-changing soundscape. As this soundscape is formed by exploring the interference of both radio voices, listeners are caught between two Logos (two ideological positions) where, beyond words, the ideological and media wars take on embodied form. The result is a spontaneous choreography, re-enacting the “fight over hearts and minds” at the border between East and West and recalling the technological and ideological mechanisms used during the Cold War, when Radio Free Europe was meant to counterbalance socialist propaganda in the countries of Eastern Europe – and in consequence was jammed by Eastern authorities.

Being modular, “Signal to Noise” can be set up indoors or outdoors, and its spatial composition can be varied by using different broadcasting ranges. It may thus function as a temporary intervention in public space, interrupting regular radio programmes with its two interfering ideologies. Participants can experience it using any standard radio receiver.

“Signal to Noise” is also adaptive. By using different media content, the installation can address specific political contexts.

 

“Repertories of (in)discreetness”

In his last – and tragic – essay on the historian’s craft, Marc Bloch praised the societies that endeavor to organize their knowledge, as opposed to those that let themselves be explored after their demise by mere accidental preservation, or even worse, by rumors.

One of the distinctive features of modern Western encyclopedias is to combine utopianism and anxiety over the creation of a “time capsule” archive, systematically embodying the relevant knowledge of a certain age. Oriented towards the present by offering a platform of inter-subjective knowledge to a self-articulated “public”, such universalist constructions also aimed at the future. And the future was supposed to be provided with solid, architecturally organized information.

It is the aim of our project to reverse this process by re(creating), through an artistic aggregate, the ephemeral dimension of these over-arching ideas by re-enacting their documentary corpus and questioning their after-life. The vantage point of our exploration is the extreme case of Radio Free Europe’s archive, which is the result of an extensive operation of “compiling” the world beyond the Iron Curtain while trying to direct its course through missionary broadcasting. The issue we would like to raise is that of the posterity of what was conceived by the officials of the day as the “fight over hearts and minds”1; what is the relationship between the epistemic body that we were given then, as captive historical subjects of the East, and now, as agents confronted with an epistemic documentary corpus? We are therefore investigating not just the interaction with the past, but the possibilities of extricating ourselves from its conceptual-political frame which was (in)forming while trying to control. In doing so, our project places the accent on the process of archiving itself and thus questions the production of knowledge.

Radio Free Europe is considered unique in the annals of international broadcasting, acting as a surrogate domestic broadcaster for the nations under Communism. It also relied on local official media and informal news in order to broadcast what was considered “objective” information. It performed journalism that aimed to transcend politics while anxiously developing a large research apparatus for systematizing knowledge about the subjects who were, at the same time, its audience. Being in a circular relationship with propaganda (decoding it and afterwards monitoring its reactions within an ongoing cycle), it was also giving shape to an inaccessible public, instantiated as the “East”. Acting as an objective disseminator, it was nevertheless embedded within a theatre of conflicting and yet complementary, mutating knowledge, thus epitomizing the concentric positioning of encyclopedias within layered landscapes of data.

Due to their aim of outlining an exhaustive portrait of the world behind the Iron Curtain, the Radio Free Europe archives give rise to a series of questions. What did the archives not capture? And, if something was indeed captured, how was it transformed through archiving and broadcasting? What parts of this composite portrait sketched by Radio Free Europe still survive today? And is this portrait only a mirror image resulting from the media war between East and West? By raising these questions, our project looks to divert and to put into a sensible perspective the act of collecting, organizing and using information, in order to question the nature of the information itself. Hoping to give form to such a purpose, the project favours situations over concrete artworks – situations in which the visitors will become engaged in the production of knowledge and the use of informational dilemmas. By mixing archived documents with real-time facts and events, the project wants to address media and mediatic constructions of the Other and to interrogate the patterns by which the recorded facts resurface in different contexts and through different media.2


notes

1. Expression used by American propaganda during WWII.
2. The “Repertories of (in)discreetness” project brings together Irina Botea, Jon Dean, Istvan Laszlo, Tincuta Heinzel, Lasse Scherffig, and Ioana Macrea-Toma. The project is scheduled to be presented at Tranzit Bucharest in 2015.

16.08
12:00
16.08
13:00
Bitcoin wants to be money – Kittens Editorial Collective
The Wine and Cheese Appreciation Society of Greater London, Scott Lenney
Bitcoin:
Finally, fair money?

In 2009 Satoshi Nakamoto invented a new electronic or virtual currency called Bitcoin, the design goal of which is to provide an equivalent of cash on the Internet.1 Rather than using banks or credit cards to buy stuff online, a Bitcoin user will install a piece of software, the Bitcoin client, on her computer and send Bitcoin directly to other users under a pseudonym.2 One simply enters into the software the pseudonym of the person one wishes to send Bitcoin to and the amount to send, and the transaction will be transmitted through a peer-to-peer network.3 What specifically one can get with Bitcoin is somewhat limited to the few hundred websites which accept them, but includes other currencies, web hosting, server hosting, web design, DVDs, coffee in some coffee shops, and classified adverts, as well as the ability to use online gambling sites despite being a US citizen and to donate to Wikileaks.4 However, what allowed Bitcoin to break into the mainstream – if only for a short period of time – is the Craigslist-style website “Silk Road”, which allows anyone to trade Bitcoin for prohibited drugs.5
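To make the shape of such a payment concrete, here is a purely illustrative Python sketch; the field names, the example pseudonyms and the broadcast helper are invented for the illustration and are not the actual Bitcoin wire format or client interface.

    # A payment is essentially a small message naming a sender pseudonym,
    # a recipient pseudonym and an amount, which the client then hands to
    # its peers for relaying across the network.
    payment = {
        "from": "1AliceExamplePseudonym",   # hypothetical pseudonymous address
        "to": "1BobExamplePseudonym",       # hypothetical pseudonymous address
        "amount": 0.5,                      # denominated in BTC
    }

    def broadcast(message, peers):
        # Each "peer" here is just a callable standing in for a network connection.
        for peer in peers:
            peer(message)

    broadcast(payment, peers=[print])       # one stand-in peer that simply prints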

On February 11th, 1 BTC exchanged for 5.85 USD. So far 8.31 million BTC have been issued, 0.3 million BTC were used in 8,600 transactions in the last 24 hours and about 800 Bitcoin clients were connected to the network. Thus, it is not only some idea or proposal of a new payment system but an idea put into practice, although its volume is still somewhat short of the New York Stock Exchange.

The three features of cash which Bitcoin tries to emulate are anonymity, directness and lack of transaction costs, all of which are wanting in the dominant way of going about e-commerce using credit or debit cards or bank transfers. It is purely peer-to-peer just like cash is peer-to-peer. So far, so general.

But what makes the project so ambitious is its attempt to provide a new currency. Bitcoin are not a way to move Euros, Pounds or Dollars around; they are meant as a new money in their own right: they are denominated as BTC, not GBP. In fact, Bitcoin are even meant as a money based on different principles than modern credit monies. Most prominently, there is no “trusted third party”, no central bank, in the Bitcoin economy, and there is a limited supply of 21 million BTC ever. As a result, Bitcoin appeals to libertarians who appreciate the free market but are sceptical of the state and in particular of state intervention in the market.

Because Bitcoin attempts to accomplish something well-known – money – using a different approach, it allows for a fresh perspective on this ordinary thing, money. Since the Bitcoin project chose to avoid a trusted third party in its construction, it needs to solve several ‘technical’ problems or issues to make it viable as money. Hence, it points to the social requirements and properties which money has to have.

In the first part of this text we want to both explain how Bitcoin works using as little technical jargon as possible and also show what Bitcoin teaches about a society where free and equal exchange is the dominant form of economic interaction. In the second part we then want to criticise Bitcoin’s implicit position on credit money. From this also follows a critique of central tenets of the libertarian ideology.

The first thing one can learn from Bitcoin is that the characterisation of the free market economy by the (libertarian) Bitcoin adherents (and most other people) is incorrect; namely, that exchange implies:

Mutual benefit, cooperation and harmony.

Indeed, at first sight, an economy based on free and equal exchange might seem like a rather harmonious endeavour. People produce stuff in a division of labour such that both the coffee producer and the shoemaker get both shoes and coffee; and this coffee and those shoes reach their consumers by way of money. The activity of producers is to their mutual benefit or even to the benefit of all members of society. In the words of one Bitcoin partisan:

“If we’re both self-interested rational creatures and if I offer you my X for your Y and you accept the trade then, necessarily, I value your Y more than my X and you value my X more than your Y. By voluntarily trading we each come away with something we find more valuable, at that time, than what we originally had. We are both better off. That’s not exploitative. That’s cooperative.”6

In fact, it is consensus in the economic mainstream that cooperation requires money and the Bitcoin community does not deviate from this position: “A community is defined by the cooperation of its participants, and efficient cooperation requires a medium of exchange (money) …”7 Hence, with their perspective on markets, the Bitcoin community agrees with the consensus among modern economists: free and equal exchange is cooperation and money is a means to facilitate mutual accommodation. They paint an idyllic picture of the ‘free market’ whose ills should be attributed to misguided state intervention and sometimes misguided interventions of banks and their monopolies.8

Cash

One such state intervention is the provision of money and here lies one of Bitcoin’s main features: its function does not rely on a trusted third-party or even a state to issue and maintain it. Instead, Bitcoin is directly peer-to-peer not only in its handling of money – like cash – but also in the creation and maintenance of it, as if there was no Bank of England but there was a protocol by which all people engaged in the British economy collectively printed Sterling and watched over its distribution. For such a system to accomplish this, some ‘technical’ challenges have to be resolved, some of which are trivial, some of which are not. For example, money needs to be divisible, e.g., two five pound notes must be the same as one ten pound note, and each token of money must be as good as another, e.g., it must not make a difference which ten pound note one holds. These features are trivial to accomplish when dealing with a bunch of numbers on computers, however, two qualities of money present themselves as non-trivial.

Digital signatures: guarantors of mutual harm

Transfer of ownership of money is so obvious when dealing with cash that it is almost not worth mentioning or thinking about. If Alice hands a tenner to Bob, then Bob has the tenner and not Alice. After an exchange (or robbery, for that matter) it is evident who holds the money and who does not. After payment there is no way for Alice to claim she did not pay Bob, because she did. Neither can Bob transfer the tenner to his wallet without Alice’s consent except by force. When dealing with bank transfers etc., it is the banks who enforce this relationship, and in the last instance it is the police.

One cannot take this for granted online. A banknote is now represented by nothing but a number or a string of bits. For example, let us say 0xABCD represents 1 BTC (Bitcoin).9 One can copy it easily and it is impossible to prove that one does not have this string stored anywhere, i.e., that one does not have it any more. Furthermore, once Bob has seen Alice’s note he can simply copy it. Transfer is tricky: how do I make sure you really give your Bitcoin to me?10

This is the first issue virtual currencies have to address and indeed it is addressed in the Bitcoin network.

To prove that Alice really gave 0xABCD to Bob, she digitally signs a contract stating that this string now belongs to Bob and not herself. A digital signature is also nothing more than a string or big number. However, this string/number has special cryptographic/mathematical properties which make it – as far as we can ascertain – impossible to forge. Hence, just as people normally transfer ownership, say a title to a piece of land, money in the Bitcoin network has its ownership transferred by digitally signing contracts. It is not the note that counts but a contract stating who owns the note. This problem and its solution – digital signatures – are by now so well established that they hardly receive any attention, even in the Bitcoin design document.11
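How such a signed transfer works can be sketched in a few lines of Python. This is only an illustration of the principle using the third-party cryptography package; the contract string and the key handling are invented for the example, and this is not Bitcoin’s actual transaction format.

    # Alice signs a statement handing the note 0xABCD to Bob. Anyone holding her
    # public key can check the signature; nobody (as far as we know) can forge it.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    alice_private = ec.generate_private_key(ec.SECP256K1())
    alice_public = alice_private.public_key()

    contract = b"note 0xABCD now belongs to Bob"
    signature = alice_private.sign(contract, ec.ECDSA(hashes.SHA256()))

    # verify() raises InvalidSignature if the contract or signature was tampered with.
    alice_public.verify(signature, contract, ec.ECDSA(hashes.SHA256()))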

Yet, the question of who owns which Bitcoin in itself starts to problematise the idea of harmonious cooperation which people hold about the economy and about Bitcoin. It indicates that in a Bitcoin transaction, or any act of exchange for that matter, it is not enough that Alice, who makes coffee, wants shoes made by Bob and vice versa. If things were as simple as that, they would discuss how many shoes and how much coffee were needed, produce it and hand it over. Everybody happy.

Instead, what Alice does is to exchange her stuff for Bob’s stuff. She uses her coffee as a lever to get access to Bob’s stuff. Bob, on the other hand, uses his shoes as leverage against Alice. Their respective products are their means to get access to the products they actually want to consume. That is, they produce their products not to fulfil their own or somebody else’s need, but to sell their products such that they can buy what they need. When Alice buys shoes off Bob, she uses her money as leverage to make Bob give her his shoes; in other words, she uses his dependency on money to get his shoes. Vice versa, Bob uses Alice’s dependence on shoes to make her give him money.12 Hence, it only makes sense for each to want more of the other’s for less of their own, which means depriving the other of her means: what I do not need immediately is still good for future trades. At the same time, the logic of exchange is that one wants to keep as much of one’s own means as possible: buy cheap, sell dear. In other words, they are not expressing this harmonious division of labour for the mutual benefit at all, but seeking to gain an advantage in exchange, because they have to. It is not simply that one seeks an advantage for oneself but that one party’s advantage is the other party’s disadvantage: a low price for shoes means less money for Bob and more product for her money for Alice. This conflict of interest is not suspended in exchange but only mediated: they come to an agreement because they want to, but that does not mean it would not be preferable to just take what they need.13 This relation they have with each other produces an incentive to cheat, rob, steal.14 Under these conditions – a systematic reason to cross each other – answering the question of who holds the tenner is very important.

This systemic production of circumstances where one party’s advantage is the other party’s disadvantage also produces the need for the state’s monopoly on violence. Exchange as the dominant medium of economic interaction on a mass scale is only possible if parties are in general limited to the realm of exchange and cannot simply take what they need and want. The libertarians behind Bitcoin might detest state intervention, but a market economy presupposes it. When Wei Dai describes the online community as “a community where the threat of violence is impotent because violence is impossible, and violence is impossible because its participants cannot be linked to their true names or physical locations”,15 he not only acknowledges that people in the virtual economy have good reasons to harm each other but also that this economy only works because people do not actually engage with each other. Protected by state violence in the physical world, they can engage in the limited realm of the Internet without the fear of violence.

The fact that ‘unbreakable’ digital signatures – or law enforced by the police – are needed to secure such simple transactions as goods being transferred from the producer to the consumer implies a fundamental enmity of interest between the involved parties. If the libertarian picture of the free market as harmonious cooperation for the mutual benefit of all were true, they would not need these signatures to secure it. The Bitcoin construction – their own construction – shows their theory to be wrong.

Against this, one could object that while by and large trade is a harmonious endeavour, there would always be some black sheep in the flock. In that case, however, one would still have to inquire into the relationship between the effort (the police, digital signatures, etc.) and the outcome. The amount of work spent on putting those black sheep in their place demonstrates rather vividly that many more of them would be expected without these countermeasures. Some people go still further and object on a more fundamental level that it is all down to human nature, that it is just how humans are. However, by saying that, one first of all agrees that this society cannot be characterised as harmonious. Secondly, the statement “that’s just how it is” is no explanation, though it claims to be one. At any rate, we have tried to give some arguments above as to why people have good reason to engage with each other the way they do.

Purchasing power

Digital signatures only treat those qualities of Bitcoin which affect the relation between Alice and Bob, but when it comes to money the relation of Alice to the rest of society is of equal importance. That is, the question of how much purchasing power Alice has needs to be answered. When dealing with physical money, Alice cannot use the same banknote to pay two different people. There is no double spending; her spending power is limited to what she owns.

When using virtual currencies with digital signatures, on the other hand, nothing prevents Alice from digitally signing many contracts transferring ownership to different people: it is an operation she does by herself.16 She would sign contracts stating that 0xABCD is now owned by Bob, Charley, Eve, etc.

The key technical innovation of the Bitcoin protocol is that it solves this double spending problem without relying on a central authority. All previous attempts at digital money relied on some sort of central clearing house which would ensure that Alice cannot spend her money more than once. In the Bitcoin network this problem is addressed by making all transactions public.17 Thus, instead of handing the signed contract to Bob, it is published on the network by Alice’s software. Then, the software of some other participant on the network signs that it has seen this contract certifying the transfer of Bitcoin from Alice to Bob. That is, someone acts as a notary, counter-signing and thereby witnessing Alice’s signature. Honest witnesses will only sign the first spending of a Bitcoin but will refuse to sign later attempts to spend the same coin by the same person (unless the coin has arrived in that person’s wallet again through the normal means). They verify that Alice owns the coin she spends. This witness’s signature is again published (all this is handled automatically in the background by the client software).
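As a toy model of this bookkeeping (an illustration of the principle only; the ledger, the helper functions and the example coin are invented here and are not Bitcoin’s actual data structures), in Python:

    # Every transfer is published to a ledger that all participants can read.
    # An honest witness endorses a spend only if the coin's latest published
    # owner is the one trying to spend it.
    ledger = []

    def current_owner(coin, first_owner):
        owner = first_owner
        for entry in ledger:
            if entry["coin"] == coin and entry["from"] == owner:
                owner = entry["to"]
        return owner

    def witness(transfer, first_owner):
        if transfer["from"] != current_owner(transfer["coin"], first_owner):
            return False             # a double spend: refuse to sign it
        ledger.append(transfer)      # endorse it by publishing it
        return True

    # Alice received coin 0xABCD earlier; she can spend it exactly once.
    print(witness({"coin": "0xABCD", "from": "Alice", "to": "Bob"}, "Alice"))      # True
    print(witness({"coin": "0xABCD", "from": "Alice", "to": "Charley"}, "Alice"))  # False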

Yet, Alice could simply collude with Charley and ask Charley to sign all her double spending contracts. She would get a false testimony from a crooked witness. In the Bitcoin network, this is prevented, however, by selecting one witness at random for all transactions at a given moment. Instead of Alice picking a witness, it is randomly assigned. This random choice is organised as a kind of lottery in which participants attempt to win the ability to be the witness for the current time interval. One can increase one’s chances of being selected by investing more computer resources. But to have a decent chance one would need about as much computing power as the rest of the network combined.18 In any case, for Alice and Charley to cheat they would have to win the lottery by investing considerable computational resources, too much to be worthwhile – at least that is the hope. Thus, cheating is considered improbable since honest random witnesses will reject forgeries.
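The lottery itself can be sketched roughly as follows; this is a simplified stand-in (hashing until the result falls below a target, with a deliberately tiny difficulty so the example finishes quickly), not Bitcoin’s actual block format or difficulty adjustment.

    # A candidate witness repeatedly hashes the pending transactions together with
    # a random nonce until the hash falls below the target. More hashing power
    # means more attempts per second and therefore better odds of winning.
    import hashlib, random

    def try_to_win(pending_transactions, difficulty_bits=16):
        target = 2 ** (256 - difficulty_bits)
        while True:
            nonce = random.getrandbits(64)
            digest = hashlib.sha256(f"{pending_transactions}|{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce   # winning grants the right to act as witness this round

    print(try_to_win("Alice pays Bob 1 BTC"))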

But what is a forgery and why is it so bad that so much effort is spent, and computational resources wasted on solving the aforementioned mathematical puzzle, in order to prevent it? On an immediate, individual level a forged banknote behaves no differently from a real one: it can be used to buy stuff and pay bills. In fact, the problem with a forgery is precisely that it is indistinguishable from real money, that it does not make a difference to its users: otherwise people would not accept it. Since it is indistinguishable from real money it functions just as normal money does; more money confronts the same amount of commodities and the value of money might go down.19

So what is this value of money, then? What does it mean? Purchasing power. Recall that Alice and Bob both insist on their right to their own stuff when they engage in exchange and refuse to give up their goods just because somebody needs them. They insist on their exclusive right to dispose over their stuff, on their private property. Under these conditions, money is the only way to get access to each other’s stuff, because money convinces the other side to consent to the transaction. On the basis of private property, the only way to get access to somebody else’s private property is to offer one’s own in exchange. Hence, money counts how much wealth in society one can get access to. Money measures private property as such. Money expresses how much wealth as such one can make use of: not only coffee or shoes but coffee, shoes, buildings, services, labour-power, anything. On the other hand, money counts how much wealth as such my coffee is worth: coffee is not only coffee but a means to get access to all the other commodities on the market: it is exchanged for money such that one can buy stuff with this money. The price of coffee signifies how much thereof. All in all, numbers on my bank statement tell me how much I can afford, the limit of my purchasing power and hence – reversing the perspective – from how much wealth I am excluded.20

Money is power one can carry in one’s pockets; it expresses how much control over land, people, machines, products I have. Thus, a forgery defeats the purpose of money: it turns this limit, this magnitude, into an infinity of possibilities; anything is – in principle – up for grabs just because I want it. If everyone has infinite power, it loses all meaning. It would not be effective demand that counts, but simply the fact that there is demand, which is not to say that would necessarily be a bad thing.

In summary, money is an expression of social conditions where private property separates means and need. For money to have this quality it is imperative that I can only spend what is mine. This quality, and hence, this separation of means and need, with all its ignorance and brutality towards need, must be violently enforced by the police and on the Bitcoin network – where what people can do to each other is limited – by an elaborate protocol of witnesses, randomness and hard mathematical problems.21

The value of money

Now, two problems remain: how is new currency introduced into the system (so far we only handled the transfer of money) and how are participants convinced to do all this hard computational work, i.e., to volunteer to be a witness. In Bitcoin the latter problem is solved using the former.

In order to motivate participants to spend computational resources on verifying transactions they are rewarded a certain amount of Bitcoin if they are chosen as a witness. Currently, each such win earns 50 BTC plus a small transaction fee for each transaction they witness. This also answers the question of how new coins are created: they are “mined” when verifying transactions. In the Bitcoin network money is created ‘out of thin air’, by solving a pretty pointless problem – that is, the puzzle whose solution allows one to be a witness. The only point of this puzzle is that it is hard, that is all.22 What counts is that other commodities/merchants relate to money as money and use it as such, not how it comes into the world.23

Thin air: Bitcoin, credit money and capitalism

However, the amount of Bitcoin one earns for being a witness will decrease in the future – the amount is cut in half every four years. From 2012 a witness will only earn 25 BTC instead of 50 BTC and so forth. Eventually there will be 21 million BTCs in total and no more.
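As a back-of-the-envelope check of the 21 million figure (assuming, which the paragraph above does not state, roughly one witness win every ten minutes, i.e. about 210,000 wins per four-year halving period):

    # Reward starts at 50 BTC per win and halves every four-year period;
    # summing the geometric series lands at roughly 21 million BTC in total.
    blocks_per_period = 210_000     # assumed: ~10-minute intervals over four years
    reward, total = 50.0, 0.0
    while reward >= 1e-8:           # 1e-8 BTC is the smallest Bitcoin unit
        total += reward * blocks_per_period
        reward /= 2
    print(round(total))             # approximately 21,000,000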

There is no a priori technical reason for the hard limit of Bitcoin; neither for a limit in general nor for the particular magnitude of 21 million. One could simply keep generating Bitcoin at the same rate, a rate that is based on recent economic activity in the Bitcoin network or the age of the lead developer or whatever. It is an arbitrary choice from a technical perspective. However, it is fair to assume that the choice made for Bitcoin is based on the assumption that a limited supply of money would allow for a better economy; where “better” means more fair, more stable and devoid of state intervention.24 Libertarian Bitcoin adherents and developers claim that by ‘printing money’ states – via their central banks – devalue currencies and hence deprive their subjects of their assets.25 They claim that the state’s (and sometimes the banks’) ability to create money ‘out of thin air’ violates the principles of the free market because it is based on monopoly instead of competition. Inspired by natural resources such as gold, Satoshi Nakamoto chose to fix a ceiling for the total amount of Bitcoin at some fixed magnitude.26 From this fact most pundits quickly make the transition to the “deflationary spiral” and whether it is going to happen or not; i.e., whether this choice means doom for the currency through exponentially fast deflation – the value of the currency rising compared to all commodities – or not. Indeed, for these pundits the question of why modern currencies are credit money hardly deserves attention. They do not ask why modern currencies do not have a limit built in, how credit money came about, if and how it is adequate for the capitalist economy and why the gold standard was departed from in the first place.27 They are not interested in explaining why the world is set up the way it is but in confronting it with their ideal version. Consequently, they miss what would likely happen if Bitcoin or something like it were to become successful: a new credit system would develop.

Growth

Capitalist enterprises invest money to make more money, to make a profit. They buy stuff such as goods and labour-power, put these ‘to work’ and sell the result for more money than they initially spent. They go through cycles of buying – production – selling.28 The faster each of these steps, the faster the advanced investment returns, the faster the profit arrives and the faster new investments can be made. Capitalist success is measured by the difference between investment and yield and not by the amount of money someone owns in absolute terms. Of course, the absolute amount of wealth a company owns is a relevant magnitude, because more money is a better basis for augmentation. Yet, in order to decide whether a company did well or poorly in the last quarter, the surplus is usually what counts. For a capitalist enterprise, money is a means and more wealth – counted in money – the end: fast growth – that is the mantra.

Libertarian Bitcoin adherents have no problem with this. While currently Bitcoin are mainly used – if at all – to buy means of consumption or as a hoard, they hope that one day something like Bitcoin will replace the US dollar and other central bank controlled currencies: Bitcoin or its successor as the currency to do serious business in. This sets Bitcoin apart from other virtual currencies such as Linden Dollars or World of Warcraft Gold. They are purely used to buy/sell in some limited realm of some virtual world, while Bitcoin are in principle usable for any purchase (on the Internet). Bitcoin want to be money, not just some means of circulation in a virtual reality.

Credit

If money is a means for growth and not the end, a lack of money is not a sufficient reason for the augmentation of money to fail to happen. With the availability of credit money, banks and fractional reserve banking it is evident that this is the case. Just because some company did not earn enough money yet to invest in a new plant, that does not mean it cannot – it would apply for a loan from a bank. That bank in the last instance may have borrowed that money from the central bank which created it ‘out of thin air’. However, assume, for the sake of argument, that these things did not exist. Even then, at any given moment, companies (or parts thereof) are necessarily in different stages of their accumulation cycles: some are just starting to sell a large stock of goods while others are looking to buy machines and hire workers. Some companies have money which they cannot spend yet while other companies need money to spend now. Hence, both the need and the means for credit appear. If some company A expects to make, say, 110 BTC from a 100 BTC investment but only has 70 BTC in its accounts, it could take a loan of 30 BTC from some company B at a 10% interest rate and still make 10 - 3 = 7 BTC of profit. For company B, which lends A 30 BTC, this business – if successful – is also better than just sitting on those 30 BTC which earn exactly nothing. If growth is demanded, having money sitting idly in one’s vaults while someone else could invest and augment it is a poor business decision.29 This simple form of credit hence develops spontaneously under free market conditions.30 The consequences of this fact are not lost on Bitcoin adherents. As of writing, there are several attempts to form credit unions: attempts to bundle up the money people have in their wallets in order to lend it out to others – for interest, of course.
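The loan arithmetic above, spelled out as a minimal calculation (figures taken from the text; we assume the 10% interest is charged on the 30 BTC borrowed for a single cycle):

    own_funds  = 70                          # BTC company A already holds
    investment = 100                         # BTC the project requires
    revenue    = 110                         # BTC company A expects back
    loan       = investment - own_funds      # 30 BTC borrowed from company B
    interest   = 0.10 * loan                 # 3 BTC owed to company B
    profit     = (revenue - investment) - interest
    print(profit)                            # 7.0 BTC for A; B earns 3 BTC on otherwise idle money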

Furthermore, under the dictate of the free market, success itself is a question of how much money one can mobilise. The more money a company can invest the better its chances of success and the higher the yield on the market. Better technologies, production methods, distribution deals and training of workers, all these things are available – for a price. Now, with the possibility of credit the necessity for credit arises as well. If money is all that is needed for success and if the right to dispose over money is available for interest then any company has to anticipate its competitors borrowing money for the next round of investments, rolling up the market. The right choice under these conditions is to apply for credit and to start the next round of investment oneself; which – again – pushes the competition towards doing the same. This way, the availability of money not only provides the possibility for credit but also the basis for a large scale credit business, since the demand for credit motivates further demand.

Even without fractional reserve banking or credit money, e.g., within the Bitcoin economy, two observations can be made about the relation of capital to money and the money supply. If some company A lends some other company B money, the supply of means of payment increases. Money that would otherwise be petrified to a hoard, kept away from the market, used for nothing, is activated and used in circulation. More money confronts the same amount of commodities, without printing a single new banknote or mining a single BTC. That is, the amount of money active in a given society is not fixed, even if Bitcoin was the standard substance of money.

Instead, capital itself regulates the money supply in accordance with its business needs. Businesses ‘activate’ more purchasing power if they expect a particular investment to be advantageous. For them, the right amount of money is that amount of money which is worth investing; to have available that money which can be used to make more money. This is capital’s demand for money.31

Growth guarantees money

When one puts money in a bank account or into some credit union, or simply lends it to some other business, to earn interest, the value of that money is guaranteed by the debtor’s success in turning it into growth. If the debtor goes bankrupt, that money is gone. No matter what the substance of money is, credit is guaranteed by success.

In order to secure against such defaults creditors may demand securities, some sort of asset which has to be handed over in case of a default. On the other hand, if on average a credit relation means successful business, an IOU – i.e., a promise of payment – itself is such an asset. If Alice owes Bob and Bob is short on cash but wants to buy from Charley he can use the IOU issued by Alice as a means of payment: Charley gets whatever Alice owes Bob. If credit fulfils its purpose and stimulates growth then debt itself becomes an asset, almost as good as already earned money. After all, it should be earned in the future. Promises of payment get – and did get in the past – the quality of means of payment. Charley can then spend Alice’s IOU when buying from Eve, and so forth. Thus, the amount of means of payment in society may grow much larger than the official money, simply by exchanging promises of payment of this money. And this happens without fractional reserve banks or credit money issued by a central bank. Instead, this credit system develops spontaneously under free market conditions and the only way to prevent it from happening is to ban this practice: to regulate the market, which is what the libertarians do not want to do.

However, the replacement of cash by these securities remains temporary. In the most severe situation, in crisis, the means of payment available to the whole of society would be reduced back to hard cash again, which these credit tokens were meant to replace. If people simply start distrusting the money quality of these promises of payment, the trade which relies on these means of payment collapses. In crisis, credit’s purpose of replacing money is void.

Central banks

This is where the central banks step in: they replace the substance of money with something adequate to its purpose, a money whose value is guaranteed by the growth it stimulates. With the establishment of central banks, the economy is freed from the limitations of the total social hoard of hard cash. If there is a lucrative business then there is credit: money which is regulated according to the needs of capital. Credit money as issued by a central bank is not a promise of payment of money; it is itself money. The doubt as to whether these promises of payment are actually money ought to be put to rest by declaring them money in the first place.

Now, the value of modern credit money is backed by its ability to bring about capitalist growth. When it facilitates this growth then – and only then – money fulfils its function.

Hence, something capital did to money before, is now ‘built in’. The central bank allows private banks to borrow (sometimes buy) additional funds – for interest – when needed. The money they borrow is created by the central bank ‘out of thin air’. Hence, all money in society comes into being not only with the purpose of stimulating growth but also with the explicit necessity: it is borrowed from the central bank which has to be paid back with interest. While clearly a state intervention, the central banks’ issuing of money is hardly a perversion of capitalism’s first purpose: growth. On the contrary, it is a contribution to it.

Systematic enmity of interests, exclusion from social wealth, subjection of everything to capitalist growth – that is what an economy looks like where exchange, money and private property determine production and consumption. This also does not change if the substance of money is gold or Bitcoin. This society produces poverty not because there is credit money but because this society is based on exchange, money and economic growth. The libertarians might not mind this poverty, but those on the Left who discovered Bitcoin as a new alternative to the status quo perhaps should.


notes

1. This text is a slightly revised version of a text which first appeared on http://metamute.org and was written in collaboration with Scott Lenney.

2. The central white paper on Bitcoin is Bitcoin: A Peer-to-Peer Electronic Cash System by Satoshi Nakamoto, the Bitcoin creator. However, some details of the network are not explicitly described anywhere in the literature but only implemented in the official Bitcoin client. As far as we know, there is no official specification except for https://en.bitcoin.it/wiki/Protocol_specification.

3. A peer-to-peer network is a network where nodes connect directly, without the need of central servers (although some functions might be reserved to servers). Famous examples include Napster, BitTorrent and Skype.

4. Probably due to pressure from the US government all major online payment services stopped processing donations to the Wikileaks project (http://www.bbc.co.uk/news/business-11938320). Also, most US credit card providers prohibit the use of their cards for online gambling.

5. After Gawker media published an article about Silk Road (http://gawker.com/5805928/the-underground-website-where-you-can-buy-any-drug-imaginable) two US senators became aware of it and asked congress to destroy it. So far, law enforcement operations against Silk Road seem to have been unsuccessful.

6. https://forum.bitcoin.org/index.php?topic=5643.0;all

7. Wei Dai, bmoney.txt, http://www.weidai.com/bmoney.txt. This text outlines the general idea on which Satoshi Nakamoto based his Bitcoin protocol.

8. “The Real Problem with Bitcoin is not that it will enable people to avoid taxes or launder money, but that it threatens the elites’ stranglehold on the creation and distribution of money. If people start using Bitcoin, it will become obvious to them how much their wage is going down every year and how much of their savings is being stolen from them to line the pockets of banksters and politicians and keep them in power by paying off with bread and circuses those who would otherwise take to the streets.” – http://undergroundeconomist.com/post/6112579823

9. For those who know a few technical details of Bitcoin: we are aware that Bitcoin are not represented by anything but a history of transactions. However, for ease of presentation we assume there is some unique representation – like the serial number on a five pound note.

10. “Commerce on the Internet has come to rely almost exclusively on financial institutions serving as trusted third parties to process electronic payments. [...] Completely non-reversible transactions are not really possible, since financial institutions cannot avoid mediating disputes. [...] With the possibility of reversal, the need for trust spreads. Merchants must be wary of their customers, hassling them for more information than they would otherwise need. A certain percentage of fraud is accepted as unavoidable. These costs and payment uncertainties can be avoided in person by using physical currency, but no mechanism exists to make payments over a communications channel without a trusted party.” – Satoshi Nakamoto, op. cit.

11. For an overview of the academic state-of-the-art on digital cash see Burton Rosenberg (Ed.), Handbook of Financial Cryptography and Security, 2011.

12. To avoid a possible misunderstanding: that money mediates this exchange is not the point here. What causes this relationship is that Alice and Bob engage in exchange on the basis of private property. Money is simply an expression of this particular social relation.

13. Of course, people do shy away from stealing from each other. Yet, this does not mean that it would not be advantageous to do so.

14. The Bitcoin designers were indeed aware of these activities of direct appropriation and the need to protect the possible victim. “Transactions that are computationally impractical to reverse would protect sellers from fraud, and routine escrow mechanisms could easily be implemented to protect buyers.” – Satoshi Nakamoto, op. cit.

15. Wei Dai, op. cit.

16. “The problem of course is the payee can’t verify that one of the owners did not double-spend the coin.” – Satoshi Nakamoto, op. cit.

17. “We need a way for the payee to know that the previous owners did not sign any earlier transactions. For our purposes, the earliest transaction is the one that counts, so we don’t care about later attempts to double-spend. The only way to confirm the absence of a transaction is to be aware of all transactions.” – Satoshi Nakamoto, op. cit. Note that this also means that Bitcoin is far from anonymous. Anyone can see all transactions happening in the network. However, Bitcoin transactions are between pseudonyms, which provides some weaker form of anonymity.

18. On the Bitcoin network anyone can pretend to be arbitrarily many people by creating many pseudonyms. Hence, this lottery is organised in such a way that any candidate has to solve a mathematical puzzle by trying random possible solutions, which requires considerable computational resources (big computers). This way, being ‘more people’ on the network requires more financial investment in computer hardware and electricity. It is just as in the lottery: those who buy many tickets have a higher chance of winning. As a side effect, many nodes on the network waste computational resources solving some mathematical puzzle by trying random solutions to win this witness lottery.
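To make the “witness lottery” of note 18 a bit more concrete, here is a minimal, purely illustrative Python sketch (it is not the actual Bitcoin protocol; the hash choice, the difficulty value and all names are ours): a candidate keeps trying random values until the hash of the data plus that value falls below a target, which is expensive to find but cheap for everyone else to check.

    # Toy "witness lottery": guessing until a hash falls below a target.
    # Purely illustrative; not the real Bitcoin mining algorithm.
    import hashlib, os

    def try_lottery(block_data: bytes, target: int):
        attempts = 0
        while True:
            nonce = os.urandom(8)                       # one random "ticket"
            digest = hashlib.sha256(block_data + nonce).digest()
            attempts += 1
            if int.from_bytes(digest, "big") < target:  # a winning ticket
                return nonce, attempts

    target = 2 ** 240   # made-up difficulty: roughly one win per 65,000 tries
    nonce, attempts = try_lottery(b"some transactions", target)
    print("won after", attempts, "attempts")

The smaller the target, the more attempts (and therefore the more hardware and electricity) a win takes on average, which is exactly why pretending to be ‘more people’ costs real money.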

19. For many people, this is where they content themselves with knowing that the value goes down without ever asking what this “value” thing is. However, changes in value only make sense if one knows what it is that changes. Furthermore, the relationship of money supply and inflation is not as it might seem: increased money supply does not necessarily imply inflation; it produces inflation only if it is not accompanied by increased economic activity.

20. From this it is also clear that under these social conditions – free and equal exchange – those who have nothing will not get anything, aka the poor stay poor. Of course, free agents on a free market never have nothing, they always own themselves and can sell their skin – their labour-power – to others. Yet, their situation is not adequately characterised by pointing out that nature condemns us to work for the products we wish to consume, as the libertarians have it. Unemployed workers can only find work if somebody else offers them a job, if somebody else deems it profitable to employ them. Workers cannot change which product they offer, they only have one. That this situation is no pony farm can be verified by taking a look at the living conditions of workers and people out of work worldwide.

21. The Bitcoin forum is – among other things – a remarkable source of ignorant and brutal statements about the free market, such as this: “If you want to live then you have to work. That’s nature’s fault (or God’s fault if you’re a Christian). Either way, you have to work to survive. Nobody is obligated to keep you alive. You have the right not to be murdered, you don’t have the right to live. So, if I offer you a job, that’s still a voluntary trade, my resources for your labor. If you don’t like the trade then you can reject it and go survive through your own means or simply lay down and die. It’s harsh but fair. Otherwise, I’d have to take care of myself and everyone else which is unfair. Requiring me to provide you a living is actual slavery, much worse than nonexistent wage slavery.” – https://bitcointalk.org/index.php?topic=5643.0%3ball

22. “The only conditions are that it must be easy to determine how much computing effort it took to solve the problem and the solution must otherwise have no value, either practical or intellectual” – Wei Dai, op. Cit.

23. Those who read Marx’s Capital might now object that this implies that Bitcoin is based on a concept of value whose substance is not abstract human labour. Instead it would rely on value which is abstract computer labour or something else entirely. This objection is based on a misunderstanding: computing power earns, if one is lucky, 50 BTC but this is just a number, it is meaningless. What 50 BTC buy, how much purchasing power or command over social wealth they represent is an entirely different question. 50 BTC have value because they command social wealth, not because a computer picked the right random number.

24. “The root problem with conventional currency is all the trust that’s required to make it work. The central bank must be trusted not to debase the currency, but the history of fiat currencies is full of breaches of that trust. Banks must be trusted to hold our money and transfer it electronically, but they lend it out in waves of credit bubbles with barely a fraction in reserve. We have to trust them with our privacy, trust them not to let identity thieves drain our accounts. Their massive overhead costs make micropayments impossible.” – Satoshi Nakamoto quoted in Joshua Davis, The Crypto-Currency: Bitcoin and Its Mysterious Inventor, The New Yorker, 10 October 2011, p. 62.

25. We stress that opposing states increasing the ‘money supply’ at will and fixing the absolute amount of money that can ever be created are not the same thing. One could just as well keep generating 50 new BTC every 10 minutes until the end of time or the Bitcoin network – whichever comes first.
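A back-of-the-envelope illustration of the distinction made in note 25, using Bitcoin’s published parameters (an initial reward of 50 BTC per block, halved every 210,000 blocks; the arithmetic below is ours, for illustration only): a constant issuance of 50 BTC every 10 minutes would add roughly 2.6 million BTC per year, forever, whereas the halving schedule converges to a fixed cap:

\[ M_{\max} = 210\,000 \times 50 \times \left(1 + \tfrac{1}{2} + \tfrac{1}{4} + \dots\right) = 210\,000 \times 50 \times 2 = 21\,000\,000 \text{ BTC}. \]

Both are rule-bound alternatives to discretionary state money creation; only the second fixes the absolute amount that can ever exist.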

26. “The steady addition of a constant amount of new coins is analogous to gold miners expending resources to add gold to circulation. In our case, it is CPU [central processing unit] time and electricity that is expended.” – Satoshi Nakamoto, op. cit. Furthermore, the distribution of how Bitcoin are generated is inspired by gold. In the beginning it is easy to mine but it becomes harder and harder over time. Bitcoin’s mining concept is an attempt to return to gold money but on the Internet.

27. cf. our text “Public debt makes the state go round” available at http://www.junge-linke.org/en/public-debt-makes-the-state-go-round. It should be noted that Bitcoin is not an equivalent to a return to the gold standard but a return to paying with gold coins. Even under the gold standard there were many more dollars than the gold they represented, based on the assumption that people would not claim the gold worth of their dollars from the Fed.

28. Some companies such as supermarkets do not have a production phase; they simply buy and sell. This difference does not matter for the argument presented here though.

29. Of course, there are also reasons to keep a certain amount of money around, such as the uncertainties of the markets.

30. An even simpler form of credit exists between wholesalers and producers. If, for example, the producer allows the wholesaler to pay later, he is effectively granting credit.

31. On a side note, if businesses which take out loans are successful on average, they produce more commodities: more commodities that confront the increased supply of purchasing power. Hence, increases in the money supply, and thus in purchasing power, do not necessarily mean inflation.

 

16.08
14:00
Olia Lialina
Turing
Complete
User

Any error may vitiate the entire output of the device. For the recognition and correction of such malfunctions intelligent human intervention will in general be necessary.
— John von Neumann, First Draft of a Report on the EDVAC, 1945

If you can’t blog, tweet! If you can’t tweet, like!
— Kim Dotcom, Mr. President, 2012

Invisible and Very Busy

Computers are getting invisible. They shrink and hide. They lurk under the skin and dissolve in the cloud. We observe the process like an eclipse of the sun, partly scared, partly overwhelmed. We divide into camps and fight about advantages and dangers of The Ubiquitous. But whatever side we take — we do acknowledge the significance of the moment.

With the disappearance of the computer, something else is silently becoming invisible as well — the User. Users are disappearing as both phenomena and term, and this development is either unnoticed or accepted as progress — an evolutionary step.

The notion of the Invisible User is pushed by influential user interface designers, specifically by Don Norman, a guru of user-friendly design and long-time advocate of invisible computing. He can actually be called the father of Invisible Computing.

Those who study interaction design read his “Why Interfaces Don’t Work”, published in 1990, in which he asked and answered his own question: “The real problem with the interface is that it is an interface”. What’s to be done? “We need to aid the task, not the interface to the task. The computer of the future should be invisible!”1

It took almost two decades, but the future arrived around five years ago, when clicking mouse buttons ceased to be our main input method and touch and multi-touch technologies hinted at our new emancipation from hardware. The cosiness of iProducts, as well as breakthroughs in Augmented Reality (it got mobile), the rise of wearables, the maturing of all sorts of tracking (motion, face) and the advancement of projection technologies erased the visible border between input and output devices. These developments began to turn our interactions with computers into pre-computer actions or, as interface designers prefer to say, “natural” gestures and movements.

Of course computers are still distinguishable and locatable, but they are no longer something you sit in front of. The forecasts for invisibility are so optimistic that in 2012 Apple allowed themselves to rephrase Norman’s predictive statement by putting it in the present tense and binding it to a particular piece of consumer electronics:

We believe that technology is at its very best when it is invisible, when you are conscious only of what you are doing, not the device you are doing it with […] iPad is the perfect expression of that idea, it’s just this magical pane of glass that can become anything you want it to be. It’s a more personal experience with technology than people have ever had.2

In this last sentence, the word “experience” is not an accident, nor is the word “people”.

Invisible computers, or more accurately the illusion of the computerless, is destroyed if we continue to talk about “user interfaces”. This is why Interface Design is renaming itself Experience Design — whose primary goal is to make users forget that computers and interfaces exist. With Experience Design there is only you and your emotions to feel, goals to achieve, tasks to complete.

The field is abbreviated as UXD, where X is for eXperience and U is still for the Users. Wikipedia says Don Norman coined the term UX in 1995. However, in 2012 UX designers avoid using the U-word in papers and conference announcements, in order not to remind themselves of all those clumsy buttons and input devices of the past. Users were for the interfaces. Experiences, they are for the PEOPLE!3

In 2008 Don Norman simply ceased to address Users as Users. At an event sponsored by Adaptive Path, a user interface design company, Norman stated “One of the horrible words we use is users. I am on a crusade to get rid of the word ‘users’. I would prefer to call them ‘people.’”4 After enjoying the effect of his words on the audience he added with a charming smile, “We design for people, we don’t design for users.”

A noble goal indeed, but only when perceived in the narrow context of Interface Design. Here, the use of the term “people” emphasizes the need to follow a user-centered, as opposed to an implementation-centered, paradigm. The use of “people” in this context is a good way to remind software developers that the User is a human being and needs to be taken into account in design and validation processes.

But when you read it in a broader context, the denial of the word “user” in favor of “people” becomes dangerous. Being a User is the last reminder that there is, whether visible or not, a computer, a programmed system you use.

In 2011 new media theoretician Lev Manovich also became unhappy about the word “user”. He writes on his blog “For example, how do we call a person who is interacting with digital media? User? No good.”5

Well, I can agree that with all the great things we can do with new media — various modes of initiation and participation, multiple roles we can fill — that it is a pity to narrow it down to “users”, but this is what it is. Bloggers, artists, podcasters and even trolls are still users of systems they didn’t program. So they (we) are all the users.

We need to take care of this word because addressing people and not users hides the existence of two classes of people — developers and users. And if we lose this distinction, users may lose their rights and the opportunity to protect them. These rights are to demand better software, the ability “to choose none of the above”6, to delete your files, to get your files back, to fail epically and, back to the fundamental one, to see the computer.

In other words: the Invisible User is more of an issue than an Invisible Computer.

What can be done to protect the term, the notion and the existence of the Users? What counter arguments can I find to stop Norman’s crusade and dispel Manovich’s skepticism? What do we know about a user, apart from the opinion that it is “no good” to be one?

We know that it was not always like this. Before Real Users (those who pay money to use the system) became “users”, programmers and hackers proudly used this word to describe themselves. In their view, the user was the best role one could take in relation to their computer.7

Furthermore, it is wrong to think that first there were computers and developers and only later users entered the scene. In fact, it was the opposite. At the dawn of the personal computer the user was the center of attention. The user did not develop in parallel with the computer, but prior to it. Think about Vannevar Bush’s “As We May Think” (1945), one of the most influential texts in computer culture. Bush spends more words describing the person who would use the Memex than the Memex itself. He described a scientist of the future, a superman. He, the user of the Memex, not the Memex itself, was heading the article.8

20 years later, Douglas Engelbart, inventor of the pioneering personal computer system NLS, as well as hypertext, and the mouse, talked about his research on the augmentation of human intellect as “bootstrapping” — meaning that human beings, and their brains and bodies, would evolve along with new technology. This is how French sociologist Thierry Bardini describes this approach in his book about Douglas Engelbart: “Engelbart wasn’t interested in just building the personal computer. He was interested in building the person who could use the computer to manage increasing complexity efficiently.”9

And let’s not forget the title of J.C.R. Licklider’s famous text, the one that outlined the principles for ARPA’s Command and Control research on real-time systems, from which the interactive/personal computer developed — Man-Computer Symbiosis (1960).10

When the personal computer was getting ready to enter the market 15 years later, developers thought about who would be model users. At XEROX PARC, Alan Kay and Adele Goldberg introduced the idea of kids, artists, musicians and others as potential users for the new technology. Their paper “Personal Dynamic Media” from 197711 describes important hardware and software principles for the personal computer. But we read this text as revolutionary because it clearly establishes possible users, distinct from system developers, as essential to these dynamic technologies. Another Xerox employee, Tim Mott (aka “The father of user centered design”) brought the idea of a Secretary into the imagination of his colleagues. This image of the “Lady with the Royal Typewriter”12 predetermined the designs of XEROX Star, Apple Lisa and further electronic offices.

So, it’s important to acknowledge that users existed prior to computers, that they were imagined and invented — Users are a figment of the imagination. As a result of their fictive construction, they continued to be re-imagined and re-invented through the 70’s, 80’s, 90’s, and the new millennium. But however reasonable, or brave, or futuristic, or primitive these models of users were, there is a constant.

Let me refer to another guru of user centered design, Alan Cooper. In 2007, when the U word was still allowed in interaction design circles, he and his colleagues shared their secret in “About Face, The Essentials of Interaction Design”:

“As an interaction designer, it’s best to imagine that users — especially beginners — are simultaneously very intelligent and very busy.”13

It is very kind advice (and About Face is one of the most reasonable books on interface design, btw) and can be translated roughly as “hey, front-end developers, don’t assume that your users are more stupid than you, they are just busy.” But it is more than this. What the second part of this quote gets at so importantly is that Users are people who are very busy with something else.

Alan Cooper is not the one who invented this paradigm, and not even Don Norman with his concentration on the task rather than the tool. It originated in the 1970’s. Listing the most important computer terms of that time, Ted Nelson mentions so-called “user level systems” and states that these “User-level systems, [are] systems set up for people who are not thinking about computers but about the subject or activity the computer is supposed to help them with.”14 Some pages before he claims:

[Scanned passage from Computer Lib, not reproduced in this printout.]15

One should remember that Ted Nelson was always on the side of users and even “naïve users”, so his bitter “just a user” means a lot.

Alienation of users from their computers started at XEROX PARC with secretaries, as well as artists and musicians. And it never stopped. Users were seen and marketed as people whose real jobs, feelings, thoughts, interests, talents — everything that matters — lie outside of their interaction with personal computers.

For instance, in 2007, when Adobe, the software company whose products dominate the so-called “creative industries”, introduced version 3 of Creative Suite, they filmed graphic artists, video makers and others talking about the advantages of this new software package. Particularly interesting was one video of a web designer (or an actress in the role of a web designer): she enthusiastically demonstrated what her new Dreamweaver could do, and that in the end “I have more time to do what I like most — being creative”. The message from Adobe is clear. The less you think about source code, scripts, links and the web itself, the more creative you are as a web designer. What a lie. I liked to show it to fresh design students as an example of misunderstanding the core of the profession.

This video is not online anymore, but current ads for Creative Suite 6 are not much different – they feature designers and design evangelists talking about unleashing, increasing and enriching creativity as a direct result of fewer clicks to achieve this or that effect.16

In the book “Program or be Programmed”, Douglas Rushkoff describes a similar phenomenon:

[…] We see actual coding as some boring chore, a working class skill like bricklaying, which may as well be outsourced to some poor nation while our kids play and even design video games. We look at developing the plots and characters for a game as the interesting part, and the programming as the rote task better offloaded to people somewhere else.17

Rushkoff states that code writing is not seen as a creative activity, but the same applies to engagement with the computer in general. It is not seen as a creative task or as “mature thought”.

In “As We May Think”, while describing an ideal instrument that would augment the scientist of the future, Vannevar Bush mentions:

For mature thought there is no mechanical substitute. But creative thought and essentially repetitive thought are very different things. For the latter there are, and may be, powerful mechanical aids18

In opposition to this, users, as imagined by computer scientists, software developers and usability experts, are the ones whose task is to spend as little time as possible with the computer, without wasting a single thought on it. They require a specialized, isolated app for every “repetitive thought”, and, most importantly, delegate drawing the border between creative and repetitive, mature and primitive, real and virtual, to app designers.

There are periods in history, moments in life (and many hours a day!) where this approach makes sense, when delegation and automation are required and enjoyed. But in times when every aspect of life is computerized it is not possible to accept “busy with something else” as a norm.

So let’s look at another model of users that evolved outside and despite usability experts’ imagination.

General Purpose, “Stupid” and Universal

In “Why Interfaces Don’t Work” Don Norman heavily criticizes the world of visible computers, visible interfaces and users busy with all this. Near the end of the text he suggests the source of the problem:

“We are here in part, because this is probably the best we can do with today’s technology and, in part, because of historical accident. The accident is that we have adapted a general-purpose technology to very specialized tasks while still using general tools.”19

In December 2011 science fiction writer and journalist Cory Doctorow gave a marvelous talk at the 28th Chaos Communication Congress in Berlin titled “The coming war on general computation”.20 He explains that there is only one possibility for computers to truly become appliances, the tiny, invisible, comfortable single-purpose things Don Norman was preaching about: to be loaded with spyware. In his words,

“So today we have marketing departments who say things like ‘[…] Make me a computer that doesn’t run every program, just a program that does this specialized task, like streaming audio, or routing packets, or playing Xbox games’ […] But that’s not what we do when we turn a computer into an appliance. We’re not making a computer that runs only the “appliance” app; we’re making a computer that can run every program, but which uses some combination of rootkits, spyware, and code-signing to prevent the user from knowing which processes are running, from installing her own software, and from terminating processes that she doesn’t want. In other words, an appliance is not a stripped-down computer — it is a fully functional computer with spyware on it out of the box.”

By fully functional computer Doctorow means the general purpose computer, or as US mathematician John von Neumann referred to it in his 1945 “First Draft of a Report on the EDVAC” — the “all purpose automatic digital computing system”.21 In this paper he outlined the principles of digital computer architecture (von Neumann Architecture), where hardware was separated from the software and from this the so-called “stored program” concept was born. In the mid-40s the revolutionary impact of it was that “by storing the instructions electronically, you could change the function of the computer without having to change the wiring.”22

Today the rewiring aspect doesn’t have to be emphasized, but the idea itself, that a single computer can do everything, is essential, and it is the same general purpose computer behind “everything”, from dumb terminals to supercomputers.
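To make the stored-program idea tangible, here is a toy sketch in Python (ours, purely illustrative, and unrelated to the EDVAC’s actual instruction set): the “machine” below, the interpreter loop, never changes, yet loading different instructions into its memory changes what it does, with no rewiring.

    # A toy stored-program machine: instructions live in memory as data,
    # so a different program changes the machine's behaviour without
    # touching the "hardware" (the fixed loop below).
    def run(memory):
        acc, pc = 0, 0
        while pc < len(memory):
            op, arg = memory[pc]
            if op == "LOAD":
                acc = arg
            elif op == "ADD":
                acc += arg
            elif op == "MUL":
                acc *= arg
            elif op == "PRINT":
                print(acc)
            pc += 1

    adder      = [("LOAD", 2), ("ADD", 3), ("PRINT", None)]   # prints 5
    multiplier = [("LOAD", 2), ("MUL", 3), ("PRINT", None)]   # prints 6
    run(adder)        # the same machine ...
    run(multiplier)   # ... running a different stored program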

Doctorow’s talk is a perfect entry point to get oneself acquainted with the subject. To go deeper into the history of the war on general computation you may consider reading Ted Nelson. He was the first to attract attention to the significance of the personal computer’s all-purpose nature. In 1974, in his glorious fanzine “Computer Lib”, which aimed to explain computers to everybody, he writes in caps lock:

COMPUTERS HAVE NO NATURE AND NO CHARACTER
Computers are, unlike any other piece of equipment, perfectly BLANK. And that is how we have projected on it so many different faces.23

Some great texts written this century are “The Future of the Internet and How to Stop It” (2008) by Jonathan Zittrain and of course “The Future of Ideas” (2001) by Lawrence Lessig. Both authors are more concerned with the architecture of the internet than the computer itself but both write about the end-to-end principle that lies at the internet’s core — meaning that there is no intelligence (control) built into the network. The network stays neutral or “stupid”, simply delivering packets without asking what’s inside. It is the same with the von Neumann computer — it just runs programs.

The works of Lessig, Zittrain and Doctorow do a great job of explaining why both computer and network architectures are neither historic accidents nor “what technology wants”.24 The stupid network and the general purpose computer were conscious design decisions.

For Norman, further generations of hardware and software designers and their invisible users dealing with General Purpose technology are both accident and obstacle. For the rest of us the rise and use of General Purpose Technology is the core of New media, Digital Culture and Information Society (if you believe that something like this exists). General purpose computers and Stupid Networks are the core values of our computer-based time and the driving force behind all the wonderful and terrible things that happen to people who work and live with connected computers. These prescient design decisions have to be protected today, because technically it would be no big deal to make networks and computers “smart”, i.e. controlled.

What does it all have to do with “users” versus “people” — apart from the self-evident fact that only the users who are busy with computers at least a little bit — to the extent of watching Doctorow’s video till the end — will fight for these values?

I would like to apply the concept of General Purpose Technology to users by flipping the discourse around and redirecting attention from technology to the user that was formed through three decades of adjusting general purpose technology to their needs: The General Purpose User.

General Purpose Users can write an article in their e-mail client, lay out their business card in Excel and shave in front of a webcam. They can also find a way to publish photos online without flickr, tweet without twitter, like without facebook, make a black frame around pictures without instagram, remove a black frame from an instagram picture and even wake up at 7:00 without a “wake up at 7:00” app.

Maybe these Users could more accurately be called Universal Users or Turing Complete Users, as a reference to the Universal Machine, also known as Universal Turing Machine — Alan Turing’s conception of a computer that can solve any logical task given enough time and memory. Turing’s 1936 vision and design predated and most likely influenced von Neumann’s First Draft and All-purpose Machine.

But whatever name I choose, what I mean are users who have the ability to achieve their goals regardless of the primary purpose of an application or device. Such users will find a way to their aspiration without an app or utility programmed specifically for it. The Universal User is not a super user, not half a hacker. It is not an exotic type of user.

There can be different examples and levels of autonomy that users can imagine for themselves, but the capacity to be universal is still in all of us. Sometimes it is a conscious choice not to delegate particular jobs to the computer, and sometimes it is just a habit. Most often it is not more than a click or two that uncovers your general purpose architecture.

For instance, you can decide not to use Twitter at all and instead inform the world about your breakfast through your own website. You can use LiveJournal as if it were Twitter, you can use Twitter as Twitter, but instead of following people, visit their profiles as you’d visit a homepage.

You can have two Twitter accounts and log in to one in Firefox, and the other in Chrome. This is how I do it and it doesn’t matter why I prefer to manage it this way. Maybe I don’t know that an app for managing multiple accounts exists, maybe I knew but didn’t like it, or maybe I’m too lazy to install it. Whatever, I found a way. And you will find one as well.

A Universal User’s mindset (it is a mindset, not a set of rules, not a vow) means to liaise with hardware and software: behavior antipodal to that of the “very busy” user. This kind of interaction makes the user visible, most importantly to themselves. And, if you wish to think about it in terms of Interface Design and UX, it is the ultimate experience.

Does this mean that to deliver this kind of user experience the software industry needs to produce imperfect software or hold itself back from improving existing tools? Of course not! Tools can be perfect.

Though the idea of perfect software could be revised, taking into account that it is used by the General Purpose User, valuing ambiguity and users’ involvement.

And thankfully ambiguity is not that rare. There are online services where users are left alone to use or ignore features. For example, the developers of Twitter didn’t take measures that prevent me from surfing from profile to profile of people I don’t follow. The Dutch social network Hyves allows its users to mess around with background images so that they don’t need any photo albums or instagrams to be happy. Blingee.com, whose primary goal is to let users add glitter to their photos, allows them to upload whatever stamps they want — not glittery, not even animated. It just delivers the user merged layers in return.

I can also mention here an extreme example of a service that nourishes the user’s universality — myknet.org — an Aboriginal social network in Canada. It is so “stupid” that users can re-purpose their profiles every time they update them. Today it functions as a twitter feed, yesterday it was a youtube channel, and tomorrow it might be an online shop. Never mind that it looks very low-tech and like it was made 17 years ago, it works!

In general the WWW, outside of Facebook, is an environment open for interpretation.

Still, I have difficulties finding a site or an app that actually addresses the users and sees their presence as a part of the workflow. This may sound strange, because all of web 2.0 is about pushing people to contribute, and “emotional design” is supposed to be about establishing personal connections between people who made the app and people who bought it, but I mean something different. I mean a situation where the workflow of an application has gaps that can be filled by users, where smoothness and seamlessness are broken and some of the final links in the chain are left for the users to complete.

I’ll leave you with an extreme example, an anonymous (probably student) project:
“Google Maps + Google Video + Mashup — Claude Lelouch’s Rendezvous”:

It was made in 2006, at the very rise of Web 2.025, when the mash-up was a very popular cultural, mainstream artistic form. Artists were celebrating new convergences and a blurring of the borders between different pieces of software. Lelouch’s Rendezvous is a mash up that puts on the same page the famous racing film of the same name and a map of Paris, so that you can follow the car in the film and see its position on the Google map at the same time. But the author failed (or perhaps didn’t intend) to synchronize the video and the car’s movement on the map. As a result the user is left with the instruction: “Hit play on the video. […] At the 4 second mark, hit the ‘Go!’ button.”

The user is asked to press not one but two buttons! It suggests that we take care of things ourselves, that we can complete a task at the right moment. The author obviously counts on the user’s intelligence, and has never heard that they are “very busy”.

The fact that the original video file that was used in the mash up was removed makes this project even more interesting. To enjoy it, you’ll have to go to YouTube and look for another version of the film. I found one, which means you’ll succeed as well.

There is nothing one user can do that another can’t, given enough time and respect. Computer Users are Turing Complete.


When Sherry Turkle, Douglas Rushkoff and other great minds state that we need to learn programming and understand our computers in order to not be programmed and “demand transparency of other systems”26, I couldn’t agree more. If the approach to computer education in schools were to switch from managing particular apps to writing apps, it would be wonderful. But apart from the fact that it is not realistic, I would say it is also not enough. I would say it is wrong to say either you understand computers or you are the user.27

An effort must be made to educate the users about themselves. There should be an understanding of what it means to be a user of an “all purpose automatic digital computing system”.

General Purpose Users are not a historic accident or a temporary anomaly. We are the product of the “worse is better” philosophy of UNIX, the end-to-end principle of the internet, the “under construction” and later “beta” spirit of the web. All these designs that demand attention and ask for forgiveness and engagement formed us as users, and we are always adjusting, improvising and at the same time taking control. We are the children of the misleading and clumsy Desktop Metaphor, we know how to open doors without knobs.28

We, general purpose users — not hackers and not people — who are challenging, consciously or subconsciously, what we can do and what computers can do, are the ultimate participants of man-computer symbiosis. Not exactly the kind of symbiosis Licklider envisioned, but a true one.


Notes

1. Don Norman, “Why Interfaces Don’t Work”, in: Brenda Laurel (Ed.), The Art of Human-Computer Interface Design, 1990, p. 218
2. Apple Inc, Official Apple (New) iPad Trailer, 2012
3. Another strong force behind ignoring the term User comes from adepts of Gamification. They prefer to address users as gamers. But that’s another topic.
4. Video of the talk. See also Norman’s 2006 essay Words matter: “Psychologists depersonalize the people they study by calling them ‘subjects.’ We depersonalize the people we study by calling them ‘users.’ Both terms are derogatory. They take us away from our primary mission: to help people. Power to the people, I say, to repurpose an old phrase. People. Human Beings. That’s what our discipline is really about.”
5. Lev Manovich, How do you call a person who is interacting with digital media?, 2011
6. Borrowed from the subtitle “You May Always Choose None of the Above” of the chapter “Choice” in: Douglas Rushkoff, Program or be Programmed, 2010, p.46
7. “The movie Tron (1982) marks the highest appreciation and most glorious definition of this term. […] The relationship of users and programs is depicted as a very close and personal one, almost religious in nature, with a caring and respecting creator and a responsible and dedicated progeny.” — Olia Lialina and Dragan Espenschied, Do you believe in users?, in: Digital Folklore, 2009
8. Vannevar Bush, As We May Think (illustrated version, PDF facsimile), Life magazine, 1945
9. Thierry Bardini, Bootstrapping, 2000
10. J.C.R. (Joseph Carl Robnett) Licklider, Man-Computer Symbiosis, IRE Transactions on Human Factors in Electronics, volume HFE-1, p.4-11, 1960
11. Alan Kay, Personal Dynamic Media, 1977, in: Noah Wardrip-Fruin and Nick Montfort (Eds.), The New Media Reader, MIT Press, 2003
12. See Douglas K. Smith and Robert C. Alexander, Fumbling The Future, 1999, p.110 (on Google Books)
13. Alan Cooper, Robert Reimann, David Cronin, About Face 3: The Essentials of Interaction Design, 2007, p.45
14. Ted Nelson, Computer Lib/Dream Machines, Revised Edition 1987, p.9
15. Scanned from Computer Lib, page 3
16. See for example the trailers for Adobe Creative Suite 6, 2012
17. Douglas Rushkoff, Program or be Programmed, 2010, p.131
18. Vannevar Bush, As We May Think (HTML version), The Atlantic Magazine, 1945
19. Don Norman, “Why Interfaces Don’t Work”, in: Brenda Laurel (Ed.), The Art of Human-Computer Interface Design, 1990, p. 218
20. Transcript, Video
21. John von Neumann, Introduction to “The First Draft Report on the EDVAC”, 1945
22. M. Mitchell Waldrop, The Dream Machine, 2001, p.62
23. Ted Nelson, Computer Lib/Dream Machines, Revised Edition 1987, p.37
24. See Kevin Kelly, What Technology Wants, 2010
25. Web 2.0 was supposed to be a complete merge of people and technology, but was again progressing alienation and keeping users and developers apart. People were driven from self-made home pages to social networks.
26. “Politics is a system, complex to be sure, all the same. If people understand something as complicated as a computer, they will demand greater understanding of other things.” — Respondent’s statement, discussed in Sherry Turkle, The Second Self, edition 2004, p.163
27. “Instead of teaching programming, most schools with computer literacy curriculums teach programs […] The bigger problem is that their entire orientation to computing will be from the perspective of users” — Douglas Rushkoff, Program or be Programmed, 2010, p.130
28. “Direct-manipulation systems, like the Macintosh desktop, attempt to bridge the interface gulf by representing the world of the computer as a collection of objects that are directly analogous to objects in the real world. But the complex and abundant functionality of today’s new applications — which parallels people’s rising expectations about what they might accomplish with computers — threatens to push us over the edge of the metaphorical desktop. The power of the computer is locked behind a door with no knob.” — Brenda Laurel, Computers as Theatre, 1993, p. xviii

Appendix A: Subjects of Human-Computer Interaction

                 UX           Web 2.0          Cloud Computing   Gamification
computer         technology   social network   The Cloud         epic win
user interface   experience   submit button    upload button     epic win
users            people       you              download button   gamers

Appendix B: Users Imagined

Scientist (Vannevar Bush: As we may think, 1945)
“One can now picture a future investigator in his laboratory. His hands are free, and he is not anchored. As he moves about and observes, he photographs and comments.”

Knowledge Worker, Intellectual Worker, Programmer (Douglas Engelbart: Augmenting Human Intellect, 1962)
“Consider the intellectual domain of a creative problem solver […]. These […] could very possibly contribute specialized processes and techniques to a general worker in the intellectual domain: Formal logic—mathematics of many varieties, including statistics—decision theory—game theory—time and motion analysis—operations research—classification theory—documentation theory—cost accounting, for time, energy, or money—dynamic programming—computer programming.”

Real Users (J.C.R. Licklider: Some Reflections on Early History, 1988 in: Adele Goldberg, A History of Personal Workstations, 1988, p.119)
“People who are buying computers, especially personal computers, just aren’t going to take a long time to learn something. They are going to insist on using it awfully quick.”

Naïve User (Ted Nelson: Computer Lib/Dream Machines, Revised Edition 1987, p.9, 1974)
“Person who doesn’t know about computers but is going to use the system. Naive user systems are those set up to make things easy and clear for such people. We are all naive users at some time or other; its nothing to be ashamed of. Though some computer people seem to think it is.”

Lady with the Royal Typewriter (Tim Mott, 1975, as quoted in: Fumbling The Future, 1999, p.110, on Google Books)
“My model for this was a lady in her late fifties who had been publishing all her life and still used a Royal typewriter.”

Children, Artists, Musicians (Alan Kay: Personal Dynamic Media, 1977)
“Another interesting nugget was that children really needed as much or more computing power than adults were willing to settle for when using a timesharing system. […] The kids […] are used to finger-paints, water colors, color television, real musical instruments, and records.”

Deity (Steven Lisberger: TRON, 1982)
— “You believe in the users?” — “Yes, sure. If I don’t have a user, then who wrote me?”.

Machine of the Year (TIME Magazine, 1983)
The “person of the year” is a machine: “Machine of the Year: The Computer Moves In”.

Clueless newbies (Eric S. Raymond: September that never ended, Jargon File, 1993)
“September that never ended: All time since September 1993. One of the seasonal rhythms of the Usenet used to be the annual September influx of clueless newbies who, lacking any sense of netiquette, made a general nuisance of themselves. This coincided with people starting college, getting their first internet accounts, and plunging in without bothering to learn what was acceptable.”

Hackers = Implementors, lamers = Users (Eric S. Raymond: The New Hacker’s Dictionary, 1996)
“hacker n. […] 1. A person who enjoys exploring the details of programmable systems and how to stretch their capabilities, as opposed to most users, who prefer to learn only the minimum necessary.” — p.233.
“Lamer n. […] Synonym for luser, not used much by hackers but common among warez d00dz, crackers and phreakers. Oppose elite. Has the same connotations of self-conscious elitism that use of luser does among hackers.” — p.275.

YOU (TIME Magazine, 2006)
"Yes, You. You control the information age. Welcome to your world".

People (Don Norman: Talk at UX Week 2008, 2008)
“I’d prefer to call them people.”

Them (Sir Tim Berners-Lee: The Next Web, TED Talk, 2009)
“20 years ago […] I invented the World Wide Web.”

Customer (Jack Dorsey, executive chairman of Twitter: Let’s reconsider our “users”, 2012)
“If I ever say the word ‘user’ again, immediately charge me $140.”

Interactor (Janet Murray, interaction designer, educator, author of Hamlet on the Holodeck: in the introduction to Inventing the Medium, 2012)
“[User] is another convenient and somewhat outdated term, like ‘interface’ [...] A user may be seeking to complete an immediate task; an interactor is engaged in a prolonged give and take with the machine” p.11.

Buyer (Bruce Tognazzini, principal of Nielsen Norman Group: The Third User, 2013).

 

 

16.08
14:00
aharon
bits about being clueless and
interference

Mondrian's Broadway Boogie-Woogie (see end of text) might seem initially innocent, perhaps even quaint as an object-painting nowadays, such that it might easily pass one by.

The colours, indeed the style, have been chewed upon culturally throughout the later parts of the 20th century so many times in so many ways that visually, I can agree there is a sense of the archaic about the painting. The pattern, and its derivatives, have been on countless t-shirts, couches, arm chairs, chairs, the Windows Mobile interface (arguably), buildings, music album covers, walls, posters, shoes, socks, glasses, cars, and all sorts of other design objects.
However, for me, there is something else in Mondrian's Broadway Boogie-Woogie painting-object, something that perhaps arguably underlies it. Stuff that, I suspect, Adorno might call the ambiguous, an uncompromising sense, elements that seem to me to be the actual art bits in the painted object. These seem to linger on, in and through time, linger, hang on in a sort of cultural ghost-beat, and keep drawing a sense of fascination.
This Fascination sense, it will be argued here, is made of a production or generation, perhaps propagation(?), of continuous disturbance. Harking back to Adorno, the sense is of captivating the senses rather than of pleasing and entertaining them. Despite and in spite of the Mondrianic visual language being used, abused and regurgitated over and over.

To Name that which withstands the attempts to please, and continues to radiate disturbances of the senses all these years, is kind of hard - precisely because the disturbance is nameless. Hence I say, somehow hesitantly, that there are certain elements that cross, fold into each other and collide. These inner ruptures, inner self doubts, the elements that seemingly go into each-others' trajectories, seem to Be the waves of disturbances, perhaps the art in these painted objects. (Here it might be apt to note that Mondrian’s self doubts seem to have extended to the paintings, given that he used to attend his openings armed with paints and a brush, as he often added stuff..) I'd say that elements such as rhythms in the patterns; the Mondrianic language of horizontal and vertical; the urban reference specific to New York's street arrangements in a similar vertical and horizontal manner, and the flatness of the object surface; the sense of music that comes from colours on rhythmic patterns in various sizes - these are the elements that fold, cross and collide with one another to create a sense that keeps being - moving - refusing to rest. These, I'd argue, make the object's innate refusal to be pinned down and consumed, and keep it being a disturbance. It is not simply a painting object, nor is it a sort of music score, or a sort of map of urbanity, or some flat aesthetically pleasing design. It refuses to Be any of these in particular.

The sense, the vibrations that emanate, are the art bits in the object, and in my mind, make its refusal to be a pleasing, entertaining object, to be something familiar like a map, score, etc.. These art bits, these sequences that move towards being a pattern - but are not; that move towards Being a map - but are not; that seem like they might be a sort of musical score - but not really; like the very term “INTERFERENCE” - bump, collide into one another, pierce holes through one another to Be Inter (between) Forare (hit/strike/make-holes).
The painted stuff on the object keeps Being in between stuff, and restless at that. Moving from one element, one sense to another, creating its own aesthetic identity that refuses to give up - even in the face of countless reductive attempts, each, in its own way, just fails to hit the target, perhaps precisely because the Boogie-Woogie keeps being its own “bogey being”, its own movement, a nomadic demon, restlessly bouncing from one element to another.
Maybe a bit like a pixel-line that bounces in Pong games, the lines of Broadway Boogie-Woogie move From one element to another.
A movement From, rather than Towards, is how we define Waves, and in early internet terminology, I'd argue, the equivalent term is “Surf”.
To Surf, like the waves we surf upon, is a movement From. Indeed while surfing, you know, you might have a clue about where you might come from, but the next bit of your journey is utterly unknown and can alter at any second. (i.e. it can change, evolve, or you, the surfer, might fold/fall..)
In that sense, the movement From, of being a surfing movement, cannot follow a path, cannot be part of a map in its present tense - only perhaps in hindsight. A map which will never be, like the Boogie-Woogie design references, the surf itself. (despite the attempt to describe it.)

A movement from, rather than for/towards, is that of spying. The search from stuff a detective/spy might have a few clues about - into an unknown. The spy, in that sense, like the detective, attempts to be a witness interfering in the future via the very fact of being clueless about the future. Via piecing together shreds and shredded bits of seeming evidence that might - or might not - link to one another, and form an imagination/perception about past and contemporary events, which will affect the future precisely because the approach is from cluelessness - rather than from ideology/preconception. Therefore the interference in the future can be very painful; it might involve realising the spy/detective’s lover is indeed a criminal to be jailed. However, the relentless clueless spy/detective is bound to search the unknown until it is known, until it is conquered and is clueless no more. In that sense, the familiar narratives of solving the crime are to end the interference. To make the interference interfering no more. To stop it from bumping into life. To arrest the disturbance and tie it, bind it, domesticate its wild nature - as well as the detective’s - a nature that is supposed to be satisfied by the solution.
In that sense, the normality, life, as it is seen from the spy/detective search range1, is permanently non-radical. Life has to go on without interference, and if there is one - it should be by a person/entity that seeks to shut the interference opening. That opening is a crime to be solved, closed and captured. From that view, the crime of art, or of Broadway-Boogie-Woogie, is that they continue to interrupt and interfere. While the ideas above might sound pretty abstract indeed, I have recently been through a process of Near-Arrest that might place the gist of interfering practices in a more functional, actual and perhaps easier-to-link-with light.

A few days ago, I was accused of interfering with the Sense of Safety of fellow flight passengers. To manage that fit of sense interference creation, I took photos of chairs, cups, doors, and other elements on the plane (see images at the end of text) - with a sign written in Magritte’s “This is not a pipe” font. The sign read: “this is not a human”2. To top that, I was taking photos of orange peels, some of which I wrote some words on. These words were written for myself, and to read them - someone had to spy from behind my back. An offending word was “Boom?” (written with the ? mark..) The police were very interested in meanings. I began by saying: am doing art, art is a meaningless sequence, it’s yet-to-be confabulated - that’s where it’s at. Hence the words written, apart from being random automatic writing, were intent free, and, I insinuated, should be taken as such.
However, pre-confabulation, automatic writing and specifically - meaningless - did not go down very well with the local airport police..
Also, they seemed to be stuck on the idea that This is Not a Human has to do with Boom. Obviously nothing to do with other words like Pen? Fly? - but it must, by all means, link with Not a Human. An interference in the “acceptable” sequence is hard to have a tolerance - let alone acceptance - for. New, or unfamiliar, sequences challenge power simply by being different links/connections among elements. If a person is allowed to take photos but not to have thoughts with words/terms that culturally happen to be censored, then that person - intentionally or not - is offering a new sequence of possibility. If this is not done at an acceptable time-space, a parrhesiastic range that power allows for people to be fearless at3, then power will consider long and Hard how to come down on you. Yes, while being questioned by the police, it slowly dawned on me how precious it is to have these times and sequences of fearless cultural interference.
Claiming I was clueless regarding meaning intentionality - a cluelessness that was honest - is not the kind of honesty the police were able to confront. In the range of possibilities in the police officers’ minds, in the ranges of their imaginations - clueless, intent free, and yet-to-mean/confabulation sequences seem to be, excuse the bluntness, Out of range.
Out of the frequency range that possibilities allow. In that sense, the very being of the sequence was an interference not just for some fellow flight passengers, but also for the investigating police. In that sense, the fact the whole thing was Out-Of-Range, was an interference in and of itself, not just the local questions of intent and other passengers.
I think this episode with “the law", with power, illustrates how easily that interference allowance in itself can be interrupted, consumed, become a ritual to be performed and be but a memory of interference. They wanted me to tell them a different story. A different narrative, a false sequence of events - however one they could link with and feel like they Understand. In a sense, like the Mondrianic inner interferences which do not allow Broadway Boogie Woogie to be pinned down, that create a constant inner sequences of interferences, perhaps the challenge for 1st, 2014, interference is precisely that - how to keep interfering. Whether it is in future interferences, or simply, to be an interference in and of itself.
From the sense of interference aesthetics, there is, in my mind at least, the search from cluelessness. A search from the clueless elements of interference. A search from being allowed to be clueless interfering, and interfering within the very cluelessness.
 

The very allowance to be clueless, unlike the withdrawal of allowances to be clueless by fellow flight passengers and the airport police, is simultaneously the challenge of abuse-less usage, and questioning so as to keep the sequence of interference allowance from folding onto itself. From becoming a ritual, a trope, a parody of itself, a tick in a calendar for a self-righteous narcissistic feel-good.
Much like the slightly unlinked elements this text puts together, and that am perhaps arbitrarily attempting to cross and present as organic and innately linked, the search from cluelessness will attempt linking, colliding and crossing seemingly disconnected elements. These collisions in mind might hold, fertilise, propagate, evolve - or fold into other elements, other possible collisions and interferences, or even simply die quickly. If you are reading this, please note that the talk will/was to open up the questions, to interfere and challenge them, rather than performing a reading of the text. (More details: http://senseclueless.beep.pm )



notes

1. Terms linked to the old German word “spähen", from which the English “Spy" comes. http://en.wiktionary.org/wiki/sp%C3%A4hen Based on that, I use “Spy" as one who interferes via being a searching witness, moving from certain ideas into others.. (eg. the detective’s required basic cluelessness..)
2. http://pas-un.eu/ - images and http://pas-un.eu/?page_id=2 - about the activities.
3. In relation to Foucault’s use of parrhesia and fearless speech. More details: http://foucault.info/documents/parrhesia/foucault.dt1.wordparrhesia.en.html

16.08
15:00
Universal Automation – Strike Now Collective
johnny void
Universal Jobmatch Comes Unstuck As
Automated
Jobsearch Is Launched

Iain Duncan Smith’s plans to force claimants to spend hours a week endlessly searching for jobs on the Universal Jobmatch website* could become a laughing stock after the launch this week of an app which will carry out jobsearch automatically.

According to their website, Universal Automation is a “browser extension which will automatically search for jobs on Universal Jobmatch and apply for them. It is the robot that will perform useless job search activities instead of you at a click of a button.”

In a humiliating blow to the DWP’s plans to turn Universal Jobmatch into a virtual workhouse, it seems that technology could make these plans redundant.  Whilst the app is in the early stages of development, this is exactly the kind of technological innovation that some analysts have claimed could replace up to half of all jobs in the next couple of decades.

As apps like this become more sophisticated it is easy to imagine that all jobsearch could soon be fully automated.  Employers are unlikely to mind, as methods of recruitment will no doubt follow a similar pattern.  No matter how much Iain Duncan Smith stamps his foot – and expect a tantrum when he finds out about this – his digital-by-default welfare reforms look to have already been out-digitised and now seem to belong to the last century.

Universal Automation – which I have on good authority comes from a trusted source – can be downloaded from:  http://automation.strikenow.org.uk/get-the-extension/

As mentioned, this is at an early stage of development, so read all the guidance notes and use at your own risk – those behind Universal Automation have called for feedback, and for techies who are interested in helping to get in touch.

*Universal Jobmatch is the government-run job-seeking website built at huge expense by Monster Jobs and designed to snoop on claimants’ job-seeking activity. The project has been an utter shambles, with thousands of fake or spam job vacancies listed along with vacancies for sex work – against current DWP policy – as well as spoof jobs and outright scams.

Despite this, hundreds of thousands of unemployed claimants have been forced to register with the website under the threat of losing their benefits. Whilst registration can be compelled, there is currently no requirement to tick the box allowing the DWP to snoop on your jobsearch (you can also untick the box which asks if they can send you emails).

16.08
16:00
On Computational Somatics – Luis Rodil-Fernández
Luis Rodil-Fernández
Computational Somatics
Thoughts on holistic computing

A Cartesian way of thinking is useful for solving a problem when it is static, unchangeable or of a material nature. This tool shows its weak points when the problem at hand is dynamic, has changeable and capricious aspects, and a spontaneous drive. Such is the case of the human problem.

A human being is an indivisible whole. It is not a coupling of the psychological and the physical parts which would remain distinct and independent of each other in their own proper compartments. If you feel embarrassed, it is psychological. If you blush, it is physical. But it is because we are embarrassed that we blush.

— The Non-Doing, Itsuo Tsuda

There is a deep and dynamic relationship between the evolutionary pathways of computers and humans, each influencing and helping to configure the other. Yet while machines are getting lighter, faster, easier to use, performing ever better at ever lower costs, the same cannot be said of the human, which has not kept up with the raging pace of development of the machine. Humans have not changed in any significant way in the last 200,000 years. There is, however, an illusion of productivity, which reinforces this relationship. Parallels can be drawn between this situation and the psychology of addiction. A damaging habit persists while the illusion of a perceived benefit is fed.

Where critical media theory focuses on the impact that different technologies have upon human culture, the concerns addressed in this writing are slightly different. The impact I am most concerned with is not cultural but somatic: it pertains to the body.

In the same way that a machine eventually breaks down when a human misuses it, when a machine misuses the human, it is the human that breaks down. A simple search on the web reveals that the cost of musculoskeletal disorders related to the workplace is around 50 billion dollars a year1. It is clear that work makes humans break down, and much work today is done together with a machine. Humans are breaking down and machines lose their users at a staggering speed.

Humans continue to neglect some fundamental aspects of their integration with the machine; the effects this has on their psycho-physical condition are disastrous.

It is a well established fact that machines affect the ways humans think. Friedrich Nietzsche suffered from disease; at some point he could not sit to write for extended periods. As a way to get back to his writing, Nietzsche adopted Malling-Hansen’s Writing Ball (Skrivekugle), a precursor of the modern-day typewriter.

One of Nietzsche’s friends, a composer, noticed a change in the style of his writing. His already terse prose had become even tighter, more telegraphic. “Perhaps you will through this instrument even take to a new idiom,” the friend wrote in a letter, noting that, in his own work, his “thoughts in music and language often depend on the quality of pen and paper.”

“You are right,” Nietzsche replied, “our writing equipment takes part in the forming of our thoughts.” Under the sway of the machine, writes the German media scholar Friedrich A. Kittler2, Nietzsche’s prose “changed from arguments to aphorisms, from thoughts to puns, from rhetoric to telegram style.  Malling-Hansen’s writing ball, with its operating difficulties, made Nietzsche into a laconic”3.

the machine

Nihil est in intellectu quod non sit prius in sensu4
Nothing is in the intellect that is not first in the senses

In her essay The Death of the Sensuous Chemist, Lissa Roberts tells a riveting story of a crucial period in the history of science that has Antoine Lavoisier as one of its protagonists. Prior to this time, chemists used their senses to analyze the matters they worked with. It was not uncommon for a chemist to taste urine and analyze its properties from what was plainly available to the senses. The training of a chemist involved years of developing a literacy of nature through the senses; gas concentrations would, for example, be determined by factors such as smell or temperature. This education was a lengthy process in which the apprentice had to be sensitized to the point that they could use their bodies to detect all the subtle discernments necessary to pursue chemistry with success.

Lavoisier was essential in formalizing the new science of chemistry by promoting an innovative synthesis of principles that were introduced by a network of fellows in the eighteenth century, including laboratory equipment, experimental techniques, pedagogical approaches and a language to discuss findings.

Analysts in the new chemistry had to master new experimental techniques that required them to subordinate and discipline their own bodies in the service of the material technologies of their laboratories5.

This is not to say that chemists stopped smelling, tasting, touching or listening altogether in the course of their work. But knowledge gained in this way came to be regarded as unreliable, arrived at by intuition and, more importantly, as non-discursive, given the requirement for precise measurements demanded by the language of the new chemistry. The machine began to be seen as the primary means to attain trustworthy scientific knowledge and technology became the means by which knowledge could transcend the body.

The Cartesian Machine

The most decidedly modern of machines is the digital computer, but the computer is a particular kind of machine in that it is not designed to extend human limbs, make them stronger or protect the human from predators. Like the abacus and other technologies before it, the computer is a machine designed to extend the power of human reason.

The use of reason as the principal means of attaining knowledge is a major pillar of western philosophy. As Jonathan Israel argues, "after 1650, everything, no matter how fundamental or deeply rooted, was questioned in the light of philosophic reason"6. The philosophical milieu of the time was perhaps best summarized by Descartes’ transcendent statement “Je pense, donc je suis" ("I think, therefore I am"). It is only through intellectual reasoning that human can know human. The breach between mind and body in western philosophy widened substantially during this period.

What more evidence of extreme Cartesianism in the current technological worldview does one need than the dualism existing between software and hardware? The digital computer, this most ubiquitous of technologies, is a manifest case of the way technology has developed under the Cartesian worldview.

The computer can be said to be the most Cartesian of machines, for the human capacity that it codifies is logical inference itself, and its distance from embodied operation is the greatest that has existed in any technology before it.

Mechanical Turk as the last job standing

It has been one of the most elusive promises of science that in twenty years7 computers will be as smart as humans. This particular kind of techno-utopia continues to be promoted by the aging rearguard of the Artificial Intelligence (AI) community8.

In the utopian landscapes proposed by this promise, humans are freed of mundane repetitive labour, work is performed by ever more intelligent machines and humans have plenty of free time to enjoy the wealth generated by the work of machines.

The currently existing relationship between human and machine is far from the promise of AI. The machine now pervades every aspect of the human’s working life and, far from making space for more free time, the machine is now portable enough and offers sufficient connectivity to make work possible anywhere, anytime. Humans take this feature as an opportunity and work at all times with careless abandon. The shift in labour is not only quantitative but qualitative as well. Machines perform tasks at the convenience of humans and humans do what machines cannot yet do. As machines become more capable in a general sense, humans become more specialized to fill the gap of the machine.

One example in which the machine’s capability has surpassed that of the human is the diagnosis of breast cancer. It turns out computers are more accurate in the diagnosis of breast cancer than trained humans are9. More accurate diagnosis of early signs would mean that a greater number of human lives would be saved if the machines were doing it. This also means that it would be not only less than optimal but also irresponsible to continue giving this job to a human.

As the machine further erodes domains that until not too long ago were incontestable to humans, it becomes inefficient, unviable and at times even immoral for humans to remain doing certain types of work. Machines, however, are not yet capable of carrying out all necessary work by themselves.

The use of human intelligence to perform tasks that computers are currently unable to do is becoming a niche market. Humans performing such tasks are called Mechanical Turks10. Global online retail giant Amazon offers HITs (Human Intelligence Tasks) for humans to perform in exchange for money. At the time of this writing, some of the tasks listed include: rating the credibility of a piece of content, discerning dialects from streams of Arabic text, writing an engaging callout, describing the color of a product, rating images for adult-only content, proofreading a publication. At any given time there are around 3000 HITs listed on the Amazon marketplace. Some of these HITs are in fact perfectly doable with a computer today and soon most of them will be.

It might seem a far-fetched idea that the last jobs where human labour will be required will be those assisting machines, but the current trend is to incentivize this kind of human-machine relationship in labour and there are no signs of this trend reversing.

The codification of civilization

In one of his lectures, Daniel Dennett showed a slide containing an old instruction manual for elevator operators, from back in the day when operating an elevator was a job carried out by a human. It contained clear and concise instructions such as "emergency exits at either side of the car must be closed when in motion". Dennett spoke of how these manuals have been progressively codified into technologies, as either electronic devices or software. In the process of codification a great deal of subtlety is lost. Certain directives are dropped, some new ones created; ambiguity and moral judgement are replaced with feedback loops.

In the words of Simon Penny “any tool, soft or hard, is a mechanistic approximation of a narrow and codified aspect of human behavior. [...] Tasks which are simple and open to variation for a person, must be specified and constrained when embodied in a machine”11.
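
A minimal sketch of what such codification can look like (in Python; the names and the single rule are invented for illustration): one sentence from the operator’s manual becomes a bare interlock check, and whatever context or judgement surrounded the original instruction disappears in the translation.

def may_move(car_doors_closed: bool, emergency_exits_closed: bool) -> bool:
    # "Emergency exits at either side of the car must be closed when in motion",
    # codified: refuse motion unless both sensors report closed.
    return car_doors_closed and emergency_exits_closed

print(may_move(car_doors_closed=True, emergency_exits_closed=False))  # False: the car stays put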

Concealment of the rules

In the process of codification certain aspects of the codified knowledge are lost. The process of trying to find the rules encoded in a system by looking at nothing other than the system itself is called Reverse Engineering. At first encounter with a codified system most humans find severe hurdles in their understanding of it. Few things about its internal workings are revealed to the untrained eye. Details of a system's inner workings are elusive even to expert eyes. Some systems enclose so many encoded abstractions that at times it is impossible to fully grasp how they all play together as a whole. This is one of the reasons why codification affects comprehension. The machine cannot easily transmit knowledge about the abstractions that it codifies. Many machines come with manuals, schematics and other aids to understand how they operate and how they are built. The general trend seems to be that the more impenetrable a system is, the more market value it has, so there exist incentives to make systems opaque.

Systems can sometimes reach such levels of complexity that no single human can even begin to understand how they work. In the Flash Crash of May 6th 2010, the Dow Jones index plunged about 9% in the course of the trading day, 600 points of it in just 5 minutes around 2:45pm. The causes were unknown at the time, but it is now suspected that they had to do with High Frequency Trading. High Frequency Traders are algorithms executed by very fast computers that operate on real-time market data, sometimes buying and selling within microseconds. These tiny transactions scratch only fractions of a penny each, but because they are performed in huge numbers every day they can amount to millions and millions of dollars’ worth of trade. The technological arms race that these trading conditions have created is as interesting as it is ludicrous. Each of these algorithms gets triggered under certain conditions: when a particular set of interrelated shares shows an oscillation in value, for example, one algorithm might be triggered to perform a sale, whereas another algorithm operating in the same arena, under the same conditions, might trigger a buy. This makes the market a vast pool of codified rule sets that affect one another and where no single entity has an overview of how the whole works.
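
To make the shape of such an interaction concrete, here is a deliberately toy sketch in Python (nothing like real trading infrastructure): two codified rule sets watch the same price stream and react to each other's effects, with neither having an overview of the whole. All names, thresholds and the initial "glitch" are invented for illustration.

history = [100.0]

def momentum_seller(price, recent):
    # Rule: sell whenever the price sits more than 0.5% below its recent high.
    return "sell" if price < max(recent) * 0.995 else None

def dip_buyer(price, recent):
    # Rule: buy on the same condition, hoping to profit from a rebound.
    return "buy" if price < max(recent) * 0.995 else None

price = history[-1]
for tick in range(8):
    if tick == 2:
        price *= 0.99          # a one-off mis-reported price: the "glitch"
    orders = [rule(price, history) for rule in (momentum_seller, dip_buyer)]
    for order in orders:
        if order == "sell":
            price *= 0.990     # each sale pushes the price down a little
        elif order == "buy":
            price *= 1.004     # each purchase pushes it up, but by less
    history.append(price)
    print(f"tick {tick}: {orders} -> price {price:.2f}")

Once the glitch puts the price below the sellers' threshold, the two rules keep reacting to the consequences of each other's trades and the price drifts downward tick after tick, without any single line of code "deciding" to crash the market.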

The new ecosystem of the machine is an Economist’s wet dream. All these trading agents, performing their actions rationally, with equal access to information12, with human emotions ruled out of the market. A perfectly rational system, the very essence of the science of Economics.

Yet what happened in the Flash Crash was unexpected and might never be fully explained. It is now thought that a glitch in price reporting might have triggered the downward spiral that was then exacerbated by High Frequency Traders, but the complexity of the system and the opacity of the rules codified in each individual algorithmic trader make an accurate assessment of the causality positively impossible.

No single human being has a detailed understanding of how these systems work.

Disembodiment

A lot of apps available today replace a technology that previously existed as a physical device, making the smartphone the ultimate generic tool that can perform the tasks of hundreds of other devices that previously had to be manipulated by a human in the physical world.

What before was pushing keys on a calculator is now tapping the touchscreen of a smartphone. What before was manipulating a water level is now balancing a smartphone’s inclination sensor.

As devices and the software running on them become more capable, software simulation quickly becomes the dominant aspect of the machine. The more generic the hardware, the more specific the software seems to become.

All codified aspects of an activity buried in the machine exist in a realm of ideas away from human consciousness, accessible only to the expert. The machine becomes a black box. With the process disembodied, the human using the machine that makes the thing hardly ever learns to make the thing itself. As Simon Penny put it, “the process which links conceptualization to physical realization is destroyed”.

As the machine specializes it is the human that becomes stereotyped, the human becomes a “standard part”, an interchangeable element in the chain, a parameterized formula in the design of machines. The more ergonomic the design, the more stereotyped the human.

Extraordinary development of the machine

F.M. Alexander found a causal relationship between technological development in the build-up toward the First World War and the “crisis of consciousness”13 that ultimately led to war.

[...] "The extraordinary development of machinery, which demanded for its successful pursuance that the individual should be subjected to the most harmful systems of automatic training. The standardized parts of the machine made demands that tended to stereotype the human machine" [...] "The power to continue work under such conditions depended upon a process of deterioration in the individual as he is slowly being robbed of the possibility of development"

This thought of Alexander is what nourished the idea of what John Dewey called the Degeneration of Civilised Unconscious. It is important to note that Dewey was not talking about this process as a cultural trend, but rather as a tragic disconnect between means and ends: being subject to change but never in control of the process of change itself, nor at any rate aware of it at all. Change happening below the buoyancy level of collective consciousness.

Alexander understood that awareness of this process of change, and the development of the awareness needed to exert some level of control over it, was a process that had to occur through the body and one that must be experienced before it is understood.

hci: the human as appendage

Virtual Realities - Real Concussions

When I was twelve, I went to a trade fair near my town. Years later the only memory I keep is the following anecdote.

While walking around with my grandfather I saw that the local bank had a very flashy and futuristic booth where you could experience a virtual reality environment. In it there was a text that said "VIRTUAL (n) that which is real, but doesn't exist". There were three people wearing head mounted displays and looking around in slightly crouched body poses, as if they were amongst a swirling flock of birds attacking them. I had seen these VR systems in science magazines and I knew about the lore surrounding them, but I had never before experienced one myself.

I waited in line for what seemed an eternity and finally I got to the VR station. Every set had a stewardess assisting the public; my stewardess for this experience handed over the HMD14 and gave me a few explanations. I couldn't really listen to her: I was too overexcited by the presence of the Silicon Graphics stations, glistening under the theatrical effect of the spotlights.

After the stewardess reset the trip, the HMD showed what seemed like a church, with stained glass high up on some walls and very ample spaces. I couldn't really see my arms or any part of my body as I had seen in movies. The graphics were slow; I could see the frames changing and the viewpoint catching up to the position of my head after a considerable delay. I moved my head around fast, trying to explore the whole virtual space in the little time I had for living this experience. I was not aware of what was going on outside of the HMD; the outside world was entirely locked out for me.

As I moved my head around I had no perception of the location of the stewardess. While I queued I had seen her helping people not get caught up in the cables that connected the HMD, but I had no idea where she was with respect to me, or even which way I was facing in the real world. At some point she must have got pretty close to me because I felt a dry and strong thud, followed by a muffled shriek. The next thing I remember was my grandfather talking to me from behind, saying that I had to come out, that I had been there for too long and had hurt the stewardess. I was very confused.

When I took the HMD off, the young woman was sitting on a chair just behind the computer setup, being assisted by a colleague, her hand clasping her forehead. My grandfather approached her to apologize in my stead. Apparently I had banged her head pretty badly with one of the hard edges of the HMD. I can't say that I was very worried about her: the excitement of the experience and the intensity of my disappointment with such crude technology were stronger than my compassion for the stewardess at the time.

I remember the whole experience as intensely disappointing.

In retrospect and with the knowledge I have gained since, that incident has contributed considerably to shaping the way I understand the discipline of HCI (Human-Computer Interaction). With the advent of every new human-computer interaction technology, there's always a human having to make an adaptive leap and a trail of people with some kind of physical side-effect that results from maladaptation. Every new development in HCI is marketed as "easier to use", "more natural". From the mouse to the Kinect, every controller has claimed to revolutionize the way we interact with the machine and every single one of them has left a trail of injured humans along the way.

HCI is obsolete

HCI as a discipline is based on a principle that no longer holds true, that human and machine ought to be distinct. While one performs computations the other performs interactions. This is a dualistic view, consistent with the Cartesian machine and oppressive of  the body.

Most efforts in HCI have in one way or another produced gadgets that require the learning of a somatic grammar. By somatic grammar I mean a set of bodily movements that, when combined, can convey to the machine the intention of its user. Whether this somatic grammar involves dragging a piece of plastic over a rough mat and clicking on buttons, or calibrating thoughts with a device that picks up brainwaves, is beside the point: in either case there is a grammar of use. This grammar is composed of an ever growing set of somatic verbs: drag, drop, click, swipe, pinch, tap, rotate, shake, bump, think up, think down. None of these is part of a human’s natural use; they have to be learned, and the gadgets often need to be trained or calibrated. The phrase “move to last photograph, select it and zoom in to see a detail” could be translated into the somatic grammar of some smartphones as: “swipe left, swipe left, swipe left, double tap, pinch, swipe”. This succession of verbs forms a sentence that expresses an intention to the machine.
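
To picture such a grammar in the abstract (the verbs and the mapping below are invented for illustration, not any particular device's actual interface), the translation from intention to gesture sentence amounts to a simple lookup, sketched here in Python:

GESTURE_SENTENCES = {
    "go to last photo":  ["swipe_left", "swipe_left", "swipe_left"],
    "select photo":      ["double_tap"],
    "zoom in on detail": ["pinch_out", "swipe"],
}

def translate(intentions):
    # Flatten a list of human intentions into the machine's somatic verbs.
    sentence = []
    for intention in intentions:
        sentence.extend(GESTURE_SENTENCES[intention])
    return sentence

print(translate(["go to last photo", "select photo", "zoom in on detail"]))
# ['swipe_left', 'swipe_left', 'swipe_left', 'double_tap', 'pinch_out', 'swipe']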

There is a reason why they have to be learned; let a gadget of recent development serve as an example of why this is so.

At the time of this writing a gadget called MYO is being advertised that uses electromyography as a gestural interface to a computer. Electromyography is a technique that picks up electrical pulses sent by the motor control nervous system to individual muscles. MYO can understand these signals and translate them into a model of tensional patterns in the arm and fingers, allowing for the recognition of very detailed and very specific gestures. It is on MYO’s website, in the FAQ section, that one can find an interesting revelation about why MYO is more of the same: “We use a unique gesture that is unlikely to occur normally to enable and disable control using the MYO.” It is this “unlikely to occur normally” that pervades all the somatic verbs that enable interaction between human and machine. It is this trying to distinguish a gesture that wants to communicate an intention to the machine from a gesture that would occur naturally that directly opposes the possibility of integration.

New, newer, newest

Humans seem to be quite ready to adopt new technologies irrespective of their ignorance of the impact these technologies might have on them.

Wireless networking, for example, is a technology that has quickly spread to every corner of the planet, and it was not until it was widespread that studies were conducted to understand the impact it has on the human. In fact this remains an ongoing experiment.

This general attitude of the human in the face of new technologies opens a realm of almost infinite possibility for the machine. The field of HCI parasitizes this special status that humans concede to technology. Humans seem to have no trouble subjecting the body to untested technologies for the sake of novelty.

Psycho-physical modalities

“One of the things I always liked about the Moviola is that you stand up to work, holding the Moviola in a kind of embrace [...] Editing is a kind of surgery – and have you ever seen a surgeon sitting to perform an operation? Editing is also like cooking – and no one sits down at the stove to cook. But most of all, editing is a kind of dance – the finished film is a kind of crystallized dance – and when have you ever seen a dancer sitting down to dance?”15.

A particular form of dysfunction comes from forms of interaction that lock the human body into a single modality of use for extended periods of time. Modalities are psychophysical in the sense that the psychological state and the physical use together constitute a modality. A person can be said to be listening, speaking, or constructing, and irrespective of their concrete activity we can make assumptions about the state they find themselves in.

A healthy human subject in a wakeful state of full awareness is multimodal in state and potential. Not only is the subject fully engaged psychologically and physically, but the subject is free to change these states effortlessly, in a natural flow from one modality to the next. In this sense there are no interruptions, simply because they do not exist. When something else calls for attention a person in a wakeful state can shift modalities without ever fully abandoning their activity; only the modality and the subject of engagement change, but the person never abandons a state of full engagement. This is a natural modality.

Multimodality is a word often used in HCI parlance, but I argue that its meaning is perverted when discussing human forms of engagement, as it is derived from a machine-centric view. An interface is said to be multimodal when it provides several distinct means for input and output of data. This is considered a good thing, as it supposedly increases the usability of a system. However, this apparently beneficial effect only takes into account total data throughput between human and machine and assumes that redundancy and synergy are beneficial to the human. This definition of multimodality relies on how much attention the machine can get from the human.

An example of this is the head-mounted display (HMD): by providing visual and aural feedback as well as tactile means of navigation, it achieves a high data throughput between the human and the machine. But the human loses awareness of the world around it. A person trying to engage somebody else who is wearing an HMD is sure to be interrupting. What one earns in immersion in a multimodal interface, one loses in awareness. In this sense even so-called multimodal HCIs are in fact locking the human into one modality of use; this is why I resist the notion of multimodality in HCI and would rather call these interfaces multi-channel instead. These types of interfaces make full conscious awareness practically impossible. The human becomes absorbed in a single modality of use, the one which is established by the interface.

Technobondage

I call this locking of the human into a modality a relationship of Technobondage. This kind of relationship is applicable to the chainsaw as well as to the computer. All technologies bring with them implicit propositions for bondage of the human. The machine is needy and the human forgiving.

Technobondage is a form of pure intellectual arousal, a kind of fetishism that relies on the psycho-physical subjugation of the other for a momentary sensation of pleasure derived from a sense of efficiency.

It is no wonder that humans derive pleasure from the destruction of machines. Catharsis is only possible when the dependency is broken. Only then can consciousness return. At the same time this catharsis is necessary for the death of the machine; evolution can only exist where death discards the obsolete and inadequate. The faster the machine becomes obsolete, the better it is for the machine and its kin. It is by the destruction of technological forebears that new technologies get made.

The film Crash by David Cronenberg, based on a book of the same title by J.G. Ballard, tells the intertwining stories of a group of people who derive sexual pleasure from car accidents – not just from witnessing them but from actively causing them as well. Their maimed and scarred bodies are a testament to the effects of the machine. Blood, semen, shattered glass and the twisted metal of a crashed car are all levelled in the same orgy of mechanical carousal. What then was mechanical has now become digital, and with this transition the fetishistic allure has only become stronger.

The least honorable way for a machine to die is to become a museum piece, a mere display item, a historical sample, for it is then that the logic of technobondage gets turned on its head, as it can no longer parasitize the human. The machine wants to die while in service, preferably taking its human operator with it.


notes

1. National Research Council and the Institute of Medicine (2001). Musculoskeletal disorders and the workplace: low back and upper extremities. Washington, DC: National Academy Press. Available from: http://www.nap.edu/openbook.php?isbn=0309072840

2. Gramophone, Film, Typewriter, 1999, Friedrich Kittler, link accessed 4/11/2012 (pp. 201-8); the subject in wider scope is available at ROSA B, a webzine co-produced by the CAPC musée et l’École des beaux-arts de Bordeaux, link accessed 2/2/2013.

3. The Shallows, 2010, Nicholas Carr.

4. De veritate, Thomas de Aquino, q. 2 a. 3 arg. 19 (source accessed 27 May 2013)

5. Essay The Death of The Sensuous Chemist in the journal Studies in History and Philosophy of Science, p. 506

6. Israel, J. (2001), Radical Enlightenment; Philosophy and the Making of Modernity 1650–1750, Oxford, Oxford University Press, p. 3

7. At almost every milestone of AI covered by the press there’s a scientist supporting this promise, which always seems to lie 20 years from now. This was the case in 1997 when Deep Blue beat Kasparov at chess, and in 2011 when IBM’s computer Watson competed in the television show “Jeopardy!”. The promise of AGI (Artificial General Intelligence), meaning non-specialized intelligent behaviour, has been proposed by intellectuals such as Michio Kaku, Ray Kurzweil, Ray Korowai, Hans Moravec, John McCarthy; the list is much longer than this space affords. The idea of AGI seems to have captured the imagination of scientists, inventors, sci-fi writers and the general public alike, and the fact that the promise has not yet materialized seems to disappoint none of them. (related link)

8. One example of a member of this community who is very much in line with this idea is the inventor Ray Kurzweil, who in his book How To Create a Mind continues to expound a variation of the Computational Theory of Consciousness as recently as 2012. This theory states, in very rough terms, that consciousness can be atomized into processes, individual computational tasks that together add up to generate cognitive output.

9. Systematic Analysis of Breast Cancer Morphology Uncovers Stromal Features Associated with Survival, 2009. See link to Science for full list of authors. (link)

10. The name Mechanical Turk comes from The Turk, a chess-playing automaton made by Wolfgang von Kempelen. It was later revealed that the machine was not an automaton at all, but in fact had a chess master hidden within its guts, controlling the moves against the opponent. (source)

11. Body Knowledge and the Engineering Worldview, 1996, Simon Penny. (source)

12. Equal access to information would be a market ideal that would imply equal opportunity for trade. But even though most markets are digital nowadays and it would, in principle, be possible to offer all trading algorithms equal access to information, this is not what happens in practice. There are all kinds of reasons that give one trader advantages over another, from corporate politics creating vast dark pools of trade, where shares never reach public markets and are instead traded internally without public disclosure, to physical reasons such as network latencies and processing speeds. The computer that gets and is capable of processing the information the fastest can more quickly react to a market event. The fact is that “equal access to information” didn’t exist in the age when humans traded in stock markets and it still doesn’t exist now that it is mostly machines doing the trading.

13. Man's Supreme Inheritance, 1910, F. M. Alexander. p. 102 (Commercial Industry and Militarism)

14. HMD. Head Mounted Display, now commonly known as VR goggles.

15. In the blink of an eye - a perspective on film editing, 2001, Walter Murch.

 

16.08
17:00
wotwot
Assorted schematics
16.08
18:00
Free Software – Critisticuffs
Free
Property
On Social Criticism
in the Form of a Software Licence

The open-source/free-software movement has quite a good reputation on the Left.1 This is not simply because open-source developers provide things for free which usually cost money, but also because the free-software movement is often regarded as an opposition or even a practical counter-project to capitalist private property. Hence, this text investigates the apparent contradiction that a licence – an assertion of ownership – guarantees universal access, while being simultaneously adopted and promoted by multinational IT corporations for their own profit.

Intangible goods are different …

Indeed, at least some people within the movement do seem to be bothered about property, at least where it specifically affects digital goods. And in terms of what they actually are, physical goods and so-called “intangible” goods do differ.

If someone uses my bike, I cannot use it at the same time. Ideas, however, such as those expressed in this text, can be distributed and shared with others without ever running out of them. For example, we do not know less of the content of this text when the readers know more about it. But still: reading the text, comprehending it, finding mistakes that we might have made, are intellectual efforts every time we accomplish them – activities that are both time-consuming and full of preconditions, e.g. one is required to have learned how to read. Hence, distribution is not to be had entirely “free” and without any (basic) requirements. The text itself, however, and the information it contains bears the particular feature that it can be copied (and, by implication, transferred, displayed, made available, in short: used) any number of times. Once certain (basic) requirements are established (e.g. a computer is at hand, an Internet connection is up and running), it is fairly cheap to duplicate a file containing this text – the effort becomes close to zero at some point.

… and with them, property appears differently

It seems an ‘artificial’ and unnecessary restriction to stamp private property on ideas, files or other ‘containers of information’ milling about – for the single reason that one is used to copying those files. From this, first of all, it may be noted that the quality of being property is ascribed to things. It is not a characteristic inherent to them, i.e. one that necessarily or naturally ‘comes with’ things. Secondly, it is apparent that one is not allowed to make copies of some files, e.g. most music. It is illegal to distribute such files. With regard to files this seems, at first sight, rather absurd, since their distribution neither changes nor damages their content. So, when it comes to ‘intellectual property’, property appears differently. Namely, it appears more obviously that state authority restricts its use through patent, copyright and other laws. This way it becomes very distinctly recognisable what property actually is – a barrier.

Moreover, scientific and technical results were products of collaboration long before the beginning of digital information processing. This is because even the smallest discovery or invention is based on a host of other discoveries and inventions; so many that the respective originators only know a fraction of the sources from which their content derives. Mathematical findings are based on other mathematical findings, software is based on ideas found in other software packages or relies on those packages directly.2 Thus, in order to make progress in research and development, access to what is already known is required. If nowadays intellectual property titles are continuously used and defended, i.e. if access to and applicability of existing information is restricted by law, then this prevents the development of new ideas. Property appears as something arbitrarily separating that which essentially belongs together. Not only is property a barrier to access to existing things or knowledge, it is even a barrier to the discovery and development of new ones.

The absence of property relations as norm

The concept of open source emerged alongside the development of mainframes, personal computers and the Internet, and it also pushed these developments forward. The starting point for the open-source movement was the acknowledgement of some particular qualities of digital goods, especially their lossless reproducibility and the implications for software development that come with this quality. The movement’s protagonists knew how to take advantage of those qualities in their work and, hence, focused on their social requirements. To concern oneself with this topic at all was a new phenomenon compared with the beginnings of the field of computer science. From around the 1950s on, free access to and a de facto unrestricted use of all required information went without saying – at least with regard to software. This, anyhow, applied to people with the respective knowledge working at the relevant, well-equipped research institutions. Software simply was a free add-on that came with massive, expensive mainframes. Accordingly, it was openly distributed, studied and changed.

Only from the mid-1970s did a market for proprietary software develop, i.e. software that one is not allowed to freely modify and distribute. Companies such as Microsoft started doing business by selling software and especially licences granting the right to use this software.3 People such as Richard Stallman – founder of the GNU Project and author of the best-known free-software licence, the General Public License (GPL) – stepped up against this new development in order to retain the status quo. Stallman and his colleagues developed software together and their demand was that others should be able to study, use and distribute their products. Indeed, from the standpoint of well-planned production of useful things, this is a sensible position.

Property – a standard for the world of physical things?

The open-source/free-software movement started off with the GNU Project. It is important to this movement today that property relating to intangible goods has to play an inferior or different role than property regarding other, i.e. material, things. The reason for this – according to this movement – is to be found in the particularity of intangible goods themselves.

For example, the German Pirate Party – as other Pirate Parties concerned with issues at the crossroad of democracy and the digital life – writes in its manifesto, “Systems that obstruct or prevent the reproduction of works on a technical level (’copy protection’, ’DRM’, etc.) artificially reduce their availability in order to turn a free good into an economical good. The creation of artificial shortage for mere economical interests appears to us as amoral; therefore we reject this procedure. […] It is our conviction that the non-commercial reproduction and use of works should be natural; and that the interests of most originators are not negatively affected by this – despite contrary statements of particular interest groups.”4

With regard to digital goods, the members of the Pirate Party complain that by means of a title of ownership access to information is “artificially” prevented, which goes against information’s “natural” feature of being copyable: “information wants to be free”. At the same time, they see no reason to make the same claim for material things. According to the logic of the party’s political programme, those are “economical goods” quite by themselves. An assumption that seems so self-evident to the authors that they do not explicitly mention it.

The GNU Project, on the contrary, explicitly addresses the assumed distinction between non-material and material: “Our ideas and intuitions about property for material objects are about whether it is right to take an object away from someone else. They don’t directly apply to making a copy of something. But the owners ask us to apply them anyway. […] But people in general are only likely to feel any sympathy with the natural rights claims for two reasons. One reason is an overstretched analogy with material objects. When I cook spaghetti, I do object if someone else eats it, because then I cannot eat it. His action hurts me exactly as much as it benefits him; only one of us can eat the spaghetti, so the question is, which one? The smallest distinction between us is enough to tip the ethical balance. But whether you run or change a program I wrote affects you directly and me only indirectly. Whether you give a copy to your friend affects you and your friend much more than it affects me. I shouldn’t have the power to tell you not to do these things. No one should.”5

However, this distinction between material and non-material goods is not correct.

1) The GNU Project claims that a difference between spaghetti and a program is that the former can only be consumed by one person, while the latter can be used by indefinitely many people. Hence, for the GNU Project the former implies the need for private property while the latter does not. Yet, under the regime of property it does not matter whether an owner actually uses her stuff or not. When people think about property in material goods, they have their personal belongings in mind, things they need more or less regularly. But this is not the main point of private property – the way it works is much more far reaching and fundamental. For example, squatted houses get evicted to stand empty again, pieces of woodland are fenced in by their owners even if they live hundreds of miles away or supermarkets lock their bins to prevent people from dumpster diving. The question whether someone could make use of something is subordinate to ownership, not the other way around. Property applies no matter whether the owner or someone else, e.g. in return for payment, uses it. Making successful claims to an absolute disposal over wealth of whatever kind and whatever quantity regardless of neediness – this is private property. Regardless of material or intangible goods, the regime of property does not care who wants to use what and how. Whereas it is true that only one person can eat one’s fill given only one serving of spaghetti, under the regime of private property to own spaghetti is the condition for eating them, but the desire to eat them does not establish ownership. So, in this respect the material vs. non-material distinction is wrong.

2) In one respect though, need does play a role, namely a negative one. Property in a machine indicates the exclusion of third parties from using that machine. One cannot enter into an ownership relation with a machine because a machine is not eligible for a legal relationship. It is the same with a disc containing a copy of a Windows operating system on it. One is not allowed to install it merely because this disc lies around somewhere unused. The particular function of a title of ownership for the owner is strictly that others may not use her property without her consent, even though they might want to and perhaps even be physically able to do so. What friends of free software notice and highlight with regard to digital goods could also be observed with regard to ordinary material things: it is a fact that property is a relationship between people in regard to things, but not immediately between things and people. If no one else is there, it does not really matter what belongs to me or what I simply use. This only becomes relevant when others want to have access, too. Property is a barrier between those who want to use a thing and the thing itself, between need and the means to satisfy it. The guarantee for property in material things does not exist despite but because people want, need, require them. To own bread and all the more to own a bread factory is significant because other people are hungry. Otherwise, what would be the point of guaranteeing the right of exclusive disposal?

3) Furthermore, with respect to reproducibility a rigorous contrast, material vs. intangible, does not exist either. It is possible to produce things and this means nothing else than to eradicate the detected scarcity. There is no such thing as a particular finite number of bread knives in the world, more can be manufactured. Indeed, one has to do something for it, but nothing simply is “in short supply”.6 However, in order to manufacture something one has to have access to the means of production which, again, are privately owned. And in this regard – again – it does not matter whether one ‘really’ needs them or whether they are currently in use.

Yet, there is indeed a difference between software and bread knives: the contemporary means of production for software are meanwhile cheap mass products that most people have at home anyway. One can write a lot of state-of-the-art software with a five-year-old computer from a car boot sale.7 Thus, the production of software ‘only’ requires an investment of education and labour time, while when it comes to, e.g., bread knives one is excluded from the means of production at the level of the state of the art. In order to be able to produce bread knives one would indeed need the corresponding factory, and this has to be bought first.

4) The means of production are not simply “in short supply” either but can also be produced, by and large. One is excluded from the means of production as their purpose for the owner is access to the wealth of society in the form of money. The owner knows she has to come to agreements with others in order to get their products. Hence, she uses her factory – as well as people who do not have one, i.e. workers – to manufacture something that she can sell. With the proceeds she then can either buy goods for herself or she can reinvest in workers and means of production so that another round of fun may commence. In a society based on the division of labour one is dependent on others and their products, be it intangible or material goods. Because in this society this trivial fact does not lead to a self-conscious interaction of producers but rather the regime of property prevails, one is excluded from the products of others and therefore is required to exploit their needs to one’s own advantage. This absurdity can be put differently: it is precisely because one is dependent on others that one insists on the exclusion of others from what one owns. If everyone gives only if given an equivalent in return, then certainly it makes sense to deploy what one has as means of access to the stuff under the control of others by matching their exclusion with one’s own.

Property is characterised by exclusion whether it concerns material or immaterial goods. The free-software movement disagrees though – and it shares this fallacy with the majority of people. In other words: the political wing of the free-software movement insists on drawing a strict distinction between digital and material goods in order to criticise the regime of property regarding digital goods. Yet, it is exactly their line of argument that reaffirms the exclusion from the things people need: the regime of property. Some radical activists want to use free software as a tool for the abolition of private property; for example, the slogan “free software today, free carrots tomorrow” can be read this way. This is futile as the reference to the free-software movement’s ‘criticism of property’ takes up the false idea of free-software proponents that carrots can never be free and for all, instead of critiquing it.

Copyleft licences – critique of property law by legal means

Access to open-source software is defined and regulated in legal terms. First of all, copyright law applies regardless of what the author chooses to do. This law forms the general basis and is applied by the state to anything it considers to have a creator. But moreover, an open-source licence determines what anyone else is allowed and not allowed to do with, say, a piece of software by means of the law – no difference from other areas of bourgeois society. Usually open-source licences allow one to read, modify and further distribute the source code.8 The various licences differ considerably in terms of their precise provisions. Roughly, there are two versions of openness. The above-mentioned GPL determines that any program using software parts licensed under the GPL has to be entirely licensed under the GPL or a compatible licence as well. This means that the licence is ‘virulent’ and components mutually affect each other. It is, for instance, not allowed to simply take the Linux kernel (i.e. the operating system’s core), modify it here and there and then distribute the result without also releasing the source code of the modifications. In contrast, the BSD family of licences is less strict.9 BSD programs are part of Microsoft Windows, for example, and there is no obligation to publish any source code. The licence mainly stipulates what must happen if source code is distributed, namely that copyright holders must be named. Secondly, it provides that no one may sue the authors in case something goes wrong. An exclusion of liability: the software is provided “as is”. Both camps – GPL vs. BSD – do not tire of arguing over these differences. The GPL camp holds that liberty is to be protected by force, whereas the BSD camp is convinced that this way liberty is lost.10 Who is right, whether this question can even be settled or not, or whether it cannot be conclusively answered because this type of freedom includes its opposite – domination – is perhaps better saved for another text. Here, we may conclude, though, that this kind of practical criticism of property necessarily presupposes a title to ownership in a software product. This is the reason why Richard Stallman calls the GPL a “legal hack”, i.e. a trick on legal grounds11: one insists on one’s property by way of claiming the terms of a licence in order to guarantee free access.12

But, “you can’t hack the law”13. The legal system – guaranteed by the state’s authority – cannot be tricked: licences (no matter what kind) are legally binding contracts following the logic of the law which, if in doubt, can always be enforced in case one of the contracting parties claims its right.14 The result of this is that, e.g., scientists who make their research software available to others have to deal with a maze of different incompatible licence versions. Hence, questions such as the following arise: am I legally allowed to combine another scientist’s open-source software with my own?15 A creative use of, and tricking of, the law – Stallman & Co. creatively use the law – turns into submission to the law in principle – the law dictates its terms to Stallman & Co. – that is how the law works.

Moreover, such a “hack” develops its very own dynamic in a society of law-appreciating citizens. The field in which licences are applied in this manner has meanwhile grown massively. The Creative Commons movement16 recommends that scientists, creative artists as well as hobby photographers uploading their holiday snapshots to the Internet claim ownership of their respective products of information. They are encouraged to exclude third parties more or less from using such products by choosing from a toolbox of legal restrictions. Contrary to Richard Stallman, the Creative Commons initiative by Lawrence Lessig does not problematise the really existing copyright regime. Hence, the initiative quite correctly notes: “Creative Commons licenses are copyright licenses – plain and simple. CC-licenses are legal tools that creators can use to offer certain usage rights to the public, while reserving other rights. Without copyright, these tools don’t work.”17 Meanwhile, even things that a few years back no one would have expected to be ruled by copyright law, such as the above-mentioned holiday snapshots, are now subsumed under its regime.18

How deeply ingrained the formalism of the law is in these people’s minds is aptly expressed by the controversy around the DevNations 2.0 Licence and its subsequent withdrawal.19 The DevNations 2.0 Licence stipulated that people from ‘developing countries’ were allowed to use products under the licence free of cost whereas people from capitalist centres were not entitled to this. Hence, it was a licence that at least acknowledged real material differences.20 The licence was withdrawn because of its discrimination against people living in rich countries. Hence, it violated equality before the law; but this equality, i.e. non-discrimination, is a requirement for any licence hoping to be verified as an open-source licence by the Open Source Initiative. If the open-source movement is said to have started off with a criticism of property – even if restricted to intangible goods – or to have been bothered by people being excluded from the digital wealth of societies, then it is safe to say it achieved the opposite: you cannot hack the law. What remains is to (practically) critique it.

Software commons for profits

The open-source movement succeeds because it gets along well with an IT industry whose prosperity is otherwise based on every known principle of private exploitation. In the following we give some short examples to illustrate how business and open source work hand in hand, i.e. to unpack the apparent contradiction of making money from something that is made available for free.

The Mozilla Foundation – best known for its web browser Firefox – receives a good deal of its income from Google Inc., as Google Inc. pays for the browser’s default search engine to be Google. Apple’s operating system OS X is built upon an open-source foundation: Darwin. Apple now and then even collaborates in open-source projects, using the results of this collaboration to sell hardware, software packages, films and music – lately rather successfully, we hear. Furthermore, according to a study only 7.7% of the development of the kernel of the Linux operating system was explicitly non-paid volunteer work.21 Red Hat, IBM and Novell are the biggest companies directing their employees to collaborate on this operating system, each one of them a global player on the international IT market. They co-develop Linux in order to do profitable business with it. For example, they sell applications that run on Linux or provide support contracts to companies: you buy our product, we make sure everything runs smoothly. Companies pay for this service – to save the hassle – even though it would be possible to compile the result by means of open-source projects themselves. Google distributes its operating system Android and its web browser under an open-source licence, especially so that users of smartphones use Google’s products, by which Google directly or indirectly makes money by means of advertising. Many companies contribute to developing the GCC compiler because it is a central piece of infrastructure for every software company.22 Meanwhile, even Microsoft has published some products under open-source licences.

Modern politicians concerned with the economic success of their respective nation-states have understood the power of open source – by all means, they promote and encourage the blossoming and expansion of this infrastructure which is available to all. Firstly, this is to strengthen the economy of their nation-state; secondly, it simply is cheaper for their own administrative bodies to use open-source products. By the way, long before the C6423, bourgeois states provided fundamental research and knowledge for the benefit of national economic growth by means of their university systems. It is hence fitting that the two most popular open-source licences (GPL and BSD) were developed at American top-tier universities (MIT and Berkeley).

The bourgeois state has also realised that its patent law not only enables the private exploitation of innovations but also serves as a barrier – and in this regard it does appreciate the worries of open-source/free-software activists. For if existing innovations cannot be used for the development of new ones, that means bad prospects for economic growth. So the bourgeois state implemented a patent law that grants patents for a limited period of time only. Regarding the exploitation and perpetuation of technology, it thus provides a mediating form for the competing interests of individual capitalists – in the interest of total social capital. On the one hand, individual capitalists want to massively exploit their patented inventions by excluding every non-payer from the use of those patents. On the other hand, they want to use others' patents as a basis and means for their own success.

Within the cultural sector, where CC-licences are widely used, things are the same. Incidentally, this also applies to those who choose a non-commercial CC-licence for their products, which allows use on a non-commercial basis only and serves to exclude others from profiting monetarily from one's own output. This right is reserved for the person uploading a holiday snapshot or producing a music track. The whole concept has nothing to do with a critique of a society that is based on the principle of reciprocal exclusion from useful things and in which every individual necessarily relies on her own property or labour-power. There is no critique of the social conditions in which we live to be found in insisting on the right of the creator – this merely asserts the owner's competitive position vis-à-vis the competition.


notes

1. The open-source/free-software scene fights, partly acrimoniously, over the question of whether it is "open source" or "free software" that it develops. The former is a particular mode of developing software, the latter a comprehensive approach to software in general; it is a demand, sometimes even called a "philosophy", for what one shall be able to do with software. In our text we often use the term "open source" simply because it is better known. To be entirely correct, though, we would almost always have to write "free software", as our criticism is directed towards the comprehensive claim of this movement, as opposed to the simple endeavour of making software development more effective.

2. With regard to the production of software it is common (and quite sensible) to put frequently used features into separate packages, which are then used in various products. Those packages of features are aptly called libraries.
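
To make this concrete, here is a minimal, hypothetical sketch in C; the names greeting.h, greeting.c, greet and program_a.c are invented for illustration. A frequently used feature is written once, compiled into a library, and then reused by any number of programs:

    /* greeting.h – the library's public interface */
    void greet(const char *name);

    /* greeting.c – the shared implementation, compiled once into a library,
       e.g.: gcc -c greeting.c && ar rcs libgreeting.a greeting.o */
    #include <stdio.h>
    #include "greeting.h"
    void greet(const char *name) { printf("Hello, %s!\n", name); }

    /* program_a.c – one of several products reusing the same library,
       e.g.: gcc program_a.c libgreeting.a -o program_a */
    #include "greeting.h"
    int main(void) { greet("world"); return 0; }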

3. Bill Gates' letter to the Homebrew Computer Club is an interesting historical document, highlighting the necessity of justifying privatisation at the beginning of this new development: http://www.digibarn.com/collections/newsletters/homebrew/V2_01/gatesletter.html (last access 14 August 2013).

4. Quoted from https://wiki.piratenpartei.de/Parteiprogramm (last access November 2012); our translation, emphasis added.

5. https://www.gnu.org/philosophy/why-free.html (last access November 2012), emphasis added.

6. Hence, it is ridiculous that economists, for example, constantly present beach houses and famous paintings to illustrate their theories. They choose examples that indeed have the feature of being in short supply in order to say something about things such as bread, flats, cars and clothing. In other words, to explain the economy – i.e. the sphere where things are produced – they use as examples things whose quantity cannot easily be increased by production.

7. This is currently changing, so this statement may no longer be true in a couple of years. If software runs on large networks of computers that together calculate something, then a ten-year-old computer may no longer be an adequate means of production.

8. Source code means the program written in a language that humans are more or less able to read… well, except Perl.

9. BSD stands for Berkeley Software Distribution.

10. Which licence to choose may sometimes simply have economic reasons. Most of the open-source software in the field of applied mathematics is licensed under a BSD-style licence, as companies within this sector often do not intend to sell the software but to use it themselves. They also collaborate only on the condition that they may do so quite unrestrictedly. By contrast, most of the open-source software in pure mathematics is licensed under the GPL: the only companies interested in these software packages are those making money from selling such software. That way the (often academic) authors protect themselves from being sold their own software as part of such commercial packages.

11. It does not come as a surprise that he attempts to creatively apply the law. After all, he does not have a problem with the fact that daily needs cost money, i.e. that someone insists on his “every right” to get paid: “Many people believe that the spirit of the GNU Project is that you should not charge money for distributing copies of software, or that you should charge as little as possible – just enough to cover the cost. This is a misunderstanding. Actually, we encourage people who redistribute free software to charge as much as they wish or can.” – http://www.gnu.org/philosophy/selling.html (last access November 2012).

12. By the way: in no way does an open-source licence mean that one gives up ownership. The licence terms always apply only to others, i.e. the users, whereas the owner is of course free to do whatever she wants with her property. This is the basis of a business model in which a (restricted) version of a product is made available as open-source software while, at the same time, a(n optimised) version is sold as usual.

13. Cindy Cohn, Legal Director of the Electronic Frontier Foundation. It should be noted, though, that her meaning of hacking the law is rather different from, if not contrary to, ours. See http://s.shr.lc/10xUcQo.

14. In the leading capitalist countries, the GPL "trick" has meanwhile been accepted as legally binding. This means that it is possible to sue someone in case of violations of the General Public Licence. If such a lawsuit is successful, a party can be forced to release all source code of a product that incorporates GPL code.

15. It is possible that the answer to this question is "no"; an example from the area of mathematical software highlights this: http://gmplib.org/list-archives/gmp-discuss/2008-May/003180.html (last access November 2012)

16. The Creative Commons (CC) movement emerged in response to branches of industry where direct producers such as musicians usually sign over considerable rights to record corporations – i.e. lose ownership of their own products. That is somewhat similar to a factory worker, who likewise does not own a single product he manufactures. By contrast, CC-licences first of all assert a claim of ownership over one's own product.

17. http://creativecommons.org/weblog/entry/22643 (last access November 2012)

18. On Flickr – a photo-sharing website not as popular as it used to be – one is bothered with the question of which licence ought to be applied to one's photos, a rather absurd thought in the first instance.

19. See http://creativecommons.org/licenses/devnations/2.0/ and http://creativecommons.org/retiredlicenses (last access November 2012)

20. Our earlier elaborations on property indicate that poverty cannot be abolished by means of such licences.

21. For 25% of the work it remains unclear whether it was paid or not. See http://lwn.net/Articles/222773/ (last access November 2012)

22. GCC stands for the GNU Compiler Collection, a collection of compilers by the GNU Project. A compiler translates programs from source code into a format that can then be executed on the respective computer. Free software does not make much sense without a free and reasonably good compiler. If no compiler is openly available, it is still possible to change a program's source code, but the changes cannot be put to use – unless you buy a licence for a compiler. And if the only available free compiler is a poor one, open-source programs are at a disadvantage against the proprietary competition.
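
As an illustration of that translation step – a minimal sketch, assuming GCC is installed and using an invented file name hello.c – the human-readable source below only becomes something the machine can run once a compiler has processed it:

    /* hello.c – human-readable source code */
    #include <stdio.h>

    int main(void)
    {
        printf("interference\n");   /* print a line of text */
        return 0;
    }

    /* gcc hello.c -o hello   translates the source into the executable ./hello;
       editing hello.c changes nothing about what actually runs until it is
       compiled again – which is why an open program without an open compiler
       remains effectively closed. */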

23. The Commodore 64 was a popular personal computer in the 1980s.

 

16.08
18:00
WebOfTrust – Luis Rodil-Fernández
16.08
19:00
Dinner – Rampenplan
16.08
21:00
Party – Imaginary
17.08
11:00
Brunch
17.08
12:00
The Internet of Things – Rob van Kranenburg
17.08
14:00
In search of lost time 2.0 – Janneke Belt and Paulan Korenhof
Don Ihde
Existential Technics
Chapter 3

In our presentation we explore how technologies co-shape our lifeworld by providing (or not providing) easy access to personal information. Because we have more questions than answers, we submitted a theoretical background text to the reader: chapter 3 of "Existential Technics" (1983) by Don Ihde, the founding father of postphenomenology. Albeit focused on somewhat 'aged' technologies, his line of thought shows a method for contemplating the effect that technologies can have on our perception of the world.






17.08
15:00
Sandbox Culture – Aymeric Mansoux
17.08
16:00
Not for the Lulz – Hal Faber