Author Archives: robin

Some things I made, part 1

A few people have shown interest in some of the things I make, so I decided to put photos and sometimes descriptions up here. This is the first: a sculpture I finished in March this year, which I gave to my father as a birthday present. I started it in early 2013 (yep) after seeing a piece called Two Forms with White, by Barbara Hepworth, at the Hepworth Gallery in Wakefield in late 2012. My piece was broadly influenced by that work, although I didn’t use a single block of wood, instead laminating around 20 pieces of different thicknesses and types. The wood used included pine, kauri, mahogany and oak. The shape was then coated in several layers of Danish oil. There’s also a small piece of aluminium in there, which I cast and then shaped to fit.

Language as alienation

This was written as a companion piece and response to an earlier post, Abstract and concrete language in debate.

Commodification is the process of valuing items according to what they can be exchanged for [1]. Exchange generally takes the form of money, but it can hypothetically be anything, as in Smith’s infamous yet rarely-existing barter [2]. The traditional critique of commodification comes from Marx and briefly states that in commodifying a thing, be it immaterial or material, we are reducing its existence to one concept and ignoring all others. In this frame, the only thing that matters is the exchange-value, the dollars we can swap it for. The other qualities of the thing are irrelevant and increasingly do not determine its exchange-value at all, as they did in the time of Marx and the other classical economists [3]. Regardless of the source of the “value” of an item or concept, through commodification those values are stripped away, to leave only price to represent it. In so doing, we impoverish our existence, reduce ourselves to one-dimensional creatures and limit our behaviour to a single way of examining the world and our interactions.

After recently reading the first chapter of Adorno and Horkheimer’s Dialectic of Enlightenment [4], and previously reading Writing and Seeing Architecture, by Portzamparc and Sollers [5], I realise this critique of contemporary life does not go far enough in its questioning of abstraction. As commodification reduces items and concepts to a single simplified, abstract, quantifiable representation, that of money, so science and its tool, abstract language, reduce all experience to a single concept, that of their relation to and use by humans in the pursuit of some goal. These goals in the era of modernity and postmodernity have been varied, but have mostly revolved around Kant’s suggestion that “Enlightenment is man’s emergence from his self-imposed immaturity. Immaturity is the inability to use one’s understanding without guidance from another” [6]. From this, we see the goal as being the project of understanding the world and acting on it in a controlled, useful manner. As Adorno and Horkheimer point out, this requires the reduction of the world as it exists and as we sense it to representations which are solely relative to us as humans, for the requirements we have, rather than in terms which are inherent to the object or concept. These two positions, that of sensing and of representing, lie along a continuum; of course there is no possibility of referring to anything without some hint of a human’s relation to that thing, but it appears science and modernity have pushed us ever closer to a more abstract, human-oriented view of the world. Like commodification, this reduces our world, impoverishes our experience and alienates us from the world, even as we gain more understanding of it. No longer are we able or permitted to merely engage with the world according to our senses, feelings and emotions; we must abstract away from those and replace them with a world view constructed entirely of our own making, in our heads, using non-worldly, non-sense-based images and ideas we overlay on existence. I suggest that this is similar in form to commodification; it is more expressive no doubt, but it is not the totality of the thing, merely a human-produced representation. The more we refine that representation, the more it obscures the original, tending towards Baudrillard’s simulacra [7].

It appears we have discovered the positive in the non-representational/sensory/concrete interpretation of the world, a position referenced as less important in the earlier post on this topic. We might also go along with Adorno and Horkheimer in stating that abstraction is one of the key processes, if not the defining factor, of the enlightenment project, which lends a distinctly deterministic air to the process of commodification. It further raises questions about the seeming contradiction of using abstract language to dissent against the use of markets and commodification.

There have been various techniques and mechanisms suggested to reduce this condition of being elsewhere, from meditation to mindfulness to simply turning off the computer or mobile device (it is an interesting although unsurprising artefact of the omnipresent Unix philosophy that all communication should be in human language [8] and thus be a series of abstractions). We might add to this list anything which brings us closer to the thing itself, such as rejecting mass-produced food, walking instead of driving or engaging in immanent rather than transcendent governance. Immanent versus transcendent is probably another way of viewing abstract versus concrete. All of these ideas, while useful, feel to me trite and simplistic, as if they only solve part of the problem, as if there is a gap between the two which remains unfilled. Is there no way to engage with complicated, non-sensory ideas and concepts which is not alienating? Is the solution to this alienation nothing more than a “balance” between the sensory and the representational? Looking ahead to fantasy science-fiction, how would so-called “thought-reading” affect this? Would that still entail a level of abstraction, or would the short-circuiting of language remove it?

[1] https://en.wikipedia.org/wiki/Commodification
[2] Graeber, David. Debt: The First 5,000 Years. Chapter 2
[3] https://en.wikipedia.org/wiki/Socially_necessary_labour_time
[4] Theodor W. Adorno and Max Horkheimer. Dialectic of Enlightenment. Pages 6-7
[5] Christian de Portzamparc and Philippe Sollers, Writing and seeing architecture. Page 46
[6] http://theliterarylink.com/kant.html
[7] http://web.mit.edu/allanmc/www/baudrillard.theartauction%20.pdf
https://en.wikipedia.org/wiki/Simulacra_and_Simulation
[8] https://en.wikipedia.org/wiki/Unix_philosophy

Crypto-currencies and production

Electronic/crypto currencies are a fascinating comment on value and the way production is shaped, and an insight into the financialisation of the world. In previous eras (let’s say 1500 – 1950, the era in which classical economics was dominant), price was a representation of the work that had gone into creating something: if a pair of shoes cost y dollars, that represented a certain number of hours of work. Similarly for a car that cost 200 times y dollars, or a house that cost 10,000 times y dollars – not only would the cost of the car be 200 times the cost of the shoes, so would the approximate hours needed to create it. There was a definite and somewhat fixed correlation between price (exchange value) and utility (use value) [1]. This has been smashed in the last 50 years, with increasing mechanisation which has reduced the amount of labour needed to produce anything to near zero. In parallel, there has been a rise in the use of financial instruments including futures, collateralised debt obligations, credit default swaps and derivatives. The prices of these instruments change, going up and down in a self-referential manner; their prices no longer represent their usefulness [2] but a socially agreed-upon importance (agreed between traders and owners of these instruments anyway; ordinary people aren’t permitted to intervene, thanks to price-based discrimination). The end result of this collective hallucination is a herd mentality, including tendencies towards panic and fear, then a huge drop in the price of the instruments as everyone tries to sell them. The 2008 crash is a prime example of this, but it has happened many times in the last 40 years, see also the Asian Financial Crisis of 1997 [3]. This idea of price being based on belief entirely blows apart Adam Smith’s idea that selfish behaviour by individuals would produce increased wealth for everyone:

“The rich…are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants, and thus without intending it, without knowing it, advance the interest of the society…” [4]

So, what’s this got to do with crypto currencies? Well, there is the same complete detachment between the price of the currency and useful production. The currencies have a price, some number of dollars, but “mining” (a term which is nothing more than a manipulative attempt to link the process to something useful and tangible, as if it were searching for something of worth) them creates nothing of any value, does nothing for anyone else. In fact, mining destroys value: it uses electricity, creates carbon emissions and consumes resources to build the computers, along with the other knock-on effects of building, transporting, using and disposing of the components, not to mention further enslaving the workers in China who have to make those components in terrible conditions. In Smith’s terms, it is selfishness which only benefits the individual undertaking the act; there is no “promoting the happiness of mankind”.

It’s an interesting insight into why “building wealth” in the current era is at best utterly useless for most of us, at worst actively damaging. That wealth never reaches us, only piles up against us, making us less and less relevant, less and less powerful.

[1] https://en.wikipedia.org/wiki/Socially_necessary_labour_time
[2] http://www.bbc.co.uk/science/horizon/1999/midas.shtml
https://www.youtube.com/watch?v=4auzn4bK1bM
http://www.elgar.govt.nz/record=b2479645~S1
[3] https://en.wikipedia.org/wiki/1997_Asian_financial_crisis
[4] https://en.wikipedia.org/wiki/Invisible_hand#Adam_Smith

Abstract and concrete language in debate

I’m reading a fascinating book at the moment, about architecture and language. One chapter from the book has helped me understand a bit about some of the disagreements which occur between various members of groups such as Tangle Ball. This is the relevant paragraph from the book; I’ll dig into it in a moment and explain why I think it’s important:

According to our Western tradition, from the time of the ancient Greeks, the cogito, there is no thought outside of language because language is its sole vehicle. It is through language that we succeeded in freeing ourselves from the muck of the multitude of sensations enveloping us, from prejudgments and fears, in order to name, classify, choose; this is what the rational thought that came into being in ancient Greece holds. Language extracts us from the sensible world and spares us from having to experience or reexperience or mimic a thing, an affect, so as to be able to imagine it. The aim is to short-circuit experience; it is thus that the concept comes into being. We still have this preconceived idea that intelligence requires abstraction. [1]

The text is talking about the so-called “progression” from concrete (sense-based) to abstract (language-based) interaction with the world and how this shapes human behaviour. It’s rather biased, using sly language to posit one type of behaviour as bad (muck, fears), the other as good (rational, freeing), but the principle underlying the judgement is sound. Here are some examples of concrete (sense-based) and abstract (language-based) interaction with the world:

Question: What is a car?
Concrete (sense-based) answer 1: That is a car [points to a vehicle on the side of the road].
Concrete (sense-based) answer 2: A car gets me to work in the morning.
Concrete (sense-based) answer 3: Cars on the motorway next to my house keep me awake until 3am.
(and so on)

Abstract (language-based) answer: A car is a motor vehicle, generally with an internal combustion engine fuelled by petrol or diesel, although electric motors have become more common recently. It generally has four wheels, although it may have three or up to six. It usually seats four or five people, but may have space for as few as two or as many as seven. It can also usually carry luggage and travels at speeds of up to 100 km/h, though some may travel faster, up to 400 km/h. (and so on)

Note that each term within the abstract answer relies upon further abstract definitions which we must understand in order that we can comprehend it, and those on further definitions, and so on. Turtles all the way down, as the late, great Terry Pratchett reminded us.

The concrete answers here rely on direct human experience and the senses, the abstract answer on language and concepts, fairly convoluted language at that. The first answers are something everyone can relate to and understand; although they may be imprecise and inaccurate, they are sufficient for ordinary day-to-day activities. The last answer is typical of an academic mindset, that is, a university-based understanding of the world where everything is generalised in order to produce rules and predict behaviour, whereas the first set of answers attempts no generalisation and talks about specific examples of behaviour in a direct way. These positions (language- and sense-based interactions) lie along a continuum, with pure abstraction/language at one end and pure concreteness/sensation at the other. All interactions lie in between on the continuum and there are probably no pure examples of either, but some are closer to one end than the other. I would suggest that the thinking of those with a certain type of academic background (philosophy, business, computer science, maths, sociology) lies closer to the abstract end and the thinking of those with a vocational/non-academic background (plumbing, nursing, welding) lies closer to the concrete/sense-based end, although this is not totalising. When the interactions of those two types collide there is a problem: they are talking in different languages, rooted in different understandings of the world. An important side note here is the recent change in non-academic/vocational work; during the last half century, there have been tendencies in those areas towards more abstraction and conceptualisation, less sense-based understanding. For more, see Empire, Multitude and Commonwealth by Michael Hardt and Antonio Negri, specifically the concept of cognitive labour.

So, what’s the point of this, how does this usefully relate to Tangle Ball and the rest of the world?

Firstly, I’m not entirely sure. Secondly, it indicates there are deeper differences between some humans than we perhaps account for when we have a discussion. When a person explains some abstract concept or other to someone who interacts with the world in a concrete way, they are doing something akin to speaking in a different language, producing confusion. As Yoda tells us, this confusion results in fear, anger and eventually hatred. That bad situation is partly resolved by understanding and proceeding cautiously, although there will probably always be a gap and I don’t know how to fix that.

A further, compounding part of the problem is that the two ways of seeing the world are not viewed as equal-but-different, but one (abstract/language) is seen as fundamentally more important than the other (concrete/sense). This is flawed and strays from the principle that all humans are important and valid and equal, and also negates the essential role sense-based interpretation plays in carrying out tasks involving the manipulation of physical objects. Suggestions welcome for how to fix or mitigate this problem.

Those of you who are paying attention will have noticed the irony here: the description of this idea is and can only be entirely abstract, based in language, not in the senses.

[1] Christian de Portzamparc and Philippe Sollers, Writing and seeing architecture. Minneapolis, MN: University of Minnesota Press, c2008. p 46

A YaCy search engine node at Tangle Ball

As of today, Tangle Ball has attached a node to the YaCy search engine network. YaCy is decentralised web crawler and search engine software. It runs on several hundred nodes across the planet, which share their crawl indices with each other. The node hosted by Tangle Ball can be reached here, where web searches can be carried out.

What is the significance of this?

The majority of people connected to the internet use a commercial search engine, such as Google, Bing or Yahoo. There are multiple problems with this massive centralisation of searching. Researchers have found that Google promotes its own results over those of competitors, and builds up profiles of those who perform searches. These profiles are used for two purposes. The first is to return results which the person searching will be more likely to want to see. This sounds great, of course we want search engines to return what we’re looking for, but it rapidly descends into a situation where we are “protected” from seeing anything we might disagree with. Eli Pariser has studied this, an effect known as “bubbling”, and the results are presented in his book The Filter Bubble. This is one of the reasons behind Duck Duck Go, a search engine which neither profiles nor bubbles those who use it.

However, the results from the Duck Duck Go search engine are still produced by commercial entities, mainly Microsoft and Yahoo, and thus conform to their values. Particularly in the case of Microsoft, those values have long been recognised as not being in the best interests of those who use their services and products. Numerous court cases, fines and criminal convictions stand testament to this.

The second use of the profiles built up from these commercial search engines is the aggregation of the data, which is then used to target adverts at users. Google offers a variety of services, which means most people are almost constantly logged into Google Mail, Google Documents or some other product, meaning any searches they make are saved and stored alongside other data from their email, the places they visit, and the RSS feeds they subscribe to. These allow a sophisticated model of each person to be built up, allowing precise targeting of advertising at that person. The popular retort to this, as to any advertising, is “you don’t have to buy what they advertise”, but this is only partly true, as it ignores the methods which advertisers use. The success alone of marketing, promotion and advertising demonstrates that the methods used are persuasive, and can induce people to buy items they might not otherwise buy. For an insight into how marketing works, take a look at “The Century of the Self”, by Adam Curtis at the BBC; it’s very informative, and free to watch, available here.

YaCy suffers from none of these problems: owing to its decentralised nature, no single entity controls the search results, and no entity can profile users, as none has access to the search data. Further, it has no single point of failure, so is tolerant of any individual node failing.

If you would like to take part in the YaCy network, you can do so by carrying out searches here. To contribute data to the YaCy network, you can use any internet-capable computer; one with a recent Linux-based distribution is best, although it will also work on Windows and Mac OS X. The software can be downloaded from here. The computer you install it on must be reachable from the net, that is, you must configure your router to allow connections from other computers.
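For anyone curious what a programmatic query against a node looks like, here’s a minimal sketch in Python. It assumes a node listening on the default port 8090 and the yacysearch.json interface; the host, port and exact response fields may differ on your installation, so treat it as illustrative rather than a definitive recipe:

    import json
    import urllib.parse
    import urllib.request

    # Hypothetical node address: replace with the host and port of the YaCy node
    # you want to query (8090 is the usual default for a local installation).
    NODE = "http://localhost:8090"

    def search(term, count=10):
        """Query a YaCy node's JSON search interface and return the result items."""
        url = "{}/yacysearch.json?{}".format(
            NODE, urllib.parse.urlencode({"query": term, "maximumRecords": count}))
        with urllib.request.urlopen(url) as response:
            data = json.load(response)
        # The response follows an OpenSearch-like layout: channels -> items.
        channels = data.get("channels", [])
        return channels[0].get("items", []) if channels else []

    if __name__ == "__main__":
        for item in search("decentralised search"):
            print(item.get("title"), "-", item.get("link"))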

YaCy is free software, released under the GNU General Public License, and thus can be freely examined, used, modified and redistributed.

I can set the software to crawl any site, so if there are any in particular you think worth indexing, let me know in the comments below – include a few words on why it’s worthwhile.

The Digital Commons: Escape From Capital?

So, after a year of reading, study, analysis and writing, my thesis is complete. It’s on the digital commons, of course; this particular piece is an analysis to determine whether the digital commons represents an escape from, or a continuation of, capitalism. The full text is behind the link below.

The Digital Commons: Escape From Capital?

In the conclusion I suggested various changes which could be made to avert the encroachment of capitalist modes; as such, I will be releasing various pieces of software and other artefacts over the coming months.

For those who are impatient, here’s the abstract; the conclusion is further down:

In this thesis I examine the suggestion that the digital commons represents a form of social organisation that operates outside any capitalist relationships. I do this by carrying out an analysis of the community and methods of three projects, namely Linux, a piece of software; Wikipedia, an encyclopedia; and Open Street Map, a geographic database.

Each of these projects, like the rest of the digital commons, requires no money or other commodities in return for access, thus denying exchange as the dominant method of distributing resources and instead offering a more social way of conducting relations. They further allow the participation of anyone who desires to take part, in relatively unhindered ways. This is in contrast to the capitalist model of requiring participants to demonstrate their value, and to take part in ways demanded by capital.

The digital commons thus appear to resist the capitalist mode of production. My analysis uses concepts from Marx’s Capital Volume 1 and Economic and Philosophic Manuscripts of 1844, with further support from Hardt and Negri’s Empire trilogy. It analyses five concepts: class, commodities, alienation, commodity fetishism and surplus-value.

I conclude by demonstrating that the digital commons mostly operates outside capitalist exchange relations, although there are areas where indicators of this have begun to encroach. I offer a series of suggestions to remedy this situation.

Here’s the conclusion:

This thesis has explored the relationship between the digital commons and aspects of the capitalist mode of production, taking three iconic projects: the Linux operating system kernel, the Wikipedia encyclopedia and the Open Street Map geographical database as case studies. As a result of these analyses, it appears the digital commons represents a partial escape from the domination of capital.


As the artefacts assembled by our three case studies can be accessed by almost anybody who desires, there appear to be few class barriers in place. At the centre of this is the maxim “information wants to be free” [1] underpinning the digital commons, which results in assistance and education being widely disseminated rather than hoarded. However, there are important resources whose access is determined by a small group in each project, rather than by a wider set of commoners. This prevents all commoners who take part in the projects from attaining their full potential, favouring one group and thus one set of values over others. Despite the highly ideological suggestion that anyone can fork a project at any time and do with it as they wish, which would suggest a lack of class barriers, there is significant inertia which makes this difficult to achieve. It should be stressed however, that the exploitation and domination existing within the three case studies is relatively minor when compared to typical capitalist class relations. Those who contribute are a highly educated elite segment of society, with high levels of self-motivation and confidence, which serves to temper what the project leaders and administrators can do.


The artefacts assembled cannot be exchanged as commodities, due to the license under which they are released, which demands that the underlying information, be it the source code, knowledge or geographical data, always be available to anyone who comes into contact with the artefact, and that it remain in the commons in perpetuity.


This lack of commoditisation of the artefacts similarly resists the alienation of those who assemble them. The thing made by workers can be freely used by them; they make significant decisions around how it is assembled; and, due to the collaborative nature essential to the process of assembly, constructive, positive, valuable relationships are built with collaborators, both within the project and without. This reinforces Stallman’s suggestion that free software, and thus the digital commons, is a more social way of being [2].


Further, the method through which the artefacts are assembled reduces the likelihood of fetishisation. The work is necessarily communal, and involves communication and association between those commoners who make and those who use. This assists the collaboration essential for such high quality artefacts, and simultaneously invites a richer relationship between those commoners who take part. However, recent changes have shown there are situations where the social nature of the artefacts is being partially obscured, in favour of speed, convenience and quality, thus demonstrating a possible fetishisation.


The extraction of surplus-value is, however, present. The surplus extracted is not money, but symbolic capital. This recognition from others can be exchanged for other forms of capital, enabling the leaders of the three projects investigated here to gain high-paying, intellectually fulfilling jobs, and to spread their political beliefs. While there is thus exploitation of the commoners who contribute to these projects, it is firstly mild, and secondly does not result in a huge imbalance of wealth and opportunity, although this should not be seen as an apology for the behaviour which goes on. Whether in future this will change, and the wealth extracted will enable the emergence of a super-rich as seen in the likes of Bill Gates, the Koch brothers and Larry Ellison, remains to be seen, but it appears unlikely.


There are, however, ways in which these problems could be overcome. At present, the projects are centred upon one website, and an infrastructure and set of values, all generally controlled by a small group who are often self-selected, or selected by some external group with their own agenda. This reflects a hierarchical set of relationships, which could possibly be addressed through further decentralisation of key resources. For an example of this, we can look at YaCy [3], a search engine released under a free software license. The software can be used in one of a number of ways, the most interesting of these being network mode, in which several computers federate their results together. Each node searches a different set of web sites, which can be customised; the results from each node are then pooled, so that when a commoner carries out a search, the terms are searched for in the databases of several computers and the results aggregated. This model of decentralisation prevents one entity taking control over what is a large and significant set of resources, and thus decreases the possibility of exploitation, domination and the other attendant problems of minority control or ownership over the means of production.


Addressing the problem of capitalists continuing to extract surplus requires a technically simple, but ideologically difficult, solution. There is a general belief within the projects discussed that any use of the artefacts is fine, so long as the license is complied with. Eric Raymond, author of the influential book on digital commons governance and other matters, The Cathedral and The Bazaar, and populariser of the term open source, is perhaps most vocal about this, stating that the copyleft tradition of Stallman’s GNU is overly restrictive of what people, by which he means businesses, can do, and that BSD-style, non-copyleft licenses are the way forward [4]. The majority of commoners taking part do not follow his explicit preference for non-copyleft licenses, but nonetheless have no problem with business use of the artefacts, suggesting that widespread use makes the tools better, and that sharing is inherently good. It appears they either do not have a problem with this, or, perhaps more likely, do not understand that this permissiveness allows for uses they might not approve of. Should this change, a license switch to something preventing commercial use is one possibility.

[1] Roger Clarke, ‘Roger Clarke’s “Information Wants to Be Free …”’, Roger Clarke’s Web-Site, 2013, http://www.rogerclarke.com/II/IWtbF.html.

[2] Richard Stallman, Free Software Free Society: Selected Essays of Richard M. Stallman, ed. Joshua Gay, 2nd ed. (Boston, MA: GNU Press, Free Software Foundation, 2010), 8.

[3] YaCy, ‘Home’, YaCy – The Peer to Peer Search Engine, 2013, http://yacy.net/.

[4] Eric S. Raymond, The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, ed. Tim O’Reilly, 2nd ed. (Sebastopol, California: O’Reilly, 2001), 68–69.

Upcoming Improvements to Impero

Zotero is a wonderful piece of software. I have used it for several years to manage and cite references for my university study, and it has proved useful and reliable. However, there are some areas in which I would like to improve it, and as it’s released under the Affero GPL, I can do exactly that.

I’ve already talked about a user running their own sync server, and made my own modifications to the Firefox extension so this could happen. The people behind Zotero aren’t interested in making data server installation easy, or indeed anything less than dangerous for one’s data, so that sub-project is ongoing, albeit slowly – analysis and documentation of the sync protocol is required, to make sure the extension and the sync server work together correctly. As I’ve already written, I’m not impressed by their insistence on storing my data on their servers; this reminds me of Google and Facebook’s attitude to users: “you are the product, not the customer”. For those two companies, the customer is whoever buys access to users’ data, and hence the ability to sell more precisely to them. There has been much written on this so I won’t go further: check out these links to find out more. Are Zotero selling, or not being careful with, their users’ data? They say not, but there are many ways to effect data transfer which get around any legal statement. Personally, I’d rather remove the possibility that they could than have to trust their competence and morals.

Anyway, back to the technical stuff. The next thing I will be modifying is the citation process. At the moment it is simple to do, but the process is somewhat abstract, and does not assist a user as much as it could, particularly where there are lots of references being used for a given passage of text. For example, let’s say I’m writing a piece on the “digital commons” and “surplus-value”. Incidentally, I am, and the size of this task is what prompted me to think about the process. Now, I have several ideas I’ve written about this subject in note form, and will be expanding upon these in the formal text, including citations from a large number of sources. At present, the way this is done is to write a piece of text in quote or paraphrase form, and refer to the text which the idea or quote comes from. But which text? There are 700+ in my three-year-old database, and I’m sure other users have much larger bibliographies. The easiest way to get around this at the moment is to use tags. So, for instance, I can tag Marx’s Capital with “surplus-value”, and some quote from Linus Torvalds with “digital commons”; then when I’m working on this piece, I only bring up those items tagged appropriately and can cite them, while copying and pasting a quote from the text. This is where the potential for improvement starts. It is not currently possible to select some text from a work, turn it into a quote, and have that linked to the original work by Impero. So, for instance, I am using Capital to write about several concepts: commodities, surplus-value, primitive accumulation, etc. Rather than typing in a tag and getting the entire work returned, I would like to tag individual quotes from a work with “surplus-value” or whatever, which can then be shown when, say, I want to see everything with that tag, i.e. when I am working on a given section. This gives me direct access to the necessary quotes rather than having to search through the entire work, and presents them all at once, in their correct context.
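To make the proposed workflow concrete, here is a rough sketch, in Python rather than Zotero’s actual JavaScript, of the kind of data model I have in mind: each quote carries its own tags and a link back to the parent item, so asking for everything tagged “surplus-value” returns the quotes themselves, each with its source attached. The class and field names are mine and purely illustrative.

    from dataclasses import dataclass, field

    @dataclass
    class Item:
        """A bibliographic item, e.g. Marx's Capital Volume 1."""
        key: str
        title: str
        author: str

    @dataclass
    class Quote:
        """A passage extracted from an item, tagged independently of the item."""
        text: str
        source: Item
        page: str = ""
        tags: set = field(default_factory=set)

    def quotes_for_tag(quotes, tag):
        """Return every quote carrying the given tag, each with its source attached."""
        return [q for q in quotes if tag in q.tags]

    # Tag individual passages rather than the whole work.
    capital = Item("marx1976", "Capital Volume 1", "Karl Marx")
    quotes = [
        Quote("...a passage about surplus-value...", capital, "p. 325", {"surplus-value"}),
        Quote("...a passage about commodities...", capital, "chap. 1", {"commodities"}),
    ]

    for q in quotes_for_tag(quotes, "surplus-value"):
        print(q.source.author, q.page, "-", q.text)

The point is simply that the tag filter returns passages with their context, rather than whole items that then have to be searched by hand.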

It is currently possible to create a note containing the text, and then use this as a citation. But inserting this note into a document does not work correctly; the author of the quote is not set as the author of the work, and the quote is put inside quotation marks, inside brackets (for the APA style anyway), which is wrong.

Secondly, the Zotero “Add citation” dialogue needs to change. At the moment it is opened when I want to add a citation, and closes once the citation has been added. This seems unnecessary, so I aim to have it, or its replacement, open at all times, and to use an ‘Add’ rather than an ‘OK’ button to insert a citation.

Thirdly, and linked to the first change, typing a term into the tag box each time can get clunky, so I aim to set Zotero up to check for whichever tags have been applied to the current section within LibreOffice Writer, based upon the cursor position. If the cursor is within a section titled “surplus-value”, then all the references and quotes with the tag “surplus-value” get shown. This will show the quote, and also the context it is in, so probably 20 words either side.
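Again, only a sketch (the real version would query the cursor position through LibreOffice’s UNO API, which I won’t guess at here): given the document’s section headings and the cursor’s character offset, find the enclosing section and use its title as the tag to filter the quotes by.

    def enclosing_section(sections, cursor_pos):
        """Given (title, start_offset) pairs sorted by offset, return the title of
        the section the cursor is currently in, or None if it is before the first."""
        current = None
        for title, start in sections:
            if start <= cursor_pos:
                current = title
            else:
                break
        return current

    # Headings and their character offsets in the document (illustrative values).
    sections = [("commodities", 0), ("surplus-value", 4200), ("primitive accumulation", 9100)]

    tag = enclosing_section(sections, cursor_pos=5300)  # -> "surplus-value"
    print(tag)
    # The tag would then drive the quote filter, e.g. quotes_for_tag(quotes, tag).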

Domino is now known as Impero

So, someone pointed out to me that the name ‘Domino’ is owned by a small technology company named IBM, who are apparently somewhat trigger-happy when it comes to intellectual property, lawyers and people who cross them. Thus, Domino is now known as ‘Impero’. Development is ongoing, and a number of small changes have recently been merged from the Zotero repository.

Domino: an extension of Zotero

Since starting at university, I’ve been using the ‘Zotero‘ extension for Firefox, to add in-text citations to essays I write.  This is one of the most powerful extensions I have found for Firefox, making referencing sources very simple.

However, there is room for improvement to fit my way of working. I value having control over as much of what I do as I can, particularly online; for that reason I run my own email, blog and backup server. Zotero has two server components, one of which stores electronic copies of references. It also uses a central server to coordinate the data stored in a citation database, allowing access from multiple computers. Both of these services can be hosted by the Center for History and New Media at George Mason University, the group behind Zotero. And a fine job they do of it; I used their services for several months while I was learning to run a server and use WebDAV. However, as with any online service, doing so leaves one in the hands of a third party, with a variety of undesired outcomes possible. Data breach? Request from the FBI for user data? Technical problems causing loss of service? All these and more can cause problems, and render the user beholden to those who run the service.

As the technology to run a WebDAV server is mature and predictable, the Zotero team are willing to advertise the possibility of a user doing so, easily allowing a user to enter their server details into the preferences for Zotero. It seems they are not so keen to do the same for the sync server, however. Various reasons are cited, including a seemingly Apple-inspired ‘guaranteeing a good user experience’. There’s some merit in this, although I disagree with it. Fortunately, they are more prepared than Apple to let users go their own way if they want to, as the source code for the server and the extension is available under the GPL, allowing us to roll our own versions.

So, here’s Domino, my updated version of the Zotero extension, with changes which allow you to set your own sync server from the preferences dialogue. The sync server is released here. If you are going to use this, be careful: Zotero uses its own protocol, which is not documented or looked over by a third party such as the World Wide Web Consortium, which looks after other standards such as HTML and HTTP. As such, there are chances for things to go wrong; at worst, you could lose or corrupt your citation database. The Zotero team are working on the server code also, and I aim to join in soon; at some point it’ll be released as a deb package.

Zotero will not merge my changes to the extension into their mainline release, so I’ll be merging the changes between each of their new releases and my fork, a few days after they release a new version.

If you’d like to contribute, my version of the code is on github here.

For more progress on Domino, check this page.

Free software and the extraction of capital

This essay will assess the relationship between free software and the capitalist mode of accumulation, namely the extraction of various forms of capital to produce profit. I will perform an analysis through the lens of the Marxist concept of extracting surplus from workers, utilising Bourdieu’s theory of capital and the ideas of Hardt and Negri on the various economic paradigms and the progression through them.

The free software movement is one which states that computer software should not have owners (Stallman, 2010, chap. 5), and that proprietary software is fundamentally unethical (Stallman, 2010, p. 5). This idea is realised through “the four freedoms” and a range of licenses, which permit anyone to use the software so licensed for any purpose, to examine and modify it, and to redistribute modified copies (Free Software Foundation, 2010). These freedoms are posited as a contrast to the traditional model of software development, which rests all ownership and control of the product in its creators. As free software is not under private control, it would appear at first to escape the capitalist mode of production, and the problems which ensue from that, such as alienation, commodity fetishism and the concentration of power and wealth in the hands of a few.

For a definition of the commons, Bollier states:

commons comprises a wide range of shared assets and forms of community governance. Some are tangible, while others are more abstract, political, and cultural. The tangible assets of the commons include the vast quantities of oil, minerals, timber, grasslands, and other natural resources on public lands, as well as the broadcast airwaves and such public facilities as parks, stadiums, and civic institutions. … The commons also consists of intangible assets that are not as readily identified as belonging to the public. Such commons include the creative works and public knowledge not privatized under copyright law. … A last category of threatened commons is that of so-called ‘gift economies’. These are communities of shared values in which participants freely contribute time, energy, or property and over time receive benefits from membership in the community. The global corps of GNU/Linux software programmers is a prime example: enthusiasts volunteer their talents and in return receive useful rewards and group esteem. (2002)

Thus, free software would appear to offer an escape from the system of capitalist dominance based upon private property, as the products of free software contribute to the commons, resist attempts at monopoly control and encourage contributors to act socially.

Marx described how, through the employment of workers, investors in capitalist businesses were able to amass wealth and thus power. The employer invests an amount of money into a business, to employ labour, and the labourer creates some good, be it tangible or intangible. The labourer is then paid for this work, and the company owner takes the good and sells it at some higher price, to cover other costs and to provide a profit. The money the labourer is paid is for the “necessary labour” (Marx, 1976a, p. 325), i.e. the amount the person requires to reproduce labour, that is the smallest amount possible to ensure the worker can live, eat, house themself, work fruitfully and produce offspring who will do similar. The difference between this amount and the amount the good sells for, minus other costs, which are based upon the labour of other workers, is the “surplus value”, and equals the profit to the employer (Marx, 1976a, p. 325). The good is then sold to a customer, who thus enters into a social relationship with the worker that made it. However, the customer has no knowledge of the worker, does not know the conditions they work under, their wage, their name or any other information about them; their relationship is mediated entirely through the commodity which passes from producer to consumer. Thus, despite the social relationship between the two, they are alienated from each other, and the relationship is represented through a commodity object, which is thus fetishised over the actual social relationship (Marx, 1976a, chap. 1). The worker is further alienated from the product of their labour, for which they are not fully recompensed, as they are not paid the full exchange amount which the capitalist company obtains, and have no control over any part of the commodity beyond the work they put into it.
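Schematically, and purely as the standard textbook formulation rather than anything specific to free software, this relationship can be written as:

    % c = constant capital (materials, machinery)
    % v = variable capital (wages paid for the necessary labour)
    % s = surplus-value, appropriated by the capitalist as profit
    \begin{align}
      W  &= c + v + s   && \text{value of the commodity} \\
      s' &= \frac{s}{v} && \text{rate of surplus-value, or rate of exploitation}
    \end{align}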

If we study the reasons participants have for contributing to free software projects, we find coders fall into one or more of three categories: firstly, those who contribute to create something of utility to themselves; secondly, those who are paid by a company which employs them to write code in a traditional employment relationship; and finally, those who write software without economic compensation, to benefit the commons (Hars & Ou, 2001). The first category does not enter into a relationship with others, so the system of capitalist exchange does not need to be considered. The second category, that of a worker being paid to contribute to a project, might seem unusual, as the company appears to be giving away the result of capital investment, thus benefiting competitors. Although this is indeed the case, the value gained in other contributors viewing, commenting on and fixing the code is perceived to outweigh any disadvantages. In the case of a traditional employee of a capitalist company, the work, be it production of knowledge, carrying out of a service or making a tangible good, will be appropriated by the company the person works for, and credited as its own. The work is then sold at some increased cost, the difference between the cost to make it and the price it is sold for being surplus labour, which reveals itself as profit.

The employed software coder working on a free software project performs necessary labour (Marx, 1976a, p. 325), as any other employee does, and this is rewarded with a wage. However, the surplus value, which nominally is used to create profit for the employer by appropriating the work of the employee, is not solely controlled by the capitalist. Due to the nature of the license, the product of the necessary and surplus labour can be taken, used and modified by any other person, including the worker. Thus, the traditional relationship of the commons to the capitalist is changed. The use of paid workers to create surplus value is an example of the capitalist taking the commons and re-appropriating it for their own gain. However, as the work is given back to the commons, there is an argument that the employer has instead contributed to the wider sphere of human knowledge, without retaining monopoly control as the traditional copyright model does. Further, the worker is not alienated by their employer from the product of their labour; it is available for them to use as they see fit.

The third category of contributors to a project, volunteers, are generally also highly skilled, well paid and materially comfortable in life. According to Maslow’s Hierarchy of Needs (Maslow, 1943), as individuals attain the material comforts in life, they are likely to turn their aspirations towards less tangible but more fulfilling achievements, such as creative pursuits. Some will start free software projects of their own, as some people will start capitalist businesses: the Linux operating system kernel, the GNU operating system and the Diaspora* [sic] distributed social networking software are examples of this situation. If a project then appears successful to others, it will gain new coders, who will lend their assistance and improve the software. The person(s) who started the project are acknowledged as the leader(s), and often jokingly referred to as the “benevolent dictator for life” (Rivlin, 2003), although their power is contingent, because as Raymond put it, “the culture’s ‘big men’ and tribal elders are required to talk softly and humorously deprecate themselves at every turn in order to maintain their status.” (2002). As leaders, they will make the final decision on what code goes into the ‘official’ releases, and be recognised as the leader in the wider free software community.

Although there may be hundreds of coders working on a project, as there is an easily identifiable leader, he or she will generally receive the majority of the credit for the project. Each coder will carry out enough work to produce the piece of code they wish to work on, thus producing a useful addition to the software. As suggested above by Maslow, the coder will gain symbolic capital, defined by Bourdieu as “the acquisition of a reputation for competence and an image of respectability” (1984, p. 291) and as a “predisposition to function as symbolic capital, i.e., to be unrecognized as capital and recognized as legitimate competence, as authority exerting an effect of (mis)recognition … the specifically symbolic logic of distinction” (Bourdieu, 1986). This capital will be attained through working on the project, and being recognised by other coders involved in the project and elsewhere, the readers of their blog, and their friends and colleagues; they may also occasionally be featured in articles on technology news web sites (KernelTrap, 2002; Mills, 2007). Each coder adds their piece of effort to the project, gaining enough small acknowledgements for their work along the way to feel they should continue coding, which could be looked at as necessary labour (Marx, 1976a, p. 325). Contemporaneously, the project leader gains a smaller acknowledgement for the improvements to the project as a whole, which in the case of a large project can be significant over time. In the terms expressed by Marx, although the coder carries out a certain amount of work, it is then handed over to the project, represented in the eyes of the public by the leader, who accrues similar small amounts of capital from all coders on the project. This profit is surplus value (Marx, 1976a, p. 325). Similarly to the employed coder, the economic value of the project does not belong to the leaders; there is no surplus extracted there, as all can use it.

To take a concrete example, Linus Torvalds, originator and head of the Linux kernel, is known for his work throughout the free software world, and feted as one of its most important contributors (Veltman, 2006, p. 92). The perhaps surprising part of this is that Torvalds does not write code for the project any more; he merely manages others, and makes grand decisions as to which concepts, not actual code, will be allowed into the mainline, or official, release of the project (Stout, 2007). Drawing a parallel with a traditional capitalist company, Linus can be seen as the original investor who started the organisation, who manages the workers, and who takes a dividend each year, despite not carrying out any productive work. Linus’ original investment in 1991 was economic and cultural capital, in the form of time and a part-finished degree in computer science (Calore, 2009). While he was the only contributor, the project progressed slowly, and the originator gained symbolic, social and cultural capital solely through his own efforts, thus resembling a member of the petite bourgeoisie. As others saw the value in the project, they offered small pieces of code to solve small problems and progress the code. These were incorporated, thus rapidly improving the software, and the standing of Torvalds.

Like consumers of any other product, users of Linux will not be aware of who made a specific change unless they make an effort to read the list of changes for each release, thus resulting in the coder being alienated from the product of their labour and from the users of the software (Marx, 1959, p. 29), who fetishise (Marx, 1976a, chap. 1) the software over the social relationship which should be prevalent. For each contribution, which results in a small gain in symbolic capital to the coder, Linus takes a smaller gain in those forms of capital, in a way analogous to a business investor extracting surplus economic capital from her employees, despite not having written the code in question. The capitalist investor possesses no particular merit, other than to whom and where she was born, yet due to the capital she is able to invest, she can amass significant economic power from the work of others. Over 18 years, these small gains in capital have also added up for Linus Torvalds, and such is now the symbolic capital expropriated that he is able to continue extracting this capital from Linux, while reinvesting capital in writing code for other projects, in this case ‘Git’ (Torvalds, 2005), which has attracted coders in part due to the fame of its principal architect. The surplus value of the coders on this project is also extracted and transferred to the nominal leader, and so the cycle continues, with the person at the top continuously and increasingly benefiting from the work of others, at their cost.

The different forms of capital can readily be exchanged for one another. As such, Linus has been offered book contracts (Torvalds, 2001), is regularly interviewed by a range of publications (Calore, 2009; Rivlin, 2003), has gained jobs at high-prestige technology companies (Martin Burns, 2002), and has been invited to various conferences as a guest speaker. The other coders on the Linux project have also gained, through skills learned, social connections and the prestige of being part of what is a key project in free software, although none in the same way as Linus.

Free software is constructed in such a way as to allow a range of choices to address most needs; for instance, in the field of desktop operating systems there are hundreds to choose from, with around six distributions, or collections of software, covering the majority of users, through being recognised as well supported, stable and aimed at the average user (Distrowatch.com, 2011). In order for the leaders of each of these projects to increase their symbolic capital, they must continuously attract new users, be regularly mentioned in the relevant media outlets and generally be seen as adding to the field of free software, contributing in some meaningful way. Doing so requires a point of difference between their software and the other distributions. However, this has become increasingly difficult, as the components used in each project have become increasingly stable and settled, so the current versions of each operating system contain virtually identical lists of packages. In attempting to gain users, some projects have chosen to make increasingly radical changes, such as including versions of software with new features even though they are untested and unstable (Canonical Ltd., 2008), and changing the entire user experience, often negatively as far as users are concerned (Collins, 2011). Although this keeps the projects in the headlines on technology news sites, and thus attracts new users, it turns off experienced users, who are increasingly moving to more stable systems (Parfeni, 2011).

This proliferation of systems, declining opportunities to attract new users, and increasingly risky attempts to do so, demonstrates the tendency of the rate of profit to fall, and the efforts capitalist companies go to in seeking new consumers (Marx, 1976b, chap. 3), so that they can continue extracting increased surplus value as profit. Each project must put in more and more effort, in increasingly risky areas, thus requiring increased maintenance and bug-fixing, to attract users and be appreciated in the eyes of others.
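For clarity, the rate of profit invoked here is, in Marx’s schema, the ratio of surplus-value to the total capital advanced; as the constant portion grows relative to the variable portion (more automation, less living labour), the rate falls even when the rate of exploitation stays the same. This is the standard formulation, included only as an illustration:

    % p' = rate of profit; it falls as the organic composition c/v rises.
    \begin{equation}
      p' = \frac{s}{c + v} = \frac{s/v}{c/v + 1}
    \end{equation}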

According to Hardt and Negri, since the Middle Ages there have been three economic paradigms, identified by the three forms from which profit is extracted. These are: land, which can be rented out to others or mined for minerals; tangible, movable products, which are manufactured by exploited workers and sold at a profit; and services, which involve the creation and manipulation of knowledge and affect, and the care of other humans, again by exploited workers (2000, p. 280). Looking more closely at these phases, we can see a progression. The first phase relied mainly upon the extraction of profit from raw materials, such as the earth itself, coal and crops, with little if any processing by humans. The second phase still required raw materials, such as iron ore, bauxite, rubber and oil, but also required a significant amount of technical processing by humans to turn these materials into commodities which were then sold, with profit extracted from the surplus labour of workers. Thus the products of the first phase played a supporting role in the production of the commodities, in the form of land for the factory, food for workers, fuel for smelters and machinery, and materials to fashion, but the majority of the value of the commodity was generated by activities resting on these resources: the working of those raw materials into useful items by humans. The last of the phases listed above, the knowledge, affect and care industry, entails workers collecting and manipulating data and information, or performing some sort of service work, which can then be rented to others. Again, this phase relies on the other phases: from the first phase, land for offices, data centres, laboratories, hospitals, financial institutes and research centres, food for workers, and fuel for power; and from the second phase, commodities including computers, medical equipment, office supplies, and laboratory and testing equipment, to carry out the work. Similarly to the previous phase, these materials and items are not directly the source of the creation of profit, but are required; the generation of profit relies and rests on their existence.

In the context of IT, this change in the dominant paradigm was most aptly demonstrated by the handover of power from the mighty IBM to the new upstart Microsoft in 1980, when the latter retained control over its operating system software MS-DOS, despite the former agreeing to install it on their new desktop computer range. The significance of this apparent triviality was illustrated in the film ‘Pirates of Silicon Valley’, during a scene depicting the negotiations between the two companies, in which everyone but Bill Gates’ character froze as he broke the ‘fourth wall’, turning to the camera and explaining the consequences of the mistake IBM had made (Burke, 1999). IBM, the dominant power in computing of the time, were convinced that high profit continued to lie in physical commodities, the computer hardware they manufactured, and were unconcerned by their lack of ownership of the software. Microsoft recognised the value of immaterial labour, and soon eclipsed IBM in value and influence in the industry, a position it held for around 20 years.

Microsoft’s method of generating profit was to dominate the field of software, its products enabling users to create, publish and manipulate data, while ignoring the hardware, which was seen as a commodity platform upon which to build (Paulson, 2010). Further, the company wasn’t particularly interested in what its customers were doing with their computers, so long as they were using Windows, Office and other technologies to work with that data, as demonstrated by a lack of effort to control the creation or distribution of information. As Microsoft were increasing their dominance, the free software GNU Project was developing a free alternative, firstly to the Unix operating system (Stallman, 2010, p. 9), and later to Microsoft products. Fuelled by the rise of highly capable, cost-free software which competed with and undercut Microsoft, so commoditising the market, the dominance of that company faded in the early 2000s (Ahmad, 2009), to be replaced by a range of companies which built on the products of the free software movement, relying on the use value, but no longer having any interest in the exchange value, of the software (Marx, 1976a, p. 126). The power Microsoft retains today through its desktop software products is due in significant part to ‘vendor lock-in’ (Duke, n.d.), the process of using closed standards and only allowing their software to interact with data in ways prescribed by the vendor. Google, Apple and Facebook, the dominant powers in computing today, would not have existed in their current form were it not for various pieces of free software (Rooney, 2011). Notably, the prime method of profit making of these companies is through content, rather than via a software or hardware platform. Apple and Google both provide platforms, such as the iPhone and Gmail, although neither company makes large profits directly from these platforms, sometimes to the point of giving them away, subsidised heavily via their profit-making content divisions (Chen, 2008).

Returning to the economic paradigms discussed by Hardt and Negri, we have a series of sub-phases, each building on the sub-phase before. Within the third, knowledge, phase, the first sub-phase of IT, computer software such as operating systems, web servers and email servers, was a potential source of high profits through the 1980s and 1990s, but due to high competition, predominantly from the free software movement, the rate of profit has dropped considerably, with, for instance, the free software ‘Apache’ web server being used to host over 60% of all web sites (Netcraft Ltd., 2011). Conversely, the capitalist companies of the next sub-phase were returning high profits and growth, through extensive use of these free products to sell other services. This sub-phase is notable for its reliance on creating and manipulating data, rather than producing the tools to do so, although both still come under the umbrella of knowledge production. The trend was mirrored in the free software world: as the field of software stabilised, it offered fewer opportunities for increasing one’s capital through the extraction of surplus in this area.

As the falling rate of profit reduced the potential to gain symbolic capital through free software, open data projects, which produce large sets of data under open licences, became more prevalent, providing further areas for open content contributors to invest their capital. The first of these included Wikipedia, the web-based encyclopedia which anyone can edit, launched in 2001 (“Wikipedia:About,” n.d.). Growth of this project was high for several years, with a large number of new editors joining, but it has since slowed to the point where attracting new editors is very difficult (Chi, 2009; Moeller & Zachte, 2009). Similarly, OpenStreetMap, which aims to map the world, was begun in 2004 and grew at a very high rate once it became known in the mainstream technology press. However, now that the majority of streets and significant geographical data in western countries are mapped, the project is finding it difficult to attract new users, unless they are willing to work on adding increasingly esoteric minutiae, which has little obvious effect on the map and thus provides a less obvious gain in symbolic capital for the contributor (Fairhurst, 2011). For the leaders of the project, this represents ever higher effort for comparatively smaller returns; again, the rate of profit is falling. Rather than the previous, relatively passive method of attracting new users and expanding into other areas, the project founders and leading lights are now aggressively pushing the project to map less well-covered areas: a recent effort to map the Kibera slum in Nairobi (Map Kibera, 2011); a sub-group creating maps in areas such as Haiti, to help out after natural disasters (Humanitarian OpenStreetMap Team, 2011); and economic grants for those who will map in less-developed countries (Black, 2008). This closely follows the capitalist need to seek out new markets and territories once all existing ones are saturated, to push continuously for more growth and so arrest the falling rate of profit.
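For reference, the classical formulation of the rate of profit which this analogy draws upon (Marx, 1976b) can be written as a simple ratio; the mapping of its terms onto symbolic capital sketched here is offered only as an illustration of the argument above, not as a claim found in Marx:

\[ r = \frac{s}{c + v} \]

where \(s\) is surplus value, \(c\) is constant capital (the outlay on materials and means of production) and \(v\) is variable capital (the outlay on labour-power). Read analogically, as the fixed effort a contributor must expend to make a visible contribution (playing the role of \(c\)) grows relative to the symbolic surplus that contribution yields (playing the role of \(s\)), the rate of symbolic profit \(r\) falls, which is precisely the dynamic described above for both free software and open data.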

According to Hardt and Negri,

You can think and form relationships not only on the job but also in the street, at home, with your neighbors and friends. The capacities of biopolitical labor-power exceed work and spill over into life. We hesitate to use the word “excess” for this capacity because from the perspective of society as a whole it is never too much. It is excess only from the perspective of capital because it does not produce economic value that can be captured by the individual capitalist (2011).

The capitalist mode of production brings organisational structure to the production of value, but in doing so fetters the productivity of the commons; that productivity is higher when capital stays external to the production process. This hands-off approach to managing production can be seen extensively in free software, through the self-organising, decentralised model it utilises (Ingo, 2006, p. 38), eschewing traditional management forms with chains of responsibility. Economic forms of capital are prevalent in free software, as when technology companies including advertising provider Google, software support company Red Hat and software and services provider Novell employ coders to commit code to projects such as the Linux kernel (The Linux Foundation, 2009). However, the final decision on whether the code is accepted is left up to the project itself, which is usually free of corporate management. There are numerous, generally temporary, exceptions to this rule, including OpenOffice.org, the free software office suite, which came under the control of software developer Oracle through its acquisition of Sun Microsystems. Within a few months of the acquisition, the number of senior developers involved in the project had dropped significantly, most of them citing interference from Oracle in the management of the software, and those who left set up their own fork of the project, based on the Oracle version (Clarke, 2010). Correspondingly, a number of software distributions also stopped including the Oracle software, and instead shipped the version released by the new, again community-managed, offshoot (Sneddon, 2010). Because of the licence under which OpenOffice.org is released, all of Oracle’s efforts to take direct control of the project were easily sidestepped. Oracle may possess the copyright to the original code through its purchase, but this comes to naught once that code is released: it can be taken and modified by anyone who sees fit.

This increased productivity of the commons can be seen in the response to flaws in the software: as there is no hierarchical structure enforced by, for example, an employment contract, problems reported by users can be, and are, taken on by volunteer coders who will work on the flaw until it is fixed, without needing to consult line managers or align with a corporate strategy. If the most recognised source for the software does not respond quickly, whether for financial or technical reasons, the nature of the licence allows other coders, including those hired by customers, to fix the problem. For those not paid, symbolic capital continues to play a part here: although the coders may appear to be unpaid volunteers, in reality there is kudos to be gained by solving a problem quickly, pushing coders to compete against each other even while sharing their advances.

Despite this realisation that capital should not get too close to free software, the products of free software are still utilised by many corporations: free software forms the key infrastructure for a high proportion of web servers (Netcraft Ltd., 2011), and is extensively used in mobile phones (Germain, 2011) and financial trading (Jackson, 2011). The free software model thus forms a highly effective method for producing efficient software useful to capital. The decentralised, hard-to-control model disciplines capital into keeping its distance, forcing corporations to realise that if they get too close and try to control too much, they will lose out by wasting resources and appearing as bad citizens of the free software community, thus losing symbolic capital in the eyes of potential investors and customers.

Conclusion

The preceding analysis of free software and its relationship to capitalism demonstrates four areas in which the former is relevant to the latter.

Firstly, free software claims to form a part of the commons, and to a certain extent this is true: the data and code in the projects are licensed in a way which allows all to benefit from using them; they cannot be monopolised, owned and locked down as capitalism has done with the tangible assets of the commons, and with many parts of the intangible commons. Further, it appears that not only is free software not enclosable, but whenever an external entity attempts to control it, the project radically changes direction, sheds itself of the regulation and picks up where it left off, more wary of interference from capital.

Secondly, however, the paradigm of free software shows that ownership of a thing is not necessarily required to extract profit with it: opportunities for the capitalist mode of accumulation remain despite this lack of close control. The high-quality, efficient tools provided by free software are readily used by capitalist organisations to sell and promote other intangible products, and to manipulate various forms of data, particularly financial instruments, a growth industry in modern knowledge capitalism, at greater margins than had free software not existed. This high quality is due largely to the aforementioned ability of free software to keep capital from taking a part in its development, given capital’s apparent inefficiency at managing the commons.

Thirdly, although free software cannot be owned and controlled as physical objects can, apparently foiling the extraction of surplus value as economic profit from alienated employees, the nominal leaders of each free software project appear to take a significant part of the credit for the project they steer, thus extracting symbolic capital from the other, less prominent coders on the project. This is despite their not being involved in much, or in some cases any, of the actual code-writing, mirroring the extraction of profit through surplus labour adopted by capitalism.

Finally, the tendency of the rate of profit to fall seems to pervade free software in the same way as it affects capitalism. Certain free software projects have been shown to have difficulty extracting profit, in the form of surplus symbolic capital, and this in turn has prompted a move to open data, which initially showed itself to be an area with potential for growth and profit, although it too has now suffered the same fate as free software.

References

Ahmad, A. (2009). Google beating the evil empire | Malay Mail Online. Retrieved November 3, 2011, from http://www.mmail.com.my/content/google-beating-evil-empire

Black, N. (2008). CloudMade » OpenStreetMap Grants. Retrieved October 29, 2011, from http://blog.cloudmade.com/2008/03/17/openstreetmap-grants/

Bollier, D. (2002). Reclaiming the Commons. Retrieved November 3, 2011, from http://bostonreview.net/BR27.3/bollier.html

Bourdieu, P. (1984). Distinction: A Social Critique of the Judgement of Taste. London: Routledge & Kegan Paul.

Bourdieu, P. (1986). The Forms of Capital. Retrieved November 5, 2011, from http://www.marxists.org/reference/subject/philosophy/works/fr/bourdieu-forms-capital.htm

Burke, M. (1999). Pirates of Silicon Valley.

Calore, M. (2009). Aug. 25, 1991: Kid From Helsinki Foments Linux Revolution | This Day In Tech | Wired.com. Retrieved November 5, 2011, from http://www.wired.com/thisdayintech/2009/08/0825-torvalds-starts-linux

Canonical Ltd. (2008). “firefox-3.0” source package: Hardy (8.04): Ubuntu. Retrieved October 29, 2011, from https://launchpad.net/ubuntu/hardy/+source/firefox-3.0/3.0~b5+nobinonly-0ubuntu3

Chen, J. (2008). AT&T’s 3G iPhone Is $199 This Summer | Gizmodo Australia. Retrieved November 3, 2011, from http://www.gizmodo.com.au/2008/04/atts_3g_iphone_is_199_this_summer-2/

Chi, E. H. (2009, July 22). PART 1: The slowing growth of Wikipedia: some data, models, and explanations. Augmented Social Cognition Research Blog from PARC. Retrieved November 3, 2011, from http://asc-parc.blogspot.com/2009/07/part-1-slowing-growth-of-wikipedia-some.html

Clarke, G. (2010). OpenOffice files Oracle divorce papers • The Register. Retrieved October 30, 2011, from http://www.theregister.co.uk/2010/09/28/openoffice_independence_from_oracle/

Collins, B. (2011). Ubuntu Unity: the great divider | PC Pro blog. Retrieved October 24, 2011, from http://www.pcpro.co.uk/blogs/2011/05/03/ubuntu-unity-the-great-divider/

Distrowatch.com. (2011). DistroWatch.com: Put the fun back into computing. Use Linux, BSD. Retrieved October 30, 2011, from http://distrowatch.com/

Duke, O. (n.d.). Open Sesame | Love Learning. Retrieved November 3, 2011, from http://www.reedlearning.co.uk/learn-about/1/ll-open-standards

Fairhurst, R. (2011). File:Osmdbstats8.png – OpenStreetMap Wiki. Retrieved October 29, 2011, from https://wiki.openstreetmap.org/wiki/File:Osmdbstats8.png

Free Software Foundation. (2010). The Free Software Definition – GNU Project – Free Software Foundation. Retrieved August 29, 2011, from https://www.gnu.org/philosophy/free-sw.html

Germain, Ja. M. (2011). Linux News: Android: How Linuxy Is Android? Retrieved October 29, 2011, from http://www.linuxinsider.com/story/How-Linuxy-Is-Android-73523.html

Hardt, M., & Negri, A. (2000). Empire. Cambridge, Mass: Harvard University Press.

Hardt, M., & Negri, A. (2011). Commonwealth. Cambridge, Massachusetts: Belknap Press of Harvard University Press.

Hars, A., & Ou, S. (2001). Working for Free? – Motivations of Participating in Open Source Projects. Hawaii International Conference on System Sciences (Vol. 7, p. 7014). Los Alamitos, CA, USA: IEEE Computer Society. doi:http://doi.ieeecomputersociety.org/10.1109/HICSS.2001.927045

Humanitarian OpenStreetMap Team. (2011). Humanitarian OpenStreetMap Team » Using OpenStreetMap for Humanitarian Response & Economic Development. Retrieved November 3, 2011, from http://hot.openstreetmap.org/weblog/

Ingo, H. (2006). Open Life: The Philosophy of Open Source. (S. Torvalds, Trans.). Lulu.com. Retrieved from www.openlife.cc

Jackson, J. (2011). How Linux mastered Wall Street | ITworld. Retrieved October 29, 2011, from http://www.itworld.com/open-source/193823/how-linux-mastered-wall-street

KernelTrap. (2002). Interview: Andrew Morton | KernelTrap. Retrieved October 30, 2011, from http://www.kerneltrap.org/node/10

Map Kibera. (2011). Map Kibera. Retrieved October 29, 2011, from http://mapkibera.org/

Martin Burns. (2002). Where all the Work’s Hiding | evolt.org. Retrieved October 30, 2011, from http://evolt.org/Where_all_the_Works_Hiding

Marx, K. (1959). Economic & Philosophic Manuscripts. (M. Mulligan, Trans.). marxists.org. Retrieved November 3, 2011, from http://www.marxists.org/archive/marx/works/download/pdf/Economic-Philosophic-Manuscripts-1844.pdf

Marx, K. (1976a). Capital: A Critique of Political Economy (Vol. 1). Harmondsworth: Penguin Books in association with New Left Review.

Marx, K. (1976b). Capital: A Critique of Political Economy. The Pelican Marx library (Vol. 3). Harmondsworth: Penguin Books in association with New Left Review.

Maslow, A. (1943). A Theory of Human Motivation. Psychological Review, 50(4), 370-396.

Mills, A. (2007). Why I quit: kernel developer Con Kolivas. Retrieved October 30, 2011, from http://apcmag.com/why_i_quit_kernel_developer_con_kolivas.htm

Moeller, E., & Zachte, E. (2009). Wikimedia blog » Blog Archive » Wikipedia’s Volunteer Story. Retrieved November 3, 2011, from http://blog.wikimedia.org/2009/11/26/wikipedias-volunteer-story/

Netcraft Ltd. (2011). May 2011 Web Server Survey | Netcraft. Retrieved October 29, 2011, from http://news.netcraft.com/archives/2011/05/02/may-2011-web-server-survey.html

Parfeni, L. (2011). Linus Torvalds Drops Gnome 3 for Xfce, Calls It “Crazy” – Softpedia. Retrieved October 29, 2011, from http://news.softpedia.com/news/Linus-Torvalds-Drops-Gnome-3-for-Xfce-Calls-It-Crazy-215074.shtml

Paulson, R. (2010). Application of the theoretical tools of the culture industry to the concept of free culture. Retrieved October 25, 2010, from http://bumblepuppy.org/blog/?p=4

Raymond, E. S. (2002). Homesteading the Noosphere. Retrieved June 3, 2010, from http://www.catb.org/~esr/writings/cathedral-bazaar/homesteading/ar01s10.html

Rivlin, G. (2003, November). Wired 11.11: Leader of the Free World. Retrieved from http://www.wired.com/wired/archive/11.11/linus.html

Rooney, P. (2011). IT Management: Red Hat CEO: Google, Facebook owe it all to Linux, open source. IT Management. Retrieved October 25, 2011, from http://si-management.blogspot.com/2011/08/red-hat-ceo-google-facebook-owe-it-all.html

Sneddon, J. (2010). LibreOffice – Google, Novell sponsored OpenOffice fork launched. Retrieved October 29, 2011, from http://www.omgubuntu.co.uk/2010/09/libreoffice-google-novell-sponsored-openoffice-fork-launched/

Stallman, R. (2010). Free Software Free Society: Selected Essays of Richard M. Stallman. (J. Gay, Ed.) (2nd ed.). Boston, MA: GNU Press, Free Software Foundation.

Stout, K. L. (2007). CNN.com – Reclusive Linux founder opens up – May 18, 2006. Retrieved October 30, 2011, from http://edition.cnn.com/2006/BUSINESS/05/18/global.office.linustorvalds/

The Linux Foundation. (2009). Linux Kernel Development. Retrieved from https://www.linuxfoundation.org/sites/main/files/publications/whowriteslinux.pdf

Torvalds, L. (2001). Just For Fun: The Story of an Accidental Revolutionary. London: Texere.

Torvalds, L. (2005). “Re: Kernel SCM saga..” – MARC. Retrieved from http://marc.info/?l=linux-kernel&m=111288700902396

Veltman, K. H. (2006). Understanding new media: augmented knowledge & culture. University of Calgary Press.

Wikipedia:About. (n.d.). Wikipedia. Retrieved October 29, 2011, from https://secure.wikimedia.org/wikipedia/en/wiki/Wikipedia:About