
Saturday, 1 July 2017

Demystification Committee :: It's time for EMPIRE MANAGEMENT!! @ Fussler Research Archive

Global Gateway, Seychelles: the registered address of one of EMPIRE MANAGEMENT's subsidiaries.


"In inventing homo oeconomicus, economists committed a double abstraction. On the one hand, they conceived a man with nothing human in his heart; on the other, they represented this individual as detached from any group, society, party, sect, or community of any kind." —Gabriel Tarde, Psychologie économique, 1902



The size and complexity of the offshore financial system (colloquially known as the "tax haven") are increasingly coming to light, as are the dire economic consequences of this ever-growing apparatus. But what does it mean to be offshore? Where is the tax haven located? Out at sea? And, above all, is it possible to visit it?

To answer these questions, the Demystification Committee, a collective engaged in artistic research, has created an international corporate structure spanning the United Kingdom, the Seychelles and Saint Vincent and the Grenadines: an Offshore Investigation Vehicle. At the head of this structure sits EMPIRE MANAGEMENT, a British limited company.

On Monday 10 July, EMPIRE MANAGEMENT invites you to a general meeting aimed at finding new investors (all of you), familiarising you with the corporate structure and discussing possible business strategies. By becoming shareholders you will discover the tricks and legal short-circuits that allow EMPIRE MANAGEMENT to operate at a distance, remain anonymous and carry out financial operations. You will also be able to propose, discuss and vote on possible future investments.

Shares cost just €1 and, if necessary, EMPIRE MANAGEMENT undertakes to buy them back at the end of the day. You can therefore end your relationship with the company at the close of the meeting or, if you prefer, remain a shareholder indefinitely. This entitles you to take part in future general meetings and to experiment further with the tactics of the offshore world.

To take part, send an email to secretary@empire.management stating your name, profession and mode of participation (in person or remotely via Skype).

Monday 10 July, 14:00-18:00
PretiumOffices
Potsdamer Platz 5
10785 Berlin


Read more @ Transmediale

Thursday, 4 February 2016

Tarde, a Media Theorist. Interview with Tony D. Sampson, conducted by Jussi Parikka - Translation by Alessandro Cattini - Revision/editing by Obsolete Capitalism


Tarde, a Media Theorist


Interview with Tony D. Sampson, conducted by Jussi Parikka


Translation by Alessandro Cattini

Revision/editing by Obsolete Capitalism

Free download, Click Here

This interview focuses on a recently published monograph by Tony D. Sampson, Virality: Contagion Theory in the Age of Networks, described by Brian Rotman as "offering a new theory of the viral as a sociological event." In this conversation, Parikka and Sampson discuss Gabriel Tarde and assemblage theory, and why Tarde should be read as a media theorist concerned with a somnambulist conception of the social. Sampson's interest in the non-cognitive, and in non-cognitive capitalism, resonates with recent debates on affect, but with particular attention to developments in interaction design and research.

Jussi Parikka: I would like to begin by asking why you approached your topic, today's network culture, through Gabriel Tarde, a nineteenth-century social theorist? What makes Tarde a fitting theoretical resource for an analysis of digital network culture, where action lies not only in human contagion but also passes through non-human agents?

Tony Sampson: It was Tiziana Terranova who first suggested Tarde to me, quite some time ago now. I was trying to think through these ideas I had about the contagions of network culture. Up to that point I had tried to develop an approach to networks along the lines of assemblage theory, drawing on material from network studies and computer science. I wanted to keep well away from a metaphorical reading of digital contagion, which seemed to me the worst possible starting point. That approach worked well up to a point, but then Tarde's imitation thesis opened up a whole range of new possibilities. Interestingly, through Tarde's work I was able to look back at Deleuze again; it was like arriving at him anew, along an entirely different route. Although Deleuze never wrote a book on Tarde, and I wish he had, I think he was influenced by him at least as much as he was by Spinoza, Bergson or Nietzsche. This is the point François Dosse makes in Intersecting Lives. Most importantly, Tarde allowed me to reread assemblage theory as a social theory or, more precisely, a theory of social subjectivation. I would say that Tarde is perhaps the first assemblage theorist, insofar as he is really concerned only with desire and social relationality.
Another important thing about the role Tarde plays in Virality is that he makes no distinction between nature and society or, likewise, between biology and culture. In this way he helped me dismantle the artifice of metaphorical contagion, which makes it seem as if the biological is always invading the social, at least where the language and rhetoric of biology appear to impose themselves on social phenomena. Once that artifice is removed, however, we see that it is the other way round: what is biological is always social, and it is the social that is contagious. So what I call, in Virality, the resurrection of Tarde places him, as a media theorist, within an indistinct zone between nature and society. And it was not hard to do. After all, when he writes of imitative propagation or the suggestibility of imitation, Tarde really points to a monadological mediation that makes no distinction between humans and non-humans, just as it does not try to separate unconscious states from conscious ones, or mechanical habits from a sense of volition. For him all phenomena are social phenomena, all things are societies. So, rather like Whitehead, he puts atoms, cells and people on the same plane: a society of things. This is also why I think it is important to point out that there are networks in crowds and crowds in networks.

Jussi Parikka: Virality puts forward an intriguing idea about a media theory of the sleepwalker. Could you tell us more about this concept and its relation to non-volition?

Tony Sampson: The concept of the "sleepwalker" comes, of course, once again from Tarde, and what I try to do in the book is grasp how this concept resonates with network culture. It seems to me that the tendency toward contagion in networks is bound up with the implicit brain functions Tarde describes as unconscious associations, through which, he claims, the social assembles itself. This relation between virality and unconscious association could be read, if you like, as the spread of a capricious state of false consciousness in which, on the one hand, the social is infected by the suggestibility of imitation at the level of brain function and, on the other, we realise that everyone is kept so busy and so distracted that they never really grasp that their feelings are being steered toward this or that end.
The idea of sleepwalker media, or media hypnosis, is in many ways close to Jonathan Crary's work on technologies of attention. Crary in fact offers a striking repositioning of the attention-economy thesis. Unlike the business-school gurus, who regard attention as a precious resource to be fought over, he grasps the controlling and disciplinary nature of attention. Fuller and Goffey have similarly referred to this as an economy of inattention which, like Crary, makes no distinction between attention and inattention. They are not opposite poles.

Jussi Parikka: In relation to these ideas, you insist on speaking of non-cognitive capitalism and its techniques. Why this emphasis, which takes you in a slightly different direction from your earlier work on the cultural and political theory of cognitive capitalism? What makes this approach different?

Tony Sampson: Well, yes, non-cognitive capitalism is not far removed from the familiar Taylorist and post-Taylorist organisation of work. In terms of human-computer labour we can think of it as a shift in ergonomic relations: from the best possible physical fit established between human and machine in the production process to a cognitive model focused on mental labour. We see this paradigm shift everywhere in the literature and practices of Human Computer Interaction (HCI), although something is now changing. The emphasis falls increasingly on the labour of emotions, affects and experiences, measured using biometrics and neurotechnologies alongside more traditional cognitive tools that probe memory and attention. This is just one aspect of the neuroculture we now find ourselves in, where it is not the person but the neuron, or perhaps even neurotransmission itself, that is put to work in every possible way to produce a new kind of molecular subjectivity.
It was only in the second phase of writing the book that I began to read the work of the social psychologist Robert Zajonc on preferences that need no inferences, that is, his idea that feelings may have thoughts of their own. Indeed, if marketers, politicians and designers can make us feel a certain way, then they can also influence the way we think. This mirrors the tendency of commercial design today to grasp the importance of the relation between emotion and cognition. But Zajonc goes even further, arguing that affective systems are both independent of, and perhaps even stronger than, cognitive systems. Potentially, marketers, politicians and designers need not even bother appealing to thought at all. I think this is the trajectory non-cognitive capitalism is following.
Alongside the work of neurotransmission there is also the much-publicised shift in media technology toward so-called ubicomp (ubiquitous computing). This matters too. Here we see non-task interactions, unknown to the subject, occurring even below the level of attention. Pervasive computing also works by producing interactions that operate on the user simply because the user comes into contact with a "hot" zone or becomes part of a network of devices, triggering an event of which they never need to be aware.

Jussi Parikka: Your ideas seem closely related to Evil Media, a recent book by Matthew Fuller and Andrew Goffey. Is there a wider interest in the non-communicative and non-representational aspects of media culture?

Tony Sampson: Absolutely, which is also why I was so pleased to discuss Virality for the first time together with Matt and Andy at Goldsmiths. I think there is a nice synchrony between my book and what they call the unobtrusive grayness of certain media practices. It is not just about the strategic use of media for specific ends, or the unveiling of some embedded or hidden ideology; rather, it points to the consequences of unintendedness and the repurposing of stray accidents. I have written about the immunological ruse as a kind of deceptive scaremongering that grew out of computer-science accidents in the 1970s and 1980s. This is how I see viral culture. It is not, as viral marketing would have it, a step-by-step procedure leading to a marketing cost of zero. On the contrary, we see that a digital entrepreneur has to set virality in motion by optimising brands so that they are more effective than their rivals and their potential spreads as far as possible. In network marketing nothing is guaranteed. All you can really do is kill time while you wait to manage the next chance accident.
Another connection I have recently made with Evil Media is with the artist group YoHa. They asked me to contribute to their project Evil Media, Curiosity Cabinet, which will be exhibited in Berlin in the new year. I opted for Modafinil. This neuropharmaceutical is used mainly to treat sleep disorders, some of which are directly linked to the malfunctioning of work processes, such as shift-work disorder. That would be grim enough in itself, but the grayness of Modafinil becomes clear in its abuse by students and soldiers who need to stay alert during university exams or on the battlefield.

Jussi Parikka: Despite the differences from Evil Media, it seems that you too speak of love in your book. Can you expand on this, in relation to affect?

Tony Sampson: Well, there is this really intriguing, Machiavellian thing in Evil Media, right? That fear is preferable to love. My work simply turns that idea on itself. Tarde writes about love on several occasions, in his novel Underground Man and in the extra-logical part of The Laws of Imitation. He thinks that love, although often transitory, is far more contagious than fear. He regards it as an asymmetrical power relation in which it is mainly the lover who copies the beloved. I drew on this and on a couple of other authors. Teresa Brennan, for example, writes that love, unlike fear, does not need a medium to latch onto. For Brennan love is at once affect and medium, which in a way increases its affective contagion. Michael Hardt's love as a political concept is equally interesting, in my view. His notion that love of family, race, god and nation tends to unify peoples in ways that are "harmful" becomes significant, I think, for understanding love as a far more effective and sinister Trojan horse than fear. Indeed, just because an experience makes you feel good does not mean it will be good for you. I look at Obama Love in this light, as a kind of gray, viral media practice of love. Beyond the obvious ways love was put to work in his campaign, such as the I Love Obama websites, T-shirts and badges, there are also those haptic images of the president with his family on the eve of his first electoral triumph. We hear how this very cool guy wants to build a new partnership with the Middle East and close Guantanamo, but all we get are surges in troop numbers, his initial support for the Mubarak regime, and the unstoppable rise of the drones. His supporters claim he wants to see Guantanamo closed, so he must be either dishonest or utterly incapable. That is the grayness of Obama Love.

Jussi Parikka: One of the most intriguing parts of the book is where you analyse emerging concrete technologies, such as interface design techniques that work on the sphere of the involuntary. Is this another kind of layer in the modulation of affect, for instance in affect-based interface design? And how does it relate to the recent, broader debate about affect in cultural theory?

Tony Sampson: I think the theory of media somnambulism is a useful tool for understanding the so-called third paradigm of HCI, human-computer interaction. This is the move that makes it possible to exploit the emotions and affects already mentioned, social context and experiential processing. Indeed, as part of this move, experience-design consultants and neuromarketers are fast becoming the next big thing in the persuasion business. Their biggest clients, it seems, are banks and other financial institutions. Not surprisingly, these firms have an image problem at the moment. So they are eager to be able to connect the end user to their brand through the visceral level of experiential processing, appealing directly to the appetites. This is what emotional design promises to do.
Here is what is happening: I have recently attended a number of design-industry events where biometric techniques are being put to work by designers of apps, advergames and e-commerce. They are enthusiastically hooking user-generated affect up to galvanic skin response and EEG measurement tools, along with facial and postural recognition software and eye-tracking technologies, to explore how certain detectable, affectively valenced emotional states might correspond to things like brand identification and purchase intent. The desire here is to understand what happens to the user at the non-conscious level of experiential processing, so that brands can be implanted and users steered toward certain windows of opportunity.
Again, these concrete practices are steeped in grayness. These technologies and methods were originally aimed at the neurological treatment of conditions such as attention deficit and dementia. But now there is no hidden agenda in their repurposing. There is no effort to disguise the intrusiveness of these marketing techniques. The practice of persuasion, which had become something of a taboo in the old media arenas, has returned, it would seem, with a vengeance.

The interview was published online, in English, on 25 January 2013 in the journal Theory, Culture and Society, which we publicly thank here for permission to translate and republish it. The original interview can be found at: http://www.theoryculturesociety.org/tarde-as-media-theorist-an-interview-with-tony-d-sampson-by-jussi-parikka/

Biographies:

Tony D. Sampson
English, he teaches Digital Culture and Communication at the School of Arts and Digital Industries of the University of East London (UEL, UK). He enjoys working on experimental art events involving music, video and philosophy. His research examines the 'dark side' taking shape between sociology, marketing, digital culture and neuroscience, in particular the contagious, viral drift running through the mass micro-relations circulating in the New Media. He is co-editor (with Jussi Parikka) of Spam Book: On Viruses, Porn, and Other Anomalies From the Dark Side of Digital Culture (Cresskill, NJ: Hampton Press, 2009). His most recent book, Virality: Contagion Theory in the Age of Networks, which crosses Gabriel Tarde's micro-sociology with Gilles Deleuze's philosophy of the event, was published in June 2012 by the University of Minnesota Press. He keeps a personal blog, Virality. His next book, The Assemblage Brain: Sense Making in Times of Neurocapitalism, will be published in spring 2017 by the University of Minnesota Press.

Jussi Parikka
Finnish, he teaches Technological Culture & Aesthetics at the University of Southampton (UK) and is Docent in Digital Culture Theory at the University of Turku, Finland. Parikka is an internationally renowned theorist of the New Media. His publications include: What is Media Archaeology? (Polity: Cambridge, 2012); Insect Media: An Archaeology of Animals and Technology (University of Minnesota Press: Minneapolis, 2010, Posthumanities series); Digital Contagions: A Media Archaeology of Computer Viruses (Peter Lang: New York, 2007); and, with Erkki Huhtamo, Media Archaeology: Approaches, Applications, and Implications (University of California Press, Los Angeles, 2011). He is co-editor (with Tony D. Sampson) of Spam Book: On Viruses, Porn, and Other Anomalies From the Dark Side of Digital Culture (Cresskill, NJ: Hampton Press, 2009). His most recent publications are The Anthrobscene (University of Minnesota Press, 2014) and A Geology of Media (University of Minnesota Press, 2015). He is currently working on the second edition of Digital Contagions, due out in 2016. He keeps a personal blog, Machinology.

Tuesday, 2 February 2016

WhatsApp announces 1 billion users @ Il Sole 24 Ore, 2 Feb 2016


On 19 February 2014 Zuckerberg wrote a cheque for 19 billion dollars to bring home WhatsApp, a platform that at the time could count on 450 million users. Today that figure has more than doubled, and Zuckerberg himself marked the one-billion-user milestone with a Facebook post congratulating Koum (who has meanwhile remained CEO of the company). The official figures add: 42 billion messages exchanged every day, 1.6 billion photos a day, 250 million videos a day and 1 billion groups created. The full article is at Il Sole 24 Ore.com.

Sunday, 22 February 2015

Evan Selinger on The Formula: How Algorithms Solve All Our Problems — And Create More @ LARB, 17 Feb 2015


Evan Selinger on The Formula: How Algorithms Solve All Our Problems — And Create More @ Los Angeles Review of Books
17 February 2015  - Read more @ LARB


The Black Box Within: Quantified Selves, Self-Directed Surveillance, and the Dark Side of Datification


WHAT HAPPENS WHEN “life as we know it” becomes a series of occasions to collect, analyze, and use data to determine what’s true, opportune, or even right to do? According to Luke Dormehl, much more than we bargained for.
Philosophers of technology as well as researchers in related fields have had a great deal to say about the dark side of datification, with fierce debate raging in particular over the threat of "algocracy," or "rule by algorithm." It's an important debate. Unfortunately, though, the typical scholarly contributions use rarified language and aren't always accessible to broader audiences.
To his credit, Dormehl, a senior writer at Fast Company and a journalist covering the "digital humanities," attempts precisely to broaden the conversation. This means that less charitable readers may well see The Formula: How Algorithms Solve All Our Problems — And Create More as a watered-down, derivative version of Evgeny Morozov's take on "solutionism." But I think Dormehl's book serves a crucial purpose as a user-friendly primer. The bite-sized bits of high theory (including snippets from Zygmunt Bauman, Gilles Deleuze, Michel Foucault, Bruno Latour, Michel Serres, and others) and law scholar commentary (including brilliant reflections from Danielle Citron and Harry Surden) have the virtue of being painlessly illuminating. And the slim package of four chapters and a conclusion is quite sufficient to help readers better appreciate the subtle societal trade-offs involved in technological innovation.
To Thine Own Algorithms Be True
I’ll be the first to admit that “formula” is a terrible guiding metaphor for inspiring critical conversation about how technology is used, designed, and viewed. The word designates abstractions: mathematical relations and symbolically expressed rules. Indeed, the term seems divorced from the gritty specifics of human labor and social norms.
Dormehl, however, has the opposite agenda from those espousing computational purity. To cut through the “cyberbole,” he coins an idiosyncratic definition: “[I] use The Formula much as the late American political scientist and communications theorist Harold Lasswell used the word ‘technique’: referring […] to ‘the ensemble of practices by which one uses available resources to achieve values.’” The Formula, then, is meant to be an existential, sociological, and, at times, historical investigation into individuals, groups, organizations, and institutions embracing a “particular form of techno-rationality”: the conviction that all problems can be formulated in computational terms and “solved with the right algorithm.”
In the first chapter, “The Quantified Selves,” Dormehl outlines how data-oriented enterprises determine who we are and what makes us tick. Readers have undoubtedly noticed that the so-called Quantified Self movement is escalating in popularity. Broadly speaking, the Quantified Self (QS) draws on “body-hacking” and “somatic surveillance,” practices that, as their names suggest, subject our personal activities and choices to data-driven scrutiny. Typical endeavors include tracking and analyzing exercise regimes, identifying sleep patterns, and pinpointing bad habits that subvert goals. Recently democratized consumer technologies (especially smartphones that run all kinds of QS apps) enable users themselves to obtain and store diagnostic data and perform the requisite calculations.
Of course, there’s nothing new about believing truth is mathematically expressed, and, to be sure, attempts to better understand ourselves will fail if they aren’t backed up by data. Here’s the rub: while individuals self-track because they hope to be empowered by what they learn, companies are studying our data trails to learn more about us and profit from that knowledge. Behaviorally-targeted advertising is big business. Call centers aim to improve customer service by pairing callers with agents who specialize in responding to different personality types. Recruiters use new techniques to discover potential employees and assess whether they’ll be good fits, such as having them play games that analyze aptitude and skill. Corporations adopt neo-Taylorist policies of observation and standardization to extract efficient robotic labor from their employees.
By juxtaposing individual with corporate agendas, Dormehl tries to do more than call attention to the naivety of living in what Deleuze calls a “society of control.” He also aims to discern basic dilemmas worthy of more scrutiny. For example, “the filter bubble problem” suggests that personalization can harm society even in cases where individuals get exactly what they think they want; for example, shortcuts for avoiding putatively irrelevant material can, in fact, promote tunnel vision and even bolster fundamentalism. And the easier it becomes to classify differences on a granular level, the easier it also becomes to engage in discriminatory behavior — especially when so many of the algorithms feeding us personalized information are veiled in black box secrecy.
The predicament I find most intriguing is the one Dormehl describes as the “innate danger in attempting to quantify the unquantifiable.” He most assuredly isn’t arguing for a spiritual reality that inherently defies science. Rather, he’s alluding to epistemological limits — how “taking complex ideas and reducing them to measurable elements” can lead people to mistake incomplete representations and partial proxies for the real thing. A related mistake comes from erroneously believing that the technologies and techniques supporting quantified analysis are untainted by human biases and prejudices. I will return to both of these points later when I discuss Dormehl’s own failure to keep them in mind.
Frictionless Relationships
In the second chapter, "The Match and the Spark," Dormehl discusses how algorithms shape our personal relationships. While companies like eHarmony and OkCupid boast algorithms that can help lonely hearts find highly compatible partners, others cast a narrower net and cater to niche interests: BeautifulPeople.com focuses on "beautiful men and women"; FitnessSingles.com bills itself as a "fun, private and secure environment to meet fit, athletic singles"; and VeggieDate.com is what you'd expect. And, what if your outlook is more in line with a "Eugenicist" orientation? No problem! GenePartner.com uses the tag line, "love is no coincidence," and extols its ability to pair people "by analyzing their DNA."
With so many possibilities to choose from, including technologically facilitated options for fleeting and satisfying hookups, two issues become salient. First, it would appear that "there is a formula for everyone." Meet and meat markets have become all-inclusive by virtue, somewhat paradoxically, of becoming so specialized, thanks to the commodification of rare tastes and kinky proclivities. Second, the "antirationalist view of love" is going the way of the dodo bird. The "evidence" doesn't support the ineffable mystery of desire or even of attachment, nor of serendipity understood as non-jury-rigged chance encounters.
While these procedures for streamlining how we meet others might seem like improvements over older and clunkier trial-and-error approaches, Dormehl points to disconcerting repercussions. After quoting Bauman on virtual relationships taking “the waiting out of wanting, [the] sweat out of effort and [the] effort out of results,” he asks us to consider whether we’re starting to look at relationships as ephemeral bonds that can be undone almost as easily as pushing the delete key on our computers. After all, fixing stressed relationships requires work. Matchmaking companies nudge us into starting new ones rather than working on old ones. Heck, some apps purport to recognize when you’re in a room with good matches worth talking to!
Once we stop viewing other people, including romantic others, as unique, and instead see them as patterned appearances, then, Dormehl suggests, we will need to face the vexing question of why we should even bother preferring real people to simulated versions. Technology cannot yet present us with compelling doppelgangers, but, soon enough the artificial intelligence behind today’s chatbots will be able to come pretty darn close. If robots become idealized automated lovers, would there be good reason to reject their offers to be whatever we want, whenever we want, especially if they don’t impose demands in return? Similarly, what if companies could get beyond hokey schemes for enabling subscribers to send prewritten messages to their loved ones after they die, and instead offered authentic-sounding, original dialog — say, after getting a sense of a person’s personality by data mining their email, texts, and social networking commentary, much like a less buggy version of the scenario depicted in the Black Mirror episode “Be Right Back”? Would we start obsessively interacting with the equivalent of ghosts?
Although Dormehl recognizes the importance of these questions, he doesn’t try to answer them. That’s okay. A robust response requires discussing normative theories of care and character. The point of reading The Formula is to become aware of the questions, not to find answers.
Algorithmic Futures
The main virtue of the remaining chapters lies in reorienting us to the future. Chapter three, “Do Algorithms Dream of Electric Laws,” begins by discussing predictive policing. It transitions from there to explaining how automation is disrupting the practice of law by dramatically reducing the manpower and cost required to perform select legal services (e.g., “e-discovery”). Dormehl picks up steam when he gets to the topic of “Ambient Law” — “the idea that […] laws can be embedded within and enforced by our devices and environment.” For example, cars could be programmed to test the amount of alcohol in a driver’s bloodstream, only letting the ignition start if the driver is under the legal limit. Or, a “smart office” could be configured to ensure its “own internal temperature” adhered to the levels “stipulated by health and safety regulations.” In these and related cases, surveillance and enforcement are outsourced to machines designed to ensure compliance.
It may indeed seem like a good idea to minimize how often citizens break the law, but Dormehl reminds us that removing choice impacts autonomy. The crucial questions then are: How much does freedom matter? Which circumstances warrant limiting or eliminating it? And when should people be permitted to make decisions that can result in harmful outcomes? Since machines are imperfect, it’s also important to figure out how to minimize the likelihood of their making life-impacting errors while also creating effective processes for rapidly fixing mistakes after they occur.
But the issue I find most fascinating concerns the difference between "rules" and "standards." Whereas rules involve "hard-line binary decisions with very little in the way of flexibility," standards allow for "human discretion." Standards can be problematic — for instance, when they lead to prejudiced behavior. But they can also be beneficial — for instance, when they lead to humane treatment. A police officer could pull over a car going "marginally" faster than the legal speed limit and then, after learning the person is a tourist unfamiliar with the rules, let the driver go with a warning. In short, when should forms of discretion be operationalized in Ambient Law programs, and when should Ambient Law initiatives be avoided because the appropriate discretion can't be coded?
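The rule/standard distinction maps neatly onto code, which is part of what makes Ambient Law tractable in the first place. Here is a minimal, hypothetical Python sketch (not from the book) contrasting a hard-coded rule, such as an alcohol ignition interlock, with a "standard" that refers marginal cases to human discretion; the thresholds, the 10% margin and the discretion callback are all invented for illustration.

```python
# Hypothetical sketch contrasting a "rule" with a "standard" in Ambient Law terms.
# Thresholds and the discretion callback are invented for illustration only.

LEGAL_BAC = 0.08       # blood-alcohol limit enforced as a hard rule
SPEED_LIMIT_KMH = 50   # speed limit treated as a standard

def ignition_allowed(bac: float) -> bool:
    """Rule: a binary check with no flexibility; the car simply will not start."""
    return bac < LEGAL_BAC

def speeding_response(speed_kmh: float, ask_officer) -> str:
    """Standard: marginal violations are handed to human discretion."""
    if speed_kmh <= SPEED_LIMIT_KMH:
        return "no action"
    if speed_kmh <= SPEED_LIMIT_KMH * 1.1:      # "marginally" over the limit
        return ask_officer(speed_kmh)           # e.g. warn a confused tourist
    return "issue ticket"

if __name__ == "__main__":
    print(ignition_allowed(0.05))                              # True: engine starts
    print(speeding_response(54, lambda v: "warning only"))     # discretion applied
    print(speeding_response(80, lambda v: "warning only"))     # hard enforcement
```

The point of the sketch is simply that the first function can be embedded in a device as-is, while the second only works if someone decides what the discretion hook should do, which is exactly the question the chapter leaves open.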
In chapter four, “The Machine That Made Art,” Dormehl considers topics that link the production and reception of creative works to algorithms; for instance, he asks whether machines can predict which scripts will yield high-grossing movies; and, to this end, he examines the efficacy of data mining in determining what content will have mass appeal — an issue that’s been hotly discussed after Netflix’s successful remake of House of Cards.
The most intriguing issue, however, concerns personalization. Dormehl asks us to imagine a future where individuals can “dictate the mood they want to achieve.” Aesthetic objects are then created to produce the desired effects: not just customized playlists to help runners sprint energetically along, but “electronic novels” that “monitor the electrical activity of neurons in the brain while they are being read, leading to algorithms” that rewrite the sections ahead “to match the reactions solicited.” If the processes were effective, they’d force us to confront the question of whether art is valued because it is “a creative substitute for mind-altering drugs,” or whether such a utilitarian outlook demeans much of art’s potential to be transformative, let alone transgressive.
The Existential Conundrum of Quantifying Moods
While there is much to praise in Dormehl’s book, the examples are less than stellar. They are marred by three types of omissions: critical analysis gets left out, crucial facts go unstated, and important context gets ignored.
Let’s start with an instance where critical reflection would have been useful.
Dormehl writes:
Consider […] the story of a young female member of the Quantified Self movement, referred to only as "Angela." Angela was working in what she considered to be her dream job, when she downloaded an app that "pinged" her multiple times each day, asking her to rate her mood each time. As patterns started to emerge in the data, Angela realized that her "mood score" showed she wasn't very happy at work, after all. When she discovered this, she handed in her notice and quit.
That’s all Dormehl has to say about “Angela”: she used technology to track how she was feeling. The data revealed she wasn’t thriving at a particular workplace; and the newfound discovery proved to be such an eye-opener that she felt justified in making a decision that had the potential to be life-changing. Dormehl doesn’t provide more detail because, ostensibly, these remarks are sufficient to make his point.
While there is explanatory value to discussing “Angela” in this context, the example is too rich to simply be treated as one amongst many other instances where someone is trying to gain self-knowledge by turning to data-rich insights. After all, throughout The Formula, Dormehl signals that he wants us to think carefully about what it means to be human in an age where algorithms increasingly tell us who we are, what we want, and how we’ll come to behave in the future. Indeed, Dormehl goes so far as to declare that our age is marked by a “crisis of self,” where the Enlightenment conception of “autonomous individuals” is challenged by an algorithmic alternative. Individuals have become construed as “one categorizable node in an aggregate mass.”
“Angela’s” case ought to provide an opportunity to explore core questions concerning the existential implications of self-directed surveillance. Can something as subjective and experientially rich as “moods” be translated into quantified terms? Or are attempts to accomplish this bound to miss or misrepresent important details? And, what is the ethical import of people outsourcing their awareness and decision-making to an app? What does it mean that they trust this app more than their own senses to tell them how they really feel?
A good way to start answering these questions is to analyze a host of specifics that Dormehl fails to pursue. Indeed, once he decided to discuss “Angela” in The Formula, he should have been prepared to tell the reader what kind of information she provided when the app pinged her. Did the interface give “Angela” a drop-down list of moods with categories to select from? If so, that type of design might predispose her to mistake subtle feelings for cruder forms; maybe there just weren’t nuanced options to choose from. It’s a safe bet that she wasn’t thinking about the problem of reductionism when using the technology.
Dormehl also should want to know if the app presented “Angela” with a scale to rate the intensity of her moods. If it did, then we might want to ask how she determined the best way to use it. Turning first-person experience into objective-looking mathematical terms requires translating terms across domains. Poor judgment and limited skill can, obviously, compromise the results. It’s easy to imagine, for example, “Angela” rating a moment of frustration a 10 out of 10 simply because it felt intense, and it’s also easy to imagine that the app recorded her response without offering feedback — by way of questioning, as a human inquirer might, whether the intensity really merited the highest possible rating.
Another important factor Dormehl should address is how — if at all — “Angela” tried to rule out confounding variables. Without some sense of how to separate signal from noise, she can’t determine whether being at work per se led to bad moods, or if the negative affect was generated in select situations, like dealing with a particularly annoying coworker or overly demanding boss. This information is crucial for making an informed decision about whether “Angela” should quit her job or just lodge a complaint with Human Resources.
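The kind of disaggregation being asked for here is easy to state in code. Below is a minimal, hypothetical Python sketch, not a feature of any actual mood-tracking app, that splits mood ratings by context before drawing a conclusion about "work" as a whole; the contexts and scores are invented for illustration.

```python
# Hypothetical sketch: split mood ratings by context before blaming "work" as a whole.
# The contexts and scores below are invented for illustration only.
from collections import defaultdict
from statistics import mean

# (context, mood score 1-10) pairs, as a mood-tracking app might log them
pings = [
    ("meeting with boss", 2), ("solo project work", 7), ("commute", 4),
    ("meeting with boss", 3), ("solo project work", 8), ("lunch with team", 6),
]

by_context = defaultdict(list)
for context, score in pings:
    by_context[context].append(score)

# Rank contexts from worst to best average mood.
for context, scores in sorted(by_context.items(), key=lambda kv: mean(kv[1])):
    print(f"{context:20s} mean mood = {mean(scores):.1f} (n={len(scores)})")
# A low overall average driven by one context points to a complaint to HR,
# not necessarily to quitting the "dream job" altogether.
```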
Finally, Dormehl should plumb the reasons why “Angela” wasn’t confident enough in her own abilities to determine the source of her misery. When our jobs make us miserable, most of us are well aware of it. We don’t walk around plagued by the mystery of why we’re feeling down. Perhaps, then, the most relevant issue wasn’t that self-tracking enabled “Angela” to learn more about herself. Maybe she already knew how she felt about work but was waiting to quit until she could frame the choice as an evidence-based decision. If so, the question Dormehl should be asking is whether it’s a new type of “bad faith” to turn to data to validate difficult decisions we’re already inclined to make.
Persuasive Privacy Advocacy
In order to rhetorically dramatize some of the examples, Dormehl sometimes leaves out important information. Consider his depiction of Facedeals.
A Nashville-based start up called Facedeals promises shops the opportunity to equip themselves with facial recognition-enabled cameras. Once installed, these cameras would allow retailers to scan customers and link them to their Facebook profiles, then target them with personalized offers and services based upon the “likes” they have expressed online.
The operative word here is "allow." If we interpret the word in an active sense, we get the impression that as soon as retailers install the facial recognition cameras they immediately can use biometric data to link every customer who gets scanned to his or her corresponding Facebook profile. But this isn't how the program actually works. It isn't a matter of installing the technology, turning it on, and then, boom, a brave new world of commerce begins. Privacy protections, which Dormehl doesn't mention in this context, do determine, at least to some degree, what actually happens. In order to receive the personalized deals, users need to give their permission by opting in to the program and authorizing the Facedeals app. Consequently, customers remain in control of the situation; neither technology nor business is calling the shots.
Dormehl’s discussion of a related surveillance program is also plagued by the problem of omitting detail to create a heightened sense of unease.
In late 2013, UK supermarket giant Tesco announced similar plans to install video screens at its checkouts around the country, using inbuilt cameras equipped with custom algorithms to work out the age and gender of individual customers. Like loyalty cards on steroids, these would then allow customers to be shown tailored advertisements, which can be altered over time, depending on both the date and time of day, along with any extra insights gained from monitoring purchases.
The prospect of surveillance leading to “extra insights” sounds especially ominous if we don’t know about the constraints that limit what data can be collected and analyzed. Since Dormehl doesn’t identify any of them, I’ll highlight the most crucial limitation. When the program was announced, Tesco clarified that it won’t record personal information, including customers’ images. Had Dormehl mentioned this commitment, he would have given readers less reason to crinkle their faces in disgust and horror.
By mentioning such conspicuously absent detail, I’m not suggesting that there aren’t good reasons to be concerned about corporate surveillance creep. To the contrary, I believe it’s essential to have robust privacy discussions about the matter before it’s too late; technology develops rapidly, and consumer regulations obviously have a hard time keeping up. But in order for privacy advocates to stand a shot of being taken seriously, they need to express justifiable propositions. They can’t make credible cases for protecting privacy by omitting highly pertinent facts.
Why Universities Embrace Surveillance
At times Dormehl fails to capture the full significance of what makes an example troubling. For example, after introducing us to CourseSmart's educational technology, which "uses algorithms to track whether students are skipping pages in their textbooks, not highlighting significant passages, hardly bothering to take notes, or even failing to study at all," Dormehl only points to one worrisome implication: the tool — and others like it — allows people to be judged by ideologically questionable "engagement" metrics. This focus allows Dormehl to quickly segue from discussing colleges using these so-called smart textbooks to the rise of neo-Taylorist corporate practices, such as "Tesco warehouses in the UK" where "workers are made to wear arm-mounted electronic terminals so that managers can grade them on how hard they are working."
Casting a net that unifies student and worker is problematic. Dormehl moves so quickly beyond the domain of education that he doesn’t ask an important question: beyond pedagogy, why are universities aggressively pushing for big-data driven policies? In other words, why are enhanced data collection and analysis regarded as central to the very survival of brick and mortar colleges? Such questions provide the proper context for understanding why administrators may be keen on the large-scale adoption of the type of technology CourseSmart offers, even if professors prefer more old-fashioned approaches to teaching.
Simply put, the engagement issue is a small part of a larger set of concerns — for instance, administrators wanting to increase the odds of students successfully graduating. This has come to mean their using predictive analytics and extensive surveillance to help students select the right courses, do well in their classes, and avoid behaviors that compromise mental health. While these goals sound great in the abstract, in practice a host of troubling issues ensue, ranging from profiling to sensitive data being taken out of its intended context of use. In other words, the algorithms powering these operations and the logic sanctioning their use require greater scrutiny. It might even be appropriate to consider regulating them because of ideals associated with "fair automation practices."
While I’ve been critical of how Dormehl presents some of his examples, the issues I raise are ultimately minor in comparison to what he does well. Dormehl packs a lot of content into a succinct text, and while he doesn’t offer his own theories or solutions, The Formula successfully demonstrates why academics shouldn’t be the only people interested in philosophical questions concerning technology.
¤

Luke Dormehl is a journalist and filmmaker and author of The Apple Revolution: The Real Story of How Steve Jobs and the Crazy Ones Took Over the World. He writes for Fast Company.



Wednesday, 28 January 2015

EVGENY MOROZOV: SOCIALIZE THE DATA CENTRES! @ New Left Review, No. 91 - January 2015


Read more @ New Left Review
Your work traces a distinctive path—unlike that of any other technology critic—from a grounding in the politics of post-Cold War Eastern Europe, via critique of Silicon Valley patter, to socio-historical debates around the relations between the Internet and neoliberalism. What was the background that produced this evolution? 
I was born in 1984, in the Minsk region of Belarus, in a new mining town called Soligorsk, founded in the late fifties. More or less the whole labour force was brought in from outside, and there’s little sense of national belonging. My father’s family came from the north of Russia; my mother, who was born near Moscow, arrived in the seventies with a degree in mining from Ukraine. The town is dominated by one huge state-owned enterprise that mines potassium and produces fertilizers which sell very well on the world market: it’s still the most profitable company in Belarus. My entire family worked for it, from grandparents to uncles and aunts. The USSR dissolved when I was seven, and while there may have been all sorts of problems with living in a small city like Soligorsk, they were not linked to the USSR’s disappearance. Under Lukashenko, who came to power when I was ten, Belarus was officially bilingual, but Russian was the dominant language, and growing up in Soligorsk felt just like being in a province of Russia. We were much more connected to events in Moscow than in Minsk. Initially there was no Belarusian television; the national media were not very strong, so the newspapers we got, and most of the TV programmes we watched at home, were Russian. People in Kaliningrad probably felt more cut off than I did in Soligorsk. Later, Lukashenko realized that if he didn’t control the flow of media in the country, he could lose the ability to make a case for Belarus to exist as an independent state, however pro-Russian. So he started limiting Russian programming to three or four hours a day, and mixing in some local news and Belarusian programming. But then people like my parents bought satellite dishes and continued watching Russian TV, not particularly because they mistrusted Lukashenko’s politics, but because the local stuff was so boring. 
How did you come to leave Belarus? 
My cousin was lucky enough to have studied for her bachelor’s degree in St Petersburg, before moving to Holland. So there was an expectation in my family that I might be able to do something outside the country. I wanted to spend a year in a high school in the US, but that didn’t work out. The next best thing was to go to the American University in Bulgaria, which had been set up in the early nineties with Soros and USAID—and maybe some State Department—money, in a former school for communist leaders in a small town called Blagoevgrad, near the border with Macedonia and Greece. Like Soligorsk it’s a small town, of 70,000 people; an odd, poor place, where a lot of the students came from the former Soviet bloc or adjacent countries: Bulgaria, Romania, Yugoslavia, Georgia, Armenia, Azerbaijan, Mongolia. Many, like myself, were on scholarships. There was a lot of ethnic tension on the campus when I arrived, in 2001, soon after the Kosovo conflict. I spent four years there, and learnt far more about the former Soviet Union than I ever did in Belarus. 
What were you studying? 
The mission statement of the university was to educate the future leaders of the region, its alumni set for political careers in government or civil society. Some did that, but its graduates mostly found themselves working in business—in consulting, auditing or accounting firms. I ended up double-majoring in business administration and economics. My initial ambition was to work in an investment bank. What saved me from that was a ten-week internship at JP Morgan in Bournemouth, of all places, making sure all the trades went through; so if any of the traders mistyped ‘0’ as ‘1’, you would have to catch it. I never understood why they couldn’t just automate the process. I realized investment banking was probably not for me. 
What did you do after graduating in Bulgaria? 
I decided to take a year out at the European College of Liberal Arts, a small outfit, now part of Bard College, that was also set up with American money—in this case by a private US philanthropist obsessed with liberal arts education. It wasn't a degree programme, but you could do a proper humanities course there for a year, with all expenses paid. The programme I ended up on focused on three thinkers: Freud, Marx and Foucault, in succession. For nine months we read very widely; Lukács on the novel, Jameson, Norbert Elias, a lot of secondary literature. It was a very intellectually stimulating programme. But while I knew I didn't want to do investment banking, I also didn't want to be an academic. So I thought most of this study was useless. In retrospect, of course, I'm glad I did it. 
How did you get from investment banking to writing on new media? 
A key influence on me was an Anglo-Dutch war reporter, Aernout Van Lynden, who lectured in Blagoevgrad because he was married to the Dutch ambassador to Bulgaria. The cultural standards on campus were low, but he was a genuine intellectual, who encouraged us to read the New York Review of Books and the FT every day. Living in Blagoevgrad—in the middle of nowhere, essentially—those were not at all the kinds of things people read. Most students were just focused on their careers. It was due to him that I started reading long-form journalism and experimenting seriously with writing in English. At the same time, in the last year or so of college I noticed that there was a sudden flow of articles dedicated to blogging—not just blogging as a phenomenon in itself, but as a political tool. This was during the 2004 US presidential election, when Howard Dean was running for the nomination of the Democratic Party. His campaign was marked by the horizontal deployment of micro-fundraising and blogging, and an emancipatory rhetoric—‘finally we can bypass the entrenched institutions that fund elections, and the mainstream media that sway them’. At roughly the same time, in late 2004, I saw the same wave of excitement about the use of these tools in the Orange Revolution in Ukraine, where LiveJournal—a blogging platform that was very popular in the Russian-speaking world—played a significant role. 
So I felt there was something interesting here. In America you had this discourse about the democratization of access to the media and to fundraising, and you could already see results of these changes in Ukraine, and earlier in Georgia and Serbia. Activists in Otpor!, the American-sponsored opposition in Serbia, were reporting that they had learnt how to organize protests by playing computer games. To me, this clicked: the computer games, the text messaging, the blogging . . . My interest in these technologies intensified. The following year, I picked up a book written by leading analysts of the Howard Dean moment, an edited collection called Blog!. I was perhaps a bit ahead of my cohort in Europe in understanding that a major transformation was under way. 
At this point you started writing about politics? 
No, that came earlier. Around 2003, when I was at a summer school in Berlin, I met a Russian student of journalism who was freelancing for Akzia, a paper I'd never heard of, and she introduced me to the editor. Akzia was distributed free in Russian cafés and places where hipsters and intellectuals hung out, and had a quite active online presence. It wasn't just an entertainment and culture publication: it featured political pieces about Russian youth and other movements, some more radical than others. They offered me a column, which is how I started in journalism—I was writing in Russian long before English. But not about Russia: the column was called Kosmopolit and covered a global beat—American elections, citizen journalism and mobile technology in Brazil, online publishing and copyright, architecture, you name it. Back in those days I wasn't much preoccupied with Russian politics. Had I been, given that I was coming out of the American University in Bulgaria, where we were fed the gospel of neoliberalism on a daily basis, I would have probably inclined toward a Khodorkovsky-like alternative to Putin. On foreign policy issues, I identified with smaller states like Moldova or Georgia in their various squabbles with Russia, in part because of my background—I was still naive enough to believe that Belarus could one day join the EU.
How and when did you connect politics and technology in your work? 
After 2004, I believed the story that the protesters in Ukraine and elsewhere were mobilized through text messaging and blogs. There were elections coming up in Belarus in March 2006, so I asked myself—what’s going to happen there? At this point I started collaborating with an NGO in Prague called Transitions Online, which used to be a print magazine called just Transitions, and in the late nineties became online-only. To pay for this, they had to develop all sorts of secondary activities, so they transformed themselves into an NGO, initially focused on teaching journalists from the former Soviet bloc how to do investigative reporting, or Roma who wanted to write about their lives—whatever there was money for. A lot of the funding came from parts of the Soros network concerned with education or regional issues. Other sources of money included the National Endowment for Democracy, Internews, maybe the German Marshall Fund, and alongside these American organizations, the Czech government and the Swedish International Development Agency. A lot of it was project-by-project. Eventually Transitions Online began to express an interest in new media—blogging, social networking etc. I offered to write some posts for them on what was happening in this area, and eventually took over the Belarus blog. When it became clear how quickly the new media space was developing across the former Soviet Union, we agreed that I would work for them full-time. That meant travelling quite widely in the former Soviet Union, doing training sessions for them. 
Where were you based in these years? 
I stayed in Berlin for three and a half years—a year in the European College of Liberal Arts, then two and a half years working for the NGO. But by August 2008 I had become frustrated not only with NGO work, but also with the attitude of many funders and their assumptions about technology and politics. Soros had created Open Society Fellowships that allowed you to work on a project from wherever you wanted. On getting one of these, I had to decide where to be based, and reckoned it would probably be easier to get a book published if I moved to New York. I was already doing a lot of writing—nothing very deep, but a lot of opinion pieces, freelancing for The Economist; of course, my name was not attached to the articles, but I worked quite a bit on their quarterly technology supplements and the international section of the magazine. I already had some ideas about what was wrong with much of the received wisdom about technology and politics. 
What were these? 
I was frustrated not only with the lack of the kind of results we had expected from our projects, but also the potential damage we could be causing. We were supposed to be saving the world by helping to promote democracy, but it seemed clear to me that many people, even in countries like Belarus or Moldova, or in the Caucasus, who could have been working on interesting projects with new media on their own, would eventually be spoiled by us. We would arrive with a lot of money, and put them on a grant, and they would soon start thinking very differently: ‘Great, even if I fail I can get another grant.’ Later I began to question our objectives too, but back then I believed in them, and thought that if our aim was to promote an independent culture of publishing and conversation—a kind of Habermasian public sphere—trying to engineer it by doling out money was the wrong way to go about it. 
At the same time, while the governments in power in these countries were supposed to be our allies—at least, nobody said they were our enemies—it was clear their priorities were the opposite of ours. We thought all we needed to do was make these independent voices heard. But governments very quickly began deploying tools, techniques and strategies in this new media space that were much smarter than we had anticipated—not only stepping up surveillance, but creating their own propaganda by hiring bloggers, manipulating online conversations, carrying out denial-of-service attacks on websites. We weren’t raising the right questions about this. Of course, in retrospect there was a reason why we were not asking them. It wasn’t in the remit of the National Endowment for Democracy to be questioning whether American companies were supplying surveillance equipment to the government of Uzbekistan. 
So when I began my first book, The Net Delusion, my aim was to show that many of the tools, platforms and techniques we were celebrating as emancipatory could equally be turned against the very activists, dissidents and causes we were trying to promote. [1] Today this sounds obvious. But back then, most donors and most Western governments simply assumed that dictators—or whatever they called authoritarian governments—would never be able to control ‘the Internet’, because they were too dumb, too disorganized, too technophobic, and that this new wave of information technology would bring about their downfall. In Washington the narrative of the end of the Cold War encouraged this: if it was Radio Free Europe and Xerox machines that killed off the Soviet Union, blogs and social media could now finish the job of exporting democracy. 
It seemed clear to me that this framing of Internet freedom as a pillar of US foreign policy threatened to undermine whatever potential the new tools and platform had for creating an alternative public sphere, since the more the American state got involved in them, the more it would tip off other governments that something ought to be done about them. But I was twenty-five when I wrote The Net Delusion, and thought I might end up in a Washington think-tank, so it reads as if I’m trying to tell US policy-makers they were setting a trap for themselves, and I was advising them to act differently. Of course, I wouldn’t write it that way now. 
You weren’t aware that the NSA far exceeded any government in the world in its universal electronic surveillance? 
No, I didn’t know about the NSA. But a lot was in the open—cyber-attacks by the US government, for example. Already by 2006 or 2007 it was crystal clear that there were dedicated units within the Department of Defense whose job was to take down the websites of jihadists and other foes, even if there was typically tension between the Pentagon and the CIA, which wanted to derive intelligence from them so didn’t want them taken down. So when Hillary Clinton condemned countries that engage in cyber-attacks in her 2010 speech on Internet freedom, it was the worst kind of hypocrisy. Just as when US officials talk of supporting bloggers everywhere, you only have to look at their actual policy in countries like Azerbaijan or Saudi Arabia. It’s not just a contradiction on Internet freedom, but also on human rights and many other issues. These foreign policy contradictions were reflected in my own book, where I was trying to understand what kinds of tools and techniques Russia, China, Iran, Egypt and other such states were developing in terms of surveillance, censorship, buying bloggers, establishing control over companies, without paying attention to what the United States itself was doing. 
How would you track that today? 
Well, let’s take the example of a figure like Jared Cohen, who studied at Stanford under Larry Diamond, and marketed himself as the next defence/foreign policy Wunderkind. He published two books—one on America’s response to the Rwandan genocide and another on youth radicalization—before getting a job with the Policy Planning Staff at the State Department in 2006, aged twenty-four. There he worked with former Contra-controller John Negroponte, who was Deputy Secretary of State, and Under Secretary of State for Public Diplomacy James Glassman, author of a hymn to the ‘new economy’ shortly before the dot.com bubble collapsed. [2] But his career really took off with Obama’s election on a wave of technophoria. Staying on at State, Cohen used the anti-FARC mobilization of 2008 in Colombia to demonstrate the vital importance of ‘Internet freedom’ to the State Department, claiming it was all started by a guy on Facebook who had set up a group to protest against the FARC. In reality, of course, it was Álvaro Uribe who aired the Facebook group in a presidential address on television, and organized the whole affair. But in the State Department this became the showcase of how mass mobilization for good causes could be magicked up through the new technology. Alongside Cohen, there was now Alec Ross, in his thirties and with little background in international relations or foreign policy, whom Obama appointed as Senior Adviser to Clinton. This pair started arranging what they called ‘tech executive trips’. Since the main US cultural export and basis for soft diplomacy seemed to be technology, they decided that the CEOs of these companies could help boost America’s image abroad. So they would fly bosses from Silicon Valley over to Mexico, Syria—where they met with Assad—or Iraq as quasi-cultural ambassadors. Symbolically enough, Jared Cohen met Eric Schmidt, the Google boss who is a key Obama backer, on a trip to Baghdad. They went on to become co-authors of The New Digital Age. [3]
What was the political upshot of this agenda? 
In 2009 the tale of the State Department’s help to the Green protests in Iran got front-page treatment in the New York Times. The official story was that Twitter, not knowing much about what was happening in other parts of the world, decided to schedule maintenance of their website just as protests were brewing in Iran after the election of Ahmadinejad, triggering outrage within the Twitter community (though how many Iranians were using Twitter was much exaggerated). At this point, Cohen asked one of Twitter’s senior executives to delay their maintenance, and the story leaked (or was passed) to the New York Times. Later it was reported that Cohen got into trouble with the White House, because this could be read as American intervention in the Iranian elections. After the event, the episode was spun to suggest the US government was at least in touch with emerging media use. Actually, career diplomats hated all this. Some wrote long blog posts complaining that these two youngsters were running US foreign policy on all things digital. The episode was used by state-owned media in Russia, Iran, China and elsewhere to prove that Silicon Valley was just an extension of the State Department. In Russia, you heard the first calls in government circles for something to be done about Russian dependence on American infrastructure. Suddenly there were moves by oligarchs close to the Kremlin to buy out the owners of Russian internet companies, so that they could either be shut down or have content removed if they risked provoking any social protest. 
How far would you see the outcome of the Arab Spring as a vindication of The Net Delusion? 
To some extent. Many people took the book to carry a single message, even if they rarely agreed on what it was. One group of readers thought I was saying that the Internet would inevitably favour governments over protesters and dissidents; another that I was suggesting the Internet led to shallow, ineffective activism and could be dismissed by those interested in real change. Actually, my argument was that certain aspects of digital technologies are conducive to social mobilization, and others to suppression of mobilization—which of these tendencies predominates largely depends on the political dynamics in a country. I also wanted to make clear that popular discourse about these technologies was completely disconnected from three realities: that they are operated by private companies interested, above all else, in making money; that slogans like ‘Internet freedom’ have not made old-style foreign policy considerations suddenly disappear (American fascination with them has its roots in the Cold War); and that their utopian appeal cannot be squared with most of the things (cyber-attacks, surveillance, spin) the US government itself was doing online. 
So the Arab Spring did confirm many of my hunches. We learnt that Western companies were supplying surveillance technologies to Libya and Egypt; that the ease of horizontal mobilization afforded by social networks is of limited help if it doesn’t generate more lasting political structures that can contest the military rule outside the squares; that widespread celebration of the role of Twitter and Facebook in the Arab Spring led Russia, China, and Iran to take further steps to tighten control over their own online resources. Much of the talk about the Arab Spring as the arrival of a new style of digital protest, in fact, was an updated version of modernization theory, inviting us to believe that the use of sophisticated media leads to intellectual emancipation, greater respect for human rights, and so forth. One look at ISIS’s media strategy is enough to show that this is nonsense. 
What in your view are the current ownership structures of the Internet? 
I haven’t developed a complex map of the entire stack of these, and much of my current work is on the ambiguity of this term, ‘the Internet’. But obviously, from hardware to software, if we are speaking of companies, these are overwhelmingly American. Samsung may have a respectable share of the smartphone market, but its operating system—Android—is Google’s. Which raises a further question. Android is open-source, but a lot of open-source software is provided by companies with headquarters in the US. Open-source software is no doubt better than closed-source, but the fact that Android is run by Google, and integrated with other products that Google owns, lessens the benefits of this. The outcome is still one giant US company in control of a vast amount of traffic and data. The initial hope with open-source software was that anyone could examine it for any ‘backdoors’ in the code that might make it vulnerable to agencies like the NSA. But we know that there is a huge market in exploits. [4] If you have the money, you can exploit even open-source software. Who has the money? The NSA, of course. 
With free or with open-source software, at least cat-and-mouse games of hacker-versus-surveiller are possible, whereas with closed systems like Apple’s there’s little way of knowing what access organizations like the NSA might have to your data. [5] Shouldn’t one still make this distinction? 
This is where we need to be explicit about the normative benchmarks by which we want to assess the situation. If the question is just privacy, then of course open-source is far better. But that doesn’t resolve the issue of whether we want a company like Google that already has access to an enormous reservoir of personal information to continue its expansion and become the default provider of infrastructure—in health, education and everything else—for the twenty-first century. The fact that some of its services are a bit better protected from spying than Apple counterparts doesn’t address that concern. I’m no longer persuaded by the idea that open-source software offers a kind of transnational way of escaping the grip of the American behemoths. Though I would still encourage other countries or governments to start thinking about ways in which they can build their own, less compromised alternatives to them. 
Since Snowden, a lot of hackers are especially concerned with government spying. For them, that’s the problem. They’re civil libertarians, and they don’t problematize the market. Many others are concerned with censorship. For them, the freedom to express what they want to say is crucial, and it doesn’t really matter if it’s expressed on corporate platforms. I admire what Snowden did, but he is basically fine with Silicon Valley so long as we eliminate firms that have weak security practices and install far better, tighter supervision at the NSA, with more levels of transparent control and accountability. I find this agenda—and it’s shared by many American liberals—very hard to swallow, as it seems to miss the encroachment of capital into everyday life by means of Silicon Valley, which I think is probably more consequential than the encroachment of the NSA into our civil liberties. Snowden’s own proposals remain very legalistic: if we can only establish five more stages of checks and balances within the American juridical system, and a court that is better controlled by the public, everything will get better. 
These debates don’t touch on issues of ownership or bigger political questions about the market. In my more recent work, I’ve argued that we don’t yet know how to address these. The data extracted from us has a giant value that is reflected in the balance sheets of Google, Apple and other companies. Where does this value come from, in a Marxist sense? Who is working for whom when you view an ad? Why should Google or Apple be the default owners? To what extent are we being pushed to monitor, gather and sell this data? How far is this becoming a new frontier in the financialization of everyday life? You can’t address such matters in terms of civil liberties. 
Isn’t the key issue the rate and degree of monopolization in this area? These companies have grown much bigger and faster than their predecessors. It took a lot longer for oligopolies to emerge in the automobile or aircraft industries. Google only started in 1996. 
That’s a function of the nature of the service and the network effects in companies like Google and Facebook. The more people are on Facebook, the more valuable it becomes, and it doesn’t really make sense to have five competing social networks with twenty million people on each; you want all of them on one platform. It’s the same for search engines: the more people are using Google, the better it becomes, because every search is in some sense a tinkering and improvement in the service. So Google’s expansion into other domains has been very fast. Right now they do thermostats, self-driving cars, health. Google and Facebook are even trying to bring connectivity to so-called Third World countries. For them it’s important to get everyone in Africa and Asia online, because that’s the next few billion eyeballs to be converted into advertising money. But they get their customers online under very specific terms. 
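A rough way to see why consolidation is the default here is the standard network-effects arithmetic: if a network's value grows roughly with the number of possible connections between its users (a Metcalfe-style assumption used purely as an illustration, not a formula from the interview), then five separate networks of twenty million users are worth far less than one platform holding all hundred million:

$$
V(n) \approx \frac{n(n-1)}{2} \sim \frac{n^{2}}{2}, \qquad
5 \times \frac{(2\times 10^{7})^{2}}{2} = 10^{15}
\;\ll\;
\frac{(10^{8})^{2}}{2} = 5\times 10^{15}.
$$

On this crude count the single platform offers five times as many potential connections, which is the pull towards one social network and one dominant search engine described above.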
Facebook takes mobile operators as partners, since in poor countries most people will get online through their mobile phones. Users pay for what they access and download, but don’t have to pay to access Facebook. Facebook comes free, and everything else is at a price—so that’s supposedly positive, because it’s better than paying for everything. The result is that all other services have to establish a presence on Facebook, which thus becomes the bottleneck and gateway through which content is fed to users. So if you wanted to provide education to students in Africa, you’d be better off doing it through Facebook, because they wouldn’t have to pay for it. You would then end up with a situation where data about what people learn is collected by a private company and used for advertising for the rest of their lives. A relationship previously mediated only in a limited sense by market forces is suddenly captured by a global American corporation, for the sole reason that Facebook became the provider of infrastructure through which people access everything else. But the case to be made here is not just against Facebook; it’s a case against neoliberalism. A lot of the Silicon Valley-bashing that is currently so popular treats the Valley as if it was its own historical force, completely unconnected from everything else. In Europe, many of those attacking Silicon Valley just represent older kinds of capitalism: publishing firms, banks etc. 
In a periodization of how all this came about, what do you see as the critical turning points in the short but fast history of the Internet, and what are the most important analytical distinctions to be made within it? 
I’m dissatisfied, as I’ve said, with the ambiguity of the term ‘the Internet’. From the fifties or sixties onwards, there were separate, parallel developments in software, in hardware, in networks. If you look back at the situation in the late seventies, you find a dozen networks connecting the globe: the payments network, the travel-reservation networks and so on. That the network which eventually became the Internet would emerge as the dominant system was not obvious. It took a lot of effort—in standards committees, and at the level of organizations like the International Telecommunications Union—to make that happen. There were also developments such as smartphone apps, which we now perceive as part of the Internet because they run on platforms produced by giant companies like Google, but which make more sense within the history of software than that of internetworking. The fact that all of those histories discursively converged on the term ‘Internet’ is itself a significant historical development. If you study the debate between 1993 and 1997, this wasn’t the most popular term to talk about these issues; that was ‘cyberspace’. 
For most of the nineties, you still had a multiplicity of different visions, interpretations, anxieties and longings for this new world, and a lot of competing terms for it—virtual reality, hypertext, World Wide Web, Internet. At some point, the Internet as a medium overtook all of them and became the organizing metacategory, while the others dropped away. What would have changed if we had continued thinking about it as a space rather than as a medium? Questions like these are important. The Net isn’t a timeless, unproblematic category. I want to understand how it became an object of analysis that incorporates all these parallel histories: in hardware, software, state-supported infrastructures, privatization of infrastructures, and strips them of their political, economic and historical contexts to generate a typical origin story: there was an invention—Vint Cerf and DARPA—and it became this fascinating new force with a life of its own. [6] Essentially, that’s our Internet discourse at present. 
But isn’t there at least one objective basis for the unity of these discourses about the Internet: that, while all these previous networks existed separately, once the basic Internet protocol—TCP/IP—came onto the scene they all tended to converge into a single integrated structure? 
I’m happy to accept the reality of the TCP/IP protocol, while also rejecting the discursive unity of the Internet as a term. My concern is that people assume there is a set of facts which derives directly from this architecture, as if the services that are built on it are not operated by companies or monitored by states. They start saying things like: it will break the Internet, or the Internet will fail, or the Internet will not accept it. This kind of talk is almost religious. I might even say that the Internet does not exist. This is not to deny that there is something which I use every day; but there’s much more continuity than many of these narratives suggest between what I use on my computer and an information system that ran in some library forty years ago, before the Internet. 
So how might we begin looking at these developments in a sharper socio-historical perspective? 
In the sixties, engineers at MIT and elsewhere had a vision of computing as a public utility that looked very much like contemporary cloud computing. Their idea was that you would have one giant computer in a place like MIT, and then in people’s houses you would get computing just as you do electricity or water. You wouldn’t need to run your own processor or have your own hardware, since it would all be centralized in one place. At that time the big computer companies like IBM were mostly supplying mainframe computing for big business—they didn’t cater to personal users, families, consumers. Thanks in part to the anti-institutional climate and counterculture of the seventies, companies like Apple challenged the dominance of those big players. It took a lot of effort by people like Steve Jobs, and their intellectual enablers in publications like the Whole Earth Catalog—Stewart Brand and the countercultural wing that was promoting this do-it-yourself paradigm—to convince consumers that computers could be owned and operated by individuals; that they were creative new tools of liberation, and not just machines of aggression and bureaucracy. 
Unless you understand this, it’s hard to see how everything got interconnected—you needed something to interconnect. At the beginning you just had the universities, and it would have stayed that way if there had been no change of mentality, no shift towards personal computing. Today the move to cloud computing is replicating some of that early rhetoric—except, of course, that companies now reject any analogy with utilities, since that might open up the possibility of a publicly run, publicly controlled infrastructure. 
How should the current phenomenon of centralized ‘big data’ be located in this broader history? 
‘Big data’ isn’t something unique to the last few years. To understand what’s driving this data collection, you need to forget Internet debates and start focusing on the data banks selling information on the secondary market—companies like Acxiom and Epsilon. Who are they selling their data to? To banks, insurance companies, private investigators and so on. There was a debate in the late sixties about the role and potential abuse of data banks in America, which was not all that different from the big data debates today. At stake was whether the US should run national data banks and aggregate all the information collected by federal agencies into one giant database accessible to every single agency and every single university. It was a huge debate, including on a Congressional level. In the end the idea was killed because of privacy concerns. But a lot of scientists and companies made a case that since the data had been collected, it ought to be made accessible to other researchers, because it might help us to cure cancer—exactly the sort of rhetoric you hear now with Big Data. Nowadays the information can be produced far more easily because everything we do is tracked by phone, smart gadget, or computer, and this amplifies its volume. So much is now gathered that you can argue it deserves a new name. But these Internet debates tend to operate with a kind of amnesia, narrating everything in a kind of abstracted history of technology. 
There’s a story to be told even about Google’s main ranking algorithm, which actually comes out of decades of work on information science and indexing. The mechanism that Google uses to determine which items are relevant or not—by looking at who links to what, citation patterns etc—was developed in relation to the indexing of academic literature; it’s not their own invention. But you would never guess that without knowing something about developments in information science. Likewise, people looking at these ‘massive open online courses’ today don’t generally know that in the fifties and sixties people like B. F. Skinner were promoting what he called ‘teaching machines’ that would dispense with an instructor. There’s a continuous tradition of trying to automate education. The fact that a bunch of start-ups have now moved into the area does not erase those earlier developments. Now that ‘the Internet’ is spreading into everything—education, healthcare (with the ‘quantified self’), and all the rest—we’re in danger of ending up with a kind of idiot history, in which everything starts in Silicon Valley, and there are no other forces or causes. 
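The citation-style link analysis mentioned here can be made concrete with a toy sketch. The following is a minimal power-iteration ranking over a made-up link graph, in the spirit of PageRank and its information-science ancestors; the graph, damping value and variable names are illustrative assumptions, not Google's actual implementation.

```python
# Toy citation/link-analysis ranking (PageRank-style power iteration).
# Illustrative only: the graph and damping factor are invented for this example.
links = {
    "a": ["b", "c"],   # page "a" links to (cites) "b" and "c"
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start with equal scores

for _ in range(50):  # iterate until the scores settle
    new_rank = {}
    for p in pages:
        # each page passes its score, split evenly, to the pages it links to
        incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))  # "c", the most-cited page, ranks first
```

The point of the sketch is only that relevance is inferred from who links to whom, the same logic earlier used to rank academic papers by their citations.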
How inevitable do you regard this drive towards technical and organizational centralization over the last decade or so? 
There are tendencies towards centralization across the board, though there are also industry dynamics which lend a specific tempo to each domain and layer. So what is happening with data should be distinguished from what is happening in phone manufacturing. But Google and Facebook have figured out that they cannot be in the business of organizing the world’s knowledge if they do not also control the sensors that generate that knowledge and the gateways through which it passes. Which means that they have to be present at all levels—operating systems, data, indexing—to establish control over the entire proverbial ‘stack’. 
Can we perceive any counter-tendencies at present? 
Tension may arise when more and more industries and companies realize that, if Google’s aim is not only to organize all of the world’s knowledge, but also to run the underlying informational infrastructure of our everyday life, it will be in a good position to disrupt all of them. That may generate resistance. At present there is pressure on European policy-makers to break up Google, driven by national firms—often German capital, which, understandably, is fearful that Google could take over the auto industry. The big media empires in Germany also have reason to be worried by Google. So this kind of intra-industry fight might slow things down a little. But I don’t think it will benefit citizens all that much, since Google and Facebook are based on what seem to be natural monopolies. Feeble calls in Europe to weaken or break them up lack any alternative vision, economically, politically, or ecologically. 
You dismiss European resistance to Google as merely the opposition of old firms to newer ones. Still, isn’t this a real-world pebble on the tracks of the American juggernaut, to which you might seem to be telling people to resign themselves, since all neoliberal cows are equally black in the night? 
The continual demand by local politicians to launch a European Google, and most of the other proposals coming out of Brussels or Berlin, are either misguided or half-baked. What would a European Google do? Google today is much more than a search company. It runs an operating system for mobile phones and soon for other smart devices, a browser, an email system, and even quite a bit of cable and broadband infrastructure. There are lots of synergies across these activities; there is no way to replicate them by just dumping a dozen billion dollars on a university and asking them to come up with a better search algorithm that can outperform Google. Google will remain dominant as long as its challengers do not have the same underlying user data it controls. Better algorithms won’t suffice. 
For Europe to remain relevant, it would have to confront the fact that data, and the infrastructure (sensors, mobile phones, and so on) which produce them, are going to be the key to most domains of economic activity. It’s a shame that Google has been allowed to move in and grab all this in exchange for some free services. If Europe were really serious, it would need to establish a different legal regime around data, perhaps ensuring that they cannot be sold at all, and then get smaller enterprises to develop solutions (from search to email) on top of data so protected. 
How would you describe your political evolution since The Net Delusion? 
Well, I originally regarded myself as in the pragmatic centre of the spectrum, more or less social democratic in outlook. My reorientation came with an expansion of the kind of questions I was prepared to accept as legitimate. So whereas five years ago or so, I would be content to search for better, more effective ways to regulate the likes of Google and Facebook, today it’s not something I spend much time on. Instead, I am questioning who should run and own both the infrastructure and the data running through it, since I no longer believe that we can accept that all these services ought to be delivered by the market and regulated only after the fact. In the course of my genealogical research into the history of ‘the Internet’—it’s a challenge to write it both from discursive and materialistic standpoints—I’ve spent a fair amount of time trying to understand what’s been happening in Silicon Valley. For no plausible story can emerge unless Silicon Valley itself is situated within some broader historical narrative—of changes in production and consumption, changes in state forms, changes in the surveillance capabilities and needs of the US military. There’s much to be learnt from Marxist historiography here, especially when most of the existing histories of ‘the Internet’ seem to be stuck in some kind of ideational irrelevance, with little to no attention to questions of capital and empire. 
At some point in the summer and fall of 2013 I started paying attention to the growing commodification of personal data. Basically, now that everything is in one way or another mediated by Silicon Valley—all these smart beds and smart cars and smart everything—it’s possible to capture and monetize every moment we spend awake (and, it seems, also asleep). So we are all invited to become data entrepreneurs curating our data portfolios. Analytically, of course, this ‘datafication’ of everything is an extension of the much broader phenomenon of the financialization of everyday life. I spent a lot of time trying to figure out why this is happening and how it can be stopped and it became obvious to me that the answers to these questions had far more to do with politics than with technology. I also realized that I could continue coming up with alternative policy proposals all I wanted, but they still wouldn’t be accepted, for structural reasons. The reason why Europe has such a hard time formulating an alternative project to Silicon Valley has little to do with any lack of knowledge or skills in Europe. It’s just that the kind of interventions that would have to be made—lessening dependence on American companies, promoting initiatives that do not default to competitiveness and entrepreneurship, finding money to invest in infrastructure that would favour the interests of citizens—go clean against what the neoliberal Europe of today stands for. Not to mention the way in which lobbyists representing big technology companies dominate the debate in Brussels. In other words, to understand Europe’s dealings with ‘the Internet’ we are far better off historicizing Europe rather than ‘the Internet’. Once I had done some work on the most elementary, perhaps even superficial level—for example, by looking at the evolution of antitrust and competition law in Europe, or the dissemination of various ideas that used to be associated with the Third Way under the innocent-sounding label of ‘social innovation’—I found it very hard not to question my own social democratic complacency. 
What are the political implications of the spreading of the Internet into everything, and massive centralized data-gathering? 
Technology companies can enact all sorts of political agendas, and right now the dominant agendas enforce neoliberalism and austerity, using centralized data to identify immigrants to be deported, or poor people likely to default on their debts. Yet I believe there is a huge positive potential in the accumulation of more data, in a good institutional—and by that I mean political—setup. Once you monitor one part of my activity and offer me some proposals or predictions about it, it’s reasonable to suppose your service would be better if you also monitored my other activities. The fact that Google monitors my Web searches, my email, my location, makes its predictions in each of these categories much more accurate than if it were to monitor only one of them. If you take this logic to its ultimate conclusion, it becomes clear you don’t want two hundred different providers of information services—you want just one, because the scale-effects make things much easier for users. The big question, of course, is whether that player has to be a private capitalist corporation, or some federated, publicly-run set of services that could reach a data-sharing agreement free of monitoring by intelligence agencies. 
Public transportation would probably work much better if we could coordinate it based on everybody’s location, with some kind of predictive analytic of where you need to pick people up, as opposed to the present rigid systems, with trains that sometimes don’t carry any passengers. That would not just cut costs, but could help to engineer a more environment-friendly infrastructure. I wouldn’t want to oblige everyone to wear an electronic bracelet. But I am not against monitoring devices as such, though perhaps they should operate at country level—they needn’t be global. If you’re trying to figure out how a non-neoliberal regime can function in the twenty-first century and still be constructive towards both environment and technology, you have to tackle these kinds of questions. There’s no avoiding them. You will need some kind of basic planning and thinking about an overall informational infrastructure for our communal living, rather than just a clutch of services any company can provide. Social democrats will tell you: it’s okay, we’ll just regulate private firms to do it. But I don’t think that’s plausible. It’s very hard to imagine what regulating Google would mean at this point. For them, regulating Google means making it pay more tax. Fine, let it pay more tax. But this would do nothing to address the more fundamental issues. For the moment we don’t have the power and resources to tackle these. There is no political will to develop the necessary alternative vision in Europe. Things, of course, might change—who knows what will happen if Podemos and Syriza win the elections next year? Right now all we can do is try to articulate some kind of utopian vision of what a non-neoliberal, but technology-friendly, world might look like. 
What would be the prerequisites for the relatively benign centralized ‘big data’ arrangements you envisage to come into being? 
At a national level, we need governments that do not deliver the neoliberal gospel. At this point, it would take a very brave one to say, we just don’t think private companies should run these things. We also need governments that would take a bet and say: we believe in the privacy of individuals, so we are not going to subject everything they do to monitoring, and we’ll have a strong legal system to back up all requests for data. But this is where it gets tricky, because you could end up with so much legalism corroding the infrastructure that it becomes counterproductive. The question is how can we build a system that will actually favour citizens, and perhaps even favour some kind of competition in its search engines. It’s primarily from data and not their algorithms that powerful companies currently derive their advantages, and the only way to curb that power is to take the data completely out of the market realm, so that no company can own them. Data would accrue to citizens, and could be shared at various social levels. Companies wanting to use them would have to pay some kind of licensing fee, and only be able to access attributes of the information, not the entirety of it. 
Unless we figure out a legal-social regime that will allow this stock of data to grow without it ending up in the corporate silos of Google or Facebook, we won’t get very far. But once we have it, there could be all sorts of social experimentation. With enough data you could start planning beyond the horizon of the individual consumer—at the level of communities, neighbourhoods, cities. That’s the only way to prevent centralization. Unless we change the legal status of data, we’re not going to get very far. 
You think the fundamental choice is between two different kinds of ‘big data’ world—one run by private companies such as Google and Facebook, the other by something like the state? 
I’m not saying that the system should be run by the state. But you would have at least to pass some sort of legislation to change the status of data, and you would need the state to enforce it. Certainly, the less the state is involved otherwise, the better. I’m not saying that there should be a Stasi-like operation soaking up everyone’s data. The radical left notion of the commons probably has something to contribute here. There are ways you can spell out a structure for this data storage, data ownership, data sharing, that will not just default to a centrally planned and run repository. When it’s owned by citizens, it doesn’t necessarily have to be run by the state. 
So I don’t think that those are the two only options. Another idea has been to break up the monopoly of Google and Facebook by giving citizens ownership of their data, but without changing their fundamental legal status. So you treat information about individuals as a commodity that they can sell. That’s Jaron Lanier’s model. [7] But if you turn data into a money-printing machine for citizens, whereby we all become entrepreneurs, that will extend the financialization of everyday life to the most extreme level, driving people to obsess about monetizing their thoughts, emotions, facts, ideas—because they know that, if these can only be articulated, perhaps they will find a buyer on the open market. This would produce a human landscape worse even than the current neoliberal subjectivity. I think there are only three options. We can keep these things as they are, with Google and Facebook centralizing everything and collecting all the data, on the grounds that they have the best algorithms and generate the best predictions, and so on. We can change the status of data to let citizens own and sell them. Or citizens can own their own data but not sell them, to enable a more communal planning of their lives. That’s the option I prefer. 
So you reject the idea that the future will inevitably look like more of the same: large-scale concentrations of computing power and data run by one monopoly or another? 
The ultimate battle lines are clear. It’s a question of whether all these sensors, filters, profiles and algorithms can be used by citizens and communities for some kind of emancipation from bureaucracies and companies. If current economic, social and political trends continue, we could conceivably end up with data-driven automation for the poor—so that all their time can be spent working—while the rich enjoy cultivating their senses, learning languages, getting to know art, studying. That’s what I fear. But this isn’t a matter of the future of computing as such; it’s about what it can be used for. On the one hand, we can foresee these companies extending their reach ever further into everyday life, to a point where it would become difficult to even articulate why you would want a different model, since our use of these technologies and the politics embedded in them also permits or restricts our ways of thinking about how to live. On the other hand, we can speculate about a utopian future in which technology plays the role that back in the sixties Murray Bookchin accorded it in his essays in Post-Scarcity Anarchism: helping us to live with abundance. 



Previous texts in this series have been Göran Therborn, ‘New Masses?’ (NLR 85), André Singer, ‘Rebellion in Brazil’ (NLR 85), Erdem Yörük and Murat Yüksel, ‘Class and Politics in Turkey’s Gezi Protests’ (NLR 89) and Bhaskar Sunkara, ‘Project Jacobin’ (NLR 90).



[1] The Net Delusion: How Not to Liberate the World, New York and London 2011. 
[2] James K. Glassman, Dow 36,000: The New Strategy for Profiting from the Coming Rise in the Stock Market, New York 1999. 
[3] Eric Schmidt and Jared Cohen, The New Digital Age: Reshaping the Future of People, Nations and Business, London 2013. 
[4] Exploit: computer security term for a technique which takes advantage of a technical bug or vulnerability, for example to take control of a targeted device. 
[5] The usage of the term ‘hacker’ here is that derived from the technological subculture, which has connotations of DIY experimentalism. This should be differentiated from the pop-cultural usage in which the term has come to refer to the ‘crackers’ who gain unauthorized access to computer systems. 
[6] DARPA: the Defense Advanced Research Projects Agency—a branch of the Pentagon. Vint Cerf was a key figure in it. 
[7] For Lanier, see Rob Lucas, ‘Xanadu as Phalanstery’, NLR 86, March–April 2014.