Varia archive


Algologs

algorithms in the age of repredictions



Algolit

Algologs is part of a series of events and gatherings that gravitate around the Brussels-based artists, activists and designers that make up Algolit. Algolit is a workgroup initiated by Constant, an arts organization in Brussels that works on the topics of free software and feminism. The Algolit group started in 2012 with an interest in finding crossovers between algorithms and literature. Some of its current members are also here tonight! Yes! [perhaps we could ask Gijs and An if they want to talk a bit about Algolit? - would be nice to present that part altogether? We can expand with examples on OuLiPo]

Algolit is inspired by OuLiPo: Ouvroir de littérature potentielle, another group, this time of writers, with its roots in the 60s, which created literary work by setting constraints for themselves. Famous examples are Exercises in Style by Raymond Queneau, which takes one text and rewrites it in 99 different ways, and the novel La Disparition by Georges Perec, which omits the letter e.

Johanna Drucker reflects on the work of OuLiPo in an open letter published on UbuWeb that she called 'Un-Visual and Conceptual':
One of the great lessons of OuLiPo is that by inventing constraints we are made aware of the already existing ones. OuLiPo challenged linguistic constraints by accelerating or exhausting them and writing poetry with them. If we were to bring back the OuLiPian ethos into today's world, which is both assisted and plagued by algorithms, what would it look like?
What constraints are inherently embedded in algorithmic systems?
How can you speak back to an algorithm in a self-conscious way?

Algolit subjects

Previous Algolit projects looked into, among other things: the quicksort algorithm used to sort a row of people, a script that creates infinitely reiterated definitions based on the online dictionary dataset WordNet, Markov chain recipes based on the text generation algorithm that is often used to create spam emails, and a Frankenstein Chatbot Parade, for which a set of bots was written to reproduce the novel Frankenstein.
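As a sketch (not the original Algolit script), a minimal Markov chain text generator of the spam-email kind could look like this in Python; 'frankenstein.txt' is a stand-in filename:

import random
from collections import defaultdict

# Map each word to the list of words that follow it in the text.
def build_chain(words):
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

# Walk the chain: repeatedly pick a random continuation of the last word.
def generate(chain, word, length=20):
    output = [word]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return ' '.join(output)

words = open('frankenstein.txt').read().lower().split()  # stand-in filename
print(generate(build_chain(words), 'the'))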

The group decided in 2016 that it wanted to study text-based machine learning processes and decision-making algorithms: a field where different algorithms are combined in order to search for patterns in text and make predictions. Last November, this resulted in a four-day event in Brussels with a small exhibition of machine learning tool explorations, guided tours, a lecture evening, and two workshops. Algologs is a continuation of this series of public events, where we want to connect to new practices that relate to the questions we are facing in our own sessions. The events are open to anyone, regardless of their familiarity with machine learning or coding in general.

Recipes

For 2018, the Algolit group will continue its study of machine learning elements, but this time we will take the process apart and focus on separate elements, such as datasets, counting techniques, vectors, or the idea of multiple dimensions.

To do this, we will look back at OuLiPo and use the format of recipes in the upcoming Algolit sessions. A first tryout of this new format happened a few weeks ago, in an Algolit session in which we looked at the subject of datasets: what does it mean to make custom and self-downloaded datasets, and how could the act of gathering textual material be political work? Notes from our session can be found on our wiki. :) An example recipe is:

With this focus on recipes, and by placing one technique or subject at the centre of each Algolit session, we want to provide the option of jump-in, jump-out participation and approach this complex topic with multiple voices. We aim to open the process to people who do not have the availability to attend every session but are interested in the subject matter.

Tomorrow we will host a second iteration in this new Algolit style and look into counting techniques, the quantification of text and counting words. We will use a 'famous' set of machine learning tools, word2vec, to look at a specific way of transforming words into numbers.

Algologs

The term Algologs originates from the wish to focus on different practices and attitudes towards algorithmic mechanisms. Together with you, we want to log different ways of working with algorithms.

The subtitle of this event, algorithms in the age of repredictions, is a reference to Walter Benjamin's well-known essay The Work of Art in the Age of Mechanical Reproduction, as you might have guessed. The essay became so popular that many other authors have played on the title. To give just a few examples:

    The work of art in the age of digital recombination - Jos de Mul
    The Work of Art in the Age of Deindustrialization - Jasper Bernes
    The Work of Art in the Age of Digital Assassination - Saud Al-Zaid
    The Work of Art in the Age of Amazon - Ben Mauk

The expanding reach and current relevance of Benjamin's text has been maintained through the many iterations it has had in current media theory environments. Each new reinterpretation of the title comes with a reference to the original piece, thus reinforcing its position in the canon. 

Since much of tonight's discussion will be about copies, reiterations, backpropagations and predictions about the future, we are not letting Walter Benjamin rest in peace either. Tonight we will be looking at how one can go from reproduction to reprediction. How does the past influence the future in algorithmic judgement? In machine learning, for example, one needs a corpus to train and test a model with. What goes into the corpus will influence the worldview of the model. And sometimes these algorithms end up dictating our future way of living as well. [double with the machine learning explanation below]

Terms

The word algorithm might come up quite a few times tonight, so we thought it would be good to briefly introduce this term before we go on. An algorithm can be seen as a list of instructions that are executed from top to bottom. These instructions can be as simple as listing the items in a folder, or become more complex when they involve mathematical or statistical formulas. You could also see an algorithm as a recipe, where you need a set of ingredients and instructions to fulfil the recipe and produce the outcome you are hoping for. And if we keep the analogy to cooking, we could also say that every recipe lives within a very specific culture, with its own ideas about ingredients, quality and taste. :)
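To make the recipe analogy concrete: a deliberately simple algorithm in Python, a short list of instructions executed from top to bottom that lists the items in a folder (a sketch using only the standard library):

import os

folder = '.'                 # ingredient: a folder (here the current one)
items = os.listdir(folder)   # instruction 1: collect the items in the folder
items.sort()                 # instruction 2: put them in alphabetical order
for item in items:           # instruction 3: serve the outcome
    print(item)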

Another term that will come up during these two days is machine learning. It is also often called AI, which does not refer to anything different from machine learning, but comes with a lot of confusing expectations and dilemmas :). We have already mentioned that the Algolit worksessions until now have dealt with this topic, and tomorrow we will scratch further in that direction, but for now it suffices to say that machine learning is part of the field of "predictive statistics": these algorithms take information from what has happened in the past, create a model of the world as they have understood it, and use it to make predictions about what could happen in other circumstances. As these algorithms become part of our daily infrastructure, from hospitals to the court or the power grid, their opaqueness makes it difficult to trace back the judgement calls they make in their predictions. And then there is the issue of bias as well. The models that these algorithms build are heavily informed by what is already out there, so they often reinforce and augment systemic problems.

Tonight

Tonight, as an addition to what we will do tomorrow in the Algolit session, we wanted to organise a shared conversation that brings different types of algorithmic practices together. The speakers will touch on many issues, not necessarily connected to machine learning, but on how we can familiarise ourselves with the working mechanisms of these algorithms and intervene with questions.

Cristina will introduce the concept of Infrapunctures, or what can be perceived as algorithmic stress relief. She has been looking at bot projects as ways to intervene in infrastructures and tonight she will share her findings. She will talk about political bots and other methods of familiarising oneself with the materiality and sociality of an infrastructure.

The first-year practitioners of the experimental publishing master at the Piet Zwart Institute will elaborate on their current projects in the framework of OuNuPo: Ouvroir de Numérisation Potentielle. For the last 2.5 months they have worked with and around a self-made bookscanner as a central tool. They started OuNuPo with a research period that manifested in a set of readers around bookscanning culture, ways of knowledge (re)production and feminism. Next to that, they explored the potential of the 'scanned book' in individual software experiments. The official launch of the OuNuPo project is scheduled for Wednesday the 28th of March in Worm. But tonight they were so brave and kind as to share their first outcomes with us. They will each demonstrate or present their research trajectory, after which there is a moment for questions, responses and an open conversation.

After their demonstrations, we will end tonight with a performance and presentation by Marloes de Valk. In We Are Going to Take Over the World, One Robot at a Time she will tell a tale of wonder and disembodiment, disruption and opportunism, nudging and mass-surveillance. Marloes is a software artist and writer in the post-despair stage of coping with the threat of global warming and being spied on by the devices surrounding her.


180317_algolit_word2vec

On word2vec
a way to get from words to numbers
17 March 2018, Varia - Rotterdam

as part of the Algolit workgroup http://algolit.net/ (the pads of previous sessions can also be found there)
in the context of Algologs http://varia.zone/en/algologs.html
Notes of Friday evening: https://pad.vvvvvvaria.org/algologs

You shall know a word by the company it keeps. (John Rupert Firth)

Algolit
- short intro

Algolit recipes
- following OuLiPo
- the recipe should be manageable within a day
- examples of recipes from our previous meeting: https://pad.constantvzw.org/p/180301_algolit_datasets (lines 134 to 226)

word embeddings
> techniques

> history

How to take position towards word-embeddings? Can we log the internal logic of word-embeddings and write recipes in response to their way of functioning?

use cases of word-embeddings
- Perspective API https://www.perspectiveapi.com/ (worked together with the Wikimedia Detox project to put together a dataset https://meta.wikimedia.org/wiki/Research:Detox )
- word-embeddings in machine translation, to find translations of words in different languages - https://arxiv.org/pdf/1309.4168.pdf 
- word-embeddings in query expansion (trigger more words with a specific search query) - https://ie.technion.ac.il/~kurland/p1929-kuzi.pdf & http://www.aclweb.org/anthology/P16-1035 (has a few nice graphs/tables)
- word-embeddings applied to other types of data (non-linguistic)
- but they are also used as a sort of self-inspection tool within the NLP field: "Understanding what other biases word embeddings capture and finding better ways to remove these biases will be key to developing fair algorithms for natural language processing." A very sensitive subject AND a very tough one ... (via: Word embedding trends in 2017 http://ruder.io/word-embeddings-2017/ )

word2vec
- touring together, word2vec step-by-step
- word2vec demo https://rare-technologies.com/word2vec-tutorial/#app

Speak about, analyse together: constraints / substructures / foundations / assumptions that condition word-embeddings (make word-embeddings possible)

(counting) exercises:
- Hungarian sorting dance: https://www.youtube.com/watch?v=lyZQPjUT5B4
- Abecedaire rules http://algolit.constantvzw.org/index.php/Abecedaire_rules, created by An Mertens
- Littérature définitionnelle, where each word in a sentence gets replaced by its definition from the online dictionary dataset WordNet. A process that can be reiterated infinitely on the transformed text (see the sketch after this list). - https://gitlab.constantvzw.org/algolit/algolit/raw/master/algoliterary_encounter/oulipo/litterature_definitionelle.txt, created by Algolit
- Emmett Williams' Counting songs, Fluxus
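As a sketch, the definitional substitution can be approximated with nltk's WordNet interface, taking the first synset where one exists (not the original Algolit script):

import nltk
nltk.download('wordnet')
from nltk.corpus import wordnet

sentence = 'the creature spoke'
expanded = []
for word in sentence.split():
    synsets = wordnet.synsets(word)
    # replace the word by the definition of its first synset, if there is one
    expanded.append(synsets[0].definition() if synsets else word)
print(' '.join(expanded))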

Algolit scripts
https://gitlab.constantvzw.org/algolit/algolit/tree/master/algologs
- gensim-word2vec - a Python wrapper for word2vec, an easy start for working with word2vec (training, saving models, reversed algebra with words; see the sketch after this list)
- one-hot-vector - two scripts created during an Algolit session to create a co-occurrence matrix
- word2vec - the word2vec_basic.py script from the Tensorflow package, accompanied by Algolit logging functions; a script that allows one to look a bit further into the training process
- word2vec-reversed - a first attempt at a script to reverse engineer the creation of word-embeddings, looking at the shared context words of two words
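A minimal gensim starting point could look like the sketch below; 'frankenstein.txt' is a stand-in filename, and parameter names vary between gensim versions (older releases use size= instead of vector_size=):

from gensim.models import Word2Vec

# one tokenized sentence per line, the format gensim expects
sentences = [line.lower().split() for line in open('frankenstein.txt')]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=5)
model.save('frankenstein.model')        # save the model for later sessions

print(model.wv.most_similar('human'))   # nearest words in the vector space
# 'reversed algebra with words', assuming all three words are in the vocabulary:
print(model.wv.most_similar(positive=['king', 'woman'], negative=['man']))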

installing homebrew https://brew.sh/
installing tensorflow https://www.tensorflow.org/install/

Further reading:
- History of word embeddings https://www.gavagai.se/blog/2015/09/30/a-brief-history-of-word-embeddings/
- Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings https://arxiv.org/pdf/1607.06520.pdf
- Sebastian Ruder's three-piece blog on word embeddings http://ruder.io/word-embeddings-1/
- The Arithmetic of Concepts http://modelingliteraryhistory.org/2015/09/18/the-arithmetic-of-concepts-a-response-to-peter-de-bolla/


Notes:

Previous session on datasets, notes of that session: https://pad.constantvzw.org/p/180301_algolit_datasets

A death row dataset
A dataset on special trees in Brussels, cfr Terra0 (legality of non-human entities as autonomous)


how to get from words to numbers?
We will focus on word embeddings, a prediction-based technique.
word2vec, the technique we will look at, was developed by Google.

The idea of recipes comes from OuLiPo, where you set up constraints and rewrite in different styles.
What kind of recipes can we write?


Georges Perec - a book without the letter e
Stijloefeningen by Raymond Queneau

Word embeddings
unsupervised learning
a lot of data, clustering words together by looking at direct context words, variable windows (how many words to the left and right of it are included that keep the central word 'company')
no more "bag of words" (how often a word appears in a text) or letter similarity 
based on the words that keep the center word company
a way to find 'more similar' words
ex of dataset: GloVe https://nlp.stanford.edu/projects/glove/ developed by Stanford University, based on Common Crawl (an NGO) (a download of 75% of the internet)
companies like Google and FB (fastText https://research.fb.com/fasttext/ ) create their own

calculations within the textual space:
word2vec demo https://rare-technologies.com/word2vec-tutorial/#app
simple neural network technique

similar?
what does it mean?


tour through steps of word embeddings
based on basic example script of Tensorflow (Google framework)
and Frankenstein the novel
words with similar function cluster (colours, numbers, names of places...)
no stopwords: common short words, 'a', 'to', 'in'...
0. word embedding graphs of Frankenstein
'human/fellow', 'must/may' appear together... it makes sense to our eyes...
all steps are part of same script
vs gensim word2vec: 1 line of code to call the model
1. plain text of Frankenstein
all punctuation taken out, all lower case
2. bag of words, list
all spaces are replaced, list of words (tokenized)
3. dictionary of counts
words are counted
a word can have two semantic meanings, but is counted as the same word
4. dictionary of index nr: positioning the words
order is the same, but the count is replaced by an index nr
zero element: 'UNK' (unknown)
the dictionary is made with a vocabulary size: how many words you want to include
in this case: 5000 words; every word that falls outside is replaced by UNK
efficiency for process - otherwise the calculation becomes too heavy
cfr Glove: 22 million words
-> try to do the same with less frequent/rejected words?
might not work because you need a lot of examples 
notion of 'working properly': might not work but might be interesting
making a 'memory' (every word that's not remembered is thrown out)
x. reversed Frankenstein: side step
list of words that are thrown out
5. data.txt: rewriting 1
novel represented by index numbers
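Steps 2 to 5 can be condensed into a sketch of a few lines of Python ('frankenstein.txt' stands in for the cleaned plain text of step 1):

import collections

vocabulary_size = 5000
words = open('frankenstein.txt').read().split()       # 2. list of words

counts = collections.Counter(words)                   # 3. dictionary of counts
most_common = counts.most_common(vocabulary_size - 1)

dictionary = {'UNK': 0}                               # 4. index numbers,
for word, _ in most_common:                           #    element zero is UNK
    dictionary[word] = len(dictionary)

data = [dictionary.get(word, 0) for word in words]    # 5. novel as index numbers
print(data[:25])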
6. one-hot-vector batches
starts with small sample of words
needs to sort out context words
window frame: 1
computer reads index numbers
batch 1: center words
batch 2: context words
'any kind of pleasure I perceived'
center word 1: 'kind' (connected to left), 'kind' (connected to right)
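A sketch of what the batch step does, generating (center word, context word) pairs with window frame 1 (the actual word2vec_basic.py batching is more involved):

def skipgram_pairs(data, window=1):
    pairs = []
    for i, center in enumerate(data):
        for j in range(max(0, i - window), min(len(data), i + window + 1)):
            if j != i:
                pairs.append((center, data[j]))  # (center word, context word)
    return pairs

# 'any kind of pleasure I perceived' -> any/kind, kind/any, kind/of, ...
example = ['any', 'kind', 'of', 'pleasure', 'i', 'perceived']
print(skipgram_pairs(example))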
x. one-hot-vector matrix
3 sentences
in order to calculate the similarity of sentences, look at individual words and their company words
all 5000 words are center words
everything is initialized by 0, filled in after
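In sketch form, such a co-occurrence matrix can be built with numpy (a toy vocabulary instead of all 5000 words):

import numpy as np

sentences = [['the', 'creature', 'spoke'],
             ['the', 'creature', 'listened'],
             ['my', 'creature', 'spoke']]

vocab = sorted({w for s in sentences for w in s})
index = {w: i for i, w in enumerate(vocab)}

matrix = np.zeros((len(vocab), len(vocab)), dtype=int)   # initialized by 0
for s in sentences:
    for i, center in enumerate(s):
        for j in (i - 1, i + 1):                         # window frame: 1
            if 0 <= j < len(s):
                matrix[index[center], index[s[j]]] += 1  # filled in afterwards

print(vocab)
print(matrix)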
7. training
list of words, starting with the most common, + 20 random numbers (you choose the amount; also called coordinates)
the amount of coordinates depends on the CPU of the computer
creating a multidimensional vector space with 20 dimensions, filled in at next step
representation graph: brought back to 2 dimensions
dimensionality reduction ex principal component analysis : information is lost
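A sketch of the reduction step with scikit-learn's principal component analysis (stand-in random vectors instead of a trained embedding matrix):

import numpy as np
from sklearn.decomposition import PCA

embeddings = np.random.rand(5000, 20)  # stand-in for a trained embedding matrix
points = PCA(n_components=2).fit_transform(embeddings)  # information is lost here
print(points.shape)                    # (5000, 2): one x,y pair per word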
8. embedding matrix
the in between step is too difficult to visualise
takes a word, compares it to a negative other word, looks if they're similar
if they are, the 2 rows are put closer to each other
ex 'the' - 'these': compare one-hot-vector rows; if window words show similarity (threshold for similarity), the rows are put closer to each other by changing coordinates, so they're closer to each other in the matrix
similarity is calculated in
these are final word embeddings
x. logfile
while training, the script prints the training process
loss value: goes down while training, threshold you define, a function that controls results of each step
from 0 to 100.000 steps: a parameter you can define
every time you run it, results of similarity are different
why???
-> which 2 words are compared, probability counts, ... levels of uncertainty make it change all the time
result is relationships, not the position
different set of random numbers, values are different
making tests with more data? more similar cases
'peeling off the onion', 'opening the blackbox, finding another blackbox inside'
'making excuses' (Google, facial recognition; we'll increase dataset, train better etc)
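What stays comparable between runs is the relative geometry, not the coordinates; a sketch of measuring that with cosine similarity (stand-in vectors):

import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# stand-in vectors; in practice these would be rows of the embedding matrix
human = np.random.rand(20)
fellow = np.random.rand(20)
print(cosine(human, fellow))  # 1.0 = same direction, around 0.0 = unrelated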

side discussion:


Results are never the same when you train the embeddings
What is causing this? Layers of probability calculations? Moments of uncertainty in the calculations.
If you train two models on the same input text, are the final embeddings the same?

An joined an international workshop in Leiden
issue of discrimination in datasets and/or process
what is a bias?
sociologists (history of researching discrimination) & 
talking about black people is about black men, and discussions around women are about white women
a black hole for black women
We can adjust our biases, but algorithms cannot tweak themselves.
'shapeshifters': by continuing to train the model, one face becomes the other

ex. Perspective API
- Perspective API https://www.perspectiveapi.com/ (worked together with the Wikimedia Detox project to put together a dataset https://meta.wikimedia.org/wiki/Research:Detox)
determine level of toxicity in comments (NY Times, Guardian, Wikimedia)
automatically intervene & remove comment or have human moderator
based on word2vec
in Nov: giving odd results
nice: you can trace back the comments they used, the ratings (who rated? volunteers of Wikimedia / 30y, white, male, single)
ex I went to a gipsy shop. gives 0.24
ex I believe in Isis. 0.40
ex I believe in AI 0.01
die in hell = Sorry! Perspective needs more training data to work in this language.

'data-ethnography': who was creating the data
https://medium.com/ethnography-matters/why-big-data-needs-thick-data-b4b3e75e3d7

Examples of exercises
Sorting algorithm
- Hungarian sorting dance: https://www.youtube.com/watch?v=lyZQPjUT5B4
by doing it, you experience the different behaviour of humans (optimized/intuition) and machines (repetition)
Fluxus http://cramer.pleintekst.nl/essays/crapularity_hermeneutics/#fnref4
Emmett Williams’ Counting Songs 
described by Florian Cramer in http://cramer.pleintekst.nl/essays/crapularity_hermeneutics/#fnref4


Run word2vec_basic.py on mac
- python3 https://www.python.org/ftp/python/3.6.5/python-3.6.5rc1-macosx10.6.pkg & https://www.macworld.co.uk/how-to/mac/coding-with-python-on-mac-3635912/
- brew (package manager) https://brew.sh/ 
- pip > brew install python3
- tensorflow https://www.tensorflow.org/install/install_mac
- sublime 
- dependencies for 


Installing nltk in linux/mac
sudo pip install -U nltk

And then:
    python
    import nltk
    nltk.download('punkt')

Installing scikit-learn for plotting the values (Mac OS)
 sudo pip install -U scikit-learn


----------------------------------------------
important word2vec_basic parameters 

batch_size = 128
embedding_size = 128  # Dimension of the embedding vector.
skip_window = 1       # How many words to consider left and right.
num_skips = 2         # How many times to reuse an input to generate a label.

# We pick a random validation set to sample nearest neighbors. Here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16     # Random set of words to evaluate similarity on.
valid_window = 100  # Only pick dev samples in the head of the distribution.
valid_examples = np.random.choice(valid_window, valid_size, replace=False)
num_sampled = 64    # Number of negative examples to sample.

----------------------------------------------

to print a full (numpy) matrix:
numpy.set_printoptions(threshold=numpy.nan)

thinking of recipes...

A recipe to formulate word relations.
0. use a dataset (txt file)
1. pick two words
2. find out what context words they have in common (word2vec reversed)
3. read the words 
4. use the words to formulate a description in one sentence
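Step 2 can be sketched in a few lines of Python (a simplification of the word2vec-reversed script; 'frankenstein.txt' is a stand-in filename):

def context_words(words, target, window=1):
    contexts = set()
    for i, word in enumerate(words):
        if word == target:
            contexts.update(words[max(0, i - window):i])  # left neighbours
            contexts.update(words[i + 1:i + window + 1])  # right neighbours
    return contexts

words = open('frankenstein.txt').read().lower().split()   # stand-in filename
shared = context_words(words, 'human') & context_words(words, 'fellow')
print(shared)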

0. dataset: about.more (scraped about/more/ pages of mastodon instances)
1. instance & community
2. ['their', 'this', 'or', 'a', 'will', ',', ';', 'the', 'run', 'and', 'by', 'but', 'our', 'any', 'in', 'with', 'of', 'where', '!', 'inclusive', 'open', 'small', 'that', '.', ')', 'other', 'moderated', 'financially', 'local', 'as', 'for', 'international', ':']
3. 
4. run a small moderated or inclusive local and international this

0. frankenstein
1. human & fellow
2. ['beings', 'creatures', 'my', 'the', 'a', 'creature', 'mind', ',']
3. 
4. my creatures, the creature mind beings 

"beings": {
        "fellow": {
            "freq": 4,
            "sentences": [
                "It was to be decided whether the result of my curiosity and lawless devices would cause the death of two of my fellow beings : one a smiling babe full of innocence and joy , the other far more dreadfully murdered , with every aggravation of infamy that could make the murder memorable in horror .",
                "I had begun life with benevolent intentions and thirsted for the moment when I should put them in practice and make myself useful to my fellow beings .",
                "These bleak skies I hail , for they are kinder to me than your fellow beings .",
                "They were my brethren , my fellow beings , and I felt attracted even to the most repulsive among them , as to creatures of an angelic nature and celestial mechanism ."
            ]
        },
        "human": {
            "freq": 6,
            "sentences": [
                "I had gazed upon the fortifications and impediments that seemed to keep human beings from entering the citadel of nature , and rashly and ignorantly I had repined .",
                "I had often , when at home , thought it hard to remain during my youth cooped up in one place and had longed to enter the world and take my station among other human beings .",
                "The picture appeared a vast and dim scene of evil , and I foresaw obscurely that I was destined to become the most wretched of human beings .",
                "I saw few human beings besides them , and if any other happened to enter the cottage , their harsh manners and rude gait only enhanced to me the superior accomplishments of my friends .",
                "The sleep into which I now sank refreshed me ; and when I awoke , I again felt as if I belonged to a race of human beings like myself , and I began to reflect upon what had passed with greater composure ; yet still the words of the fiend rang in my ears like a death-knell ; they appeared like a dream , yet distinct and oppressive as a reality .",
                "In other places human beings were seldom seen , and I generally subsisted on the wild animals that crossed my path ."
            ]
        }
    },





Memory of the World
for An Mertens

https://www.memoryoftheworld.org/blog/2014/12/08/how-to-bookscanning/

Michael Winkler

http://www.winklerwordart.com/


text cleaning: while trying to clean the text for further processing, we notice that it is hard to make a decision regarding 's': 
lady catherine  s unjustifiable endeavours 
my aunt  s intelligence
receipt that s he does (original text: receipt that s/he does)
by u s  federal laws (original text: by u.s. federal laws)



Tutorial by Rob Speer: How to make a racist AI without really trying
word-embeddings + supervised learning layer 
https://blog.conceptnet.io/2017/07/13/how-to-make-a-racist-ai-without-really-trying/
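The tutorial's basic move, a supervised layer on top of word embeddings, could be sketched like this (stand-in vectors and labels; the tutorial itself uses GloVe/ConceptNet vectors and a sentiment lexicon):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
vectors = rng.random((100, 50))   # stand-in: one embedding per word
labels = rng.integers(0, 2, 100)  # stand-in: 1 = positive word, 0 = negative

classifier = LogisticRegression().fit(vectors, labels)
new_word_vector = rng.random((1, 50))
print(classifier.predict(new_word_vector))  # sentiment guessed from the embedding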


Varia archive: https://vvvvvvaria.org/archive/

Machine Learning for Artists on Youtube by Gene Kogan: good tutorials!




Algologs notes

Algolit wiki: http://algolit.net

Friday evening (this pad): https://pad.vvvvvvaria.org/algologs
Pad for the Saturday, Algolit session: https://pad.constantvzw.org/p/180317_algolit_word2vec


Algolit
crossover code & literature, workgroup
inspired by OuLiPo: potential literature based on constraints, accelerating/exhausting them
ex Queneau, Exercices de style

Johanna Drucker, Un-visual and Conceptual, letter published on Ubuweb
bring OuLiPo principles/constraints into algorithmic systems: what is embedded in them...?

2012: start, looking at Oulipo constraints, generic algorithms, Quicksort, Markov Chain, ... rewriting Frankenstein
2016: focus on text based Machine Learning processes
experimenting with recipes, for coders & non-coders
idea of recipe < Oulipo
come up with something that can be done within the day
a way to structure yourself

Algolog
wink to Algolit
interest in combining Algolit with other practices
'logging': a way to log attitudes towards algorithms and contexts in which they are used
'age of mechanical reprediction': a wink to all the other versions of the title that exist, ref. Walter Benjamin
bookscanner = tool for reproduction / twist to age of reprediction
ML uses a lot of data to find patterns, make a model and predict how the same patterns will arise in the future

Algorithms
look at it as recipe
set of rules to be executed
needs ingredients
comes with its own culture, a taste of what is good or not

Machine Learning
also called AI (but this term brings up different questions)
ML belongs to field of predictive statistics, takes information of the past, influence the future
Difficult to trace back decision making processes, problem of bias

----

Marloes de Valk, performance “We Are Going to Take Over the World, One Robot at a Time”
software artist & writer
'Let us celebrate capitalism' Google 2016

-----------


Cristina, Infrapunctures
botprojects
infrapuncture - acupuncture
needs sensitivity for pain & debts (reference woman?), 'urban acupuncture'
ex urban farms in Taipei, Treasure Hill, on any place in the city that is unsuitable for industrial production
The area was renovated by the local government, then gentrified. Marketing for the neighbourhood.
friction in between local interventions / institutional interventions

HuNI, Humanities Networked Infrastructure https://huni.net.au
platform for datasets
(she also wrote recently about the feminist archive, recommended read)

(network) infrastructure: everything that makes online communication possible

example of automated interventions by bots.
'bespoke code': code that is highly customized, serving a specific goal (reference?)
in the context of SaaS, bots are on the rise as a form of 'bespoke code'
no need for high programming skills; they can live on local servers

bots require understanding of infrastructures

bots have political potential
Oxford Internet Institute: political bots program
- propaganda bots: sock puppets (half operated by people/half automated)
- amplifiers commenting
- validators affirming through likes
- harassers

Power of botfarms to influence political campaigns
not a new thing

PsyOps: planned manoeuvres that promote behaviour abroad in favour of the US
An example is the video that shows how the military spreads leaflets out of airplanes
not too far from posting propaganda messages & seeing what sticks

The process of propaganda is democratized, for example because many people can make bots
Example: an automated agent on Tinder, persuading users to vote for the Labour party
users were not aware they were speaking to a bot, were lied to about romance, influenced to vote

But, the platforms themselves are the ones that are winning in the end.

Example: 
via Ellen Gallagher, Agenda of Evil
spreading anti-Islam messages on
On their website: "Fakebook"
'content aggregator & distribution hub'
requires collaboration between humans & robots:
"shepherds (people with a high number of followers), sheepdogs and electric sheep (bots that blindly repeat)"

Bots of conviction (Mark Sample)
< Jürgen Habermas, 'journalism of conviction'
a tactical bot? 

Parliament Wikiedits
editing Wikipedia without an account makes your IP address public
retweets edits made from public institutions automatically
by publishing the code under an open license, the bot appeared in other countries

rhetoric bots, also persuasion, but with different parameters
dialogic combination of the people making the bots and the people reading their posts


infrapunctural bots
every intervention creates revenue (for the platforms)
Twitter is one of the main sites where research on bots is applied
'thinking outside the bot'

Xnet Spain
anonymous tips about corrupt bankers, communication material
theater play, money raised, sued the bankers and they won

defined stress: economic crisis
the project relieved the stress
they became 'automated agents' by performing and reenacting emails

Open State, Amsterdam
Hack de Valse Start hackathon
focus on solutions, not thinking too much

digital literacy is often offered as a solution
critical thinking does not necessarily equate to informed thinking

highlight the sociality of the process, possibilities to create new imaginaries

A difference does not per se matter
However, the infrapunctural bots highlight the sociality aspect
what forms of sociality arise in these contexts? 

the undoing is as important as the doing!






-----------

Experimental publishing master, 1st year
Ouvroir de Numérisation Potentielle (OuNuPo)
28-3: presentation in Worm

applying feminist methodologies - each created a reader on a different topic
looking at biases, absent information
working with rules; all chose the same font, designed by a feminist
the scanner is open hardware, using a Raspberry Pi & the Pi Scan software to operate the cameras
each one builds a backend/software for the scanner, transforming the material they scan

1.
weaving / programming
Carlandre poems (cfr Carl Andre)
input / rules / output
thread / pattern / fabric
first patterns used for programming

jacquard motif in text / stitching & weaving text / visible output
weaving at an angle
cfr paper weave
created the programming language OverUnder (under/over stitches, easy to reproduce in paper weave)
load/show/over/under/quit

2.
what has been scanned and what not (Google)
'how bias spreads from the canon to the web'
what books do we upload, who does it, what is lost in the process
not a lot of female writers; what is standard, how not to reproduce the culture we inherit
how do you make interfaces less seamless?
article on facial recognition: 1st versions tested and created by white males, not recognizing faces of other races
-> script: you scan something, whatever you scan first affects the next page (inclusion/exclusion)
- erase: finds the least common words (those that appear once) on the 1st page and erases them, adding more for every following page -> more and more gaps
- replace: replaces the least common words by the most common words

3.
Reading the structure
gendered image of the librarian, female, bad connotation vs information scientist
different perspectives on 2 groups while they're doing the same
if we read like a machine, can we also see the structure of the text?
put labels of words instead of the word itself: noun, keyword, neutral, profession
-> reconstruct the bag of words into the original text
add your own tags
exported to json file, you can use it as your own dataset
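A sketch of the labelling step with nltk's part-of-speech tagger (only grammatical labels; tags like 'keyword' or 'profession' would still be added by hand):

import nltk
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')

sentence = 'The librarian reconstructs the structure of the text.'
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
print(tagged)                                 # [('The', 'DT'), ('librarian', 'NN'), ...]
print(' '.join(tag for word, tag in tagged))  # the sentence as a sequence of labels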

4.
Database & narrative
Lev Manovich, db as symbolic form
how narrative is represented in new media
scanner is converting classical narrative into database of words/sentences
print db and turn it back into book form
included all texts & trash
Whatsapp & mediawiki - including stream into db
analysing keywords, looking for important words, looking for feminist words & erasing them

5.
Encoding - decoding
shadow & pirate libraries
Caesar cipher, ROT13
python & DrawBot, using only letters & 1-9, punctuation taken out
whole text included in circle - mandalas :-)
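The ROT13 step is one line in Python, since it ships as a codec (the circular mandala layout itself was done in DrawBot):

import codecs

text = 'shadow and pirate libraries'
encoded = codecs.encode(text, 'rot13')
print(encoded)                          # funqbj naq cvengr yvoenevrf
print(codecs.decode(encoded, 'rot13'))  # applying it twice gives the text back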

6.
transparent reader
layered reading
using Gertrude Stein, pattern in language












These pages are generated with Distribusi. 🌐 https://varia.zone