Symposium (in)visible technologies

Published on June 2nd, 2007


* * * * * * * * * * * *

Usman Haque & Rob Davis

00:02:00 [usman haque]
We are going to talk about our project, which is upstairs, Evolving Sonic Environment III. It is the third version of something we have been working on for about a year now.

The first thing we would like to explain is why we have this interest in adaptive systems. There is a whole territory of research going on into things that adapt, and this does not just mean things that change, but things that actually modify their own behavior. The most important thing for us in adaptive systems is to look at ways to develop things that can develop their own categories, their own pattern recognition. And this is a very important thing these days because in ubiquitous computing terms (that is, all these systems that are now starting to track our movements and watch us) there is this notion of an objective observer looking down upon us, tracking what we do or where we go or whatever. And these systems are using categories that are predetermined by a designer. Predetermined by somebody else, predetermined by systems or, say, a corporation that is much larger than us. So, in a sense, the prosaic goal of the project is to find ways that we can make systems that develop their own categories, which usually means that they develop those categories in cooperation with the people that actually exist within the systems, as opposed to being designed by some god-like objective designer.

00:03:45 [rob davis]
Yes, systems that are somehow more synaesthetic with the occupants of an environment; that can somehow settle into their environment spontaneously, change their own thresholds and adapt without any deliberate intention on our part in designing particular modes of sensing or registration into them. So, we can approach an environment we do not know much about, put these systems in place and allow them to spontaneously develop some behavior that is measurable, or that might be useful to say something about their environment.

00:04:30 [usman haque]
I will give you a very prosaic example. One of the most widely used objects in architecture is the thermostat, a kind of machine that senses its environment and then acts upon it. There is a dial that you set to determine what temperature you want; it measures the temperature and it outputs usually only heat, but sometimes either heat or cold, in order to modulate that environment's temperature. Now, that system has been determined by a designer, because this designer decided first of all that we need a dial to set the temperature, and secondly that we know what 22 means when we set the dial. If we think, again in prosaic terms, of a thermostat or a heat-regulating system that could determine its own categories, that could develop its own sensors, then you might have a system that could, for example, through its own behavior, evolve the fact that the bills you pay every month are an input to the system. Or it could evolve the fact that the colour of the clothes someone is wearing on a particular day should have some influence on the system. Or it might look at something like: how often do people come and visit me, meaning how comfortable are people when they come. You can start to imagine a heat-regulating system which develops its own sensors and determines for itself what is important. And this determination is explicitly based on what the occupants of that space do in the space. It is not developed by a designer; it is almost developed in conversation with the people who live there. This is the whole point of having systems that can self-adapt.
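The thermostat example can be sketched in a few lines of code. This is a hypothetical illustration, not part of the project: the classic thermostat has a fixed sensor and a designer-chosen setpoint, while the adaptive variant lets an occupant signal (here an invented `occupancy_feedback` value) shift the setpoint itself, so the category "comfortable" is negotiated rather than designed.

```python
# A classic thermostat: the designer fixed both the sensor (temperature)
# and the category (what the setpoint on the dial means).
def thermostat(temperature, setpoint=22.0):
    """Return True if the heater should be on."""
    return temperature < setpoint

# A hypothetical self-adapting variant: the setpoint drifts in response
# to an occupant signal (e.g. how long people linger in the room), so the
# system's notion of comfort emerges in conversation with its occupants.
class AdaptiveThermostat:
    def __init__(self, setpoint=22.0, rate=0.1):
        self.setpoint = setpoint
        self.rate = rate

    def step(self, temperature, occupancy_feedback):
        # occupancy_feedback > 0 when people stay (comfortable),
        # < 0 when they leave quickly (uncomfortable).
        self.setpoint += self.rate * occupancy_feedback
        return temperature < self.setpoint
```

The point of the sketch is only the difference in structure: in the first function the mapping from input to output is fixed forever; in the second, the occupants' behavior rewrites the mapping over time.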

00:06:30 [rob davis]
Yes, there is some self-regulation in that thermostat, which you can then open up to a wider context to include other people, extra input, in that self-regulation. So, the whole system becomes more subtly attuned to its environment than just feeding back off one single input.

00:06:49 [usman haque]
It is almost like a sort of complex ecology, if you like, thinking of these systems as ecologies in a wider sense. So, that is one aspect of systems that we are interested in. The other aspect is this distinction between interactive and reactive. Of course, these days we can use the word interactive and it means almost anything. I want to be clear on this, because when we use the word interaction or interactor or interactive, we mean it in a quite particular way. It goes back about 30 or 40 years, to the beginnings of systems theory, and to what interaction means as opposed to reaction. There was a 19th-century model, the billiard-ball model, which is basically the idea of cause and effect. You have an input and you have an output, and there is a fixed function that determines the output: cause and effect. You may not know what the input is going to be, and the output may therefore be unpredictable, but the cause-and-effect model means that the function itself is still fixed. I would argue that a lot of the so-called interactive systems that we are given today, for example the mobile phone, are based on a cause-and-effect model. We do not know what we are going to do with it or what comes out of it, but the actual functions are predetermined. Now, interactive in the older sense was based on the notion of a second-order interaction. And this is not about mutual reaction, in other words not me affecting something that outputs back on me; that is still a first-order reaction. It was about my input into something affecting the way the function calculates its output. And what that means is that you do not know at any particular moment what the output is going to be, it is not predictable, but also that the function may change on the basis of an output, on the basis of time, or on the basis of some other data. That is what interactive used to mean: the notion that you could enter into the reactive function.
You could change it. To give the analogy of Wikipedia, which is maybe a straightforward way to describe it: Wikipedia has a fixed framework, but the fact that anyone can go in there and change what is there, and build upon something that someone else might have done, is getting close to what interactive might mean. Because it is people themselves who affect the way any output is calculated. When we talk about interactive, particularly in the context of the project ESE, this is the kind of thing we are interested in. Not just us determining what an output is, or even how an output is calculated, but setting up a framework that enables other inputs to determine how the output is calculated.
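The reactive/interactive distinction can be made concrete with a minimal sketch (my own illustration, with invented names and values). In the first-order case, the transfer function never changes; in the second-order case, each input also modifies the function that will compute future outputs.

```python
# First-order (reactive): a fixed transfer function. The input never
# changes how the output is computed; only the output is unpredictable.
def reactive(x):
    return 2 * x

# Second-order (interactive, in the older cybernetic sense): each input
# also alters the transfer function itself, so the mapping from input to
# output is co-determined by the history of the interaction.
class Interactive:
    def __init__(self, gain=2.0, learn=0.1):
        self.gain = gain
        self.learn = learn

    def respond(self, x):
        y = self.gain * x
        # The interaction changes the function used next time.
        self.gain += self.learn * x
        return y
```

Calling `respond(1.0)` twice gives different outputs (2.0, then 2.1) even though the input is identical, because the first interaction entered into, and changed, the reactive function.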

00:10:09 [rob davis]
We want a system which is adaptive over time, which changes something about those transfer functions between inputs and outputs contingent on something: previous inputs, or other inputs that might emerge even through accidental design. We do not want to deliberately put too much design into a specific system and say, right, that is the fixed functionality and that is all we want to see from it over time. We want to allow the system to adapt in ways that we maybe did not even intend originally, so that there is some spontaneous emergent behavior from this system that is somehow more interesting or useful.

00:10:49 [usman haque]
Of course, this relates back to the original premise that we had: we want to somehow tackle the problems of things like ubiquitous computing, where there is this supposedly objective idea of a camera tracking movement in a space. You have this idea that it is somehow value-free, but of course there is always someone behind that camera who is making categories: that is good behavior, that is bad behavior. You have probably heard that in England these days they are using cameras that talk back to you. That is a very one-way sort of category development. Somebody is sitting at the end of the camera determining what belongs in one category or another. If it is not a person sitting there, then it is a computer program that somebody has coded. One of the scary aspects of surveillance as a whole is the fact that we have no input, no effect whatsoever, on how we are sensed. It all sounds a long way from what is upstairs, but we will get to why there is a link.

00:12:16 [rob davis]
There is also something almost exploitative about this one-way direction. It seems corporations just want to see our behavior, to measure and quantify it effectively so they can make more money: so they can produce some goods and target them at us in some special way, rather than it being more equitable, more bidirectional. To be monitored should also be to monitor, in some ways.

00:12:43 [usman haque]
Yes, so we're given the illusion of feedback, because we can choose what we buy, but actually it really is an illusion that we have any direct feedback on how these categories were originally quantified. Effectively, the question we have is: how can we design systems that design their own operations? Because when you talk about a system that designs its own operation, then you are implicitly talking about something which has to do so in concert with its environment. And environment means something very specific in this case, because an environment is anything that is external to the system: that includes an ecological notion of space, but it also includes human beings, and it includes any other sensory criteria that are external to it.

So, how can we design systems that are able to determine how they are going to behave? Because then they are the ones that can be affected directly by human beings.

The project we have upstairs we describe as an analogue neural network which is acoustically coupled. We could go into neural networks now, but it is a little bit long and technical... we'll give it a go.

We are going to show a few slides, which we showed in the workshop 'introduction to analogue neural networks' at the beginning of this month here in Montevideo. We basically brought along a bunch of little analogue neural circuits that people could connect together, to start to understand exactly what this bizarre, science-fiction-sounding neural network is. This is our introduction to them. Biological neurons are what we know to be in the brain, and we know that there are billions of them inside each brain. They are quite simple mechanisms.

00:15:20 [rob davis]
So, we have got a single neuron, with branches coming out of it through which it can transmit signals. Along the axon, which is insulated, there are nodes where the neural signal is regenerated in an all-or-nothing process. When a neuron fires, it produces signals that travel down the axon to the synapses that connect it to other neurons, and influence those neurons. So there is some sort of gradual change over time in the influx of different species of ion: there is a difference in the concentration of ions inside and outside of the cell membrane, and this can be manipulated by other neurons, so that a sudden change in chemistry depolarizes the cell and leads to this firing event.

00:16:35 [usman haque]
The point is that a neuron either fires or it does not; it is not half-on or half-off. It waits until it receives enough input, then it fires a pulse, and then quickly settles back to normal. This is probably happening many times over the course of a minute. Importantly, neurons sum the frequency of their inputs rather than just the voltage level. Of course, what we have in our brain is billions of these, all connected together. Here we can look at the output potential of different types of neurons. You see a diagram of a neuron as it fires. Note that zero volts lies somewhere up here: most of the time, if there is no other stimulus, the neuron will be resting at a negative voltage. If it receives a little bit of input it will not do anything; that does not provoke it. It is not until the voltage goes above a threshold that suddenly it will fire. So, out goes the pulse. Basically it amounts to electrical activity, though there is a chemical relationship going on; but it effectively is a sort of voltage pulse. It lasts quite a short time, only one or two milliseconds, and then the neuron goes into a refractory period, during which it is way below the threshold. That means that any input that comes in during that time will not be sufficient to bump it above its threshold. As a result, depending on its state, you can have different frequencies. You can see down in the right-hand corner that there is actually quite a lot of activity going on. In terms of the actual output potential here, based on an input, you will see that it takes, in this case, four input pulses to provoke that output.

The 'neuron action potential': again, we have a threshold level, below which the neuron will not be provoked into creating an output. It needs that much voltage at its input in order to go over the threshold. If that same voltage is applied while the neuron is in its refractory period, it will not be able to exceed the threshold and will not fire. But once it has rested for a while, it will be possible to provoke it with that same voltage. This is what gives an explicitly time-oriented dimension to the way it operates.

00:19:54 [rob davis]
In fact, it is essential that this refractory period occurs, because otherwise groups of neurons would simply run away with themselves. Once they became depolarized, they would just begin to fire at a higher and higher rate; there would be no cut-off period. So, given that the refractory period is on the order of a millisecond, it means those cells can only fire about a thousand times a second, no more than that. It is an implicit self-regulatory mechanism within neurons that stops the activity from going out of control.
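The firing mechanism described above can be sketched as a simple integrate-and-fire model. This is my own illustrative simplification, with invented values: a real neuron's potentials and timings differ, and during the refractory period the membrane actually dips below rest rather than sitting at it. The sketch shows the three behaviors from the talk: all-or-nothing firing above a threshold, a refractory period during which no input can provoke a spike, and the resulting cap on firing frequency.

```python
# A minimal integrate-and-fire neuron (illustrative values, not
# physiological). One call to step() is one 1 ms time step.
class Neuron:
    REST = -70.0       # resting potential (mV): negative, below zero volts
    THRESHOLD = -55.0  # firing threshold (mV)
    REFRACTORY = 2     # refractory period, in 1 ms steps

    def __init__(self):
        self.v = self.REST
        self.refractory_left = 0

    def step(self, input_mv):
        """Advance one step; return True if the neuron fires a pulse."""
        if self.refractory_left > 0:
            # During the refractory period no input can provoke a spike.
            self.refractory_left -= 1
            self.v = self.REST  # simplified: held at rest
            return False
        self.v += input_mv
        if self.v >= self.THRESHOLD:
            # All-or-nothing pulse, then settle back and become refractory.
            self.v = self.REST
            self.refractory_left = self.REFRACTORY
            return True
        # Sub-threshold input leaks away toward the resting potential.
        self.v += 0.5 * (self.REST - self.v)
        return False
```

With a 2-step refractory period the neuron can fire at most once every 3 ms, which is the self-regulatory rate cap: shorten the refractory period toward 1 ms and the ceiling approaches the roughly thousand-spikes-per-second figure mentioned above.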


* * * * * * * * * * * *
