Evaluation on Reaching Hand Interactivity

Throughout my whole Processing journey I used the design iteration method: implementing a concept through repeated cycles of analysis, design and testing to see what was and wasn't working, which gave me more insight into how to become a better designer. Looking back at the project, I feel I have gained a good understanding of how the iteration cycle can help designers improve their skills in both programming and experimentation through design. However, I truly believe the project itself was far from finished due to my lack of coding skill. I had a lot of difficulty understanding the code, as well as keeping track of what each part of the functions meant.

My project had quite a lot of flaws. The main issue was that the image appeared too small when the face detection picked up the user's side profile. To fix this, I would have changed the idea so that the reaching hand covered the whole screen, with the eye becoming more dominant, to show that there is always someone watching you without the person knowing. This would have been more effective, turning my project into a basic face-detection interaction with a simpler approach to both the design and the programming behind it. It could also have made the effect smoother when the picture is activated. My code itself was quite weak, as I am nowhere near having the skills of a programmer, but I believe my idea was still original and communicated the concept I was trying to reach to my audience.

I believe my biggest strength was linking the theory of the Panopticon to my concept, which worked perfectly; my weakness was the coding behind the project. This did not surprise me, as I was well aware that the coding would let me down due to my lack of knowledge and interest in code. Even so, testing what I had achieved gave me satisfaction that I could create a basic interaction that makes an image appear behind the user when their face is tracked from a side profile.

In all honesty, I did not enjoy much of this brief, as it was not something that interested me enough to be motivated to push the boundaries. On the other hand, I am glad I still took the approach and tackled it head on, as this was a whole new experience for me. If I had the choice to use Processing again, though, I would definitely decline, as I found it more frustrating than challenging.

If I had another chance at this project with the knowledge I have gained now, I believe I could improve a lot more in both the design and the programming, as I have learnt to use the iteration cycle to find solutions and work through each problem.

With my project I wanted to explore the idea of privacy and online identity, and how easy it is to invade a user's space on the internet. I represented this through a hand and an eye acting as symbolic references, combining elements of the digital world by using a circuit board as the skin of the hand.

My full Processing code is below. It shows how simple my project was, but with the Panopticon theory behind the concept, I knew the idea had the potential to work:


import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;

PImage hand;

void setup() {
  size(640, 480);

  // loads the hand image
  hand = loadImage("Hand.png");

  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);

  // loads the cascade used to detect the side profile of a face
  opencv.loadCascade(OpenCV.CASCADE_PROFILEFACE);

  // starts capturing from the webcam
  video.start();
}

void draw() {
  // passes the current frame to OpenCV before detecting
  opencv.loadImage(video);

  image(video, 0, 0);

  Rectangle[] faces = opencv.detect();

  for (int i = 0; i < faces.length; i++) {
    // mirrors the x position so the hand appears behind the user
    float x = (width * 0.5) - faces[i].x - faces[i].width;

    // draws the hand scaled to the detected face area
    image(hand, x, faces[i].y, faces[i].width, faces[i].height);
  }
}

void captureEvent(Capture c) {
  // reads each new frame as it arrives
  c.read();
}