Visual Thesis

The goal of the visual component of this thesis was to create an interactive installation piece that incorporated a natural user interface for the human-computer interaction element, along with a desktop interface that could run on virtually any Internet-connected computer. Through these two platforms, users can collaborate on one visual design or create individual designs. Just as anonymity often exists on the Internet and in the Information Age, the two users may be in the same room or on opposite sides of the world, and they do not have to know each other. Whether or not the two users know each other, their presence and influence upon the final collaborative design is apparent.


The title of this exhibition is Internet Noise, which derives from the idea that a significant amount of the information created in the Information Age is neither useful nor original, while other information is indeed valuable. The visual component of the exhibition plays off this notion of Internet noise with a dynamically changing background, generated by a simple algorithm I developed to analyze an incoming sample of Twitter feeds. Each square of the background represents a single post on Twitter (known as a tweet), and its alpha channel depends on the length of the tweet and how many times it has been retweeted (Twitter terminology for reposting someone else’s message). While the alpha channel relates directly to the quantity of information created, the scale of each square is based on how many users follow the original poster, which visually simulates the information spreading to others. The result is a background that appears random and somewhat sporadic. The background was originally inspired by analog static on a television, and the use of squares references the digital equivalent of analog static: pixelation.
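The mapping from a tweet to a background square can be sketched as follows. This is an illustrative reconstruction, not the project's actual ActionScript: the thesis states only that alpha depends on tweet length and retweet count and that scale depends on follower count, so the `tweetToSquare` function name and the normalizing caps below are assumptions.

```javascript
// Hypothetical sketch of the tweet-to-square mapping described above.
// The normalizing constants (140-character limit, retweet and follower
// caps) are illustrative assumptions, not values from the thesis.
function tweetToSquare(tweet) {
  const MAX_LENGTH = 140;     // Twitter's character limit at the time
  const RETWEET_CAP = 100;    // assumed cap for normalization
  const FOLLOWER_CAP = 10000; // assumed cap for normalization

  // Alpha grows with the quantity of information created:
  // the tweet's length plus how often it was retweeted.
  const lengthFactor = Math.min(tweet.text.length / MAX_LENGTH, 1);
  const retweetFactor = Math.min(tweet.retweetCount / RETWEET_CAP, 1);
  const alpha = 0.5 * (lengthFactor + retweetFactor);

  // Scale grows with the poster's reach (follower count),
  // visually simulating the information spreading to others.
  const scale = 1 + Math.min(tweet.followerCount / FOLLOWER_CAP, 1);

  return { alpha, scale };
}
```

A heavily retweeted tweet from a widely followed account would thus render as a large, nearly opaque square, while a short, unnoticed tweet would render as a small, faint one.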


The human-computer interaction of this exhibition occurs when the user adds symbols to the virtual canvas using either a natural user interface or a graphical user interface. If the background represents the abundance of information, the symbols represent the purposeful and important information that rises to the top, above the background. Multiple symbol sets were created for the visual component to emphasize the flexibility of the framework behind this interaction design.[1] The first group of symbols is a set of kinetic symbols for purely aesthetic purposes. The second set is based on the ancient Phoenician alphabet, tying in the communication aspect of design. The Phoenician alphabet is similar to metaphoric design in that early alphabets and hieroglyphs are based on a semiotic relationship to real objects.


For the natural user interface variant of this project, the interface design draws on several key features to make it more natural and easy to use. As proposed in this thesis, the interface draws the user’s focus by using a slight gradient to create a focal point that highlights the area of interest on screen. Additionally, when a user hovers over an object, that object gains a slight drop shadow and moves forward along the z-axis, creating a visual focal point above all other objects; the effect reverts when the user stops hovering over that object. The gradient overlay can also change tint to reflect the task the user has selected or to alert the user to a change he or she is about to make. For example, to recreate the NUI equivalent of clicking a mouse, I used a swipe-to-select gesture, similar to the iPhone’s unlock feature. When the user is about to select the delete task for an object, the gradient begins to take on a red tint, and as the user gets closer to selecting the task, the tint becomes more vibrant.[2] Other interactions simply change the color of the gradient to let the user know that a task has been selected. For instance, when the user selects the move command, the gradient changes to a green tint, creating a cognitive reference to a green traffic light, which represents go or move.
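The delete-gesture feedback follows the method described in note [2]: the completion percentage of the swipe-to-select action is multiplied by 255, the maximum 8-bit channel value, and added to the red channel of the black overlay. A minimal sketch of that arithmetic, with a hypothetical function name (the project itself was written in ActionScript, which uses the same 0xRRGGBB color packing):

```javascript
// Sketch of the gradual black-to-red tint transition from note [2].
// The function name is hypothetical; the arithmetic follows the note.
function deleteTintColor(completion) {
  // completion: 0.0 (gesture just started) to 1.0 (delete selected)
  const red = Math.round(completion * 255); // max 8-bit channel value
  // Add the result to the red channel of black and pack as 0xRRGGBB.
  return (red << 16) | 0x0000;
}
```

At completion 0 the overlay stays black (0x000000); at completion 1 it reaches a fully vibrant red (0xFF0000).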


To create the visual component of this thesis, Adobe’s Flash Professional and its ActionScript scripting language were chosen as the main development platform. Flash was favored because it could be made to run on a desktop computer to interface with the Microsoft Kinect and then easily adapted for a desktop web browser using a graphical user interface. Additionally, Flash CS5.5 shipped with the ability to export applications to iOS devices (such as Apple’s iPhone or iPad), which allowed me to write one basic code base and interface, making minor adjustments for each target device.


Since Flash does not natively understand or support gesture-based natural user interfaces, much less the Microsoft Kinect, I had to write the code that served as a framework for Flash to understand not just one hand but two. Flash natively expects interactions through a keyboard and mouse, a combination that would never have multiple input points as a natural user interface does, so the framework must allow Flash to understand multiple input points. Interactions such as hovering over and selecting objects with a mouse are built into the core ActionScript language, but these methods are not available when using a device such as the Microsoft Kinect to interface with Flash. My framework for the NUI needed to determine which hand was the primary and which the secondary, while ignoring any other people or objects that might come in front of the Kinect and interfere with the user’s intentions. In my design, the primary hand is the hand the user uses to select objects on the virtual canvas, while the secondary hand is used to select commands to modify and interact with the selected object.
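The bookkeeping such a framework performs can be sketched as follows. The thesis does not describe the actual heuristics, so this version makes simple, clearly labeled assumptions: the first two hand points reported for the tracked user become the primary and secondary hands, and points belonging to any other user are ignored as interference. The class and property names are hypothetical.

```javascript
// Illustrative sketch only: assumes the Kinect pipeline reports hand
// points tagged with a user ID and a stable hand ID per tracked hand.
class HandTracker {
  constructor(activeUserId) {
    this.activeUserId = activeUserId; // the user the installation is locked onto
    this.primary = null;   // hand used to select objects on the virtual canvas
    this.secondary = null; // hand used to select commands for the selected object
  }

  update(point) {
    // point: { userId, handId, x, y }
    if (point.userId !== this.activeUserId) return; // ignore bystanders
    if (this.primary === null || this.primary.handId === point.handId) {
      this.primary = point;   // first tracked hand becomes the primary
    } else if (this.secondary === null || this.secondary.handId === point.handId) {
      this.secondary = point; // second tracked hand becomes the secondary
    }
    // any further hand points are discarded as interference
  }
}
```

Under these assumptions, a bystander waving in front of the Kinect produces points with a different user ID and never displaces the active user's two hands.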


While not in my original thesis proposal, I also decided to explore a new feature in Adobe’s Flash CS5.5 that allowed a Flash project to be exported to an iOS device. However, after some initial tests, this method of creating iOS applications proved unacceptable for my project goals. Vector objects in motion were extremely choppy and ran at about half the speed of their desktop counterparts. Additionally, once alpha channels and the Twitter-based background were utilized, the performance of the application dropped to a nearly unresponsive state. Since the iPad was not an integral part of the concept behind the visual component of this thesis, I decided not to explore it further for this project. To develop an application for the iPad, the best and most effective route would be to write the code in the device’s native Objective-C language with the Cocoa frameworks. For the final visual component of the thesis, the installation using the Microsoft Kinect and the standard computer interface with a mouse were used to explore and incorporate both a graphical user interface and a natural user interface.


The final result of this thesis project allows the user to interface with the computer through either a natural user interface or a graphical user interface to collaborate with others, using the computer as a portal through which a form of communication can occur. From the Twitter feed used to create the background, to the information transmitted back and forth between devices to mirror each user’s actions, the resulting visuals are the sum of massive amounts of information and of the human experience, the most important component of successful interface design.



[1] Changing out or adding new symbols only requires updating one variable within the framework.

[2] This gradual transition from the black-tinted gradient overlay to the red tint is accomplished by multiplying the percentage of completion for the “swipe to select” action by 255, the maximum 8-bit value of a color channel. This number is then added to the red channel of the RGB value of black and applied to the gradient.
