This page shows a selection of my interaction designs over the years, in reverse-chronological order. I design user experiences and user interactions that showcase a variety of novel scenarios and applications exploiting emerging computational and interactive technologies; many of them have resulted in full-fledged applications and have been evaluated with users, some in long-term usage environments. While the high-level concepts, the understanding of technological possibilities and the technical realisations come from team efforts, the detailed interaction designs (creating UI concepts, interaction strategy, layout, visual design including icons, labels, data visualisation, etc.) were done by myself. My main challenges in these design activities are (1) there are no exemplars or precedents of successful designs that I can follow, thus requiring creativity and judgement, (2) there is no existing user base from which requirements could be established, making the initial design stage shaky, and (3) often the back-end system can be unreliable and inaccurate (as the incorporated 'engines' of the system are computational components that are still under research and development). Creativity, visual literacy, sensitivity to emerging technologies, sensitivity to potential usage contexts, and attention to detail are the most important qualities required in designing this type of interaction.
Finger Occlusion-Free Sketching on Tablet
: Sketching app designed to minimise the fat-finger problem Multitouch devices are great at lowering the overall usage barrier for beginners, but a typical usability issue with them is the "fat-finger problem", or finger occlusion. In order to check what usability consequences some of the basic finger occlusion-avoiding techniques have on the subsequent user actions after the targeting/positioning action, my team designed a full-fledged tablet sketch app, with every feature addressing the finger occlusion issue. In designing the UI, I took particular care over the widgets on the screen - chunky, large buttons are all on the sides (near the edges of the screen), all featuring immediate feedback to hint at what is happening. In order to reduce the sense of visual clutter (a risk when widgets need to be large), I separated two layers of widgets: (1) the main, most important features have physical, real button-like qualities, and (2) the secondary, more transient types of widgets are semi-transparent and blend in as part of the textured canvas. The result is visually simple, with soft semi-transparent shades and harder, button-like items overlaid. The app went through usability testing with 25 participants, from which observations and insights have been collected and will be reflected in the next version. See a video showcasing the UI: Sketch app with finger occlusion-free features (YouTube, 2m 50s)
User-Maintained Interaction for Multitouch
: Exploiting user-maintained mode for day-to-day multitouch interaction User-maintained interaction works by the user tapping a button that is effective only while she keeps pressing it down. Upon release, the effect of the function disappears and the original state/mode returns. Such an interaction, while quite ubiquitous in the physical world (e.g. a laser pointer or piano pedal), is very seldom featured in the digital world. We identify the kinds of situations in which such user-maintained interaction can be more useful than the normal 'persistent' buttons, and design a few prototype tablet apps that incorporate the feature to test its usability. See a 1.5-min video showcasing the UI: UMM interaction for apps (YouTube, 1m 30s)
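A minimal sketch of this hold-to-activate ("quasimode") pattern, independent of any real toolkit and with purely illustrative names: the temporary mode is active only while the button is held, and the previous state is restored on release.

```python
# Sketch of user-maintained interaction: the mode holds only while pressed,
# like a piano pedal, and reverts automatically on release. Illustrative only.

class UserMaintainedMode:
    def __init__(self, default_mode: str):
        self.current_mode = default_mode
        self._previous_mode = default_mode

    def press(self, temporary_mode: str) -> None:
        """Finger goes down on the button: switch modes, remembering the old one."""
        self._previous_mode = self.current_mode
        self.current_mode = temporary_mode

    def release(self) -> None:
        """Finger lifts off: the temporary mode ends and the old mode returns."""
        self.current_mode = self._previous_mode


if __name__ == "__main__":
    pen = UserMaintainedMode(default_mode="draw")
    pen.press("erase")           # effective only while held down
    assert pen.current_mode == "erase"
    pen.release()                # releasing reverts to the original mode
    assert pen.current_mode == "draw"
```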
Puzzle Game for the Elderly
: Bringing the usability of multi-touch apps to its extreme While tablet interaction did significantly enhance the overall usability of computing, there are still categories of people who have difficulty using multi-touch applications, partly due to the finger dexterity required and partly due to overwhelming or overly-subtle visual feedback on the screen interface. Making touch interaction, typically considered easy to use, even easier and simpler, our project aimed at designing a simple game for people with dementia and the healthy elderly. The resulting interactive game is a reminiscence-inspiring, culturally-relevant puzzle game tailored for senior people in Singapore, and is the result of extensive discussions with relevant stakeholders including geriatric care unit doctors, nurses and therapists, and a series of playtests throughout the development amounting to over 400 participants.
Envisioning Future Video Browsing
: Increasing the granularity of access in video searching and threading. This work envisages what our future application for video consumption would be like, based on a series of interviews with researchers from various technology fields. A novel video browsing interaction was designed as a result; it illustrates how far more sophisticated and fine-grained levels of crowdsourcing on the web, combined with emerging computational technologies, will allow searching and browsing in an online video repository such as YouTube, but with meaningfully segmented chunks of videos as the main unit of retrieval, making it easy to concatenate chunks into longer videos and share them. To project what the UI for such a system would be like, I assumed that the widgets and panels on screen are not rigid but semi-floating (to react more naturally to finger/hand touch), and also that the device itself will be a thin, transparent, AR-enabled device. See a 5-min video summarising the concept and the UI: EduBang Video Sharing (YouTube, 5 minutes)
Squeezable Game
: A tangible interaction game with a squeezable ball as input device. Re-thinking the way we interact with our everyday IT devices and applications today, I explored hand squeezing as a possible alternative primary interaction modality for a user to engage in. In doing so, I constructed a conceptual design space to orient myself and systematically brainstorm design possibilities for squeezable interaction. I selected various points in the design space to identify novel usage scenarios and application ideas that exploit squeezing interaction. The shaping of the design space was guided by physically instrumenting sensor-embedded squeezable objects and applications and by undertaking evaluation with 20 test users one by one. By cheaply and nimbly testing out and modifying our prototypes through a series of small user tests, I demonstrated how future interaction modalities could be explored in a systematic, pragmatic and designerly way that is cognizant of existing literature yet offers ample room for innovation. The Squeeze Game works by the player holding a sponge ball - squeeze harder and the on-screen ball goes up, unsqueeze and the ball drops; the goal is to navigate the maze without hitting the blocks (screen shot on the left). I designed all aspects of the game: overall idea, interaction, game mechanics, graphics, the path overview, etc.
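A minimal sketch of the squeeze-to-height mapping described above. The real game read pressure from a sensor-embedded sponge ball; here the sensor input is simulated, and the gain, gravity and frame-rate constants are illustrative assumptions rather than the values used in the actual game.

```python
# Sketch: squeeze harder (pressure in 0..1) and the ball rises; relax and it falls.

def update_ball_height(height: float, pressure: float, dt: float = 1 / 60,
                       gain: float = 3.0, gravity: float = 1.5) -> float:
    """One frame of the ball's vertical motion driven by squeeze pressure."""
    velocity = gain * pressure - gravity        # net upward speed this frame
    new_height = height + velocity * dt
    return max(0.0, min(1.0, new_height))       # keep the ball within the screen (0..1)

if __name__ == "__main__":
    h = 0.0
    for pressure in [0.9] * 120 + [0.0] * 60:   # squeeze for 2 seconds, then let go
        h = update_ball_height(h, pressure)
    print(f"height after relaxing: {h:.2f}")
```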
Optimising 3D Modeling Task
: Clear separation between object modifying and viewpoint changing 3D modelling is a complex, challenging task on any interactive platform. In this tablet-based UI, a viewpoint-change mode was designed in so that, while in that mode, the user can comfortably but temporarily focus on shifting the camera angle, and can also shift back through the history. In the screen shot beside, a skeuomorphic version of my design is shown.
Exercising Your Cognition
: Preventing an over-reliance on our day-to-day technologies More and more scientific evidence is appearing that shows how heavily relying on our increasingly powerful and increasingly smart applications and systems can undermine our cognitive capabilities (memory, arithmetic/spatial ability, vigilance, decision-making, etc.) when used over time. Instead of just saying "try not to use it too much if possible" as if our technology is some kind of necessary evil, or even falling back on technological pessimism, we should start designing our apps with this issue in mind from the start. Envisioning how our day-to-day apps might look different when re-designed with this awareness, a new set of UIs was sketched in a team effort (see co-authors in the reference below, who all participated in the UI brainstorming and sketching). Many of them take on some simple gamification elements to keep our cognitive functions active while not compromising accuracy/outcomes/productivity too much. A number of generic issues were identified in the UI sketching exercises, and from these an initial set of high-level design principles was compiled.
Texting Augmented
: Visualising text chatting history to identify chat patterns Reviewing all major texting apps available today (WhatsApp, WeChat, iPhone Messages, KakaoTalk, etc.) shows that they are basically all the same in terms of how functionalities are offered and, more critically, what functionalities are offered. I re-designed a typical texting app by incorporating a simple visualisation of the texting threads and their history. A user can see a miniature threading of the texting on the left side of the main texting UI (see left), and can also slide right to see these miniature threading visualisations from past sessions (see right). Such a design does not require any rocket science to implement, yet it enhances the visibility of the threads and the access to past threads of chats.
Visualising 20,000 Lifelog Photos
: Interactively visualising a large number of photos to identify a pattern of life Using 20,000 lifelog photos, I miniaturised each photo to a small thumbnail and projected them onto a large, high-resolution, public display wall with multi-touch capability. In the figure, two of my students are demonstrating interactively investigating the photos while at the same time seeing a particular pattern (the yellow stream in the middle) which characterises the lifestyle of the photo collection's owner. See a short video showcasing the UI: Visualising the photos on the Wall (YouTube, 1 minute)
User-centric Video Summarisation
: Very simple, interactive video editor after automatic video summarisation Video summarisation has been one of Multimedia's long-term research topics. While automatically extracting the "important" snippets of a long video is certainly an appealing idea, when the original video is not a clean, professionally edited video (such as a movie or TV series) but an unstructured, repetitive or boring(?) video (such as a home movie), automatic summarisation does not work very well. Working with my colleague Saman, who has been developing sophisticated video summarisation tailored for unstructured/unedited video content, we came up with this application that starts by showing the automatic summarisation result and then lets the user easily customise which parts of the video to include or exclude. The key design features are a clear indication of the source and target durations, which change as the user drags the parameter slider bars, and a hierarchical timeline that helps the user quickly see the inclusions/exclusions in the video at an overview as well as a detailed level.
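A minimal sketch of the source/target duration readout described above: as the user drags a threshold slider, segments scoring above the threshold are included and the target duration updates against the source duration. The segment scores and durations here are made-up examples, and the threshold slider is only one assumed form of the parameter sliders; the real importance scores came from the summarisation engine.

```python
# Sketch: recompute the included segments and the target duration as a slider moves.

def summarise(segments, threshold):
    """segments: list of (importance 0..1, duration_seconds). Returns (included, target_s, source_s)."""
    included = [seg for seg in segments if seg[0] >= threshold]
    source_duration = sum(d for _, d in segments)
    target_duration = sum(d for _, d in included)
    return included, target_duration, source_duration

if __name__ == "__main__":
    segments = [(0.9, 12), (0.2, 30), (0.7, 8), (0.4, 20), (0.8, 15)]
    for threshold in (0.3, 0.5, 0.75):               # three positions of the slider
        _, target, source = summarise(segments, threshold)
        print(f"threshold {threshold:.2f}: {target}s of {source}s kept")
```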
Generic Menu System for Multi-touch Wall
: What would be an ideal menu system for a large multi-touch display? Large, public multi-touch displays are still not a common sight, but I envisage that such devices will become more widely seen in urban settings in the near future: at a bus stop, in a shopping mall, at airports, and in the middle of a city square. For different purposes there will be different ways to provide information and menus, but what would be a reasonably generic menu system that could be used for such public multi-touch walls across application areas? Consider a typical menu style for desktop PC applications... a library management system, a word processor and a web browser - even though their application areas are very different, a common style of menu system can be used simply because of the characteristics of the interaction devices (keyboard and mouse, a monitor, and the fact that the user is most likely sitting at a desk). I designed a generic menu system (seen on the left) that recognises the characteristics of a public multi-touch wall and ended up quite different from conventional menu widgets as we know them: two different groups of menus - a Global menu with shared and awareness features, and a Local menu with a transient and temporary nature. It may be a long shot but I believe it is a good starting point! See a short video showcasing the UI: Generic menu system for multitouch wall (YouTube, 1m 7s)
CLARITY Energy Portal
: Web access to carbon footprint-related activities The CLARITY Centre I work in has been doing some great work in using various sensors deployed in houses, collecting energy usage data around Ireland. The data includes 16 home users' home energy usage over 1 year, collected 24/7, with the live stream of data managed centrally. I designed a web portal where this stream of data can be nicely visualised and monitored. I love working for such a large-scale, technologically-innovative research centre, as it lets me design novel applications that haven't been attempted by anybody else before. For this particular design, I used a slanted panel that closes and opens as the user selects one of the 3 types of collected data (the left panel on the screen shot beside).
Sentiment Analysis of Blogs
: Interactive Visualisation of Sentiment Analysis from Blogs Visualisation of the blogosphere... this has usually been a word cloud-type of visualisation, plus time-graphs that show sentiment changes over time. I wanted to do something different - how about making the visualisation a web search-like interface where a user searches, queries, browses the results, refines the query, etc.? I designed a novel 'visual unit representation' of companies (a 'company' being the recipient of opinions and sentiment), and then stacked them as search results. As a result, this became a highly interactive, familiar web-search-like visualisation. Graphs that show trends over time are a part of the interface, in a panel that slides up and down when the user needs it.
In-Home Display for Energy Monitoring
: Enhancing Awareness of Home Energy Use Designing for an In-Home Display (IHD) is yet another example of a whole different (and new) set of design implications arising from the interaction characteristics of its usage. These include (1) always-on, (2) essential information on the initial screen (no 'main menu' screen), (3) a dark background so as not to blind the home user at night or brighten the room (think of a night-friendly bedside clock radio with its black background and bright digits), and (4) providing context information along with the data (e.g. history, average, neighbours' data, etc.) to help make sense of the reading. After identifying these issues, I designed a home power monitoring interface (see photo on the left) that uses simple finger touch as its main input and visualises a rich set of data readings in an easy, interesting and informative way.
Interactive TV for Multimedia
: Multimedia functionality in an interactive TV Interactive TV brings in very distinctive interaction characteristics, namely (1) lean-back use, (2) use of a remote control, and (3) multiple levels of viewer attention. I studied these characteristics and turned them into a set of design strategies (for these characteristics carry a number of design implications), and designed a complete iTV interface that balances the powerful functionalities of Multimedia technology with extremely simple and streamlined viewer-TV interaction using a conventional remote control (see photo on the left). Literature review, compiling the available guidelines, fleshing out the design options, coming up with my solutions and sketching - the process took me 4 months of full-time work, but the result was definitely worthwhile. In my opinion, progress in the iTV community is slow, especially in terms of interaction design - no agreed body of design guidelines or knowledge, no standardised widget sets for TV, and nobody knows the best way to build an iTV interface. In creating my own solution, I ended up adding more interaction design knowledge to the community - check my paper below.
Map Browsing for In-Car Video
: UAV-like Video Data Browsing with Geographic Navigation A car user installs an in-car video recorder that records the scene in front of the car, along with a GPS device. Once a road trip is done, the GPS data and video data are put together to provide a map-based browsing and searching interface. I came up with the idea of map browsing as the base interface, where the whole UI is basically the map and, upon request, relevant panels slide in and out on top of it. A 'trip' is represented as a single-colour line on the map, which threads a set of 'points' along the line. Selecting a point slides in a playback panel and shows the video captured from that point onwards. A snapshot on the left.
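A minimal sketch of the point-to-playback link described above: each GPS point carries a timestamp, and selecting it seeks the trip's video to the matching offset. The field names and the example trip data are illustrative assumptions, not taken from the actual system.

```python
# Sketch: selecting a GPS point on the map plays the video captured at that moment onwards.

from datetime import datetime, timedelta

def playback_offset(point_time: datetime, video_start: datetime) -> timedelta:
    """Offset into the trip's video corresponding to the selected GPS point."""
    return max(timedelta(0), point_time - video_start)

if __name__ == "__main__":
    video_start = datetime(2009, 6, 1, 14, 0, 0)                 # recording started here
    selected_point = datetime(2009, 6, 1, 14, 12, 30)            # a point along the trip line
    print(f"seek playback to {playback_offset(selected_point, video_start)}")   # 0:12:30 in
```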
Mo Músaem Fíorúil - My Virtual Museum
: Museum artefact browser A museum visitor takes a number of photos of the exhibited artefacts, comes back home, and gets confused about which photo shows which artefact, cannot remember which artefact had which history or what title, etc. She uploads all these photos to the system, and it automatically identifies the same artefacts across many photos, categorises them, and links them to the museum website's authentic photos and information. The user can manually correct any categorisation that was done incorrectly by easily dragging items around to different groupings - the dynamic Flash interface doesn't flash; it only enhances the usability in a quiet way. Front-end interface designed by me (snapshot on the left), back-end engine researched and developed by Michael, Flash interface implemented by Sorin.
Body Sensor Visualisation
: Sports Analysis using Body Sensor Data A soccer player wired with a BodyMedia device that captures a number of body response data streams, along with a GPS device and two video streams capturing his movements, has all his data recorded and analysed at the end of the game. The data is analysed, synchronised and presented in an interface that displays the sensed data beside the video and the player's location on the soccer field, all in one synchronised view. Juxtaposing different sources of data with different characteristics (but connected by the time dimension) can reveal many interesting and potentially useful facts - the main benefit of any visualisation. Snapshot on the left.
My Friends' Faces
: Photo and Video Blog with Automatic Face Annotation Hundreds of holiday photos... uploading them all to Flickr is fine, but what about the time-consuming manual annotation and captioning of each photo? I designed a web service interface that leverages our group's automatic face detection and recognition technique, whereby the interface starts by showing a 'my friends' faces' list (automatically generated by detecting and recognising faces in the photos, then cropping them into small 'face icons') rather than a set of photos, and presents the photos organised by face icons... thus the interaction style is quite different from conventional photo management services - in this design I tried to exploit the face recognition tool to the maximum. See the snapshot on the left; this design took me about 3 months full-time.
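A minimal sketch of the face-first organisation described above: rather than presenting a flat list of photos, the UI starts from the recognised friends and groups photos under each face icon. The recogniser is stubbed out with a made-up index; in the real system this role was played by our group's automatic face detection/recognition engine.

```python
# Sketch: group photos under each recognised friend, producing the 'face icon' list
# from which the interface starts.

from collections import defaultdict

def group_photos_by_face(photos, recognise_faces):
    """recognise_faces(photo) -> list of friend names found in that photo."""
    groups = defaultdict(list)
    for photo in photos:
        for friend in recognise_faces(photo):
            groups[friend].append(photo)
    return dict(groups)

if __name__ == "__main__":
    fake_index = {"img1.jpg": ["Alice"], "img2.jpg": ["Alice", "Bob"], "img3.jpg": []}
    groups = group_photos_by_face(fake_index, lambda p: fake_index[p])
    for friend, photos in groups.items():
        print(friend, "->", photos)
```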
My Visual Diary
: SenseCam Image Browser SenseCam is a wearable digital camera with sensors wired in, automatically triggering photo capture when something happens in the surroundings. On a usual day, wearing a SenseCam will result in 1,500 - 3,000 photos, in effect visually archiving a person's day. Once there are this many photos, organising, annotating and retrieving becomes a headache. In CDVP we develop automatic organisation tools to group the photos by the individual events of the day, to determine repeating or unique patterns among the events, and to establish similarities among those events. I designed this web-based interface, called My Visual Diary, which presents the photos in an interactive comic-book style layout as a result of the automatic analysis of the photos. In technical Multimedia circles, mapping 'importance' to the size of an image is something that not many people come up with. See a screen shot on the left.
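A minimal sketch of the importance-to-size mapping behind the comic-book layout described above: each event of the day gets a panel whose size reflects how important or unusual the event was. The importance scores and the pixel scale here are illustrative assumptions; in the real system the scores came from the automatic analysis of the photos.

```python
# Sketch: map an event's importance score to the size of its comic-book panel.

def cell_size(importance: float, min_px: int = 80, max_px: int = 320) -> int:
    """Map an importance score in 0..1 to a square panel size in pixels."""
    importance = max(0.0, min(1.0, importance))
    return round(min_px + importance * (max_px - min_px))

if __name__ == "__main__":
    events = {"commute": 0.1, "lunch with visitors": 0.8, "desk work": 0.2}
    for name, score in events.items():
        print(f"{name}: {cell_size(score)}px")   # bigger panels for more notable events
```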
MediAssist
: Online Personal Photo Management
Físchlár-DiamondTouch
: Collaborative video searching on a TableTop Moving beyond the conventional mouse, keyboard and monitor for single-user video searching, Físchlár-DT is a TableTop video search interface on top of the Físchlár Digital Video System. The Tabletop is based on the DiamondTouch table, with the DiamondSpin software toolkit. Designing a collaborative interface on which two users work together to search for video shots requires considering interesting issues such as division of the task between the users, workspace awareness, and the widget/object coordination policy between the users. I explored these issues by designing, implementing and experimenting with users, assessing search performance, the amount/kinds of interaction between the users, and their personality type matching. The interface design was done collaboratively between me, Colum and Sinéad, and demonstrated at TRECVid2005 at Gaithersburg, Maryland, in November 2005.
BBC Rushes Explorer
: Object-based query using any external images It is good to provide a Relevance Feedback feature so that a relevant image/object in the database can be used for formulating a subsequent query... but the starting point (the initial query) is often not supported and relies on a text query. BBC Rushes Explorer allows the user to use any external image search engine (e.g. Google Image Search) and incorporate those images directly into the system to formulate the initial query, as well as using the database images/objects (50 hours of BBC rushes footage, in participation in the TRECVid2005 activity). The incorporated images can be further segmented into objects by the user, and an object is then used for the RF query. Using the interactive object segmentation interface and the object-based RF feature of this system, the overall interaction is smooth, and the user's RF is open to *any* image or object outside of the database. To me, this application is one of those that use the most novel yet realistic (in terms of the current status of Multimedia research) scenarios... A snapshot on the left.
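A minimal sketch of the query workflow described above: an image found with any external search engine is brought into the system, the user segments an object of interest, and that object's features become the query against the archive. Feature extraction, segmentation and the similarity measure are all stubbed stand-ins with illustrative names, not the real back-end.

```python
# Sketch: externally sourced image -> user-segmented object -> object-based query ranking.

def match(a, b):
    """Toy similarity: higher when the two feature vectors are closer."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def external_image_query(image, segment_object, extract_features, archive):
    """Turn an externally sourced image into an object-based query over the archive."""
    obj = segment_object(image)                      # interactive segmentation by the user
    query_feats = extract_features(obj)
    scored = [(shot, match(query_feats, feats)) for shot, feats in archive.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)

if __name__ == "__main__":
    archive = {"shot_001": [0.2, 0.8], "shot_002": [0.7, 0.3]}
    ranking = external_image_query("web_image.jpg",
                                   segment_object=lambda img: "segmented object",
                                   extract_features=lambda obj: [0.65, 0.35],
                                   archive=archive)
    print(ranking)    # shot_002 ranks first for this made-up query
```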
Film Event Browser
: Searching for action, dialogue, and montage scenes in movies 'Scene detection' is still not a fully explored area, except for special genres such as news stories or sports events. The system automatically finds where the exciting (action) scenes, dialogue scenes, or montage scenes in a film are, and helps the user quickly spot those particular scenes. These three scene types are a 'prescribed' set of pre-defined parameters for the users, and a more generic query formulation is provided in which the user can adjust these parameters to customise the query beyond action/dialogue/montage scenes. A snapshot on the left.
Advanced Object-based Query Interface
: Automatically splitting query objects into similar groups This is an advanced version of the previous interface. When the user adds more and more example objects (and their features) to the query formulation, if the added examples are not very similar to each other this tends to confuse the system and results in poor retrieval. However, adding any example objects is a legitimate action allowed by the interface, yet we do not want a user to add semantically very different objects... so the solution provided in this interface is that the system suggests to the user possible clusters among the query objects. The separated clusters can then be individually searched, allowing a more focused search and thus better results. A snapshot on the left.
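A minimal sketch of the cluster suggestion described above: when the user's query objects are too dissimilar, the system proposes splitting them into groups that can be searched separately. A simple greedy threshold grouping over feature vectors stands in here for whatever clustering the real back-end used; the objects, features and threshold are all illustrative.

```python
# Sketch: greedily group query objects whose feature vectors are close together,
# so each group can be used as a separate, more focused query.

def suggest_clusters(objects, threshold=0.5):
    """objects: dict name -> feature vector (list of floats). Returns a list of name groups."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    clusters = []
    for name, feats in objects.items():
        for cluster in clusters:
            representative = objects[cluster[0]]          # compare to the cluster's first member
            if distance(feats, representative) <= threshold:
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

if __name__ == "__main__":
    query_objects = {"red car": [0.9, 0.1], "fire truck": [0.8, 0.2], "blue sky": [0.1, 0.9]}
    print(suggest_clusters(query_objects))   # [['red car', 'fire truck'], ['blue sky']]
```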
Object-based Query Formulation
: An object and its features as relevance feedback
CCTV Archive Search Interface
: Allowing efficient search through a large CCTV video archive Again based on objects and a unit representation, this one is applied to a security and surveillance system. DCU's CCTV cameras keep recording and archiving a large volume of footage, and a security staff member needs to search through this archive when, for example, a theft has happened. Knowing only an approximate time/date and location, the search still takes a long time, possibly missing all the other potentially useful information captured by nearby cameras around that time, all of which could have been valuable in re-constructing the event (forensic analysis). My design (see the screen shot on the left) efficiently indicates wanted people/objects in the archive, searches for the same person/object in other nearby cameras, and visualises them on a map, highlighting their trails by time, supporting an efficient forensic analysis of an event.
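A minimal sketch of the trail-building idea described above: starting from a person spotted in one camera, look for the same person in nearby cameras within a time window and order the sightings into a trail for the map view. The person-matching function, the camera adjacency data and the time window are stand-ins for the real back-end, chosen only to illustrate the idea.

```python
# Sketch: collect time-ordered sightings of the same person across nearby cameras.

from datetime import datetime, timedelta

def build_trail(start_sighting, sightings, nearby_cameras, same_person, window_minutes=30):
    """Return time-ordered sightings of the same person in cameras near the start one."""
    t0 = start_sighting["time"]
    window = timedelta(minutes=window_minutes)
    trail = [start_sighting]
    for s in sightings:
        if (s["camera"] in nearby_cameras.get(start_sighting["camera"], [])
                and abs(s["time"] - t0) <= window
                and same_person(start_sighting, s)):
            trail.append(s)
    return sorted(trail, key=lambda s: s["time"])

if __name__ == "__main__":
    start = {"camera": "A", "time": datetime(2008, 3, 1, 10, 0), "person_id": 7}
    sightings = [
        {"camera": "B", "time": datetime(2008, 3, 1, 10, 5), "person_id": 7},
        {"camera": "C", "time": datetime(2008, 3, 1, 10, 20), "person_id": 7},
    ]
    nearby = {"A": ["B", "C"]}
    trail = build_trail(start, sightings, nearby, lambda a, b: a["person_id"] == b["person_id"])
    print([(s["camera"], s["time"].strftime("%H:%M")) for s in trail])   # the trail shown on the map
```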
Object & Link Filter Interface
: Allowing link filters for object classes This is based on the L'OEUVRE interface above, but advanced in the sense that various object classifications (people's names, objects, object classes, scene/background, and actions) are also automatically produced, with filtering based on them. Quite futuristic in terms of its automatic features.
L'OEUVRE: Interacting with Objects
: Linking among similar objects in video through buttons This is my initial conceptual sketch of how object-level interaction should be presented to the user. Here I used small, oval buttons that represent each of the detected objects in a keyframe, so that interacting with objects is enabled indirectly by mousing over or clicking on a button. I used this concept in my further object-based interfaces, integrated in a more realistic context for all the following designs I did for object-based interaction.
Interaction with all of the above interfaces is based on whole keyframes (or segments of a video). The sub-keyframe level is only enabled in the above TREC2004 system, in which regional colour and edge can be specified by the user in querying, but these still remain low-level features. The L'OEUVRE Project, which investigates object-based operations in video, brings the whole state of the art forward, and the interface design has also started moving forward accordingly. The following designs give the users the ability to interact with objects detected in video content: viewing what has been detected; selecting one of the detected objects in a keyframe; using a selected object for subsequent querying. Based on this bottom-level interaction with objects as a 'unit representation', overall interfaces have been designed for different contexts and possible usages.
Físchlár-TREC2004
: Searching video by relevance feedback based on low-level features of a keyframe This experimental video searching application uses video clip content-based relevance feedback... the user uses a group of video clips to query the video clip database, rather than typing in a text query. In terms of the interface, I gave a lot of thought to making all the text, images, video clips and other administrative elements on the screen as well organised as possible and aesthetically pleasing, and the saved shot list is added at the right end of the screen showing what the user has collected so far. This is probably one of the best examples in which a great amount of different kinds of information is displayed on a single screen yet properly grouped, separated and colour-coded in such a way that it manages not to look too cluttered or complex. One important key was to use a different background colour for the 'Saved Shots' area on the left of the screen - just this simple visual trick made the interface look 30-35% less complicated. I used this trick in some of my other object-based interfaces (see below). A screen shot on the left.
Físchlár-TREC2002
: Searching through shots containing simple semantic features An experimental system that can search through videos by indoor/outdoor, the existence of faces, people, audio, in-video captions (open captions), and also text transcribed by automatic speech recognition. Using a 4-colour scheme assigned to the groupings of the features, and icons for each feature, I tried to combine the complicated and varied elements into an organised, coherent visualisation and interaction. The system was demonstrated at the TRECVid 2002 Workshop (NIST, Gaithersburg, Maryland) and drew a lot of attention. Designing this interface resulted in a number of alternative solutions for some of the features, which I used later for other design problems.
Físchlár-News
: An online news story archive of daily RTE 9pm news
Físchlár-Nursing
: An online archive of Nursing-related video materials This is a variation of the above system, but contains a fixed number of nursing-related materials for teaching and learning in the School of Nursing. The overview of a programme is manually generated to provide a good Table of Contents for each video. The interaction stages are exactly the same as in Físchlár-TV.
Físchlár-TV
: An online VCR - recording, browsing and watching TV programmes The very first Físchlár system whose user interface I designed was deployed on our university campus for a trial period of over 4 years; it allowed its users to request the recording of any TV programme from 8 channels and, once the programme was recorded, processed and indexed, to browse and play it in a web browser. The system enjoyed more than 2,000 registered users during its trial period. In designing this interface, I started with browsing vs. recording as the two main features, and took up the look-and-feel of a black VCR box as a metaphor. As it was being used by a large number of real users, I had to be careful when upgrading or plugging in any new features. A screen shot on the left. The interface features various within-video browsers, which were the subject of my PhD thesis. This was long before Web 2.0... I used a complex frame structure (not recommended for today's Web scene) and JavaScript to make the web interface more interactive and look more like a proper application.
Hyowon Lee (Home Page) 2023