MobileHCI is a conference near and dear to my heart and one I’ve been involved in for almost 10 years. I’ve been publishing and attending the conference every year since 2007, and I’ve also helped organize different tracks over the years: I gave an invited tutorial in 2012 in San Francisco and co-organized the interactive tutorials track in 2014 in Toronto. This year’s conference took place in beautiful Florence, Italy, and I was one of the conference program chairs alongside Prof Antonio Kruger from the German Research Center for AI (DFKI); Prof Jonna Hakkila from Industrial Design at the Faculty of Art and Design, University of Lapland; and Dr Marcos Serrano from the University of Toulouse.
I also took part in an invited panel on the Future of Mobile Interaction, Computing and Life. My fellow panelists included Daniel Ashbrook, Associate Professor at the Rochester Institute of Technology; Anind Dey, Director of the HCI Institute at Carnegie Mellon University (CMU); Kori Inkpen, Principal Researcher at Microsoft; Lucia Terrenghi, UX Researcher and Designer at Google; and Kaisa Väänänen, Professor in the Human-Centered Technology Group at Tampere University of Technology. It was an amazing conference, and while there’s far too much to cover in one post, what follows are a few highlights from this year’s event.
#1 All about Emoji!
There were 3 emoji-related research papers which shed light on how and why people use emoji in their communications. Super interesting and fun! We’re exploring similar patterns of emoji usage in business communication at Intercom, so watch this space! (A tiny sketch of what that kind of counting can look like follows the list of papers below.)
- Sender-intended functions of emojis in US messaging (Cramer et al.)
- EmojiZoom: emoji entry via large overview maps (Pohl et al.)
- Smiley Face: Why We Use Emoticon Stickers in Mobile Messaging (Lee et al.) [link to paper for those interested]
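For anyone curious about the nuts and bolts, here’s a minimal sketch of the kind of counting involved, written in Python with only the standard library. To be clear, this isn’t the method from any of the papers above, nor Intercom’s actual analysis: the regex only covers a few common emoji blocks and the messages are made up.

```python
# Rough, illustrative sketch: count which emoji appear most often in a set
# of messages, using a regex over a few common emoji codepoint ranges.
import re
from collections import Counter

# Only the most common emoji blocks; a real analysis would use a fuller
# Unicode emoji list (e.g. the Unicode emoji-data files).
EMOJI_PATTERN = re.compile(
    "["
    "\U0001F300-\U0001F5FF"  # symbols & pictographs
    "\U0001F600-\U0001F64F"  # emoticons
    "\U0001F680-\U0001F6FF"  # transport & map symbols
    "\U00002600-\U000027BF"  # misc symbols & dingbats
    "]"
)

def emoji_counts(messages):
    """Return a Counter of individual emoji across all messages."""
    counts = Counter()
    for message in messages:
        counts.update(EMOJI_PATTERN.findall(message))
    return counts

messages = [
    "Thanks so much! 🙏🙂",
    "Shipped the fix 🚀",
    "🙂 sounds good, talk Monday",
]
print(emoji_counts(messages).most_common(3))  # 🙂 comes out on top
```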
#2 The Future of Communication
Adrian David Cheok gave a super opening keynote entitled Everysense Everywhere Human Communication. He talked about new types of communication environments which use all the senses, including touch, taste and smell, to better support multi-person, multi-modal interaction and remote presence. Some of his quirky demos included:
- A device that attaches to your mobile phone and enables you to feel (and give) a kiss remotely.
- A device that attaches to your mobile phone and emits a scent instead of sound, acting as an alternative alarm clock. He demoed an actual use case: Oscar Mayer, the US bacon company, has an alarm called “wake up and smell the bacon”!!
- A device that enables taste signals to be transmitted virtually. This prototype “digital taste machine” was featured on BBC One’s Tomorrow’s Food and lets people experience basic tastes like sweetness and sourness.
While much of what Adrian presented is pretty out there, it opens up a bunch of questions about the future of personal and digital communication 🙂
#3 Handling Notifications
Notification management was a key theme in many talks: understanding whether and how the growing number of mobile notifications impacts people, how people attend to notifications, the cost of interrupting the user, and methods for helping people manage the inbound notifications they receive on their mobile phones and smartwatches. Research included:
- Novel ambient displays for better handling of notifications on smartwatches
- Use of deep learning to identify important notifications and launch the associated applications in a timely manner
- A dashboard for enabling people to reflect on the notifications they receive
- A study that explores characteristics of face-to-face conversations, like depth, importance and formality, as indicators of receptiveness to notifications. The authors find that while engaging in certain types of conversation, in particular small talk, people are more receptive to notifications than during more focused, goal-oriented discussions (a toy illustration of this kind of context-aware handling is sketched at the end of this section).
In fact there was an entire workshop dedicated to the topic of notifications and attention management.
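To make the idea a little more concrete, here’s a toy illustration of context-aware notification handling in Python. None of the feature names, weights or thresholds come from the papers or the workshop; they’re invented purely for the sketch, and the actual research relies on much richer signals and learned models rather than hand-tuned rules.

```python
# Toy sketch (not any paper's actual model): decide whether to deliver a
# notification now or defer it, based on a crude importance score and on
# whether the user appears to be in a focused conversation.
from dataclasses import dataclass

@dataclass
class Notification:
    app: str
    from_favourite_contact: bool  # e.g. a starred contact
    is_direct_message: bool       # addressed to the user, not a group blast

@dataclass
class UserContext:
    in_conversation: bool              # detected face-to-face conversation
    conversation_is_small_talk: bool   # vs. a focused, goal-oriented discussion

def importance(n: Notification) -> float:
    """Crude importance score in [0, 1] from hand-tuned weights."""
    score = 0.2
    if n.from_favourite_contact:
        score += 0.4
    if n.is_direct_message:
        score += 0.3
    if n.app in {"phone", "calendar"}:
        score += 0.1
    return min(score, 1.0)

def should_deliver_now(n: Notification, ctx: UserContext) -> bool:
    """Deliver immediately only if importance clears a context-dependent bar."""
    if not ctx.in_conversation:
        threshold = 0.3   # user is interruptible
    elif ctx.conversation_is_small_talk:
        threshold = 0.5   # small talk: fairly receptive to interruptions
    else:
        threshold = 0.8   # focused discussion: only urgent things get through
    return importance(n) >= threshold

msg = Notification(app="messenger", from_favourite_contact=False, is_direct_message=True)
busy = UserContext(in_conversation=True, conversation_is_small_talk=False)
chatty = UserContext(in_conversation=True, conversation_is_small_talk=True)
print(should_deliver_now(msg, busy))    # False: defer until the focused discussion ends
print(should_deliver_now(msg, chatty))  # True: small talk is a better moment to interrupt
```

The interesting research question, of course, is how to learn signals and thresholds like these from real behaviour rather than hard-coding them.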