AI Pin: Is the Smartphone Dead?

MidJourney: Lane Kinkade

A UX designer's take on AI Pin

The AI Pin is an incredible new wearable platform in development at Humane (stylized hu.ma.ne), the aggressively tech-sounding brainchild of two former Apple executives looking to disrupt the market and overthrow the reign of the smartphone. I'm a little skeptical whenever a product launches with the term "smartphone killer" thrown around. Don't worry, this isn't a teardown piece; there's a lot to be optimistic about, so you'll hear about the good, the bad, and the chrome finishes.

I'll start with a disclaimer: I haven't interacted with the physical product itself (it's not available to the general public yet). I'm a UX designer reacting to the robust concept video outlining software features and hardware demonstrations, and I have experience with AI integration, gesture control, and wearable UX.

The system uses three major inputs: voice, camera, and gesture/touch, which I'll get into later. The voice input feels natural; it's not always listening, but it perks up when given a command. I'm hoping this next generation of AI-empowered voice assistants can rebuild the broken wall of trust left in the wake of Alexa, Google Home, and the like.

The camera function is remarkable, but limited by battery life, requiring activation through a command, similar to the voice input. In the demonstration video, the AI uses computer vision to identify objects held in hand, performing calculations for practical nutrition information: the camera recognizes a handful of almonds, counts them, and provides an estimated protein intake. It should be noted, as X user @thecreatornate pointed out, that the math in the concept video isn't quite right; 15 grams of protein would be more accurate if Chaudhry were holding about 60 almonds.
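The correction above is easy to sanity-check yourself. The sketch below assumes typical nutrition figures (an almond weighs roughly 1.2 g and almonds are about 21% protein by weight, so roughly 0.25 g of protein per almond); the numbers are illustrative estimates, not measurements from the video.

```python
# Rough sanity check of the almond-protein claim from the concept video.
# Assumption: ~0.25 g of protein per almond (a ~1.2 g almond at ~21% protein).
PROTEIN_PER_ALMOND_G = 0.25

def protein_estimate(almond_count: int) -> float:
    """Estimate grams of protein in a given number of almonds."""
    return almond_count * PROTEIN_PER_ALMOND_G

# A literal handful is maybe 15-20 almonds, i.e. only ~4-5 g of protein.
handful = protein_estimate(20)       # 5.0 g

# Reaching the 15 g quoted in the video takes about 60 almonds:
almonds_for_15g = 15 / PROTEIN_PER_ALMOND_G  # 60.0
```

Under these assumptions, the device's on-camera answer overstates a handful's protein by roughly a factor of three, which is exactly the discrepancy @thecreatornate flagged.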

This presents exciting opportunities for training and onboarding. Imagine learning how to julienne a carrot with a projection defining cut lines on your soon-to-be-sliced vegetable, or projecting a translation for dishes on a restaurant menu in an unfamiliar language. Could it make us better cooks, travelers, communicators? Could we reduce our screen time and stay in the moment? It could help erode the barrier between the seemingly infinite knowledge of the internet and its application in real life.

I could see this identifying plants quickly: maybe my dog ate something she wasn't supposed to. Is it poisonous? If this could give me a response faster than my smartphone's image recognition, I would default to the AI Pin.

The voice and camera combination has compelling use cases; however, the feature that baffles me is the gesture control. In my previous role as a UX designer at Jaguar Land Rover, we explored gesture control as a safety measure for performing basic actions while driving, without the distraction of tapping a screen or fumbling around for buttons. While the interaction looks great in marketing videos, extensive user testing showed us that this gimmick's charm wore off quickly with users.

A lot of issues with gesture control could be solved by usability testing in the natural environments where people would actually use the product. Do we want to look like dorks popping invisible bubbles out of the sky while waiting for the bus? Do we want to be swiping our hands in the air to skip a song on Spotify like a third-rate Harry Potter?

The concept video raises some suspicion with a feature demonstrating AI-assisted purchases; I don't know if I'm ready to trust an AI to make purchasing decisions on my behalf. I'm fairly certain that Alexa and Google Home can be empowered to do this already. Show of hands: who has been comfortable enough with this idea to set it up?

I want to be a positive advocate for technological innovation and avoid the temptation to troll, tear down, or focus only on the negative. I'm excited about the direction of expanding beyond the smartphone, and about the idea of designing a product with AI as the foundation. So whether or not this device succeeds, it's clearly innovative and has the potential to disrupt the market.

To wrap it up: I don't believe this device will replace any smartphones. It's being marketed as a standalone device, but I think it could become part of a growing ecosystem, slotted in alongside a greater range of wearables such as smartwatches, and offload its processing to a phone. Will I be buying one? Probably not, but I'll keep an eye on it and see if it makes waves within the industry.

The product is currently on pre-order for early 2024, and I'll be excited to see some of the early reactions once it's in the hands of consumers. It starts at a $699 price point, plus $24/month for the service subscription.

About the Author

Lane Kinkade is a seasoned designer holding an MFA in UX Design from SCAD. Over the past 15 years, Lane has skillfully woven expertise into the fabric of renowned companies such as Intel, Veeva, Ubiquiti, and Jaguar Land Rover. At Jaguar Land Rover, he focused on introducing technology like autonomous driving and self-parking to an older demographic uncomfortable with novel technology. He is a freelance designer and a full-time professor at SCAD.

Incorporating AI seamlessly into the UX process, he leverages its power to enhance user interactions, optimize experiences, and craft innovative storyboarding that adds a new dimension to design narratives.

Dedicated to fostering the next generation of UX designers, Lane actively engages in lectures, talks, discussions, and debates. Embracing a deeply empathetic approach to UX, he prioritizes user needs and advocates for accessibility from the project's inception. Lane firmly believes that an accessible product is not merely a checkbox but an ever-evolving goal and a dynamic target to continually aspire towards. He is also a RealmIQ mentor.
