
The Future Of User Interfaces

(Image: Apple Watch interface)

Interfaces are intrinsic to technology 

Every piece of technology a user interacts with has an interface. Computers date back to the first half of the 20th century, and as for their physical appearance… let's just say we have come a long way. Nevertheless, those large machines were designed for users to operate, and the user physically had to put their hands on the hardware to do so.

Technology is constantly changing

Companies like Apple and Samsung are bringing devices to market with capacity and intelligence that surpass all of NASA's computing power of 50 years ago. This level of sophistication helps explain why the user experience has moved further from manual tweaking or direct manipulation of the technology itself. The user experience became the connector, or rather the middleman, between the user and the physical machine; the user needed to find value in that experience in order to satisfy their purpose in using the machine in the first place. If we extrapolate this idea, we can predict that the next trend will be about reducing physical strain in the interaction. This notion is already visible in each iteration of smartphones: the transition toward chamfered edges, blue-light/night screen filters, and each generation becoming a little more 'wireless' (whether through wireless charging or wireless earbuds).

The hypothetical extreme of this would be not having to move your body at all, but simply thinking of the desired action and having the machine respond (creepy sci-fi, right?). Ergonomics plays a much more prominent role now that technology has reached its current point. Comfort and reliability are key to letting the general public use these interfaces on a daily basis.

Voice Technology 

Speech recognition once seemed like something of the distant future; machine speech recognition has now become reality. Most smartphones and online chatbots support full-fledged conversations in various languages, and devices such as Amazon's Alexa and Google Home are steadily bringing the technology into the home. Of course, it is still not perfect, and its application as a user interface tool is in its infancy, but the technology has already passed many fundamental checkpoints, and the artificial intelligence behind it has outperformed expectations.

Bridging the Gap

Besides this race toward smoothness and fluidity, there is also the challenge of merging with the environment. Many current interface prototypes intend to bridge the gap between flat, two-dimensional screens and our 3D space. We already know augmented reality: a visualization tool that overlays graphical elements and information onto real-world objects (be it a phone screen using the camera, a headset, or special glasses). Developers are also leaning toward a more immersive augmented reality, whether that means aligning it with virtual reality or somehow bridging the two to find a middle ground. Virtual reality has also begun to make its mark in the market, with larger companies such as HTC, Google, and Samsung paving the way with high-end VR headsets and constantly updated software. Virtual reality's computer-simulated, three-dimensional environments are a clear step into the more immersive and interactive interface space.

The real challenge designers and engineers face is making 3D space itself the interface. Ideas and prototypes of Kinect-like applications, where items are accessed and arranged through air gestures, are in development. The actual 'things' being manipulated are graphical elements projected to or from a flat surface. We can certainly 'read' or track gestures in space, but creating digital imagery that appears in that space as if it were an organic entity is not here… yet. In the meantime, prototypes in which physical objects are moved around space in order to interact with technology are being developed.

There is another possible route for the future of interfaces: biotechnology, the implantation of synthetic materials and technology into the body. Research in this field is advancing, mainly for repairing body functions, regulating body chemistry, and gathering information; at the moment, the most common products in this area are synthetic body parts. Interfaces have become integral to our daily lives, from the slabs of glass in our pockets we call cell phones to the screens in our cars that help us navigate; there is no going back to a simpler time without them.

So now it is time to imagine a future where these goals are met and we can trigger events with our bodies, feeling and seeing things beyond our natural perception.

