Things is a design concept for an app that lets users keep track of the locations of miscellaneous things around the house. This design explores interaction patterns that are heavily natural-language-driven, in addition to the usual ways people use an app.
I wanted to combine structured data and lookup with the flexibility of querying in natural speech, and the idea of maintaining an inventory of all the things you have lying around seemed like a good test bed for this exploration.
So far, the typical use cases for AI, specifically language models, have been conversational (like chatbots) or transactional (like image generation). Instead, I want to use AI and data models as a layer in the interaction stack, one that supplements normal app usage.
Here are a few ways I can see various kinds of models being used:
Automatically categorizing items and inferring additional categories that may apply (a code sketch follows this list).
Letting users draw schematics, and inferring a structured floor plan from their sketch.
Augmenting search and lookup with fuzzy matching, finding results that are merely related to the query terms.
Enabling speech-driven use of the app.
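To make the first of those concrete, here's a minimal sketch of auto-categorization in TypeScript. The `/api/llm` endpoint and `askModel` helper are stand-ins for whatever model the app would actually call; nothing here is a real API.

```ts
interface Item {
  name: string;
  location: string;
  categories: string[];
}

// Stand-in for a real model call: POST a prompt to some backend
// that forwards it to a language model and parses the JSON reply.
async function askModel(prompt: string): Promise<string[]> {
  const res = await fetch("/api/llm", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  return res.json();
}

// Ask for tags the user never typed: "AA batteries" in the "garage"
// might come back as ["electronics", "consumables", "spares"].
async function categorize(item: Item): Promise<Item> {
  const categories = await askModel(
    `List 3-5 short category tags for a household item named ` +
      `"${item.name}" stored in "${item.location}". ` +
      `Reply with only a JSON array of strings.`
  );
  return { ...item, categories };
}
```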
I understand that the term “AI” is used a bit too loosely. For the purposes of this writeup, I’m using AI as a placeholder for machine learning as well as statistical models that can be used to work with data.
Here's a showcase of what Things wants to be able to do:
Create lists of all the things you want to keep track of, and map them to the rooms or spaces you're familiar with.
Designed to help you find items quickly and easily, with as much context and visual help as possible.
Every item has the metadata needed to ensure it can be easily looked up, even without exact matches.
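As a sketch of what that metadata could look like (the field names here are my own guesses, not a final schema):

```ts
// One record per tracked item; every field beyond `name` exists to
// make lookup easier when the user doesn't remember the exact name.
interface TrackedItem {
  id: string;
  name: string;          // "winter gloves"
  aliases: string[];     // ["mittens"], for fuzzy lookup
  categories: string[];  // ["clothing", "winter gear"]
  room: string;          // "bedroom"
  container?: string;    // "top dresser drawer"
  photo?: string;        // reference photo for visual confirmation
  addedAt: Date;
}
```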
Adding an item to Things can be as easy as saying out loud what and where the item is...
...or providing more detailed information directly.
Even if you don’t want to provide details manually, Things can extract relevant metadata from your spoken item descriptions and fill in the details accordingly. Multiple input modes ensure that you can be as quick or as detailed as you prefer.
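Here's roughly how that spoken flow could work in a web build: the browser's (prefixed) Web Speech API captures one utterance, and a hypothetical `/api/extract-item` endpoint prompts a model to pull structured fields out of the transcript. Treat both the endpoint and the response shape as assumptions.

```ts
interface ParsedItem {
  name: string;
  room?: string;
  container?: string;
  categories?: string[];
}

// Capture a single spoken phrase with the Web Speech API.
function listenOnce(): Promise<string> {
  return new Promise((resolve, reject) => {
    const Recognition =
      (window as any).SpeechRecognition ??
      (window as any).webkitSpeechRecognition;
    const rec = new Recognition();
    rec.onresult = (e: any) => resolve(e.results[0][0].transcript);
    rec.onerror = (e: any) => reject(e.error);
    rec.start();
  });
}

async function addItemBySpeech(): Promise<ParsedItem> {
  // e.g. "the spare keys are in the hallway drawer"
  const transcript = await listenOnce();
  // Hypothetical backend that asks a model to return the fields as JSON.
  const res = await fetch("/api/extract-item", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ transcript }),
  });
  return res.json(); // { name: "spare keys", room: "hallway", container: "drawer" }
}
```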
Define floors and rooms for your items to be mapped to. Just draw the layout you want, and let vector image analysis convert your sketch into a structured layout plan. This skeuomorphic approach allows for visual associations of locations, which can be far more immediately intuitive than text tags.
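The simplest version of that conversion doesn't even need a learned model: snap each drawn stroke to its bounding box, then nest boxes by containment to get a floor-to-rooms hierarchy. A crude sketch, assuming strokes arrive as arrays of points:

```ts
interface Point { x: number; y: number; }
interface Room { name: string; x: number; y: number; w: number; h: number; }

// Snap a free-hand stroke to its axis-aligned bounding box,
// a crude stand-in for real vector analysis of the sketch.
function strokeToRoom(stroke: Point[], name: string): Room {
  const xs = stroke.map(p => p.x);
  const ys = stroke.map(p => p.y);
  const x = Math.min(...xs);
  const y = Math.min(...ys);
  return { name, x, y, w: Math.max(...xs) - x, h: Math.max(...ys) - y };
}

// A box drawn entirely inside another (say, the floor outline)
// becomes its child in the layout hierarchy.
function contains(outer: Room, inner: Room): boolean {
  return inner.x >= outer.x && inner.y >= outer.y &&
    inner.x + inner.w <= outer.x + outer.w &&
    inner.y + inner.h <= outer.y + outer.h;
}
```

A trained model could do much better, recognizing walls, doors, and handwritten labels, but even this snap-and-nest pass yields a structured plan to pin items onto.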
Search isn’t restricted to name matches or preset filters. Text-processing AI agents can make search fuzzy, accept variable parameters, and present results intelligently.
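One way to get that fuzziness is embedding-based retrieval: embed each item's name and metadata once, embed the query at search time, and rank by cosine similarity. The `/api/embed` endpoint below is an assumed wrapper around whatever embedding model is available.

```ts
// Assumed wrapper around an embedding model; swap in any real provider.
async function embed(text: string): Promise<number[]> {
  const res = await fetch("/api/embed", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  return res.json();
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank items by semantic similarity to the query, so "something to
// open wine with" can surface "corkscrew" without sharing a word.
async function search(
  query: string,
  index: { item: string; vector: number[] }[],
  topK = 5
) {
  const q = await embed(query);
  return index
    .map(e => ({ item: e.item, score: cosine(q, e.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```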
All of these decisions serve the goal of making item management trivially easy, natural to use, and intuitive at a glance.
Things is still only a concept, but given the pace at which AI models and tooling are improving, it’s not a faraway possibility.
There are still so many ways in which we can incorporate AI models into our applications, beyond the interactions we’re used to right now. AI will soon become a layer in our tech stacks, a normalized tool in our development processes.
I think it’s interesting to try to find new ways to use it, ways that don’t have to infringe upon creative endeavors.
Thanks for reading. Check out my other projects too.