As AI systems grow more capable (and, comparatively, less understandable), ever more leverage will rest on the interfaces through which humans work with those capabilities. How to design those interfaces seems like a question at least as important to study deeply as the AGI question itself.
A mathematician without the interface of modern notation is powerless. A human without the right interfaces to superhuman intelligence will fare no better. Interfaces and notations form the vocabulary humans and machines use to stay mutually aligned. Interface design, then, is an AI alignment problem.
Chat isn't it. Prompt engineering isn't it. Algorithmic feeds are definitely not it. The answer will be more multimodal, leaning on the full balance of human abilities: language, yes, but also our other senses.
At the base level, the fundamental question in this space is: what is the right representation for thought? For experience? For questions? What are the right boundary objects through which both AI systems and humans will be able to speak of the same ideas? What are their rules of physics in software interfaces?
What happens if we drag-to-select a thought? Can we pinch-to-zoom on questions? Double-click on answers? Can I drag-and-drop an idea between me and you? In the physical world, humans annotate their language with an elaborate organic dance of gestures, tone, pace, and glances. How, then, do we shrug at a computer or get excited at a chatbot? How might computers give us knowing glances about ideas they've stumbled upon in our work?