On LLMs as Foundations
In a traditional CRUD (create, read, update, delete) application, the software system design has been refined over decades. At the top there is a client layer, responsible for the visual representation of the CRUD functionality. Below that, an application layer handles any business logic beyond the client's intents (logging, data serialization/modification, dependent logic, etc). At the foundational layer lies a database: a layer with fundamentally deterministic output for any input. E.g., 'give me all the Xs that have Y' will always return the same values, assuming nothing has changed since the request was made.
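As a minimal sketch of that deterministic retrieval layer, consider the following; the `Database` interface and the `orders` schema are hypothetical stand-ins for any SQL client and table:

```typescript
// Hypothetical abstraction standing in for any SQL client.
interface Database {
  query<T>(sql: string, params: unknown[]): Promise<T[]>;
}

interface Order {
  id: number;
  customerId: number;
  status: string;
}

// 'Give me all the Xs that have Y': same input, same output,
// as long as the underlying data has not changed.
async function getOrdersByStatus(db: Database, status: string): Promise<Order[]> {
  return db.query<Order>(
    "SELECT id, customer_id AS customerId, status FROM orders WHERE status = $1",
    [status],
  );
}
```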
The metaphor here is that of a house being built on a solid foundation. Houses can be complex, with architecture, layout flow, plumbing, electric, heating/cooling, etc., but these problems have been solved. Upon a solid foundation, we can add layers that we already know how to build, based on experiential evidence (our own and that of other builders), until we have a complete and functional house.
Building on top of an LLM (or other model) as the foundation of product software requires a fundamental shift from the CRUD paradigm. There is still a client layer, there is still an application layer, but the retrieval layer is no longer deterministic. While we can reasonably expect and test (with evals) certain outputs, there is no guarantee that one input will yield the same output every time it is executed. There are ways to constrain this (hyperparameters, temperature, decoding strategy, etc.), but one of the most beautiful features of an AI model is that its output is dynamic.
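Here is the same kind of retrieval call, this time against an LLM foundation. The `LlmClient` interface and its `complete` method are hypothetical; real providers expose similar knobs (temperature, max tokens) under their own names:

```typescript
// Hypothetical LLM client; any hosted model API could sit behind this.
interface LlmClient {
  complete(opts: { prompt: string; temperature: number }): Promise<string>;
}

async function getOrderSummary(llm: LlmClient, orderId: number): Promise<string> {
  // Even at low temperature, most hosted models do not guarantee
  // byte-identical output for identical input.
  return llm.complete({
    prompt: `Summarize the current state of order ${orderId}.`,
    temperature: 0.2,
  });
}
```

Unlike `getOrdersByStatus` above, two calls to `getOrderSummary` with the same `orderId` may return differently worded (or differently structured) text.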
If we try to conform this metaphor to that of a house, we end up with a real problem. Instead of building a house on top of a solid foundation (e.g., concrete), we are now building on quicksand. There can be no expectation that the foundation of our house will be the same today as it was yesterday, or even a few seconds from now.
In order for product developers to account for this, they (we) need to shift the way we design our software. This reconceptualization of product is what is truly interesting about this space. Since the data coming from the machine is nondeterministic, it is reasonable to assume that we should be designing client interfaces that are as dynamic as the output of the foundational layer.
What does this mean? Suppose there are three ways to build a UI from the 'data' layer output (sketched in code after this list):
- The server sends some data in a predetermined format that the client already knows how to interpret (e.g., structured JSON). All the client has to do is plug the fields into its UI and display them to the user.
- The server sends some general UI representation of the data (e.g., JSON which delineates the 'design' of the UI, such as an ordered list of 'objects'). The client then has to build a design system that takes this UI description from the server and (somewhat) blindly displays it to the user.
- The server sends explicit UI instructions (e.g., HTML, React code, Python, etc.), and the client effectively just has to render them as-is.
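To make the contrast concrete, here is a minimal sketch of what each payload might look like; all type names here are hypothetical:

```typescript
// 1. Predetermined format: the client owns the UI, the server owns the data.
interface OrderData {
  kind: "order";
  id: number;
  status: string;
}

// 2. General UI description: the server describes *what* to show;
//    the client's design system decides *how* to show it.
type UiNode =
  | { type: "heading"; text: string }
  | { type: "list"; items: string[] }
  | { type: "button"; label: string; action: string };

// 3. Explicit UI instructions: the server sends renderable code,
//    and the client renders it as-is (sanitization is its own problem).
interface RawUi {
  kind: "raw-html";
  html: string;
}
```

Note how the client's responsibility shrinks at each step: from owning the entire presentation, to owning a design system, to owning little more than a rendering surface.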
The first way has been the preeminent paradigm for decades. The second has seen some usage over the past decade or so, but it removes a lot of creativity and software-design potential from client-side development. The third is, most likely, the design toward which we are moving.
I don't expect this shift to be ubiquitous across software systems; there are many systems which require deterministic input and output. However, having tailor-made UIs for each client, dynamically created by an AI, will require a shift in software engineering design patterns, particularly for the top and bottom layers. The application layer should fundamentally continue to be able to leverage GoF (Gang of Four) design patterns, but the way that we create and display data will necessitate emergent software design patterns that have yet to be fully understood or realized.