A note to readers: the proper care and feeding of a now-12-month-old has taken its toll on my writing. Here, I present a timeboxed kernel (or a draft, or a sketch) of something I’ve been thinking about. It is a seed of something likely to end up on Propel’s Insights blog, as it reflects ongoing research into the use of AI in benefits navigation. My hope is that this strategy yields more regular output! As always, I am grateful for thoughtful comments and emails in reply.
In applying AI to benefits navigation, I think I’ve seen a shape or pattern I’d like to highlight. I doubt it’s novel, but I do think it’s underappreciated.
Specifically, when you start playing around with the frontier large language models (Claude, ChatGPT, Gemini) and try them on real problems users have navigating a program like SNAP, I think you find yourself, well, a bit epistemically cornered?
What do I mean?
Well, many questions have an answer rooted in policy.
How much money can I make and get food stamps?
This is knowable with reference to specific policy documents. In fact, I can point you to the document updating income limits (and other numbers) for the 2024 cost-of-living adjustment (COLA) on the USDA-FNS website.
A little bit complicated, because Alaska and Hawaii are different, but on the whole: one source of truth.
So there it is: one source, publicly available, on the Internet. The very same Internet that LLM creators take as an input.
How much money can I have in the bank and still get food stamps?
Now this gets one level trickier. Due to a funny rule called broad-based categorical eligibility (BBCE), this actually depends on your state.
So if you were to take, say, SNAP’s federal regulations as your source of truth, you would actually be led a bit astray.
Instead, this information requires composition across a few sources: the income-limit documents; the document describing BBCE and the asset limits in the respective states; and potentially the state regulations or policy manual that spell this out.
There are other examples — especially around things like business process, e.g. what do I do to schedule an interview in my state? — but the point is that this is actually complex information, in the sense that there are multiple sources of truth that need combining.
Still: there exists an objective answer.
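To make that “composition” concrete, here is a minimal sketch in Python of the lookup-plus-override logic involved. The shape is the point: every number, state entry, and name (FEDERAL_ASSET_LIMIT, STATE_BBCE, asset_limit) is an illustrative placeholder I made up, not an actual SNAP figure or policy.

```python
# Sketch: answering "how much can I have in the bank?" means composing a
# federal baseline with state-level BBCE policy. All figures below are
# illustrative placeholders, NOT actual SNAP limits.

FEDERAL_ASSET_LIMIT = 2_750  # placeholder for the federal resource limit

# Hypothetical state table: does the state use BBCE, and if so, does it
# apply its own asset limit? None means "no asset test at all."
STATE_BBCE = {
    "State A": {"bbce": True, "asset_limit": 5_000},  # placeholder
    "State B": {"bbce": True, "asset_limit": None},   # placeholder: no asset test
    "State C": {"bbce": False, "asset_limit": None},  # placeholder: federal rules apply
}

def asset_limit(state: str) -> int | None:
    """Return the applicable asset limit for a state, or None if there is no asset test."""
    policy = STATE_BBCE.get(state)
    if policy is None or not policy["bbce"]:
        # No BBCE: the federal resource limit governs.
        return FEDERAL_ASSET_LIMIT
    # BBCE: the state's own (possibly nonexistent) asset test governs.
    return policy["asset_limit"]

print(asset_limit("State B"))  # None -> no asset test in this sketch
print(asset_limit("State C"))  # falls back to the federal placeholder
```

Even this “objective” question is a join across documents that live in different places and are maintained by different actors; no single one of them gives the full answer.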
And then you get to another bucket of information, one a lot trickier epistemically:
How long will it take for $STATE to process my renewal?
This is a question for which there is no canonical source of information. Yet it is certainly a very practical question, and a real need people have, so we can’t just say, “nope, not gonna answer that.”
One can find an answer in policy, but it is just that: the state has a set number of days (30) to process it.
But that answers the question a bit normatively; it’s not tethered to reality, though one could potentially use it to request a fair hearing.
And this information is unlikely to be written down on the Internet, save for sources that people consider quite non-canonical (this is why I’m such a big fan of Reddit and Facebook, to be frank).
I consider this sort of information illegible to the AI universe. There is no source of information, per se, that can be pointed to for it.
There are answers, but the knowledge is in fact created through a process of practice and action: for example, SNAP assisters or legal aid in a state will likely have a sense of this.
And an alternative is that the people asking questions like this… well, through their experience of actually getting renewals processed (a very practical act!), they themselves are perhaps the highest-fidelity source of information about this.[1]
And this is why some of you, dear readers, will note the allusion to Hayek’s “The Use of Knowledge in Society.”
I think that in applying modern LLMs to the practical work of helping people navigate benefits like SNAP, what I’m seeing is the same shape of problem: knowledge itself is much more distributed and much more tacit, and it requires novel design to collect and aggregate upward, if you really want to make something maximally useful on this problem.
This is quite contra the simple version I think people come to when conceptualizing this: embed the policy rules, do retrieval-augmented generation (RAG), and now you have an oracle.
The forms of knowledge needed for practical navigation are much more varied, much more complex than that.
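For contrast, that simple version looks roughly like the toy sketch below: chunk the policy text, retrieve what matches, answer from it. The snippets are paraphrased placeholders, the keyword-overlap retrieval stands in for embeddings, and call_llm is a hypothetical stub, not any real API.

```python
# Sketch of the "oracle" conception: chunk policy text, retrieve, answer.
# The snippets are paraphrased placeholders, retrieval is toy keyword
# overlap standing in for embeddings, and call_llm is a hypothetical stub.

POLICY_CHUNKS = [
    "Households must be interviewed at certification.",           # placeholder text
    "States must act on applications within 30 days of filing.",  # placeholder text
]

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; here it just echoes the prompt."""
    return prompt

def retrieve(question: str, chunks: list[str], k: int = 1) -> list[str]:
    """Toy retrieval: rank chunks by how many words they share with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(chunks, key=lambda c: len(q_words & set(c.lower().split())), reverse=True)
    return ranked[:k]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, POLICY_CHUNKS))
    prompt = f"Answer using only this policy text:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer("Do states act on applications within 30 days?"))
```

The catch is that this corpus can only ever contain what someone has written down; the knowledge of how long a renewal actually takes in your state never enters it.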
But… I see much promise here. For one of the few ways to aggregate such information is to build something valuable enough that people will share their own knowledge (relatively low-value in isolation) with it.
And with that, bath time!
[1] Absent, say, operational data being published by the state, which is somewhat rare. Yes, application processing timeliness (APT) data is published federally, but it’s quite laggy.