Fragments compiled after midnight
Florida forks California, LLM stuff, and does the AI EO have an appeals backdoor?
Tending to a being in her fifth week of life has been my work recently. As such, most of my thoughts have been more fragments—kernels jotted down in passing—than anything else.
But there’s only upside in playing with form. So: onward with fragments!
Florida launched a new public benefits online portal. It retains its prior name (Florida MyACCESS) but is otherwise all new, as best I can tell.
Something that I and maybe three other people in the world may have noticed is that the new site is almost certainly a fork of California’s recent new portal, BenefitsCal. Curious how I know? Well, first clue: look!
Well, I’m one of only two of those three people who read the California county IT consortium joint powers authority’s board materials. (And I think I’m the only one who reads them recreationally!)
From the November 18, 2022 meeting deck:
I don’t observe this with any judgment. I just observe it as something quite interesting and uncommon!
Also, Georgetown’s Joan Alker wrote a very concerned post about the new system going live so suddenly. (Florida’s Medicaid unwinding appears to be having some performance issues.)
I don’t have much of an opinion without using the system (or watching users use it). I tend to find other ways of understanding very lossy. But! There’s at least one public bug report (affecting users with visual impairment):
(Also thanks to FLTraveler-727 for sharing a workaround with peers!)
As folks likely know, in what corners of the day I can find I’ve been doing things with LLMs. A few resources I recently found that I very much enjoyed:
Andrej Karpathy, “Intro to Large Language Models” — Likely the most valuable one-hour overview I’ve seen of what this new wave of AI really is and the mechanics of how people are commonly using these models.
Vanishing Gradients, “Data and DevOps Tools for Evaluating and Productionizing LLMs” — A very practical look at building feedback loops for iteration around deployed LLMs. I’ll pull out this diagram as particularly valuable for me:
A related point that I have been processing in the background: the common refrain that these language models’ capabilities are highly empirical phenomena.
Even the teams building these models themselves cannot fully reason about what their capabilities are — new things are discovered every day by just, well, poking at them!
I’m a tinkerer. It appears a new substrate of exploratory tinkering is here. “That’s rad.”
I very, very much enjoyed this Odd Lots podcast that walks step-by-step through the unit economics of a child care business.
Especially with an area like child care, where much of the discourse is so thoroughly normative towards one’s preferred public policy intervention, I really appreciate, well, just a calm, measured look at the accounting!
(I hereby renew my public calls for Odd Lots to do an episode on reinsurance.)
There was an AI Executive Order. And OMB issued draft guidance.
I’ve talked about this before, but being a public benefits nerd, I immediately jumped to the sections about the use of AI in government benefits.
Below is the section of the EO about that for HHS and USDA, two federal agencies responsible for a number of highly-utilized benefits (SNAP, WIC, Medicaid, TANF, etc.)
I highlighted some bits. As you read it, see if you can guess why they’re interesting to me.
Okay, time’s up! What was your guess?
Here’s mine — as much as the EO is ostensibly about AI/tech/algorithms, this seems to me to… potentially affect more?
What do I mean?
Well, if it is Administration policy that these programs (given AI and algorithms more generally) should give clients clear access to appeal to humans (as well as to “receive other customer support from a human being,” in the USDA case), it raises the question: do these programs more generally have sufficient appeal processing capacity? What about being able to reach a person for “customer support,” as mentioned regarding USDA programs?
Put another way: if appeals volume went up 5x year over year, would today’s human-review backstop actually continue to function, capacity-wise?
So is increasing fair hearing/appeal processing capacity in these programs, in some ways, an implied policy dictate for these benefit programs from the AI EO?
(Unrelated image:)
The answer: I don’t know!
But it’s interesting to consider that this ostensibly technology-focused policy might open such a window.
I have to wonder whether this is on advocates’ radars for that very reason.1 It’s certainly implicit. But maybe all the best windows are a bit below the surface?
Lastly, a few screenshots sans context:
I never know whether to call myself an advocate. Some days, I wear such a hat? But I think most of my advocacy looks more like something James Murphy said: “the best way to complain is to make things.” (Hat tip to someone who will know if they read this.)