Lately, I've been playing around with LLMs to write code. I find that they're great at generating small, self-contained snippets. Unfortunately, anything more than that requires a human to evaluate the LLM's output and come up with suitable follow-up prompts. Most examples of "GPT wrote X" work this way - a human serves as a REPL for the LLM, carefully coaxing it toward a functional result. That's not to undersell the process - it's remarkable that it works at all. But can we go further? Can we use an LLM to generate ALL the code for a complex program ALL at once, without any human intervention?
That's fair, though it also highlights another issue: since these foundation models are trained on human-generated content, the training data itself is littered with the same "ambiguous" problem statements, which can bias the results. Most of the time it's a "feature, not a bug" - people say things that are technically wrong but semantically understandable to other humans. Foundation models are something of a mixed beast.
I wouldn't overindex on the increasing/decreasing issue, as that's a little ambiguous.
I'd use language opposite to yours to describe changing an H2 to an H1: increasing the level despite decreasing the number. According to the HTML spec (https://html.spec.whatwg.org/multipage/sections.html#headings-and-outlines), I'm technically wrong, but wrong in a conceivably understandable way: I think of levels as a nesting, with the top of the nesting being the highest level (albeit the smallest level number).
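To make the ambiguity concrete, here's a minimal sketch (the function name `promote_heading` is my own, not from any spec or library) of "raising" a heading in the nesting sense - where moving up the hierarchy means the tag's number goes down:

```python
# In the HTML spec's terms, h1 is the heading with the *highest rank*
# even though it carries the smallest number - so "promoting" a heading
# up the hierarchy decreases its number.
def promote_heading(tag: str) -> str:
    """Move a heading one level up the hierarchy (h2 -> h1).

    Assumes tags are the standard h1..h6; h1 stays h1 since there is
    nothing above it.
    """
    n = int(tag[1])
    if n <= 1:
        return tag  # already at the top of the hierarchy
    return f"h{n - 1}"

print(promote_heading("h2"))  # -> h1: number decreases, "level" rises
print(promote_heading("h1"))  # -> h1: already the highest-ranked heading
```

Whether you call that call "increasing" or "decreasing" the level depends entirely on whether you're talking about the rank or the number - which is exactly the kind of statement an LLM trained on human prose will see both ways.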