

They are commonly being used in functions where a human performing the same task would be a mandated reporter. This is a scenario the current regulations weren't designed for, and a future iteration will have to address it. Lawsuits like this one are the first step toward that.
I fully agree. LLMs create situations our laws aren't prepared for, and we can't reasonably get them into a compliant state because of how the technology works. We can't guarantee that an LLM won't lose coherence to the point of ignoring its rules as the context grows longer. The technology inherently can't make that kind of guarantee.
We can try to add patches like a rules-based system that scans chats and flags them for manual review if certain terms show up, but whether those patches suffice remains to be seen.
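To be concrete, the scanning half of such a patch is trivial to sketch. Here's a minimal illustration in Python, with placeholder terms and a hypothetical enqueue_for_manual_review hook standing in for whatever the real escalation path would be:

    import re

    # Placeholder ruleset; a real deployment would use a vetted,
    # regularly updated list of terms, not these stand-ins.
    FLAGGED_TERMS = [
        r"\bterm[- ]a\b",
        r"\bterm[- ]b\b",
    ]

    FLAG_PATTERNS = [re.compile(t, re.IGNORECASE) for t in FLAGGED_TERMS]

    def flag_for_review(messages):
        """Return (message_index, pattern) pairs for every message
        that matches a flagged term, so a human can review the chat."""
        hits = []
        for i, text in enumerate(messages):
            for pattern in FLAG_PATTERNS:
                if pattern.search(text):
                    hits.append((i, pattern.pattern))
        return hits

    chat = ["hello", "something about term-a here"]
    if flag_for_review(chat):
        # enqueue_for_manual_review(chat) would go here; the
        # escalation path (hypothetical) is the part that matters.
        pass

The hard parts are the ruleset itself and the human review pipeline behind it, neither of which the scanner solves.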
Of course, most of the tech industry will instead clamor for an exception because "AI" (read: LLMs and image generation) is far too important to let petty rules hold back progress. Why, if we try to enforce those rules, China will inevitably develop Star Trek-level technology within five years and life as we know it will be doomed. Doomed, I say! Or something.