Generative AI accountability
Should owners of GenAI machines be made responsible for their outputs?
I’m not sure I entirely agree with that outright, but a more nuanced version could be considered as a basis for regulation or rules of use. Something like: if you can’t precisely describe how the algorithm produces its results, in a methodical and repeatable manner, then perhaps you shouldn’t be operating it, and at the very least you should be held responsible for its output.
These are not like search engines or social media platforms, despite the concerted effort to portray them as belonging to the same category. The implication is that Section 230 shouldn’t apply.
TL;DR: No one knows ‘how’ these systems work.
6 March 2024 — French West Indies