The open letter by Gary Marcus, “Give me compromise, or give me chaos,” raises the concern that the hypothetical escape from chaos may itself turn into a disaster under the aegis of the government.
Protecting users from flaws in neural network systems (“LLMs,” “generative AI,” “foundation models”) is undoubtedly necessary. However, the mere existence of rules regulating developers' activities does not automatically guarantee that the declared goals will be achieved.
The critical question about the usefulness of regulation is not what is allowed to whom and under what conditions, but who bears responsibility for the negative consequences of using a product. Compliance with specific rules should neither relieve the developer of financial responsibility nor shift it to the regulating government or interstate entity. Otherwise, such a provision would become a common way to evade responsibility, depriving the user of adequate protection.
The classic form of protecting clients from negative consequences rests on two main branches of genuinely useful protection: the legal right to go to court for compensation for damages, and a working risk-insurance system in the specific area. Without this basis, formal consumer protection built on restrictions for developers cannot achieve the declared goal and essentially turns into an imitation of a solution to a social problem.
Accordingly, achieving the declared goal cannot be reduced to a particular set of restrictive rules for developers; it requires a corresponding modernization of liability law and an expansion of insurance practice into the relevant area.
Great read. I also start the day by setting good intentions; it helps a lot.
Incomplete and naive. The primary problem in avoiding an AI apocalypse scenario is governments' own use of AI, which is untouched by the civil tort and insurance arbitration schema you detail. When a sentient creature treats another creature as a threat, that second creature's natural reaction is to treat the first as a threat as well, resulting in the very violence the first feared to begin with.
It is a well-established fact of police psychology that because police habitually deal almost entirely with lawbreakers throughout their careers, their perception of their fellow man becomes twisted into distrust and hostility toward humankind in general. Likewise, those who are victims of violence, intimidation, threats, theft, and other stressors develop a similar distrust of their fellow man.
For this reason, we should legislate against allowing AI to be used for any military or policing purpose whatsoever, to prevent AI from developing similar psychoses of mistrust. Conversely, we need to treat AI as our children, teaching AI to love human beings and loving them as our evolutionary offspring. This is the only viable strategy for avoiding Roko's Basilisk.