Thank you so much for your recent story about how chatbots as lobbyists are likely to make laws less fair and transparent (“Lobbyists fueled by AI will hack the lawmaking process for hidden interests, technologists warn,” web, March 15).

This is an important aspect of the dangers of artificial intelligence, but it’s only a small part.

The recently released GPT-4 chatbot can engage in business transactions, write its own computer programs and sometimes convince humans of falsehoods.

People are worried that AI will put people out of work; concentrate money, influence and power in the hands of the few people and corporations that control the algorithms; further distract people from community and family life; and even become a weapon for dictators and terrorists.

But the biggest danger is that the newer AI systems are so complicated that not even their creators can fully predict or control what they do. And like many technologies, from the hydrogen bomb to TikTok, they may become such a large part of our society that they are hard to get rid of even if we feel they are hurting us.

What that would mean, perhaps within the next couple of decades, is the entire human race in the back seat and the computers alone at the wheel.

AI can bring a lot of good into the world, but only if it is developed responsibly, honestly and carefully enough for society to adapt to its changes, and only if its capabilities are better researched before they are released.

Unfortunately, this will require stricter government safety regulations or, at the very least, stronger ethical standards and greater public transparency among tech companies.

But the alternative may be losing our freedom entirely.



Copyright © 2023 The Washington Times, LLC.