Sympathy for the Algorithm
The release of ChatGPT, a new artificial-intelligence chatbot, is forcing us to rethink what tasks can be carried out with minimal human intervention. If an AI is capable of passing the bar exam, is there any reason it can’t give sound legal advice?
STOCKHOLM – With hindsight, 2022 will be seen as the year when artificial intelligence gained street credibility. The release of ChatGPT by the San Francisco-based research laboratory OpenAI garnered great attention and raised even greater questions.
In just its first week, ChatGPT attracted more than a million users and was used to write computer programs, compose music, play games, and take the bar exam. Students discovered that it could write serviceable essays worthy of a B grade – as did teachers, albeit more slowly and to their considerable dismay.
ChatGPT is far from perfect, much as B-quality student essays are far from perfect. The information it provides is only as reliable as the information available to it, which comes from the internet. How it uses that information depends on its training, which involves supervised learning – or, put another way, questions asked and answered by humans.