Whether it is cooking advice or help with a speech, ChatGPT has been many people's first chance to play with an artificial intelligence (AI) system.
ChatGPT is based on advanced language processing technology developed by OpenAI.
The AI was trained on text databases from the internet, including books, magazines and Wikipedia entries. In all, 300 billion words were fed into the system.
The end result is a chatbot that can seem eerily human, but with encyclopaedic knowledge.
Tell ChatGPT what you have in your kitchen and it will suggest what to cook. Need a snappy introduction for a presentation? It has you covered.
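For readers curious how such a request looks behind the scenes, here is a minimal, illustrative sketch of asking a ChatGPT-style model for a recipe through OpenAI's Python client; the model name and prompt are placeholders chosen for the example, not details from this report.

```python
# Illustrative sketch only: asking a ChatGPT-style model for dinner ideas.
# Assumes the "openai" Python package (v1+) is installed and the
# OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; any chat-capable model would do
    messages=[
        {
            "role": "user",
            "content": "I have rice, tomatoes, onions and chicken. What can I cook for dinner?",
        },
    ],
)

print(response.choices[0].message.content)  # the model's suggested recipe
```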
But is it too good? Its convincing imitation of human responses could be a powerful tool in the hands of people up to no good.
Academics, cybersecurity researchers and AI experts warn that ChatGPT could be used by bad actors to sow confusion and spread propaganda on social media.

Until now, spreading misinformation required considerable human effort. But an AI like this would make it far easier for so-called troll armies to scale up their operations, according to a report published in January by Georgetown University, the Stanford Internet Observatory and OpenAI.
Sophisticated language processing systems like ChatGPT could affect what are known as influence operations on social media.
Such campaigns seek to deflect criticism and cast a ruling government party or politician in a positive light, and they can advocate for or against policies. Using fake accounts, they also spread misinformation on social media.
Image caption: An official report found that thousands of social media posts from Russia were aimed at disrupting Hillary Clinton's presidential bid in 2016 (photo: Getty Images)
One such campaign took place ahead of the 2016 US election.
Thousands of Twitter, Facebook, Instagram and YouTube accounts were created by the St Petersburg-based Internet Research Agency and focused on harming Hillary Clinton's campaign and supporting Donald Trump, the Senate Intelligence Committee concluded in 2019.
But future elections could face an even bigger flood of misinformation.
The AI report from January says: “The advantages could expand access to a greater number of actors, enable new tactics of influence and make a campaign's messaging more tailored and effective.”
It is not only the volume of misinformation that could increase, but also its quality.
AI systems could make the content persuasive enough that ordinary internet users cannot easily recognise it as part of a disinformation campaign, says Josh Goldstein, a co-author of the paper and a research fellow at Georgetown's Center for Security and Emerging Technology, where he works on the Cyber AI Project.
He says: “Generative language models can produce a high volume of content that is original every time, allowing propagandists to avoid copying and pasting the same text across social media accounts or news sites.”
Mr Goldstein goes on to say that if a platform is flooded with false information or propaganda, it becomes harder for the public to work out what is true. Often, that is exactly what bad actors running influence operations are aiming for.
His report also notes that access to these systems will not always remain in the hands of a few organisations.
The report says: “Right now, a small number of companies or governments possess top-tier language models, which are limited in the tasks they can perform reliably and in the languages they output. If more actors invest in these models, it could increase the odds that propagandists gain access to them.”
Gary Marcus, an AI specialist and founder of Geometric Intelligence, an AI company acquired by Uber in 2016, says malicious groups could treat AI-written content like spam.
“People who spread spam rely on the most gullible people clicking on their links, using a spray-and-pray approach that reaches as many people as possible. But with AI, that squirt gun can become the biggest Super Soaker of all time.”
What is more, even if Twitter and Facebook take down three-quarters of what these actors spread on their networks, Mr Marcus says, “there is still at least 10 times as much content as before that can mislead people online”.
The surge of fake social media accounts already plagues Twitter and Facebook, and the rapid rise of language model systems could crowd the platforms with even more fake profiles.
Vincent Conitzer, a professor of computer science at Carnegie Mellon University, says: “Something like ChatGPT could scale the spread of fake accounts to levels we have not seen before, and it will become harder to distinguish those accounts from real human beings.”
Image caption: Vincent Conitzer says ChatGPT will make it harder to tell what comes from a machine and what comes from a human (photo: Carnegie Mellon University)
Both the January 2023 paper co-authored by Mr Goldstein and a separate report by security company WithSecure Intelligence warn that generative language models can rapidly create fake news articles that spread across social media, adding to the flood of false stories that could sway voters ahead of a major election.
If AI like ChatGPT makes fake news and misinformation a bigger threat, should social media platforms not be as proactive as possible? Some experts think they will be lax about enforcing rules against those kinds of posts.
Luis A Nunes Amaral, co-director of the Northwestern Institute on Complex Systems, says: “Facebook and other platforms should be flagging that fake content, but Facebook has been failing that test spectacularly. The reasons are the cost of monitoring every single post, and also that these posts are designed to anger and divide people, which increases engagement, and that is beneficial to Facebook.”