• Automate the Boring Stuff with GPT-4 and Python


    On March 14, 2023, OpenAI launched GPT-4, the newest and most powerful version of their language model. 

    Within just hours of its launch, GPT-4 stunned people by turning a hand-drawn sketch into a functional website, passing the bar exam, and generating accurate summaries of Wikipedia articles.

    It also outperforms its predecessor, GPT-3.5, in solving math problems and answering questions based on logic and reasoning.

    ChatGPT, the chatbot built on top of GPT-3.5 and released to the public, was notorious for “hallucinating.” It would generate seemingly correct responses and defend its answers with “facts” that were laden with errors.

    One user took to Twitter after the model insisted that elephants lay the largest eggs of any land animal:
    Image from FioraAeterna

    And it didn’t stop there. The algorithm went on to corroborate its response with made-up facts that almost had me convinced for a moment.

    GPT-4, on the other hand, was trained to “hallucinate” less often. OpenAI’s latest model is harder to trick and does not confidently generate falsehoods as frequently.



    As a data scientist, my job requires me to find relevant data sources, preprocess large datasets, and build highly accurate machine learning models that drive business value. 

    I spend a huge portion of my day extracting data from different file formats and consolidating it in one place. 

    After ChatGPT was first launched in November 2022, I looked to the chatbot for guidance with my daily workflows. I used the tool to cut down the time spent on menial work so that I could focus on coming up with new ideas and creating better models instead.

    Once GPT-4 was released, I was curious about whether it would make a difference in the work I was doing. Were there any significant benefits to using GPT-4 over its predecessors? Would it help me save more time than I already was with GPT-3.5?

    In this article, I will show you how I use ChatGPT to automate data science workflows. 

    I will create the same prompts and feed them into both GPT-4 and GPT-3.5 to see whether the former really does perform better and save more time.



    If you’d like to follow along with everything I do in this article, you need to have access to GPT-4 and GPT-3.5.




    GPT-3.5 is publicly available on OpenAI’s website. Simply navigate to https://chat.openai.com/auth/login, fill out the required details, and you will have access to the language model:


    Image from ChatGPT 




    GPT-4, on the other hand, is currently hidden behind a paywall. To access the model, you need to upgrade to ChatGPT Plus by clicking on “Upgrade to Plus.”

    The subscription costs $20/month and can be canceled anytime:


    Image from ChatGPT

    If you don’t want to pay the monthly subscription fee, you can also join the API waitlist for GPT-4. Once you get access to the API, you can follow this guide to use it in Python. 

    It’s okay if you don’t currently have access to GPT-4.

    You can still follow this tutorial with the free version of ChatGPT that uses GPT-3.5 in the backend.



    1. Data Visualization


    When performing exploratory data analysis, generating a quick visualization in Python often helps me better understand the dataset. 

    Unfortunately, this task can become incredibly time-consuming – especially when you don’t know the right syntax to use to get the desired result. 

    I often find myself searching through Seaborn’s extensive documentation and using StackOverflow to generate a single Python plot.

    Let’s see if ChatGPT can help solve this problem.

    We will be using the Pima Indians Diabetes dataset in this section. You can download the dataset if you’d like to follow along with the results generated by ChatGPT.

    After downloading the dataset, let’s load it into Python using the Pandas library and print the head of the dataframe:

    import pandas as pd

    df = pd.read_csv("diabetes.csv")
    df.head()



    There are nine variables in this dataset. One of them, “Outcome”, is the target variable that tells us whether a person will develop diabetes. The remaining eight are independent variables used to predict the outcome.

    Okay! So I want to see which of these variables have an impact on whether a person will develop diabetes.

    To achieve this, we can create a clustered bar chart that visualizes all the independent variables in the dataset, grouped by the target variable “Outcome”.

    This is actually pretty easy to code out, but let’s start simple. We will move on to more complicated prompts as we progress through the article.
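    Both of the code snippets we will get back rely on reshaping the data from wide to long format with pandas’ melt function, so a quick illustration is useful first. This is my own minimal sketch using a hypothetical two-feature frame, not output from ChatGPT:

```python
import pandas as pd

# Hypothetical two-feature frame standing in for the diabetes data
wide = pd.DataFrame({"Glucose": [90, 150], "BMI": [22.0, 31.5], "Outcome": [0, 1]})

# melt keeps "Outcome" as an identifier and stacks the remaining
# columns into "variable"/"value" pairs, one row per measurement
long = pd.melt(wide, id_vars=["Outcome"])
print(long)
```

    Seaborn’s barplot can then group the bars by “variable” and color them by “Outcome”, which is exactly the clustered layout described in the prompt.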


    Data Visualization with GPT-3.5


    Since I have a paid subscription to ChatGPT, the tool allows me to select the underlying model I’d like to use every time I access it.

    I am going to select GPT-3.5:


    Image from ChatGPT Plus

    If you don’t have a subscription, you can use the free version of ChatGPT since the chatbot uses GPT-3.5 by default.

    Now, let’s type the following prompt to generate a visualization using the diabetes dataset:


    I have a dataset with 8 independent variables and 1 dependent variable. The dependent variable, “Outcome”, tells us whether a person will develop diabetes. 
    The independent variables, “Pregnancies”, “Glucose”, “BloodPressure”, “SkinThickness”, “Insulin”, “BMI”, “DiabetesPedigreeFunction”, and “Age” are used to predict this outcome.
    Can you generate Python code to visualize all these independent variables by outcome? The output should be one clustered bar chart that is colored by the “Outcome” variable. There should be 16 bars in total, 2 for each independent variable.


    The model responded to the above prompt with a short explanation and a block of code.


    One thing that stands out immediately is that the model assumed we wanted to import a dataset from Seaborn. It probably made this assumption since we asked it to use the Seaborn library.

    This isn’t a huge issue; we just need to change one line before running the code.

    Here’s the complete code snippet generated by GPT-3.5:

    import pandas as pd
    import seaborn as sns
    import matplotlib.pyplot as plt

    # Load the dataset
    dataset = pd.read_csv("diabetes.csv")

    # Create the bar chart
    sns.barplot(
        x="variable",
        y="value",
        hue="Outcome",
        data=pd.melt(dataset, id_vars=["Outcome"]),
        ci=None,
    )

    # Set the title and labels
    plt.title("Dependent Variables by Outcome")
    plt.xlabel("Independent Variables")
    plt.ylabel("Value")

    # Show the chart
    plt.show()


    You can copy and paste this into your Python IDE.

    Here is the outcome generated after running the above code:




    This chart looks perfect! It’s exactly how I envisioned it when typing the prompt into ChatGPT.

    One issue that stands out, however, is that the text on this chart is overlapping. I asked the model whether it could help us fix this.



    The algorithm explained that we could prevent this overlap by either rotating the chart labels or adjusting the figure size. It also generated new code to help us achieve this.

    Let’s run this code to see if it gives us the desired results:

    import pandas as pd
    import seaborn as sns
    import matplotlib.pyplot as plt

    # Load the dataset
    dataset = pd.read_csv("diabetes.csv")

    # Create the bar chart
    sns.barplot(
        x="variable",
        y="value",
        hue="Outcome",
        data=pd.melt(dataset, id_vars=["Outcome"]),
        ci=None,
    )

    # Set the title and labels
    plt.title("Dependent Variables by Outcome")
    plt.xlabel("Independent Variables")
    plt.ylabel("Value")

    # Rotate the x-axis labels by 45 degrees and set horizontal alignment to right
    plt.xticks(rotation=45, ha="right")

    # Show the chart
    plt.show()


    The above lines of code should generate the following output:



    This looks great! 

    I understand the dataset a lot better now by simply looking at this chart. It appears as though people with higher glucose and insulin levels are more likely to develop diabetes.

    Also, notice that the “DiabetesPedigreeFunction” variable doesn’t give us any information in this chart. This is because the feature is on a smaller scale (between 0 and 2.4). If you’d like to further experiment with ChatGPT, you can prompt it to generate multiple subplots within a single chart to solve this problem.
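    If you want to try that yourself without waiting on ChatGPT, here is a minimal sketch of the subplot idea. It uses plain pandas/matplotlib rather than Seaborn, plus a small hypothetical stand-in frame so it runs on its own; swap in the real diabetes dataframe to reproduce the chart:

```python
import matplotlib
matplotlib.use("Agg")  # draw off-screen; drop this line to view the chart window
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical stand-in for the diabetes data
df = pd.DataFrame({
    "Glucose": [90, 150, 85, 160],
    "BMI": [22.0, 31.5, 24.1, 35.2],
    "DiabetesPedigreeFunction": [0.3, 1.2, 0.2, 1.8],
    "Outcome": [0, 1, 0, 1],
})

features = [c for c in df.columns if c != "Outcome"]

# One subplot per feature, so each variable keeps its own y-scale
fig, axes = plt.subplots(1, len(features), figsize=(4 * len(features), 4))
for ax, col in zip(axes, features):
    df.groupby("Outcome")[col].mean().plot.bar(ax=ax)
    ax.set_title(col)
    ax.set_xlabel("Outcome")

fig.tight_layout()
fig.savefig("diabetes_subplots.png")
```

    Because each panel has its own y-axis, small-scale features like “DiabetesPedigreeFunction” are no longer flattened by the large glucose and insulin values.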


    Data Visualization with GPT-4


    Now, let’s feed the same prompts into GPT-4 to see if we get a different response. I am going to select the GPT-4 model within ChatGPT and type in the same prompt as before.

    Notice how GPT-4 does not assume that we will be using a dataframe that is built into Seaborn. 

    It tells us that it will use a dataframe called “df” to build the visualization, which is an improvement from the response generated by GPT-3.5.

    Here is the complete code generated by this algorithm:

    import pandas as pd
    import seaborn as sns
    import matplotlib.pyplot as plt

    # Assuming your DataFrame is called df
    # First, you need to melt the DataFrame to make
    # it suitable for creating a clustered bar chart
    melted_df = pd.melt(
        df,
        id_vars=["Outcome"],
        var_name="Independent Variable",
        value_name="Value",
    )

    # Create the clustered bar chart
    plt.figure(figsize=(12, 6))
    sns.barplot(
        data=melted_df,
        x="Independent Variable",
        y="Value",
        hue="Outcome",
        ci=None,
    )

    # Customize the plot
    plt.title("Independent Variables by Outcome")
    plt.ylabel("Average Value")
    plt.xlabel("Independent Variables")
    plt.legend(title="Outcome", loc="upper right")

    # Show the plot
    plt.show()


    The above code should generate the following plot:



    This is perfect! 

    Even though we didn’t ask it to, GPT-4 has included a line of code to increase the plot size. The labels on this chart are all clearly visible, so we don’t have to go back and amend the code as we did earlier. 

    This is a step above the response generated by GPT-3.5.

    Overall, however, it appears as though GPT-3.5 and GPT-4 are both effective at generating code to perform tasks like data visualization and analysis. 

    It is important to note that since you cannot upload data into ChatGPT’s interface, you should provide the model with an accurate description of your dataset for optimum results.
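    Since that description has to be typed out for every new dataset, I find it handy to generate it from the dataframe itself. Here is a small sketch of the idea; the helper is my own, not something produced by ChatGPT:

```python
import pandas as pd

def describe_for_prompt(df, target):
    """Build a one-sentence dataset description ready to paste into a prompt."""
    features = [c for c in df.columns if c != target]
    return (
        f"I have a dataset with {len(features)} independent variables "
        f'({", ".join(features)}) and 1 dependent variable, "{target}".'
    )

# Tiny illustrative frame; use your real dataframe in practice
df = pd.DataFrame({"Glucose": [90, 150], "BMI": [22.0, 31.5], "Outcome": [0, 1]})
print(describe_for_prompt(df, "Outcome"))
```

    You can paste the returned sentence at the top of any prompt, then add the task-specific instructions below it.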


    2. Working with PDF Documents


    While this isn’t a common data science use case, I once had to extract text data from hundreds of PDF files to build a sentiment analysis model. The data was unstructured, and I spent a lot of time extracting and preprocessing it.

    I also often work with researchers who read and create content about current events taking place in specific industries. They need to stay on top of the news, parse through company reports, and read about potential trends in the industry. 

    Instead of reading 100 pages of a company’s report, isn’t it easier to simply extract words you are interested in and only read through sentences that contain those keywords? 

    Or if you’re interested in trends, you can create an automated workflow that showcases keyword growth over time instead of going through each report manually.
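    The trend idea can be sketched in a few lines once the report text has been extracted. Here, reports is a hypothetical dict of per-year text; in a real workflow it would be filled from the PDF extraction covered in this section:

```python
# Hypothetical extracted report text, keyed by year
reports = {
    2021: "cloud spending grew while ai pilots stalled",
    2022: "ai adoption and cloud migration accelerated with ai budgets",
    2023: "ai ai everywhere: ai tooling, ai hiring, and ai governance",
}

def keyword_trend(reports, keyword):
    """Count exact-token occurrences of a keyword in each year's text."""
    return {
        year: text.lower().split().count(keyword.lower())
        for year, text in sorted(reports.items())
    }

print(keyword_trend(reports, "ai"))
```

    The resulting year-to-count dict can be fed straight into a line plot to visualize keyword growth over time.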

    In this section, we will be using ChatGPT to analyze PDF files in Python. We will ask the chatbot to extract the contents of a PDF file and write it into a text file.

    Again, this will be done using both GPT-3.5 and GPT-4 to see if there is a significant difference in the code generated.


    Reading PDF Files with GPT-3.5


    In this section, we will be analyzing a publicly available PDF document titled A Brief Introduction to Machine Learning for Engineers. Make sure to download this file if you’d like to code along with this section.

    First, let’s ask the algorithm to generate Python code to extract data from this PDF document and save it to a text file.


    Here is the complete code provided by the algorithm:

    import PyPDF2

    # Open the PDF file in read-binary mode
    with open("Intro_to_ML.pdf", "rb") as pdf_file:
        # Create a PDF reader object
        pdf_reader = PyPDF2.PdfFileReader(pdf_file)

        # Get the total number of pages in the PDF file
        num_pages = pdf_reader.getNumPages()

        # Create a new text file
        with open("output_file.txt", "w") as txt_file:
            # Loop through each page in the PDF file
            for page_num in range(num_pages):
                # Get the text from the current page
                page_text = pdf_reader.getPage(page_num).extractText()

                # Write the text to the text file
                txt_file.write(page_text)


    (Note: Make sure to change the PDF file name to the one you saved before running this code.)

    Unfortunately, after running the code generated by GPT-3.5, I encountered a Unicode encoding error.

    Let’s go back to GPT-3.5 and see if the model can fix this.

    I pasted the error into ChatGPT, and the model responded that it could be fixed by changing the encoding used to “utf-8.” It also gave me some modified code that reflected this change:

    import PyPDF2

    # Open the PDF file in read-binary mode
    with open("Intro_to_ML.pdf", "rb") as pdf_file:
        # Create a PDF reader object
        pdf_reader = PyPDF2.PdfFileReader(pdf_file)

        # Get the total number of pages in the PDF file
        num_pages = pdf_reader.getNumPages()

        # Create a new text file with utf-8 encoding
        with open("output_file.txt", "w", encoding="utf-8") as txt_file:
            # Loop through each page in the PDF file
            for page_num in range(num_pages):
                # Get the text from the current page
                page_text = pdf_reader.getPage(page_num).extractText()

                # Write the text to the text file
                txt_file.write(page_text)


    This code executed successfully and created a text file called “output_file.txt.” All the content in the PDF document has been written to the file.


    Reading PDF Files with GPT-4


    Now, I am going to paste the same prompt into GPT-4 to see what the model comes up with.

    Here is the complete code generated by GPT-4:

    import PyPDF2

    def pdf_to_text(input_pdf, output_txt):
        with open(input_pdf, "rb") as pdf_file:
            pdf_reader = PyPDF2.PdfFileReader(pdf_file)
            num_pages = pdf_reader.numPages
            text = ""

            for page_num in range(num_pages):
                page = pdf_reader.getPage(page_num)
                text += page.extractText()

        with open(output_txt, "w", encoding="utf-8") as text_file:
            text_file.write(text)

    input_pdf = "Intro_to_ML.pdf"
    output_txt = "output_gpt4.txt"
    pdf_to_text(input_pdf, output_txt)


    Look at that!

    Unlike GPT-3.5, GPT-4 has already specified that “utf-8” encoding should be used to open the text file. We don’t need to go back and amend the code like we did previously.

    The code provided by GPT-4 should execute successfully, and you should see the contents of the PDF document in the text file that was created.

    There are many other techniques you can use to automate PDF documents with Python. If you’d like to explore this further, here are some other prompts you can type into ChatGPT:

    • Can you write Python code to merge two PDF files?
    • How can I count the occurrences of a specific word or phrase in a PDF document with Python?
    • Can you write Python code to extract tables from PDFs and write them in Excel?

    I suggest trying some of these during your free time – you’d be surprised at how quickly GPT-4 can help you accomplish menial tasks that usually take hours to perform.
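    As a flavor of the second prompt, here is a minimal standard-library sketch (mine, not ChatGPT’s output) that counts whole-word keyword occurrences in text you have already extracted, such as the contents of output_file.txt:

```python
import re

def keyword_counts(text, keywords):
    """Case-insensitive whole-word counts for each keyword or phrase."""
    return {
        kw: len(re.findall(rf"\b{re.escape(kw)}\b", text, flags=re.IGNORECASE))
        for kw in keywords
    }

sample = "Machine learning is fun. Learning never stops; machine learning scales."
print(keyword_counts(sample, ["machine learning", "learning"]))
```

    The word-boundary pattern avoids false hits inside longer words, which a naive substring search would count.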


    3. Sending Automated Emails


    I spend hours of my workweek reading and responding to emails. Not only is this time-consuming, but it can also be incredibly stressful to stay on top of emails when you are chasing tight deadlines.

    And although you can’t get ChatGPT to write all your emails for you (I wish), you can still use it to write programs that send scheduled emails at a specific time or modify a single email template that can be sent out to multiple people.

    In this section, we will get GPT-3.5 and GPT-4 to help us write a Python script to send automated emails.


    Sending Automated Emails with GPT-3.5


    First, let’s type the following prompt to generate code that sends an automated email.

    Here is the complete code generated by GPT-3.5 (Make sure to change the email addresses and password before running this code):

    import smtplib

    # Set up SMTP connection
    smtp_server = "smtp.gmail.com"
    smtp_port = 587
    sender_email = "your_email@gmail.com"
    sender_password = "your_password"
    receiver_email = "receiver_email@example.com"

    with smtplib.SMTP(smtp_server, smtp_port) as smtp:
        # Start TLS encryption
        smtp.starttls()

        # Log in to your Gmail account
        smtp.login(sender_email, sender_password)

        # Compose your email message
        subject = "Automated email"
        body = "Hello,\n\nThis is an automated email sent from Python."
        message = f"Subject: {subject}\n\n{body}"

        # Send the email
        smtp.sendmail(sender_email, receiver_email, message)


    Unfortunately, this code did not execute successfully for me; it failed with an authentication error.

    Let’s paste this error into ChatGPT and see if the model can help us solve it.


    Okay, so the algorithm pointed out a few reasons why we might be running into this error.

    I know for a fact that my login credentials and email addresses were valid and that there were no typos in the code, so those reasons can be ruled out.

    GPT-3.5 also suggests that allowing less secure apps might solve this problem.

    If you try this, however, you will not find an option in your Google account to allow access to less secure apps.

    This is because Google no longer lets users allow less secure apps due to security concerns.

    Finally, GPT-3.5 also mentions that an app password should be generated if two-factor authentication was enabled.

    I don’t have two-factor authentication enabled, so I’m going to (temporarily) give up on this model and see if GPT-4 has a solution.


    Sending Automated Emails with GPT-4


    Okay, so if you type the same prompt into GPT-4, you will find that the algorithm generates code that is very similar to what GPT-3.5 gave us. This will cause the same error that we ran into previously.

    Let’s see if GPT-4 can help us fix this error.

    GPT-4’s suggestions are very similar to what we saw previously.

    However, this time, it gives us a step-by-step breakdown of how to accomplish each step.

    GPT-4 also suggests creating an app password, so let’s give it a try.

    First, visit your Google Account, navigate to “Security”, and enable two-factor authentication. Then, in the same section, you should see an option that says “App Passwords.”

    Click on it, and an app password creation screen will appear.

    You can enter any name you like, and click on “Generate.”

    A new app password will appear.

    Replace your existing password in the Python code with this app password and run the code again:

    import smtplib

    # Set up SMTP connection
    smtp_server = "smtp.gmail.com"
    smtp_port = 587
    sender_email = "your_email@gmail.com"
    sender_password = "YOUR_APP_PASSWORD"
    receiver_email = "receiver_email@example.com"

    with smtplib.SMTP(smtp_server, smtp_port) as smtp:
        # Start TLS encryption
        smtp.starttls()

        # Log in to your Gmail account
        smtp.login(sender_email, sender_password)

        # Compose your email message
        subject = "Automated email"
        body = "Hello,\n\nThis is an automated email sent from Python."
        message = f"Subject: {subject}\n\n{body}"

        # Send the email
        smtp.sendmail(sender_email, receiver_email, message)


    It should run successfully this time, and your recipient will receive the automated email.


    Thanks to ChatGPT, we have successfully sent out an automated email with Python.

    If you’d like to take this a step further, I suggest generating prompts that allow you to:

    1. Send bulk emails to multiple recipients at the same time
    2. Send scheduled emails to a predefined list of email addresses
    3. Send recipients a customized email that is tailored to their age, gender, and location.
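    As a starting point for the first and third ideas, here is a sketch that renders a single template into per-recipient messages with the standard library’s email module. The recipient list and field names are placeholders of my own; each built message could then be sent with smtp.send_message(msg) inside the authenticated SMTP session shown in the snippets above:

```python
from email.message import EmailMessage

TEMPLATE = "Hello {name},\n\nHere is your {city} report for this week.\n\nBest,\nNatassha"

# Hypothetical recipient list for illustration
recipients = [
    {"name": "Ana", "city": "Lisbon", "email": "ana@example.com"},
    {"name": "Ben", "city": "Oslo", "email": "ben@example.com"},
]

def build_messages(template, recipients, sender="your_email@gmail.com"):
    """Fill the template per recipient and wrap it in an EmailMessage."""
    messages = []
    for r in recipients:
        msg = EmailMessage()
        msg["Subject"] = "Your weekly report"
        msg["From"] = sender
        msg["To"] = r["email"]
        msg.set_content(template.format(name=r["name"], city=r["city"]))
        messages.append(msg)
    return messages

msgs = build_messages(TEMPLATE, recipients)
```

    Separating message building from sending also makes the personalization logic easy to test without touching an SMTP server.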

    Natassha Selvaraj is a self-taught data scientist with a passion for writing. You can connect with her on LinkedIn.

  • Elon Musk and other tech leaders call for pause in ‘out of control’ AI race


    CNN  — 

    Some of the biggest names in tech are calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

    Elon Musk was among the dozens of tech leaders, professors and researchers who signed the letter, which was published by the Future of Life Institute, a nonprofit backed by Musk.

    The letter comes just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that underpins the viral AI chatbot tool, ChatGPT. In early tests and a company demo, the technology was shown drafting lawsuits, passing standardized exams and building a working website from a hand-drawn sketch.

    The letter said the pause should apply to AI systems “more powerful than GPT-4.” It also said independent experts should use the proposed pause to jointly develop and implement a set of shared protocols for AI tools that are safe “beyond a reasonable doubt.”

    “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter said. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

    If a pause is not put in place soon, the letter said governments should step in and create a moratorium.

    The wave of attention around ChatGPT late last year helped renew an arms race among tech companies to develop and deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies. A long list of startups are also developing AI writing assistants and image generators.

    Artificial intelligence experts have become increasingly concerned about AI tools’ potential for biased responses, the ability to spread misinformation and the impact on consumer privacy. These tools have also sparked questions around how AI can upend professions, enable students to cheat, and shift our relationship with technology.

    The letter hints at the broader discomfort inside and outside the industry with the rapid pace of advancement in AI. Some governing agencies in China, the EU and Singapore have previously introduced early versions of AI governance frameworks.

    Correction: An earlier version of this story said Microsoft founder Bill Gates and OpenAI CEO Sam Altman had signed the letter. While the executives were initially listed as signatories, the non-profit behind the letter later removed their names.

  • Microsoft plans major platform upgrades for “Windows 12” that will modernize the OS with AI, faster updates, and better security


    What you need to know

    • Microsoft is once again hard at work on building out a “modern” version of Windows.
    • The project is called CorePC, and follows the same goals as Windows 10X but with native support for legacy apps on devices that require it.
    • CorePC will pave the way to new configurations of Windows that scale up and down depending on PC hardware.

    For many years, Microsoft has been trying to modernize the Windows platform. Its most recent attempt at this was with Windows Core OS, an effort that aimed to deliver a modular, UWP-first OS that stripped the platform of legacy features and app compatibility in favor of being lightweight, quicker to install updates, and much more secure.

    Unfortunately, Microsoft was never able to ship a version of Windows Core OS for traditional PC form factors, though it certainly tried. Windows 10X was Microsoft’s last attempt at this, but the project was canceled in 2021 after months of internal testing and years of development on Windows Core OS itself.

    Since then, movement on a version of Windows Core OS for traditional form factors has ground to a halt, and I’m told there are no longer any plans to ship a product on top of Windows Core OS for traditional PCs. However, I hear that the company isn’t done with trying to modernize the Windows platform to help it compete with its more modern rivals.

    According to my sources who are familiar with Microsoft’s plans, the company is once again hard at work on a new project internally that’s designed to modernize the Windows platform with many of the same innovations it was working on for Windows Core OS, but with a focus on native compatibility for legacy Win32 applications on devices where it makes sense.

    Windows 10X innovations

    The same innovations for Windows 10X apply to CorePC. (Image credit: Microsoft / Windows Central)

    The project is codenamed CorePC and is designed to be a modular and customizable variant of Windows for Microsoft to leverage different form factors with. Not all Windows PCs need the full breadth of legacy Win32 app support, and CorePC will allow Microsoft to configure “editions” of Windows with varying levels of feature and app compatibility.

    The big change with CorePC versus the current shipping version of Windows is that CorePC is state separated, just like Windows Core OS. State separation enables faster updates and a more secure platform via read-only partitions that are inaccessible to the user and third-party apps, just like on iPadOS or Android.

    The current version of Windows is not a state separated platform, meaning the entire system is installed into a single writable partition. System files, user data, and program files are all stored in the same place. CorePC splits up the OS into multiple partitions, which is key to enabling faster OS updates. State separation also enables faster and more reliable system reset functionality, which is important for Chromebook compete devices in the education sector.

    Microsoft has talked about what state separation means for Windows before. You can view that at the 22:40 mark here:

    Microsoft is essentially tackling its Windows Core OS vision from the other end of the spectrum. If Windows Core OS was an effort to “rebuild” Windows from the ground up as a modern, configurable OS without the overhead of legacy app compatibility, Windows CorePC starts with the full Windows desktop and works backwards to break it down into a modular, configurable system while maintaining native support for legacy apps and workflows where necessary.

    My sources tell me CorePC will allow Microsoft to finally deliver a version of Windows that truly competes with Chromebooks in OS footprint, performance, and capabilities. A version of Windows that only runs Edge, web apps, Android apps (via Project Latte) and Office apps, designed for low-end education PCs is already in early testing internally, and is roughly 60-75% smaller than Windows 11 SE.

    Microsoft is also working on a version of CorePC that meets the current feature set and capabilities of the Windows desktop, but with state separation enabled for those faster OS updates and improved security benefits. The company is working on a compatibility layer codenamed Neon for legacy apps that require a shared-state OS to function, too.

    Windows CorePC State Separated

    CorePC installs into multiple partitions, not just one. (Image credit: Windows Central)

    Lastly, I hear that Microsoft is experimenting with a version of CorePC that’s “silicon-optimized,” designed to reduce legacy overhead, focus on AI capabilities, and vertically optimize hardware and software experiences in a way similar to that of Apple Silicon. Unsurprisingly, AI experiences are a key focus for Windows going into 2024.

    Some AI features being developed include the ability for Windows to analyze content on display and provide contextual prompts to jumpstart projects or apps based on the information that’s currently being viewed. Windows may also be able to identify objects and text within images, and allow the user to easily cut out and paste those items elsewhere. Some AI features will require dedicated hardware to function.

    Of course, these plans, features, and configurations may change between now and when Microsoft is ready to start shipping devices with CorePC. Timing for when CorePC will be ready is up in the air, though I understand Microsoft aspires to have it ready in time for the next major version of the Windows client in 2024, codenamed Hudson Valley.

    Microsoft officials declined to comment on these plans.

  • GPT-5 could soon change the world in one incredible way


    GPT-4 may have only just launched, but people are already excited about the next version of the artificial intelligence (AI) chatbot technology. Now, a new claim has been made that GPT-5 will complete its training this year, and could bring a major AI revolution with it.

    The assertion comes from developer Siqi Chen on Twitter, who stated: “I have been told that GPT-5 is scheduled to complete training this December and that OpenAI expects it to achieve AGI.”


    i have been told that gpt5 is scheduled to complete training this december and that openai expects it to achieve agi.

    which means we will all hotly debate as to whether it actually achieves agi.

    which means it will.

    — Siqi Chen (@blader) March 27, 2023

    AGI is the concept of “artificial general intelligence,” which refers to an AI’s ability to comprehend and learn any task or idea that humans can wrap their heads around. In other words, an AI that has achieved AGI could be indistinguishable from a human in its capabilities.

    That makes Chen’s claim pretty explosive, considering all the possibilities AGI might enable. At the positive end of the spectrum, it could massively increase the productivity of various AI-enabled processes, speeding things up for humans and eliminating monotonous drudgery and tedious work.

    At the same time, bestowing an AI with that much power could have unintended consequences — ones that we simply haven’t thought of yet. It doesn’t mean the robot apocalypse is imminent, but it certainly raises a lot of questions about what the negative effects of AGI could be.

    Potential troubles at Twitter?

    Elon Musk stylized image (Getty Images/Digital Trends Graphic)

    If AGI goes off the rails, it could enable the spread of incredibly convincing bots on social media channels like Twitter, helping to disseminate harmful disinformation and propaganda that is increasingly difficult to detect.

    That’s something Elon Musk is evidently aware of, and the controversial billionaire has made fighting AI bots a key pillar of his tenure as Twitter CEO. Yet his latest idea of restricting the reach of accounts that have not paid for a Twitter Blue membership has not gone down well, and his time in charge has been beset by divisive moves that have had limited success, to put it mildly.

    Twitter is just one frontier in the AI-enabled future, and there are many other ways artificial intelligence could alter the way we live. If GPT-5 does indeed achieve AGI, it seems fair to say the world could change in ground-shaking ways. Whether it will be for better or for worse remains to be seen.


  • How Microsoft became tech’s top dog again

    How Microsoft became tech’s top dog again

    Few companies have experienced the ups and downs of the nearly 50-year-old Microsoft. It helped launch the PC revolution in the 1970s and became a colossus of the tech world in the 1980s and 1990s, essentially setting the tech world’s agenda. Then, after a federal antitrust suit and CEO Steve Ballmer’s poor leadership, the company fell into a tailspin and lost its tech luster during a “lost decade” that began in 2000.

    When Satya Nadella took over as CEO from Ballmer in 2014, Microsoft began a slow, steady ascent, betting on the cloud rather than relying on Windows. But its comeback wasn’t based on beating competitors with new, innovative technologies. Rather, Microsoft gained ground by leveraging already existing technologies including the cloud, Office and Windows. No one looked to it for exciting tech; no one expected it to set tech’s agenda.

    And certainly no one looked to Microsoft for how we might live and work in the future — until the ChatGPT chatbot and its integration with Bing took the world by storm. In a few short months, Microsoft morphed from a business that was about as exciting as a utility company to one that is setting the tech agenda and has become the clear leader in artificial intelligence (AI).

    Say good-bye, Cortana – you’re dumb as a rock

    Although it seemed to happen at warp speed, the moment was years in the making. It started when Microsoft abandoned one of its biggest and most embarrassing failures, Cortana, the digital assistant launched in 2014 to compete with Apple’s Siri and Amazon’s Alexa.

    Cortana never got off the ground. In late 2018, I documented what an abject failure the digital assistant had become. I pointed out at the time that while Amazon had sold 50 million smart speakers running Alexa, sales of the one smart speaker with Cortana built into it were too small to measure.

    I also noted something whose importance even I didn’t recognize at the time: Cortana was moved from the AI and Research Division into the Experiences & Users team. The Cortana team was gutted, with Microsoft’s Javier Soltero, the company’s vice president of Cortana, leaving by the end of the year. It seemed to me merely to signal that Microsoft knew Cortana had no future as a standalone assistant.

    It didn’t. In 2020, Microsoft essentially killed it.

    But the move involved a lot more than that. It represented Nadella’s recognition that the future of AI wasn’t in smart assistants like Cortana, Siri, and Alexa. Instead, he saw the future of AI — and possibly the future of computing — would be in generative AI like what is now ChatGPT. That’s why a little more than half a year after moving Cortana out of its AI division, Microsoft invested $1 billion in OpenAI, ChatGPT’s creator. Recently, it announced a follow-on $10 billion investment.

    Nadella recently threw shade on Cortana and its entire cohort of digital assistants, saying they “were all dumb as a rock. Whether it’s Cortana or Alexa or Google Assistant or Siri, all these just don’t work. We had a product that was supposed to be the new front-end to a lot of [information] that didn’t work.”

    Why chatbots are the best digital assistants

    Understanding the difference between how digital assistants and generative AI chatbots are developed (and how they work) goes a long way towards understanding why Nadella considers digital assistants the past — and ChatGPT the future.

    You can find a deep dive into their differences in the New York Times. But here’s the quick version: digital assistants such as Siri, Alexa and Google Assistant are command-and-control systems. They can only understand and act on specific questions and requests, such as, “What movies are playing near me?” or “What will be the weather in Rome tomorrow?”

    That makes them far more limited than chatbots like ChatGPT, which can perform an astonishing array of tasks from writing marketing copy to summarizing articles, creating graphics, writing code and much more. And even more is coming. At Microsoft’s “Future of Work” event, the company talked about the AI-powered Microsoft 365 Copilot, which can create Office documents on its own – for example, pulling together a PowerPoint presentation based entirely on a Word document, applying styles and animations throughout the presentation. Microsoft claims it can even draft a business proposal based only on meeting notes.

    Not only are digital assistants drastically underpowered compared to chatbots, but coding them to do new tasks can be even harder. Former Apple engineer John Burkey, who worked on improving Siri, told the Times that even making a simple change such as adding new phrases to Siri’s data set can take up to six weeks because it requires rebuilding the whole underlying database. Adding a feature that’s more complex, such as a new search tool, can take almost a year.
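    The brittleness described above can be illustrated with a tiny sketch. All of the names below are hypothetical, purely for illustration (real assistants are far more elaborate), but the core limitation is the same: a command-and-control system only acts on requests someone has explicitly coded, so every new phrase or capability means a manual change to the code.

    ```python
    # Hypothetical sketch of a command-and-control assistant. Every supported
    # request must be hand-coded in this table; adding a capability means
    # editing it and redeploying, which is why such systems grow slowly.
    INTENTS = {
        "what movies are playing near me": "search_local_showtimes",
        "what will be the weather in rome tomorrow": "get_forecast",
    }

    def handle(utterance: str) -> str:
        """Return the action for a known phrase, or fail on anything else."""
        key = utterance.strip().lower().rstrip("?")
        return INTENTS.get(key, "unrecognized_request")

    print(handle("What movies are playing near me?"))   # a coded phrase matches
    print(handle("Draft a proposal from my meeting notes"))  # no rule exists
    ```

    A generative chatbot, by contrast, needs no such table: it produces a response to arbitrary input, which is why new capabilities can appear without rebuilding an underlying database of phrases.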

    Contrast that with the astonishing speed with which new capabilities are added to chatbots like ChatGPT on what sometimes seems like a daily basis. That’s because the chatbots are based on large language model technology.

    And that’s why Microsoft is suddenly tech’s front-runner. Saying good-bye to Cortana and focusing on AI chatbots did the trick. That isn’t to say the race is over. Competitors, particularly Google, have for years been spending considerable resources on AI. Google has recently publicly launched its Bard chatbot. Others will follow.

    Of course, AI may not prove to be as transformative as expected. It’s still early. Even the latest iteration, GPT-4, is bedeviled by big problems. Like its predecessors, it’s prone to what researchers call “hallucinations” — what we in the real world call “bull*****ing” — making things up when it doesn’t know the answers.

    But for the moment at least, Microsoft is in an unaccustomed spot — being the talk of the tech world, and in a good way.

    Copyright © 2023 IDG Communications, Inc.

  • Master of Science in Analytics

    Master of Science in Analytics

  • While OpenAI has been working on text and images, iGenius has been working on GPT for numbers


    Within a week of being launched, ChatGPT, the AI-powered chatbot developed by OpenAI, had over 1 million users, reaching an estimated 100 million within two months. The flood of attention from the press and consumers alike comes in part from the software’s ability to offer human-like responses in everything from long-form content creation and in-depth conversations to document search and analysis.

    Uljan Sharka, CEO of iGenius, believes that generative AI has world-changing potential in the business world, because for the first time, data can be truly democratized. GPT stands for generative pretrained transformer, a family of language models trained with supervised and reinforcement learning techniques — in ChatGPT’s case, 45 terabytes of text data powering all that content creation.

    But what if generative AI could be used to answer essential data-related queries in the business world, not just generate content?

    “Up till now, data, analytics and even ‘data democratization’ has been data-centered, designed for data-skilled people,” Sharka says. “The business users are being left out, facing barriers to the information they need to make data-driven decisions. People are not about data. They want business answers. We have an opportunity today to shift the user interface toward language interfaces, and humanize data to make it people-centric.”

    But the interface is only a small part of what a complex system needs in order to make this kind of information integrated, certified, safe, equal, and accessible for business decisions. Composite AI means bringing together data science, machine learning, and conversational AI in a single system.

    “I like to think of it as the iPhone of the category, which provides an integrated experience to make it safe and equal,” Sharka says. “That’s the only way we’ll have generative AI delivering impact in the enterprise.”

    Generative AI and the humanization of data science

    As the gap between B2C and B2B apps has grown, business users have been left behind. B2C companies have put billions of dollars into creating exemplary apps that are very user-friendly, operable with a few taps or a conversation. At home, users are writing research papers with the help of ChatGPT, while back at work, a wealth of data stays siloed when the complex dashboards that connect it go unused.

    In organizations, generative AI can actually connect every data product anywhere in the world and index it in an organization’s “private brain.” And with algorithms, natural language processing and user-created metadata, or what iGenius calls advanced conversational AI, the complexity of data quality can be improved and elevated. Gartner has dubbed this ‘conversational analytics.’

    Virtualizing complexity unlocks unlimited potential to clean, manipulate and serve data for every use case, whether that’s cross-correlating information or just bringing it together as one single source of truth for an individual department.

    On the back end, generative AI helps scale the integration between systems, using the power of natural language to create what Sharka calls an AI brain, composed of private sources of information. With no-code interfaces, integration is optimized and data science is democratized even before business users start consuming that information. It’s an innovation accelerator, which will cut costs as the time it takes to identify and develop use cases is slashed dramatically.

    On the front end, business users are literally having a conversation with data and getting business answers in plain natural language. Making the front-end user experience even more consumerized is the next step. Instead of a reactive and single task-based platform, asking text questions and getting text answers, it can become multi-modal, offering charts and creative graphs to optimize the way people understand the data. It can become a Netflix or Spotify-like experience, as the AI learns from how you consume that information to proactively serve up the knowledge a user needs.

    Generative AI and iGenius in action

    From an architectural perspective, this natural language layer is added to the applications and databases that already exist, becoming a virtual AI brain. Connecting across departments unlocks new opportunities.

    “This is not about using data more — this is about using data at the right time of delivery,” Sharka says. “If I can use data before or while I make a decision, whether I’m in marketing or sales or supply chain, HR, finance, operations — this is how we’re going to make an impact.”

    For instance, connecting marketing data and sales data means not only monitoring campaigns in real time, but correlating results with transactions, conversions and sales cycles to offer clear performance KPIs and see the direct impact of the campaign in real time. A user can even ask the AI to adapt campaigns in real time. At the same time, the interface surfaces further questions and areas of inquiry that the user might want to pursue next, to deepen their understanding of a situation.

    At Enel, Italy’s leading energy company now focused on sustainability, engineers consume real-time IoT information, mixing finance data with data coming from the production plants and having conversations with that data in real time. Whenever their teams need to perform preventive maintenance, plan activities in the plant, or measure how actual results compare to budgets, asking the interface for the synthesized information they need unlocks powerful operational analytics that can be acted on immediately.

    The future of generative AI

    ChatGPT has sparked a massive interest in generative AI, but iGenius and OpenAI (which both launched in 2015) long ago realized they were headed in different directions, Sharka says. OpenAI built the GPT for text, while iGenius has built the GPT for numbers, a product called Crystal. Its private AI brain connects proprietary information into its machine learning model, allowing users to start training it from scratch. It uses more sustainable small and wide language models, instead of large language models to give organizations control over their IP.

    It also enables large-scale collaboration, in which companies can leverage expertise and knowledge workers to certify the data used to train models and the information generated to reduce bias at scale, and provide more localized and hyper-personalized experiences. It also means you don’t need to be a prompt engineer to safely work with or apply the data these algorithms provide to produce high-quality actionable information.

    “I’ve always believed that this is going to be a human-machine collaboration,” Sharka says. “If we can leverage the knowledge that we already have in people or in traditional IT systems, where you have lots of semantic layers and certified use cases, then you can reduce bias exponentially, because you’re narrowing it down to quality. With generative AI, and a system that’s certified on an ongoing basis, we can achieve large-scale automation and be able to reduce bias, make it safe, make it equal, and keep pushing this idea of virtual copilots in the world.”

    This is a VB Lab Insight article presented by iGenius. VB Lab Insights content is created in collaboration with a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.

  • OpenAI turns ChatGPT into a platform overnight with addition of plugins



    OpenAI today announced its support of new third-party plugins for ChatGPT, and it already has Twitter buzzing about the company’s potential platform play.

    In a blog post, the company stated that the plugins are “tools designed specifically for language models with safety as a core principle, and help ChatGPT access up-to-date information, run computations, or use third-party services.”
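    Concretely, a plugin is just a web API the model is allowed to call on the user’s behalf. The sketch below shows a hypothetical restaurant-availability operation; every name, field, and data point in it is invented for illustration and is not OpenAI’s actual plugin interface. ChatGPT would call an endpoint like this and fold the JSON response into its answer.

    ```python
    import json

    # Hypothetical plugin operation. All names, fields, and data below are
    # invented for illustration; this is not OpenAI's actual plugin API.
    BOOKINGS = {
        ("Osteria Roma", "2023-04-01"): ["18:00", "20:30"],
    }

    def find_availability(restaurant: str, date: str) -> str:
        """Respond the way a plugin endpoint would when ChatGPT calls it
        on the user's behalf: structured JSON the model can summarize."""
        slots = BOOKINGS.get((restaurant, date), [])
        return json.dumps({"restaurant": restaurant, "date": date, "slots": slots})

    print(find_availability("Osteria Roma", "2023-04-01"))
    ```

    The safety framing in the announcement follows from this shape: the model only ever sees the structured response, and the service keeps control of what it exposes.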

    A sign of OpenAI’s accelerating dominance

    The announcement was quickly received by the public as a signal of OpenAI‘s ambitions to further its dominance by turning ChatGPT into a developer platform.

    “OpenAI is seeing ChatGPT as a platform play,” tweeted Marco Mascorro, cofounder of Fellow AI.



    And @gregmushen tweeted: “I think the introduction of plugins to ChatGPT is a threat to the App Store. It creates a new platform with new monetization methods.”

    In sharing the announcement, OpenAI CEO Sam Altman tweeted: “We are starting our rollout of ChatGPT plugins. you can install plugins to help with a wide variety of tasks. we are excited to see what developers create!”

    OpenAI, he said, is offering a web browsing plugin and a code execution plugin. He added that the company is open-sourcing the code for a retrieval plugin.

    The plugins, he said, are “very experimental still,” but maintained that “we think there’s something great in this direction; it’s been a heavily requested feature.”

    ChatGPT plugins: Major milestone in development of AI chat

    OpenAI announced that plugin developers who have been invited off the company’s waitlist can use its documentation to build a plugin for ChatGPT. The first plugins have already been created by companies including Expedia, Instacart, Kayak, OpenTable and Zapier.

    According to Expedia, their new plugin simplifies trip planning for ChatGPT users. “Until now, ChatGPT could identify what to do and where to stay, but it couldn’t help travelers shop and book,” said a press representative in an email.

    Now, once a traveler enables the Expedia plugin, they can bring a trip itinerary created through a conversation with ChatGPT “to life” with information powered by Expedia’s travel data including real-time availability and pricing of flights, hotels, vacation rentals, activities and car rentals. When ready to book, they’ll be sent to Expedia, where they can log in to see options personalized to what they prefer, as well as member discounts, loyalty rewards and more.

    The update represents a major milestone in the development of AI chat as a platform for accessing and interacting with the internet. ChatGPT is not only providing a service, it is creating an ecosystem where developers can create and distribute their own plugins for the benefit of users. This is similar to how Apple’s App Store revolutionized the mobile industry by allowing third-party apps to flourish on its devices. ChatGPT’s plugin feature could potentially open up new possibilities and markets for AI chat in the future.

    OpenAI said they would begin extending plugin alpha access to users and developers from its waitlist and plan to roll out larger-scale access “over time.”


  • OpenAI connects ChatGPT to the internet


    OpenAI’s viral AI-powered chatbot, ChatGPT, can now browse the internet — in certain cases.

    OpenAI today launched plugins for ChatGPT, which extend the bot’s functionality by granting it access to third-party knowledge sources and databases, including the web. Available in alpha to ChatGPT users and developers on the waitlist, OpenAI says that it’ll initially prioritize a small number of developers and subscribers to its premium ChatGPT Plus plan before rolling out larger-scale and API access.

    Easily the most intriguing plugin is OpenAI’s first-party web-browsing plugin, which allows ChatGPT to draw data from around the web to answer the various questions posed to it. (Previously, ChatGPT’s knowledge was limited to dates, events and people prior to around September 2021.) The plugin retrieves content from the web using the Bing search API and shows any websites it visited in crafting an answer, citing its sources in ChatGPT’s responses.

    A chatbot with web access is a risky prospect, as OpenAI’s own research has found. An experimental system built in 2021 by the AI startup, called WebGPT, sometimes quoted from unreliable sources and was incentivized to cherry-pick data from sites it expected users would find convincing — even if those sources weren’t objectively the strongest. Meta’s since-disbanded BlenderBot 3.0 had access to the web, too, and quickly went off the rails, delving into conspiracy theories and offensive content when prompted with certain text.


    The live web is less curated than a static training dataset and — by implication — less filtered, of course. Search engines like Google and Bing use their own safety mechanisms to reduce the chances unreliable content rises to the top of results, but these results can be gamed. They also aren’t necessarily representative of the totality of the web. As a piece in The New Yorker notes, Google’s algorithm prioritizes websites that use modern web technologies like encryption, mobile support and schema markup. Many websites with otherwise quality content get lost in the shuffle as a result.

    This gives search engines a lot of power over the data that might inform web-connected language models’ answers. Google has been found to prioritize its own services in Search by, for example, answering a travel query with data from Google Places instead of a richer, more social source like TripAdvisor. At the same time, the algorithmic approach to search opens the door to bad actors. In 2020, Pinterest leveraged a quirk of Google’s image search algorithm to surface more of its content in Google Image searches, according to The New Yorker.

    OpenAI admits that a web-enabled ChatGPT might perform all types of undesirable behaviors, like sending fraudulent and spam emails, bypassing safety restrictions and generally “increasing the capabilities of bad actors who would defraud, mislead or abuse others.” But the company also says that it’s “implemented several safeguards” informed by internal and external red teams to prevent this. Time will tell whether they’re sufficient.

    Beyond the web plugin, OpenAI released a code interpreter for ChatGPT that provides the chatbot with a working Python interpreter in a sandboxed, firewalled environment along with disk space. It supports uploading files to ChatGPT and downloading the results; OpenAI says it’s particularly useful for solving mathematical problems, doing data analysis and visualization and converting files between formats.
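    The kinds of tasks listed there are easy to picture. The snippet below, with invented sample data, sketches a job of the sort the code interpreter is described as handling: parse a CSV, compute summary statistics, and convert the rows to another format.

    ```python
    import csv
    import io
    import json
    import statistics

    # Sample data is invented; this mirrors the kind of task described:
    # parse an uploaded CSV, summarize it, and convert the format.
    raw = """city,price
    Austin,350000
    Denver,420000
    Tampa,310000
    """

    rows = list(csv.DictReader(io.StringIO(raw)))
    prices = [int(r["price"]) for r in rows]
    summary = {
        "listings": len(rows),
        "mean_price": statistics.mean(prices),
        "max_price": max(prices),
    }
    as_json = json.dumps(rows)  # the CSV rows, converted to JSON
    print(summary)
    ```

    The sandboxing matters precisely because code like this runs with real file access: disk space and uploads are useful, so the environment is firewalled.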


    A host of early collaborators built plugins for ChatGPT to join OpenAI’s own, including Expedia, FiscalNote, Instacart, Kayak, Klarna, Milo, OpenTable, Shopify, Slack, Speak, Wolfram and Zapier.

    They’re largely self-explanatory. The OpenTable plugin allows the chatbot to search across restaurants for available bookings, for example, while the Instacart plugin lets ChatGPT place orders from local stores. By far the most extensible of the bunch, Zapier connects with apps like Google Sheets, Trello and Gmail to trigger a range of productivity tasks.

    To foster the creation of new plugins, OpenAI has open sourced a “retrieval” plugin that enables ChatGPT to access snippets of documents from data sources like files, notes, emails or public documentation by asking questions in natural language.

    “We’re working to develop plugins and bring them to a broader audience,” OpenAI wrote in a blog post. “We have a lot to learn, and with the help of everyone, we hope to build something that is both useful and safe.”

    Plugins are a curious addition to the timeline of ChatGPT’s development. Once limited to the information within its training data, ChatGPT is, with plugins, suddenly far more capable — and perhaps at less legal risk. Some experts accuse OpenAI of profiting from the unlicensed work on which ChatGPT was trained; ChatGPT’s dataset contains a wide variety of public websites. But plugins potentially address that issue by allowing companies to retain full control over their data.

  • Must work well with ChatGPT: Employers are posting more jobs involving AI tools


    Serge Osaulenko recently created a job posting for his brokerage, Everlane Realty, using ChatGPT. The role: a real estate agent who will also use ChatGPT for daily tasks.

    “Not every agent is good at writing property descriptions,” said Osaulenko, but he expects ChatGPT will help change that.

    Osaulenko, who is based in St. Petersburg, Florida, is among the growing number of employers seeking new hires who can use artificial intelligence technology in their daily job functions — or at least be willing to give it a try.

    When the software maker OpenAI launched ChatGPT in November, it kicked off an arms race of investment and rivalry, along with debates from the ethical and the philosophical to the geopolitical. Rather than wait to see how they shake out, some employers are racing to advertise roles that interact with AI.

    LinkedIn postings mentioning “GPT” — or “generative pre-trained transformer,” the technical term behind AI-driven tools that produce written content on command — ballooned by 51% from 2021 to 2022, according to data the company provided.

    The employment website Indeed said it saw a more than 140% jump in mentions of “language model,” another term related to AI writing systems, across its job postings from February 2020 to last month.

    The postings are some of the earliest signs of how an influx of AI tools could change potentially millions of people’s jobs. A study released this month by OpenAI and University of Pennsylvania researchers found that 80% of the U.S. workforce could have at least 10% of their work affected by GPTs and that nearly one-fifth could have half of their tasks affected.

    When Osaulenko tested ChatGPT a few months ago, he said, all he had to do was enter a property’s address to generate an almost-ready-to-publish description for a home listing — complete with acreage, square footage and the number of bedrooms and bathrooms.
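    That workflow amounts to turning structured listing data into a prompt. A minimal sketch follows; the field names and wording are invented, and the model call itself is omitted so the example stays self-contained.

    ```python
    # Sketch of assembling a listing-description prompt from structured
    # data. All fields and values are invented for illustration.
    listing = {
        "address": "123 Palm Ave, St. Petersburg, FL",
        "bedrooms": 3,
        "bathrooms": 2,
        "square_feet": 1850,
        "acreage": 0.25,
    }

    def build_prompt(l: dict) -> str:
        """Render listing facts into a single instruction a model can act on."""
        return (
            "Write a short, inviting real-estate listing description for "
            f"{l['address']}: {l['bedrooms']} bed, {l['bathrooms']} bath, "
            f"{l['square_feet']} sq ft on {l['acreage']} acres."
        )

    print(build_prompt(listing))
    ```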

    Include with your application a ChatGPT-generated story a 7-year-old might enjoy, and the prompt you used to generate it.

    — A sample candidate assessment posted on ZipRecruiter

    Since then, he has started using AI tools to help with everything from writing blog posts and marketing scripts to cutting videos. He said he wants the next real estate agent he hires to learn to do the same.

    AI-related tasks are becoming embedded in a small but growing number of jobs that aren’t directly tied to tech functions, ZipRecruiter Chief Economist Julia Pollak said. About half the platform’s listings that mention ChatGPT are for AI-related engineering positions, she said, while the other half are skewed toward marketing or content generation.

    Pollak said the second type of listing typically seeks candidates with the “willingness to learn and adapt to new technologies, e.g. ChatGPT,” or says, “You are excited to figure out how we can use ChatGPT and other AI tools to do work better and faster.”

    That suggests enthusiasm to experiment with such tools is as much a qualification as technical knowledge — hardly a surprise given that ChatGPT’s user-friendly design operates on plain language commands rather than code.

    “It is also being used in job candidate skills assessments,” Pollak said, with recent prompts on ZipRecruiter asking applicants to “describe how you think tools like ChatGPT might affect your role over time” or to “include with your application a ChatGPT-generated story a 7-year-old might enjoy, and the prompt you used to generate it.”

    Nation, a New York-based business that operates a fundraising platform, was recently hiring for a marketing leader who has “already begun experimenting with generative AI” and is excited to grow and scale using AI tools.

    Almost every employee at the 11-person tech company has already been using ChatGPT at least weekly since the end of January, founder Ryan Shea said. It isn’t perfect, and it doesn’t replace “knowledge workers,” he said, but it can aid the creative process.

    “It’s the technological story as old as time,” he said. “If you were a horse buggy driver and you didn’t embrace cars, things didn’t work out for you.”

    Some schools have already banned ChatGPT from classrooms over cheating concerns, while some others are embracing it. A handful of recent job postings for history and philosophy instructors in the New York area say teachers will use ChatGPT as a learning tool for secondary and high school students.

    Wellput, an email marketing company, is looking for an ad sales coordinator who will use AI programs like ChatGPT or Jasper.ai to draft templates for use in email campaigns, among other tasks. The company said it has been using the tools for about three months.

    There’s no guarantee the enthusiasm will keep climbing. OpenAI this month released GPT-4, its latest iteration, drawing concerns that have added to a rising chorus of AI experts and ethicists who are calling for greater regulation and controls as the technology barrels ahead.

    Recent history shows some buzzy tech can trigger a hiring frenzy that fizzles out. After they surged last year in a flurry of media coverage, mentions of NFTs in job postings on Indeed peaked at 226 per million listings in April, according to data provided by the platform, before they fell to single digits in December as the market for nonfungible tokens faded.

    From February 2022 to last month, NFT-related listings on Indeed fell by 64%.

    “Some people are going to call it gimmicky and [say] this is not going to last,” Osaulenko said of the emerging AI tools. But he remains bullish, comparing their impact to the advent of the internet itself.

    Shea was more circumspect. “If things peter off, I mean, ChatGPT will still remain a big part of our workflow,” he said. “I think it will change some things, but I don’t think it’s going to completely revolutionize things overnight. You know, there are still industries in the world that don’t use email properly.”