Artificial Intelligence
Artificial intelligence (AI) is a broad term that encompasses all fields of computer science that enable machines to accomplish tasks that would normally require human intelligence. Machine learning and generative AI are two subcategories of AI.
A simple way to think about AI is as a series of nested or derivative concepts that have emerged over more than 70 years:
Machine Learning
Machine learning (ML) is a subset of AI that focuses on creating algorithms that can learn from data. Machine learning algorithms are trained on a set of data, and then they can use that data to make predictions or decisions about new data.
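A rough sketch of that workflow, assuming scikit-learn is available (the tiny data set and its features are purely illustrative):

```python
# Fit a simple classifier to labeled examples, then predict on unseen data.
from sklearn.neighbors import KNeighborsClassifier

# Illustrative training data: [hours studied, hours slept], labeled pass (1) or fail (0).
X_train = [[1, 4], [2, 5], [8, 7], [9, 8]]
y_train = [0, 0, 1, 1]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)      # the algorithm "learns" from the data

print(model.predict([[7, 6]]))   # prediction for new, unseen data -> [1]
```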
Deep Learning
Deep learning is a subset of machine learning that uses multilayered neural networks, called deep neural networks, that more closely simulate the complex decision-making power of the human brain.
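A minimal sketch of a multilayered ("deep") network, assuming PyTorch is available; the layer sizes and input are arbitrary:

```python
# Stack several layers to form a deep neural network.
import torch
from torch import nn

deep_net = nn.Sequential(
    nn.Linear(4, 16), nn.ReLU(),   # hidden layer 1
    nn.Linear(16, 16), nn.ReLU(),  # hidden layer 2
    nn.Linear(16, 3),              # output layer: scores for 3 classes
)

x = torch.randn(1, 4)              # one example with 4 input features
print(deep_net(x))                 # raw class scores for that example
```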
Large Language Model
A large language model (LLM) is a type of AI program that uses machine learning to perform natural language processing (NLP) tasks. LLMs are trained on large data sets to understand, summarize, generate, and predict new content.
Generative AI (Gen AI)
Generative AI is a type of machine learning that focuses on creating new data, such as text, images, or code. It often relies on large language models to produce that new content.
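A minimal sketch of the last two ideas, a language model generating new text, assuming the Hugging Face transformers library is available (the small GPT-2 model stands in here for a truly large one, and the prompt is illustrative):

```python
# Load a pretrained language model and generate a short continuation.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])
```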
How is AI helpful?
Virtual assistants like Siri, Alexa and Google Assistant understand verbal commands and questions.
AI powers product recommendations on Amazon and Netflix, suggesting purchases and shows based on your browsing and viewing history.
AI aids doctors by analyzing medical images to detect tumors, abnormalities, and diseases.
AI can automate mundane tasks that used to be obstacles for people, such as drafting appeal letters for denied insurance claims.
AI can help workers analyze pay equity within workforces or across similar industries.
Chatbots provide customer service through conversational interfaces.
AI tools offer fast, thorough analysis of complex data, including research on competitors in a crowded marketplace.
AI challenges and risks
Businesses are scrambling to take advantage of the latest AI technologies and capitalize on AI's many benefits. This rapid adoption is necessary, but understanding the challenges and risks, and how to mitigate them, is just as important.
Data risks
AI systems rely on input data sets that can be vulnerable to data poisoning, data tampering, data bias, or cyberattacks that lead to data breaches. It is crucial that AI vendors protect data integrity and implement strong security controls.
Model risks
As with other IT systems, threat actors can target AI models for theft, reverse engineering, or unauthorized manipulation.
Operational risks
Like all technologies, AI models are susceptible to operational risks such as model drift, bias, and breakdowns in governance. Left unaddressed, these risks can lead to system failures and cybersecurity vulnerabilities that threat actors can exploit.
Ethics and legal risks
If AI vendors and businesses using AI don’t prioritize safety and ethics when developing and deploying AI systems, they risk committing privacy violations and producing biased outcomes.
OpenAI
OpenAI is an artificial intelligence (AI) research and deployment company. The company has made significant advancements in AI, particularly in language models, and rose to prominence with its launch of ChatGPT.
ChatGPT
ChatGPT, developed by OpenAI, is a conversational AI application built on the company's large language models; it has garnered significant attention for its language generation capabilities.