Generative AI 101

Traditional AI focuses on specific tasks with given inputs (e.g., Siri, Alexa, search engines, Netflix recommendations, Google Maps). In contrast, generative AI (GenAI) creates new content by learning patterns from large datasets, producing text, images, music, videos, tables, code, etc.

GenAI is expanding possibilities for teaching and learning in education, but it also raises concerns about ethics, bias, academic integrity, data privacy, and content accuracy. With each new version of large language models (LLMs), capabilities are growing, and new models such as Meta's Llama, Inflection's Pi, and xAI's Grok are emerging.

Here is a quick overview of four frontier large language models (June 2024, Ethan Mollick):

There are many specialized apps and tools built on generative AI models. Examples include Elicit, Consensus, and Research Rabbit, which connect ChatGPT with Semantic Scholar's database of papers. Writing tools like Grammarly also use AI to enhance text. Custom GPTs, tailored chatbots designed for specific tasks, are also becoming more popular; one example is a YouTube video analyzer that can summarize videos and generate questions based on your instructions.

Watch Professor Daniel Liu (University of Sydney) demonstrate some of the multimodal capabilities of these tools (Nov 2023); those capabilities have only continued to grow.

